Post a photo of your board with the ESP chip visible, not a link to where you bought it. Show the connections as well. Then we can tell what you have.
Oh… I noticed a problem… instead of using GPIO2 I was not paying attention and connected it to pin 12. By swapping that to pin 02 I got it up and running… or at least it is listening, reacting to "okay nabu", and responding according to the device page. I can't hear anything because I'm missing that speaker.
Not sure if pins 6, 7, 8 were a problem at all, but I changed those to 14, 15, 16, in case it matters.
Here are a couple of photos:
The LED is not responding. The LED's white wire is connected to GND, red to 5V on the board, and green to pin 17. But hmm… what does this actually mean:
" pin: GPIO17 #GPIO48 # On board light"
light:
  - platform: esp32_rmt_led_strip
    rgb_order: GRB
    pin: GPIO17 #GPIO48 # On board light
    num_leds: 3
    chipset: WS2812
    name: "Status LED"
    id: led_strip
    disabled_by_default: False
    entity_category: config
    icon: mdi:led-on
    default_transition_length: 0s
It means that I have 3 RGB LEDs (a LED strip) connected to GPIO17. As a reminder: if you do not want to use extra RGB LEDs and only want the onboard RGB LED, then you have to use GPIO48 instead. That one is internally wired.
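For example, a minimal sketch for using only the onboard RGB LED (assuming an ESP32-S3 DevKitC-1, where the onboard WS2812 is internally wired to GPIO48):

```yaml
# Sketch: onboard RGB LED only, no external wiring needed
light:
  - platform: esp32_rmt_led_strip
    rgb_order: GRB
    pin: GPIO48      # onboard LED on the ESP32-S3 DevKitC-1
    num_leds: 1      # the onboard LED is a single WS2812
    chipset: WS2812
    name: "Onboard LED"
```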
As for the pins mentioned in the YAML: when you connect everything according to those pins, it should work. If you connect things differently, you must change the pin numbers in the YAML. But… some pins, like those used for I2C, are the standard pins to use. They can be changed to other pins (you can output I2C on non-default pins), but why get into more trouble when you already have your hands full with this.
Do I understand the LED thing right?
You have a LED strip with a total of 3 RGB LEDs in it, right? I'm trying to figure out why, in that YouTube video I posted before, that guy uses num_leds: 1 when he most likely has the exact same WS2812B COB LED strip as I have. There are 30 LEDs on a 0.5 m strip. The 5V pin cannot handle that amount of LEDs, I suppose, or I might be really lost at the moment.
EDIT: Forgot to mention that pin 48 works normally, so I'm able to control the board LED. That's cool. Also forgot to mention that I'm grateful that you guys helped me out with this. I really appreciate it.
It was mentioned before: better start with a simple project first. I am getting the idea that you are missing a lot of basic knowledge. No offense, you have to learn it, but you are taking too big leaps here.
About num_leds: 1
In short: you have to define the number of LEDs you want to use in your LED strip. I am not going to look at YouTube clips, but if I had to guess, he is using platform: fastled. Either way, you have to look up the GitHub page for that library/platform to see for yourself what these settings do.
So if you use the onboard RGB LED, then that is just 1 LED that you need to define.
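If that video really is using FastLED, the equivalent ESPHome block would look roughly like this. A sketch only: it assumes the fastled_clockless platform, which requires the Arduino framework rather than esp-idf:

```yaml
# Sketch: FastLED-based equivalent (needs framework type: arduino, not esp-idf)
light:
  - platform: fastled_clockless
    chipset: WS2812B
    pin: GPIO17
    num_leds: 1      # only the first LED of the strip is driven
    rgb_order: GRB
    name: "FastLED Strip"
```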
Now some electronics basics for you, also knowledge you should have looked for on the net. To turn an average LED fully on takes about 20mA; 10mA is often enough to get the brightness you want, but keep 20mA to be safe. Now do the math yourself for turning 30 LEDs fully on. Do you have that number? Okay, now take the datasheet for your ESP32-S3. You've got that, right? Well, what will that tell you if you pull 600mA from the 5V pin? Again, I want to help you think about what you are trying to do, not to lecture you. Is this 5V pin before or after the onboard regulator? If before (it is), then you are pulling that 600mA extra from the USB connector. When the ESP32-S3 starts up and uses WiFi, it can pull up to 1.5A. Adding your LEDs on top of that might put your device in a constant reboot.
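Spelling out that bit of math, using the 20mA-per-LED figure from above:

```latex
30~\text{LEDs} \times 20~\mathrm{mA/LED} = 600~\mathrm{mA}
```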
But… if you solder the pad on the board, you can also power the board via the 5V pin. Then you can use a better power source directly on that pin and bypass the USB connector.
Having said all this: if you keep the 30 LEDs off at boot, you might get it working stably with a 5V 2A USB power plug.
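If you want to be sure the strip cannot come on during boot, the light component's restore_mode option can force it off; a sketch based on the config earlier in this thread:

```yaml
light:
  - platform: esp32_rmt_led_strip
    rgb_order: GRB
    pin: GPIO17
    num_leds: 30
    chipset: WS2812
    name: "Status LED"
    restore_mode: ALWAYS_OFF  # strip stays off after boot until turned on explicitly
```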
I just looked at your pictures and noticed that your board does not have a solder pad for the 5V pin. So probably it is already connected directly, and you can also use it to power the board with 5V.
Yep, it definitely feels a bit like I jumped straight into the big leagues. On the other hand, I’ve already learned a lot here, but at your expense of course.
Thanks for the clarification on the LEDs.
A little update on the situation. I got the voice assistant working, and actually everything works pretty well. Sometimes I notice that during longer conversations the ESP suddenly restarts. It's probably due to the long conversation or possibly the WiFi network coverage. I still have to test it in a place where the signal strength (dB) is good.
I haven't been able to get the external LEDs to work no matter what I do. The WS2812B doesn't light up even if I try pins 17, 48 or even 10. The LEDs are powered by an external 5V 3A power supply, whose GND is common with the ESP and with the LED strip (white wire). The control wire (green, middle) is connected to the pin according to the YAML. And the red + power is indeed taken from the power supply. The number of LEDs is defined as 30 for the entire strip, or 5 for the smaller cut strip. Neither choice lights up the LEDs. ChatGPT suggests using NeoPixelBus, because supposedly I should have a level shifter on the data line if I use esp32_rmt_led_strip.
So if this is a beginner’s problem, you can laugh. If not, suggestions for solving the situation are welcome.
You need to set the number of LEDs to the number of LEDs you are using; it has nothing to do with the length of the original strip. Even with 5 LEDs in your strip, you can set the number to 1 and only the first LED will work, or set it to 30 and all 5 will work.
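To illustrate with the 5-LED piece from this thread (the commented values are just for demonstration):

```yaml
light:
  - platform: esp32_rmt_led_strip
    rgb_order: GRB
    pin: GPIO17
    chipset: WS2812
    name: "Strip"
    num_leds: 5    # drives all 5 physical LEDs
    # num_leds: 1  -> only the first LED responds
    # num_leds: 30 -> still works; data for the non-existent LEDs 6-30 is ignored
```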
The LEDs should work if the GND from the PSU is connected to the GND on the ESP. But also try powering one LED directly from the board.
If you connect a LED strip to a pin, it is advised to place a 100 ohm resistor in the data line: pin → 100 ohm → data IN of the LED strip.
It will work without the resistor, but the resistor gives the data pin some protection and keeps it from sourcing too much current.
You've got this section in your YAML:
light:
  - platform: esp32_rmt_led_strip
    rgb_order: GRB
    pin: GPIO17 #GPIO48 # On board light
    num_leds: 3
    chipset: WS2812
Make sure that the data line of the LED strip is connected to this pin. So connect it to GPIO17, or change the pin number in the YAML.
Please note that pin X as written on the board is not always the same as GPIO X. To make sure that pin X is also GPIO X, you have to look up the board's pinout on the net.
Now change num_leds: to the maximum number of LEDs in the LED strip, or less.
After compiling, go to your device in the HA ESPHome integration (not the add-on, but the actual device entity). There you now have a button or slider to turn it on, and there are more options for color and choosing effects.
If it still does not work, then please post your YAML here.
Good luck and you are doing well!
Hi,
Here’s the YAML down below.
I have changed the pins in my config, and everything else is working except the external LED strip. At the moment, with this YAML, I’m using GPIO10, which should be just as fine as GPIO17. I’ve tested both of them, but no success.
If I open the LED entity in Home Assistant or go directly to the ESP device management, I can turn the LED light on. However, with the current configuration, it only turns the ESP’s onboard LED on. I can change the color and brightness, but the external LED strip does not respond.
Right now, I have a complete LED strip connected to the ESP with 30 LEDs. The YAML is currently set to use only 5.
Wiring details:
White wire is ground, connected to the ESP’s ground, the power adapter’s ground, and other component grounds (mic and amplifier). Both the mic and speaker work fine.
Green wire is data, connected to GPIO10 on the ESP.
Red wire is +5V, powered from the power adapter.
If you check my photo and wonder about the two white USB cables and the yellow heat shrink tube → I have two 5V 3A power adapters, each with a red wire if needed. I’m currently using only one of these adapters, which powers the external LED strip. Grounds from both adapters are connected together, and they are also connected to the other grounds.
# ESP Voice Assistant
# 07 Oct 2024 (v 3.0.0)
# - Based on the ESP32-S3_BOX version
# adapted to work without a screen but with I2s amp and mic
# Added some LEDs for interacting
# Request and response sensors in HA
# Made to work in continuous mode
#
# by A.A. van Zoelen
# Based on the work of Giants
# Voice puck location: WOONKAMER
substitutions:
# Phases of the Voice Assistant
# IDLE: The voice assistant is ready to be triggered by a wake-word
voice_assist_idle_phase_id: '1'
# LISTENING: The voice assistant is ready to listen to a voice command (after being triggered by the wake word)
voice_assist_listening_phase_id: '2'
# THINKING: The voice assistant is currently processing the command
voice_assist_thinking_phase_id: '3'
# REPLYING: The voice assistant is replying to the command
voice_assist_replying_phase_id: '4'
# NOT_READY: The voice assistant is not ready
voice_assist_not_ready_phase_id: '10'
# ERROR: The voice assistant encountered an error
voice_assist_error_phase_id: '11'
# MUTED: The voice assistant is muted and will not reply to a wake-word
voice_assist_muted_phase_id: '12'
esphome:
name: "esp-assistant-vp"
friendly_name: ESP Assistant VP
project:
name: AA_van_Zoelen.VoicePuck
version: '4.0.0'
on_boot:
priority: 600
then:
- light.turn_on:
id: led_strip
blue: 0%
red: 0%
green: 100%
brightness: 50%
effect: "scanning"
- delay: 30s
- if:
condition:
lambda: return id(init_in_progress);
then:
- lambda: id(init_in_progress) = false;
- light.turn_off:
id: led_strip
esp32:
board: esp32-s3-devkitc-1
cpu_frequency: 240MHz
variant: esp32s3
flash_size: 16MB
framework:
type: esp-idf
version: recommended
sdkconfig_options:
CONFIG_ESP32S3_DATA_CACHE_64KB: "y"
CONFIG_ESP32S3_DATA_CACHE_LINE_64B: "y"
CONFIG_ESP32S3_INSTRUCTION_CACHE_32KB: "y"
CONFIG_SPIRAM_RODATA: "y"
CONFIG_SPIRAM_FETCH_INSTRUCTIONS: "y"
CONFIG_BT_ALLOCATION_FROM_SPIRAM_FIRST: "y"
CONFIG_BT_BLE_DYNAMIC_ENV_MEMORY: "y"
CONFIG_MBEDTLS_EXTERNAL_MEM_ALLOC: "y"
CONFIG_MBEDTLS_SSL_PROTO_TLS1_3: "y"
#network:
# enable_ipv6: true
# Enable logging
logger:
# Enable Home Assistant API
api:
encryption:
key: "XXXXXXXXXXXXXXXX"
actions:
- action: start_va
then:
- voice_assistant.start
- action: stop_va
then:
- voice_assistant.stop
ota:
- platform: esphome
password: "XXXXXXXXXXXXX"
wifi:
ssid: "XXXXXXXXX"
password: "XXXXXXXXXX"
output_power: 8.5dB
manual_ip:
static_ip: 192.168.86.150
gateway: 192.168.86.1
subnet: 255.255.255.0
dns1: 192.168.86.1
dns2: 8.8.8.8
# If the device connects, or disconnects, to the Wifi: Run the script to refresh the LED status
on_connect:
# - script.execute: led_off
- light.turn_off:
id: led_strip
on_disconnect:
# - script.execute: control_led
- light.turn_on:
id: led_strip
blue: 0%
red: 100%
green: 0%
brightness: 98%
effect: "Fast Pulse"
# Enable fallback hotspot (captive portal) in case wifi connection fails
ap:
ssid: "XXXXXXXXXX"
password: "XXXXXXXXXXX"
psram:
mode: octal
speed: 80MHz
i2s_audio:
- id: i2s_in
i2s_lrclk_pin: GPIO18 #WS
i2s_bclk_pin: GPIO2 #SCK
- id: i2s_out
i2s_lrclk_pin: GPIO15
i2s_bclk_pin: GPIO16
microphone:
- platform: i2s_audio
id: mic_id
adc_type: external
i2s_audio_id: i2s_in
i2s_din_pin: GPIO4 #SD
channel: left
speaker:
- platform: i2s_audio
id: speaker_id
i2s_audio_id: i2s_out
dac_type: external
i2s_dout_pin:
number: GPIO14 #DIN Pin of the MAX98357A Audio Amplifier
sample_rate: 48000
buffer_duration: 90ms
- platform: mixer
id: mixer_speaker_id
output_speaker: speaker_id
source_speakers:
- id: announcement_spk_mixer_input
- id: media_spk_mixer_input
- platform: resampler
id: media_spk_resampling_input
output_speaker: media_spk_mixer_input
- platform: resampler
id: announcement_spk_resampling_input
output_speaker: announcement_spk_mixer_input
globals:
# Global initialisation variable. Initialized to true and set to false once everything is connected. Only used to have a smooth "plugging" experience
- id: init_in_progress
type: bool
restore_value: no
initial_value: 'true'
# Global variable tracking the phase of the voice assistant (defined above). Initialized to not_ready
- id: voice_assistant_phase
type: int
restore_value: no
initial_value: ${voice_assist_not_ready_phase_id}
# Variable for tracking TTS triggering
- id: is_tts_active
type: bool
restore_value: no
initial_value: 'false'
# Variable for tracking built-in continued conversations
- id: question_flag
type: bool
restore_value: no
initial_value: 'false'
# Variable for tracking ww
- id: last_wake_word
type: std::string
restore_value: no
initial_value: '""'
light:
- platform: esp32_rmt_led_strip
rgb_order: GRB
pin: GPIO10 #GPIO48 # On board light
num_leds: 5
chipset: WS2812
name: "Status LED"
id: led_strip
disabled_by_default: False
entity_category: config
icon: mdi:led-on
default_transition_length: 0s
effects:
- pulse:
name: "Slow Pulse"
transition_length: 770ms
update_interval: 770ms
min_brightness: 10%
max_brightness: 20%
- pulse:
name: "Fast Pulse"
transition_length: 100ms
update_interval: 100ms
min_brightness: 60%
max_brightness: 80%
- addressable_scan:
name: "Scanning"
move_interval: 120ms
scan_width: 1
- pulse:
name: "Waiting for wake word"
min_brightness: 15%
max_brightness: 35%
transition_length: 3s # defaults to 1s
update_interval: 3s
media_player:
- platform: speaker
name: None
id: speaker_media_player_id
media_pipeline:
speaker: media_spk_resampling_input
num_channels: 1
announcement_pipeline:
speaker: announcement_spk_resampling_input
num_channels: 1
on_announcement:
- mixer_speaker.apply_ducking:
id: media_spk_mixer_input
decibel_reduction: 25
duration: 0.2s
on_state:
- delay: 0.7s
- if:
condition:
and:
- not:
voice_assistant.is_running:
- not:
media_player.is_announcing:
then:
- mixer_speaker.apply_ducking:
id: media_spk_mixer_input
decibel_reduction: !lambda |-
return id(ducking_decibel).state;
duration: 1.0s
files:
- id: alarm_sound
file: https://github.com/mitrokun/esp32s3-voice_assistant/raw/main/alarm.flac # 48000 Hz sample rate, mono or stereo audio, and 16 bps
- id: beep
file: https://github.com/mitrokun/esp32s3-voice_assistant/raw/main/r2d2d.flac
voice_assistant:
id: va
microphone:
microphone: mic_id
gain_factor: 16
media_player: speaker_media_player_id
micro_wake_word: mww
noise_suppression_level: 2.0
auto_gain: 0 dbfs
volume_multiplier: 1
# When the voice assistant connects to HA:
# Set init_in_progress to false (Initialization is over).
# If the switch is on, start the voice assistant
on_client_connected:
- lambda: id(init_in_progress) = false;
- if:
condition:
switch.is_on: voice_enabled
then:
- micro_wake_word.start
- lambda: id(voice_assistant_phase) = ${voice_assist_idle_phase_id};
else:
- lambda: id(voice_assistant_phase) = ${voice_assist_muted_phase_id};
- light.turn_on:
id: led_strip
blue: 10%
red: 10%
green: 100%
effect: "Waiting for wake word"
- delay: 5s
- light.turn_off:
id: led_strip
# When the voice assistant disconnects to HA:
# Stop the voice assistant
on_client_disconnected:
- lambda: id(voice_assistant_phase) = ${voice_assist_not_ready_phase_id};
- micro_wake_word.stop
on_listening:
# Reset flags
- lambda: |-
id(voice_assistant_phase) = ${voice_assist_listening_phase_id};
id(is_tts_active) = false;
id(question_flag) = false;
# Microphone operation indicator (red led)
- light.turn_on:
id: led_strip
blue: 0%
red: 100%
green: 0%
brightness: 50%
effect: "Fast Pulse"
# Waiting for speech for 4 seconds, otherwise exit
- script.execute: listening_timeout
on_stt_vad_start:
# Turn off the script if speech is detected
- script.stop: listening_timeout
on_stt_vad_end:
- light.turn_off:
id: led_strip
- lambda: id(voice_assistant_phase) = ${voice_assist_thinking_phase_id};
on_stt_end:
# Event for HA with recognized speech
- homeassistant.event:
event: esphome.stt_text
data:
text: !lambda return x;
on_intent_progress:
- if:
condition:
# A nonempty x variable means a streaming TTS url was sent to the media player
lambda: 'return !x.empty();'
then:
- lambda: id(voice_assistant_phase) = ${voice_assist_replying_phase_id};
# Set the flag when the stage is reached
- lambda: |-
id(is_tts_active) = true;
# Start a script that would potentially enable the stop word if the response is longer than a second
- script.execute: activate_stop_word_once
on_tts_start:
- if:
condition:
# The intent_progress trigger didn't start the TTS Response
lambda: 'return id(voice_assistant_phase) != ${voice_assist_replying_phase_id};'
then:
- lambda: id(voice_assistant_phase) = ${voice_assist_replying_phase_id};
# Start a script that would potentially enable the stop word if the response is longer than a second
- script.execute: activate_stop_word_once
# Finding a question mark at the end of a sentence.
- lambda: |-
bool is_question = false;
if (!x.empty() && x.back() == '?') {
is_question = true;
}
id(question_flag) = is_question;
# - logger.log:
# format: "question_flag: %d (0=false, 1=true)"
# args:
# - id(question_flag)
on_tts_end:
- if:
condition:
switch.is_on: extended_dialog
then:
- lambda: |-
id(is_tts_active) = true;
on_timer_finished:
then:
- switch.turn_on: timer_ringing
on_end:
# Additional check for microphone LED
- if:
condition:
- light.is_on: led_strip
then:
- light.turn_off:
id: led_strip
- wait_until:
condition:
- media_player.is_announcing:
timeout: 0.5s
- wait_until:
not:
voice_assistant.is_running:
- delay: 0.5s
# New start of the pipeline if the conditions are met
- if:
condition:
and:
- switch.is_on: continued_conversation_enabled
- lambda: 'return !id(question_flag);'
- lambda: 'return id(is_tts_active);'
- lambda: 'return id(last_wake_word) != "Stop";'
then:
- voice_assistant.start:
wake_word: !lambda return id(last_wake_word);
else:
# Stop ducking audio.
- mixer_speaker.apply_ducking:
id: media_spk_mixer_input
decibel_reduction: !lambda |-
return id(ducking_decibel).state;
duration: 1.0s
- lambda: id(voice_assistant_phase) = ${voice_assist_idle_phase_id};
# When the voice assistant encounters an error:
# Wait 1 second and set the correct phase (idle or muted depending on the state of the switch)
on_error:
- if:
condition:
lambda: return !id(init_in_progress);
then:
- lambda: id(voice_assistant_phase) = ${voice_assist_error_phase_id};
- delay: 1s
- if:
condition:
switch.is_on: voice_enabled
then:
- lambda: id(voice_assistant_phase) = ${voice_assist_idle_phase_id};
else:
- lambda: id(voice_assistant_phase) = ${voice_assist_muted_phase_id};
micro_wake_word:
models:
- model: https://github.com/kahrendt/microWakeWord/releases/download/okay_nabu_20241226.3/okay_nabu.json
id: okay_nabu
- model: https://raw.githubusercontent.com/Darkmadda/ha-v-pe/refs/heads/main/hey_glados.json
id: hey_glados
- model: https://github.com/kahrendt/microWakeWord/releases/download/stop/stop.json
id: stop
internal: true
vad:
model: https://github.com/kahrendt/microWakeWord/releases/download/v2.1_models/vad.json
id: mww
stop_after_detection: false
on_wake_word_detected:
- if:
condition:
switch.is_on: timer_ringing
then:
- switch.turn_off: timer_ringing
else:
- if:
condition:
switch.is_on: voice_enabled
then:
- if:
condition:
voice_assistant.is_running:
# Restart the pipeline if Continued conversation is enabled
# Or stop it completely by saying “Stop”
then:
- lambda: id(last_wake_word) = wake_word;
- delay: 100ms
- voice_assistant.stop:
# Stop any other media player announcement
else:
- if:
condition:
media_player.is_announcing:
then:
- media_player.stop:
announcement: true
# Start the voice assistant and play the wake sound, if enabled
else:
- lambda: id(last_wake_word) = wake_word;
- script.execute:
id: play_sound
priority: true
sound_file: !lambda return id(beep);
- delay: 280ms
# - media_player.speaker.play_on_device_media_file:
# media_file: beep
# announcement: true
# - delay: 300ms
- voice_assistant.start:
wake_word: !lambda return wake_word;
script:
# - id: led_off
# then:
# - light.turn_off:
# id: led_strip
- id: listening_timeout
mode: restart
then:
- delay: 3s
- if:
condition:
lambda: |-
return id(voice_assistant_phase) == 2;
then:
# BARM - switch.turn_off: wake_led
- light.turn_off:
id: led_strip
- voice_assistant.stop:
- lambda: id(voice_assistant_phase) = ${voice_assist_idle_phase_id};
- id: activate_stop_word_once
then:
- delay: 1s
# Enable stop wake word
- if:
condition:
switch.is_off: timer_ringing
then:
- micro_wake_word.enable_model: stop
- wait_until:
not:
media_player.is_announcing:
- if:
condition:
switch.is_off: timer_ringing
then:
- micro_wake_word.disable_model: stop
- id: play_sound
parameters:
priority: bool
sound_file: "audio::AudioFile*"
then:
- lambda: |-
if (priority) {
id(speaker_media_player_id)
->make_call()
.set_command(media_player::MediaPlayerCommand::MEDIA_PLAYER_COMMAND_STOP)
.set_announcement(true)
.perform();
}
if ( (id(speaker_media_player_id).state != media_player::MediaPlayerState::MEDIA_PLAYER_STATE_ANNOUNCING ) || priority) {
id(speaker_media_player_id)
->play_file(sound_file, true, false);
}
select:
- platform: template
name: "Wake word sensitivity"
optimistic: true
initial_option: Slightly sensitive
restore_value: true
entity_category: config
options:
- Slightly sensitive
- Slightly+ sensitive
- Moderately sensitive
- Very sensitive
on_value:
# Sets specific wake word probabilities computed for each particular model
# Note probability cutoffs are set as a quantized uint8 value, each comment has the corresponding floating point cutoff
# False Accepts per Hour values are tested against all units and channels from the Dinner Party Corpus.
# These cutoffs apply only to the specific models included in the firmware: [email protected], hey_jarvis@v2, hey_mycroft@v2
lambda: |-
if (x == "Slightly sensitive") {
id(okay_nabu).set_probability_cutoff(217); // 0.85 -> 0.000 FAPH on DipCo (Manifest's default)
} else if (x == "Slightly+ sensitive") {
id(okay_nabu).set_probability_cutoff(191); // 0.75
} else if (x == "Moderately sensitive") {
id(okay_nabu).set_probability_cutoff(176); // 0.69 -> 0.376 FAPH on DipCo
} else if (x == "Very sensitive") {
id(okay_nabu).set_probability_cutoff(143); // 0.56 -> 0.751 FAPH on DipCo
}
- platform: logger
id: logger_select
name: Logger Level
disabled_by_default: true
button:
- platform: restart
name: reboot
number:
- platform: template
name: "Decibel Reduction"
id: ducking_decibel
min_value: 0
max_value: 12
step: 1
initial_value: 0
unit_of_measurement: "dB"
set_action:
- mixer_speaker.apply_ducking:
id: media_spk_mixer_input
decibel_reduction: !lambda 'return (int)x;'
duration: 0.2s
switch:
- platform: template
name: Enable Voice Assistant
id: voice_enabled
optimistic: true
restore_mode: RESTORE_DEFAULT_ON
icon: mdi:assistant
# When the switch is turned on (on Home Assistant):
# Start the voice assistant component
on_turn_on:
- if:
condition:
lambda: return !id(init_in_progress);
then:
- lambda: id(voice_assistant_phase) = ${voice_assist_idle_phase_id};
- if:
condition:
not:
- voice_assistant.is_running
then:
- micro_wake_word.start
- light.turn_on:
id: led_strip
blue: 15%
red: 0%
green: 15%
brightness: 20%
effect: "Slow Pulse"
# When the switch is turned off (on Home Assistant):
# Stop the voice assistant component
on_turn_off:
- if:
condition:
lambda: return !id(init_in_progress);
then:
- voice_assistant.stop
- micro_wake_word.stop
- lambda: id(voice_assistant_phase) = ${voice_assist_muted_phase_id};
- light.turn_off:
id: led_strip
- platform: template
name: "Ring Timer"
id: timer_ringing
optimistic: true
restore_mode: ALWAYS_OFF
on_turn_off:
# Stop playing the alarm
- media_player.stop:
announcement: true
on_turn_on:
- while:
condition:
switch.is_on: timer_ringing
then:
# Play the alarm sound as an announcement
- media_player.speaker.play_on_device_media_file:
media_file: alarm_sound
announcement: true
# Wait until the alarm sound starts playing
- wait_until:
media_player.is_announcing:
# Wait until the alarm sound stops playing
- wait_until:
not:
media_player.is_announcing:
- delay: 1000ms
- platform: template
name: Continued Conversation
id: continued_conversation_enabled
optimistic: true
restore_mode: RESTORE_DEFAULT_ON
icon: mdi:chat-processing-outline
- platform: template
name: Continued Conversation+
id: extended_dialog
optimistic: true
restore_mode: RESTORE_DEFAULT_OFF
icon: mdi:chat-plus-outline
EDIT: To avoid any misunderstandings about the power adapter, I made a new power cable where ground and + are simply connected without any strange variables. Attached is a picture of the wiring jungle, but the end result is the same → the LED strip does not work.
To me the LED strip looks strange.
Is this a WS2812-chipset LED strip?
How to debug?
Start simple: make a YAML that only controls the LED strip.
For now, save your voice device YAML and strip everything from it except the light.
If this compiles without error but still isn't working, then your LED strip might have a different chipset.
Here you can find the available chipsets for the RMT platform:
https://esphome.io/components/light/esp32_rmt_led_strip/
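If the strip turns out not to be WS2812, usually only the chipset: line (and possibly rgb_order) needs to change. A hedged sketch for an SK6812 strip, as one example from that docs page:

```yaml
# Example: same light block, but for an SK6812 strip instead of WS2812
light:
  - platform: esp32_rmt_led_strip
    rgb_order: GRB
    pin: GPIO10
    num_leds: 5
    chipset: SK6812
    # is_rgbw: true  # only for RGBW variants of SK6812
    name: "Status LED"
```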
# ESP LED strip test
# 24 Nov 2025 (v 1.0.0)
#
# by A.A. van Zoelen
#
# Just keep this naming so no new entities will be created.
#
esphome:
  name: "esp-assistant-vp"
  friendly_name: ESP Assistant VP
  project:
    name: AA_van_Zoelen.VoicePuck
    version: '4.0.0'

esp32:
  board: esp32-s3-devkitc-1
  cpu_frequency: 240MHz
  variant: esp32s3
  flash_size: 16MB
  framework:
    type: esp-idf
    version: recommended
    sdkconfig_options:
      CONFIG_ESP32S3_DATA_CACHE_64KB: "y"
      CONFIG_ESP32S3_DATA_CACHE_LINE_64B: "y"
      CONFIG_ESP32S3_INSTRUCTION_CACHE_32KB: "y"
      CONFIG_SPIRAM_RODATA: "y"
      CONFIG_SPIRAM_FETCH_INSTRUCTIONS: "y"
      CONFIG_BT_ALLOCATION_FROM_SPIRAM_FIRST: "y"
      CONFIG_BT_BLE_DYNAMIC_ENV_MEMORY: "y"
      CONFIG_MBEDTLS_EXTERNAL_MEM_ALLOC: "y"
      CONFIG_MBEDTLS_SSL_PROTO_TLS1_3: "y"

#network:
#  enable_ipv6: true

# Enable logging
logger:

# Enable Home Assistant API
api:
  encryption:
    key: "XXXXXXXXXXXXXXXX"

ota:
  - platform: esphome
    password: "XXXXXXXXXXXXX"

wifi:
  ssid: "XXXXXXXXX"
  password: "XXXXXXXXXX"
  output_power: 8.5dB
  manual_ip:
    static_ip: 192.168.86.150
    gateway: 192.168.86.1
    subnet: 255.255.255.0
    dns1: 192.168.86.1
    dns2: 8.8.8.8
  # Enable fallback hotspot (captive portal) in case wifi connection fails
  ap:
    ssid: "XXXXXXXXXX"
    password: "XXXXXXXXXXX"

psram:
  mode: octal
  speed: 80MHz

light:
  - platform: esp32_rmt_led_strip
    rgb_order: GRB
    pin: GPIO10 #GPIO48 # On board light
    num_leds: 5
    chipset: WS2812
    name: "Status LED"
    id: led_strip
    disabled_by_default: False
    entity_category: config
    icon: mdi:led-on
    default_transition_length: 0s
    effects:
      - pulse:
          name: "Slow Pulse"
          transition_length: 770ms
          update_interval: 770ms
          min_brightness: 10%
          max_brightness: 20%
      - pulse:
          name: "Fast Pulse"
          transition_length: 100ms
          update_interval: 100ms
          min_brightness: 60%
          max_brightness: 80%
Thanks. The LED strip test was successful. I installed a new ESP board and tested it there. Next I'm going to copy the full YAML to the new board.
EDIT: The second ESP worked perfectly right away with the previous YAML code, straight from the GPIO10 pin. I copied the code to the problematic ESP, and after installation this ESP also started working as it should. So it's a completely mysterious thing. Could the first installation have been broken somehow, even though nothing was actually found? I think I'm not going to power the LED strip with a power adapter at all; it seems to work just fine when powered from the ESP's 5V pin.
Thank you for your help and now I can start testing properly.
I just noticed that the timer feature exists, and this is something I've been missing from Google Home. Great thing.
Another thing to consider is playing music with voice commands. Probably not possible, or what do you think? As you know, you can ask Google to play music from, say, Spotify. Is there a similar way to do this?
It's been almost a week of testing now. Three separate ESP devices are installed and in use. Quick notes:
- intermittent reboots for which I haven't found the reason
- I miss a feature in the device manager where I can choose which device the decibel reduction applies to
- the assistant answers with web-sourced responses that include URLs, and the AI configuration instructions don't change this behavior ("do not include URLs…")
- response times are slow and should be faster (currently you have to wait about 10 seconds for a response)
- the "stop" word stops the timer, but doesn't stop the pipeline while the assistant is speaking
Has anyone got the dynamic volume control working on the Voice PE? GitHub - jaapp/ha-voice-dynamic-volume: Adds noise level sensors and dynamic volume control on home-assistant-voice-pe. When trying to load this function, it gives an error about asr_mic, as it's not available.
You probably need to ask this question on a thread about HAVPE.
thank you for the reply




