It is working well for me. I’d be interested to hear how well it works for others too. There was an initial issue for me with the device starting muted and not waking, due to the absence switch defaulting to off; however, I haven’t been able to replicate this.
Remember to clean your build files before install.
ESPHome shared their YAML on GitHub. Curious to know if you used theirs or your own config, specifically in relation to audio (seeing you don’t experience volume issues).
Hm… the first time, the configuration did not work as intended, but I re-uploaded it and now everything seems to work as expected.
Great work & thanks
Now only the I2C issue remains, and I hope the changes someone provided on the GitHub issue will work; then all components will be working on the Box3.
[Update]
I really need to get used to my Box3 display now turning off when I am in front of the computer but not moving enough for the radar to recognize me.
It feels a bit strange after weeks of the display running continuously; I keep thinking the device is crashing and rebooting lol.
You could always increase the presence duration from the default of 60 seconds. I’ve allowed a maximum of 5 minutes; it’d be hard to stay still for 5 minutes.
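For readers who want to try this in their own config: one way to express a longer presence duration in ESPHome is the `delayed_off` filter on a binary sensor. A minimal sketch, assuming the radar’s occupancy output is exposed as a `binary_sensor` (the platform and pin below are hypothetical placeholders, not the actual Box3 config):

```yaml
binary_sensor:
  - platform: gpio        # hypothetical: use however your radar exposes occupancy
    pin: GPIO21           # hypothetical pin
    name: "Presence"
    filters:
      # Hold the sensor "on" for 5 min after the last detection,
      # instead of the 60 s default discussed above.
      - delayed_off: 5min
```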
In the longer term, when we get the I2C bus working nicely with the VA side, we may be able to tweak the radar sensitivity.
Right now I am not sure whether it is related to the integration of the presence detector, but it seems my wake-word detection has failed… It is much the same behaviour I observed when the I2C was configured…
Even after uploading the configuration, the device does not seem to detect the wake word.
I would like to know whether this also happens for anyone else using the radar sensor.
I saw an error that might be related, but unfortunately I had to reboot my system for other reasons (installed updates and a new integration) before I made a copy of the message…
My BOX-3 is not processing its wake word anymore. My two Atom Echos now also fail to process their wake words. Something is off. Have to start troubleshooting…
Studio Code Server (5.14.2), Home Assistant Google Drive Backup (0.112.1), InfluxDB (4.8.0), Grafana (9.1.1), Terminal & SSH (9.8.1), File editor (5.7.0), Samba share (12.2.0), Whisper (1.0.0), Piper (1.4.0), openWakeWord (1.8.2), ESPHome (beta) (2023.12.5)
After the updates I rebooted my Home Assistant VM instance on my NAS. Still no luck: the BOX-3 wake word remains unavailable, and the Atoms don’t react either.
When I “mute”, the display goes black. So I don’t understand this ‘change’ in function for both toggles. Very confusing.
Edit 13:12: all is working again after I gave the command “Update All” in the ESPHome (beta) tab:
2024-01-06 13:05:11,069 INFO 304 GET /devices (0.0.0.0) 1.88ms
2024-01-06 13:05:15,525 INFO 200 GET /edit?configuration=esp32-s3-box-3.yaml (0.0.0.0) 2.33ms
2024-01-06 13:05:15,602 INFO 101 GET /ace (0.0.0.0) 3.03ms
2024-01-06 13:05:15,604 INFO Running command 'esphome --dashboard -q vscode --ace /config/esphome'
2024-01-06 13:05:18,994 INFO 304 GET /devices (0.0.0.0) 1.98ms
2024-01-06 13:05:20,430 INFO 200 GET /edit?configuration=everything-presence-lite-301838.yaml (0.0.0.0) 1.59ms
2024-01-06 13:05:20,449 INFO 101 GET /ace (0.0.0.0) 0.53ms
2024-01-06 13:05:20,462 INFO Running command 'esphome --dashboard -q vscode --ace /config/esphome'
2024-01-06 13:05:22,009 INFO 200 GET /static/schema/substitutions.json (0.0.0.0) 2.67ms
2024-01-06 13:05:22,807 INFO 304 GET /devices (0.0.0.0) 1.76ms
2024-01-06 13:05:23,877 INFO 304 GET /edit?configuration=m5stack-atom-echo-803494.yaml (0.0.0.0) 1.22ms
2024-01-06 13:05:23,893 INFO 101 GET /ace (0.0.0.0) 0.77ms
2024-01-06 13:05:23,895 INFO Running command 'esphome --dashboard -q vscode --ace /config/esphome'
2024-01-06 13:05:26,661 INFO 304 GET /devices (0.0.0.0) 1.92ms
2024-01-06 13:05:29,215 INFO 304 GET /edit?configuration=m5stack-atom-echo-80b520.yaml (0.0.0.0) 1.29ms
2024-01-06 13:05:29,247 INFO 101 GET /ace (0.0.0.0) 0.66ms
2024-01-06 13:05:29,250 INFO Running command 'esphome --dashboard -q vscode --ace /config/esphome'
2024-01-06 13:05:31,273 INFO 304 GET /devices (0.0.0.0) 5.61ms
2024-01-06 13:05:36,267 INFO 304 GET /devices (0.0.0.0) 1.41ms
2024-01-06 13:05:41,273 INFO 304 GET /devices (0.0.0.0) 2.10ms
2024-01-06 13:05:46,277 INFO 304 GET /devices (0.0.0.0) 3.33ms
2024-01-06 13:05:51,284 INFO 304 GET /devices (0.0.0.0) 2.65ms
2024-01-06 13:05:56,282 INFO 304 GET /devices (0.0.0.0) 1.30ms
2024-01-06 13:06:01,283 INFO 304 GET /devices (0.0.0.0) 1.31ms
2024-01-06 13:06:06,283 INFO 304 GET /devices (0.0.0.0) 1.34ms
2024-01-06 13:07:26,607 INFO 200 GET /devices (0.0.0.0) 1.41ms
2024-01-06 13:09:44,238 INFO 304 GET / (0.0.0.0) 1.88ms
2024-01-06 13:09:44,286 INFO 304 GET /devices (0.0.0.0) 3.15ms
2024-01-06 13:09:49,284 INFO 304 GET /devices (0.0.0.0) 1.75ms
The Ollama integration adds a conversation agent powered by Ollama in Home Assistant.
This conversation agent is unable to control your house. The Ollama conversation agent can be used in automations, but not as a sentence trigger. It can only query information that has been provided by Home Assistant. To be able to answer questions about your house, Home Assistant will need to provide Ollama with the details of your house, which include areas, devices and their states.
This is a custom HACS integration for Home Assistant.
Thanks a lot. First, some info for forum readers: I am an AI guy, and I often see on YouTube or read on forums that local models are considered slow or not as good. The thing is, it is entirely up to your budget. Smart-home users often spend 1k USD or more on home theater systems, and if a 100% private AI assistant is important to you, then for about the same spending on hardware you can have one.
You need a computer that runs 24/7/365 with a good AI-capable NVIDIA GPU (a 4090, an A6000, or an older Tesla T4), and then you can run open-source, 7B- or 13B-sized, fine-tuned and even UNCENSORED, almost weekly updated AI models, which are already at ChatGPT-3.5 level, and some at ChatGPT-4 level. Side note: this hardware can be set up as a Proxmox hypervisor, so you can also use the same GPU as a virtual GPU for other AI tasks, for example computer-vision AI with Frigate home security.
Users often back off due to the software complexity. Yes, you need Linux and AI experience and the ability to troubleshoot an AI setup; however, here comes the good news: ollama.ai offloads all the pain of software setup and troubleshooting from you. This is the reason I highly recommend Ollama over other local AI solutions (e.g. localai.io) for people without AI experience, like average Home Assistant users. So hopefully people will stop promoting local AI as “slow”, because this is simply not true unless you don’t have the hardware. Back to the topic: I just really like that Ollama has already been discovered by the Home Assistant community.
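To illustrate why the barrier is so low: once Ollama is running, it exposes a plain REST API on its default port 11434, so any script can query the local model. A minimal sketch (the model name and prompt are placeholders; this is not part of the Home Assistant integration itself):

```python
import json
import urllib.request

# Default endpoint of a locally running Ollama server.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> bytes:
    """Serialize a non-streaming generate request for Ollama's REST API."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local model and return its full response text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires `ollama run mistral` (or any pulled model) on this machine.
    print(ask("mistral", "In one sentence, what is Home Assistant?"))
```

The same endpoint is what integrations talk to under the hood, which is why no extra software plumbing is needed beyond installing Ollama and pulling a model.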