- A default code can now be set in the entity settings for every alarm control panel entity. Nice work @gjohansson-ST!
I don’t see this documented, so I have no idea how to do this.
Is this available only via the UI and not YAML?
Looks like Hunter Hydrawise has an issue in 2024.6.0:
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/helpers/update_coordinator.py", line 312, in _async_refresh
    self.data = await self._async_update_data()
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/homeassistant/homeassistant/components/hydrawise/coordinator.py", line 43, in _async_update_data
    user = await self.api.get_user()
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/pydrawise/client.py", line 119, in get_user
    return deserialize(User, result["me"])
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/pydrawise/schema_utils.py", line 25, in deserialize
    return _deserialize(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/apischema/deserialization/__init__.py", line 887, in deserialize
    return deserialization_method(
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "apischema/deserialization/methods.pyx", line 504, in apischema.deserialization.methods.ObjectMethod.deserialize
  ...
  File "apischema/deserialization/methods.pyx", line 1078, in apischema.deserialization.methods.ObjectMethod_deserialize
  File "apischema/deserialization/methods.pyx", line 1348, in apischema.deserialization.methods.Constructor_construct
  File "apischema/deserialization/methods.pyx", line 1227, in apischema.deserialization.methods.RawConstructor_construct
  File "<string>", line 3, in __init__
TypeError: function missing required argument 'year' (pos 1)
I’m going to start digging into the LLM support. Is it designed to be backend-agnostic? I want to connect to koboldcpp/llamacpp, which have mature, open-source APIs. I don’t see a reason it should be locked down to proprietary LLMs, but even so, if we can change the target host/port, it’d be possible to run an OpenAI proxy. Has that been considered during development yet?
I feel like HA is losing its path. Why would someone who cares about privacy want Google or OpenAI to know what they do?
At the end of the release notes there is a link to a developer blog post, which in turn links to further documentation (I didn’t read it in detail, but a first scan seems useful).
I was so pleased about the collapsible sections feature in blueprints.
I implemented it straight away in the beta phase and wouldn’t want to be without it.
My “Cover Control Automation (CCA)” blueprint has therefore become much clearer.
Thank you very much.
The thing is, LLMs are quite trivial to interface with. There’s a generate endpoint where you send a prompt and get a response, either synchronously or asynchronously. You can specify the format you want the output in as part of the prompt. There’s nothing keeping HA (or contributing devs) from making an interface layer that makes it backend-agnostic. I want to look into doing that if they haven’t designed it with that in mind yet.
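To give an idea of what I mean, here is a minimal sketch of talking to such an endpoint. The host, port, and payload keys follow koboldcpp’s /api/v1/generate as far as I know (llama.cpp’s server exposes a similar /completion endpoint), so treat them as assumptions and check your backend’s docs:

import requests

# Minimal sketch of a backend-agnostic "generate" call.
# Assumes a koboldcpp-style /api/v1/generate endpoint on localhost:5001;
# the URL, port and payload keys are assumptions, adjust for your backend.
def generate(prompt: str, max_length: int = 200) -> str:
    resp = requests.post(
        "http://localhost:5001/api/v1/generate",
        json={"prompt": prompt, "max_length": max_length},
        timeout=60,
    )
    resp.raise_for_status()
    # koboldcpp returns {"results": [{"text": "..."}]}
    return resp.json()["results"][0]["text"]

print(generate("List three things a smart home could automate:"))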
pls bring back the epsonworkforce integration because it’s working fine!
Hi @dwd
the Epson Workforce integration was already removed in 2024.5 (see “Farewell to the following”) for a good reason.
While I happen to agree that privacy is important, not everyone uses Home Assistant for privacy reasons, and I think it’s fair to add features that the privacy-conscious might not use (as long as said features are optional).
Hi @parautenbach
it is more of a developer-related change; see the Alarm Control Panel Entity code validation blog post, which is also mentioned in the release notes.
Local LLMs are definitely coming; the Home Assistant team cares too much about choice and privacy to let something like that slip. It wasn’t long ago they said “With Home Assistant you can be guaranteed two things: there will be options and one of those options will be local”, and I don’t think this release is in any way walking back on that. It’s just that running LLMs locally is complicated, and hitting a web endpoint is trivial. It’ll take some time for the local options to be ready, so it makes sense for them to release the part that works right now, and that involves an endpoint. In the meantime, there’s nothing stopping you from setting up your own endpoint with LocalAI and using the OpenAI integration with it instead.
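For reference, this is roughly what that looks like outside of HA. The base_url, port, and model name below are placeholders for whatever your LocalAI instance serves; the point is that it speaks the same chat-completions API the official openai client expects:

from openai import OpenAI

# Point the official OpenAI client at a local, OpenAI-compatible server.
# base_url, api_key and model are placeholders for your own LocalAI setup.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed-locally")

reply = client.chat.completions.create(
    model="mistral-7b-instruct",
    messages=[{"role": "user", "content": "Which lights should be off at night?"}],
)
print(reply.choices[0].message.content)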
I wonder if the phrase “Dipping Our Toes” isn’t translating well for some folks? As a native English speaker (and from the US), that immediately keyed me in to the fact that this is a very first step, obviously not the end.
now notice we have a UI for the sensor platform of the file integration too?
I did migrate my notify services and deleted their YAML, but I believe we can do away with this too:
sensor:
  - platform: file
    file_path: /config/logging/filed/filed_notifications.txt
    name: Filed notifications
    <<: &truncate_value
      value_template: >
        {% if value is not none %}
          {% if value|length < 255 %} {{value}}
          {% else %} Truncated: {{value|truncate(240,True, '')}}
          {% endif %}
        {% endif %}
given the fact we can now add this via the UI, and next set it up, even with a template:
Missed that during beta, and still can’t find it in the Release notes. Or did we have that already …?
adding those yaml settings results in a nice new sensor.file:
hmm must check why it still shows the last beta though
There is no way to get to the template of a particular sensor once it is created, however, so that might be something to look out for in a follow-up development of the integrations UI.
checking the entity_registry on the sensors, there is no template registered:
{"aliases":[],"area_id":null,"categories":{},"capabilities":null,"config_entry_id":"1725ccfbcc9b9b3ed8d3ecd7f18af2f3","device_class":null,"device_id":null,"disabled_by":null,"entity_category":null,"entity_id":"sensor.file","hidden_by":null,"icon":null,"id":"42df2d4fa1097f3d5f3711af24f57f12","has_entity_name":false,"labels":[],"name":null,"options":{"cloud.google_assistant":{"should_expose":false},"conversation":{"should_expose":false}},"original_device_class":null,"original_icon":"mdi:file","original_name":"File","platform":"file","supported_features":0,"translation_key":null,"unique_id":"1725ccfbcc9b9b3ed8d3ecd7f18af2f3","previous_unique_id":null,"unit_of_measurement":null}
so I am not sure this will actually truncate those longer strings?
Agreed, I think most of the work here is the pipeline between the HA instance and its prompts (entity names, etc.), and between the HA instance and the response. What happens between the prompt and the response just goes through the API.
In that sense it makes the most sense to get rolling using the OpenAI API so people can test and try things out. I’m running Ollama locally now, but while the local solutions are working well, they are also changing fast, with APIs still shifting, and setting them up can be tricky, especially if you are hoping to use your GPU for larger models.
I’m sure this is ultimately the goal to support; it just takes time, which runs in parallel to the time needed to integrate the responses with HA’s controls (which is what we are seeing here).
Is there documentation on how to change the background picture? I am not finding it.
The most useful application of LLMs in HA would be to be able to build automations and scripts based on prompts.
“Create an automation that turns on the living room light when I’m in the room and turns it off automatically when the upstairs is vacant for at least 30 minutes. If it’s after 8pm turn on the lamp instead”
It’s a relatively fixed problem space in terms of variables. It would be a lot easier than using Copilot’s hacked-together YAML, since you could have deep knowledge of the triggers, conditions, and actions available (rough sketch below).
I’d prioritize that higher than Assist integration IMO.
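Purely as a hypothetical sketch of that idea: the entity ids, allowed keys, and prompt wording below are made up, and generate stands in for any LLM call (local or hosted); the point is only that the model’s output can be constrained and validated before anything gets created:

import yaml

# Hypothetical sketch: ask an LLM for an automation and sanity-check the YAML.
# The entity ids, allowed keys and prompt wording are made up for illustration;
# `generate` is any callable that sends a prompt to an LLM and returns text.
ALLOWED_KEYS = {"alias", "trigger", "condition", "action", "mode"}

def draft_automation(request: str, generate) -> dict:
    prompt = (
        "Reply with a Home Assistant automation as YAML only, no commentary. "
        "Use only these entities: light.living_room, light.lamp, "
        "binary_sensor.living_room_occupancy, binary_sensor.upstairs_occupancy.\n"
        f"Request: {request}"
    )
    automation = yaml.safe_load(generate(prompt))
    unknown = set(automation) - ALLOWED_KEYS
    if unknown:
        raise ValueError(f"Unexpected keys from the model: {unknown}")
    return automation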
Edit dashboard > edit specific view > background tab.