Wondering if the new AI Task could also be used as a building block for custom integrations that use AI to provide suggestions and generate automations from those suggestions, such as the AI Automation Suggester and AI Agent HA? Those offer similar “Suggest with AI” buttons inside Home Assistant’s default automation editor to help create automations from natural-language input tailored to your specific setup and environment, or even look at which entities you have in different areas to identify opportunities and suggest automations you might not have thought of yourself.
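For what it’s worth, here is a rough sketch of what I imagine such a building block could look like from the script side, using the new ai_task.generate_data action from the release notes. The script name, task_name, instructions text and the idea of pushing the result into a notification are all my own assumptions, and I’m not certain about the exact shape of the response object:

script:
  suggest_living_room_automation:   # hypothetical script name
    sequence:
      - action: ai_task.generate_data
        data:
          task_name: automation_suggestion
          instructions: >
            Look at the lights, motion sensors and media players in the
            living room and suggest one automation I might not have
            thought of, as a short description plus example YAML.
        response_variable: suggestion
      - action: persistent_notification.create
        data:
          title: AI automation suggestion
          message: "{{ suggestion.data }}"   # assuming the result comes back under 'data'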
I liked the sankey charts most, though as long as the configuration is totally wrong it’s pretty useless.
A battery sits somewhere in between the primary suppliers of energy (grid & solar) and the primary consumers (household & grid).
That means the initial graphic for this release is wrong, since it treats the battery as a black hole where you can drop energy but never get it back.
The same goes for some follow-up graphics where the battery is allowed to serve the household while it’s also possible to export to the grid (depending on prices with a dynamic tariff).
Sankey itself allows for some sort of collector node/level which doesn’t require an entity of its own, so you could have all 3 possible inputs as well as all 3 possible outputs (if not more, depending on the exact installation in the house: wallboxes, water heaters, whatever else runs in parallel to the house consumption).
The sankey chart is not really giving the result I expected and only generates one source, “grid”, while I have 6 meters (which are active one at a time and are time- and day-dependent).
here is my current sankey result for this year:
2nd issue:
The individual device detail graph:
The filters are not applied to the period you are comparing to.
Example: I compare device usage from yesterday to the day before, with all the filters off:
The bar is no longer displayed for the selected period (expected), but the filter is not applied to the period it compares to (NOT expected). Tested with various dates, time spans and entities; the result is the same.
It works again with palettes like the one below in our themes, after adding the ha- prefix:
Dark blue:
# color palette
ha-color-primary-05: '#001321' # Near-black with a cool blue undertone
ha-color-primary-10: '#012744' # Very dark navy blue
ha-color-primary-20: '#06457a' # Deep, rich marine blue
ha-color-primary-30: '#0d60ad' # Bold cobalt-like blue
ha-color-primary-40: '#1675C9' # Vivid, primary-leaning sky blue (base)
ha-color-primary-50: '#3d90d6' # Lighter, softer azure tone
ha-color-primary-60: '#68a9e1' # Balanced light blue, good for highlights
ha-color-primary-70: '#92c2ec' # Gentle sky blue, ideal for backgrounds
ha-color-primary-80: '#c0ddf6' # Very light pastel blue
ha-color-primary-90: '#e1effb' # Pale icy blue, near-white
ha-color-primary-95: '#f1f8fd' # Almost white with a faint blue tint
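In case it helps anyone else, this is roughly how I would expect such a palette to sit inside a theme in configuration.yaml (the theme name “dark-blue” is just a placeholder, and I have only copied a few of the steps here):

frontend:
  themes:
    dark-blue:
      ha-color-primary-05: '#001321'
      ha-color-primary-10: '#012744'
      ha-color-primary-40: '#1675C9'
      ha-color-primary-90: '#e1effb'
      # ...and so on for the remaining steps listed above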
Developers need to think ahead: while using paid cloud services for generative AI is the most common way today, I think many Home Assistant users will soon buy some type of AI base-station appliance (or new NASes and home servers with built-in AI acceleration) to run local LLMs offline at home.
Several open LLM models are already available, and I think hardware capable of running larger LLMs locally at home will become more common and come down in price.
The new OpenAI gpt-oss:20b is small enough to run fairly nicely on an Nvidia RTX 4xxx/5xxx-class card with 16 GB or more VRAM, and easily gives o3-level performance in the homelab at 20-30 tok/sec or better.
It’s the current target platform for home LLMs IMHO if you’re not building some monster Franken-rig. I just put together an eGPU that fits the bill for less than $1,300 USD to add to an existing PC. (Read: unlimited 2024 GPT ‘Pro’-level service, which OpenAI prices at $200 USD/month.)
exactly…
I’ve asked that a couple of times now, not yet answered.
For now this is only supposed to apply to the buttons, and this was explained to me:
With the release of the new button we support semantic tokens, which aim to color all components that use them. At the start it’s just the <ha-button>; in the future there will be more components that can use the same tokens.
However, I am still very much in the dark about when quiet/normal/loud is required/used, or, maybe even more importantly, if and how we could set that in our themes.
With the new button you can theme any of the variables here and change, for example, all fill primary quiet/normal/loud to a different color palette just by changing the “primary” in --ha-color-primary-xx to something else, and do the same for the on primary quiet/normal/loud.
I suppose quiet is cancel, normal is regular OK/Update stuff, and Loud would be Alert. But that is just my guess
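If I understand that explanation correctly, something like the sketch below should re-tint everything that resolves its fill primary quiet/normal/loud tokens from the primary palette (which today is just <ha-button>). The hex values and the theme name are made up; only the ha-color-primary-xx variable names come from the posts above:

frontend:
  themes:
    red-buttons:
      # hypothetical warmer ramp replacing the blue one above
      ha-color-primary-40: '#c62828'
      ha-color-primary-50: '#d4504f'
      ha-color-primary-60: '#e07f7e'
      ha-color-primary-70: '#ecabaa'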
Because not all are created equal, check this unhovered:
# color palette
ha-color-primary-05: '#001321' # Near-black with a cool blue undertone
ha-color-primary-10: '#012744' # Very dark navy blue
ha-color-primary-20: '#06457a' # Deep, rich marine blue
ha-color-primary-30: '#0d60ad' # Bold cobalt-like blue
ha-color-primary-40: '#1675C9' # Vivid, primary-leaning sky blue (base)
ha-color-primary-50: '#3d90d6' # Lighter, softer azure tone
ha-color-primary-60: '#68a9e1' # Balanced light blue, good for highlights
ha-color-primary-70: '#92c2ec' # Gentle sky blue, ideal for backgrounds
ha-color-primary-80: '#c0ddf6' # Very light pastel blue
ha-color-primary-90: '#e1effb' # Pale icy blue, near-white
ha-color-primary-95: '#f1f8fd' # Almost white with a faint blue tint
Yes indeed; however, if users want a slimmer all-in-one embedded system that can act more or less like an appliance, you can now buy a mini-PC or a NAS (Network Attached Storage appliance that can act as a home server) based on a Ryzen AI 300 Series APU, as then there is no need for a dedicated GPU or eGPU.
For example, those built around the AMD Ryzen AI 9 HX Pro 370 can have up to 96GB of unified memory for less than $1000, though the models that only have 32GB or 64GB of memory are much more affordable. Or, if you are willing to pay for something even more powerful, check out those built around the AMD Ryzen AI Max+ PRO 395, which can have up to 128GB of unified memory (though they currently cost from $1500 and up, since the memory has to be soldered to the motherboard).
There are now many different brands from several manufacturers with similar product specifications, so I might buy one of the alternatives if a better option comes out before the end of this year. Tip: if you are looking for a mini-PC and are willing to pay for quality, I suggest checking out the Framework Desktop.
Personally I am considering buying either the upcoming Minisforum N5 Pro Desktop NAS or the Minisforum AI X1 Pro mini-PC with 96GB of memory (both of which also feature OCuLink ports for an inexpensive eGPU to allow for future expandability, not that I should need it). While their N5 Pro NAS costs a lot more than their AI X1 Pro mini-PC, I believe the NAS model will be more useful since it can also hold loads of storage and serve multiple purposes, which would make it a great all-in-one home server.
Regardless, I have been researching installing TrueNAS on the NAS variants and then having it run virtual machines and containers for various self-hosting stuff, including a local Ollama server → Ollama - Home Assistant
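If it is useful to anyone, this is the kind of minimal compose file I have in mind for the Ollama container itself, which the Ollama integration in Home Assistant can then be pointed at. It just uses the standard ollama/ollama image and the default port 11434; GPU passthrough (or the ollama/ollama:rocm image for AMD) would be extra and depends on the hardware:

# docker-compose.yml – minimal Ollama container sketch
services:
  ollama:
    image: ollama/ollama
    container_name: ollama
    restart: unless-stopped
    ports:
      - "11434:11434"               # default Ollama API port
    volumes:
      - ollama-data:/root/.ollama   # keep downloaded models between restarts
volumes:
  ollama-data: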
I’ve not been impressed with AMD’s AI implementation. Intel IPEX is better. But for the industry standard… CUDA.
I get why you like it, H. I’m running Friday on an Intel NUC 14 AI (Intel A770, IPEX-based Ollama that TECHNICALLY should run the 20b…)
And that machine is the target for the eGPU I just bought… take that for what you will. It’s cheaper and easier for me to add a whole second CUDA-based GPU than to fight the AMD or Intel dance of GPU and Docker passthrough…
(Read: I’m spending $1000 because I’m tired of screwing with IPEX. I can’t recommend anyone do anything except CUDA right now if you’re running a homelab.)
@joostlek I suggest you add a note there in the UI frontend to clarify that, or I bet you will otherwise have to answer that same question many times.