Personally, I’d rather just have a function inside a template. But that’s just my opinion, and I’ll adapt to whatever decision is made. I don’t use these attributes at the moment, but I will be using events from calendars in the future.
Speaking about “integrations”: weather integrations could temporarily store these variable attribute forecasts/conditions in a file (e.g. in /local, or some write-protected user area, just in case; I mean, it’s not confidential info) and update/remove timestamped sections every hour/day. No DB, little or even less writing to disk; or depend entirely upon storing in memory. Though with these attributes/data in a file, they’re easily addressable.
Yeah, no. This is how you get the volunteer developers to just ignore you, and it’s what I’m trying to avoid. You’re more than welcome to “fight”, but I’ll bow out, as will other people. It’s not an us vs. them, and it’s this mentality that needs to stop.
No amount of understanding is going to make a difference, it will just be met with disdain. So, again, let’s focus on what matters: getting the functionality back by discussing our future options.
It is not Taras who created this mess… sorry…
No, the goal of this thread is to stop this change from happening.
You’re perfectly right. In my perception it is change for the sake of change…
So what is this change for, then? For the sake of change? Again…
This makes me worried… How much rework should we expect in our setups when this starts to be implemented? Why can’t developers just add more functionality without removing the existing one, so that changes to perfectly working setups could be avoided? Even at the cost of duplicating features…
So with this change developers intend to make our lives more miserable?
They don’t have to, it’s trivial to cache the forecast in local memory and return that as a response to the service call. The actual cloud API calls are (and should be) independent of the service call.
The weather platform leaves that to the individual integrations, it doesn’t do any caching itself (see here).
At least the met.no integration (which is the default, afaik) caches everything. Calling the get_forecast service will simply return the result of the last valid API call. It will not trigger a cloud connection (see here; it just returns self.coordinator.data.hourly_forecast / daily_forecast). There isn’t going to be any performance or cloud overhead when using the service call, at least no more than with the old method.
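For what it’s worth, here’s a minimal sketch of what that looks like from the user side (assuming the service name weather.get_forecast from this thread and a hypothetical weather.home entity; exact names depend on your setup and HA version):

```yaml
# Script sketch: fetch the hourly forecast on demand.
# The integration answers from its in-memory cache; no extra cloud call is made.
script:
  fetch_hourly_forecast:
    sequence:
      - service: weather.get_forecast
        target:
          entity_id: weather.home  # hypothetical entity id
        data:
          type: hourly
        response_variable: forecast_data
      - service: notify.persistent_notification
        data:
          message: >
            Next-hour temperature:
            {{ forecast_data.forecast[0].temperature }}
```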
FWIW, I actually agree with this change. The service call is a better solution; these kinds of entity attributes should be removed wherever possible. Of course, this assumes that a good solution exists for not breaking functionality that used to work.
Why do you feel so?
Why are service calls better than being able to see directly in the attributes what I would otherwise have to fetch with one or many calls, or being able to use those attributes directly in templates, UIs, sensors, etc.?
It has already started; some third-party integration devs have chosen to move specific attributes into additional entities (I guess to make life easier for their users, so they don’t have to make their own template sensors). Another option could be, as I mentioned above, using files in this particular case: the integration uses the file basically the way it used the attributes, but it gives average users a decent, easy chance to adapt their current templates. Devs, whether HA or third-party, should not forget that the majority of users may not have the same understanding of the system and its various functions, and most likely have limited programming skills. Pushing them further and further away by making their lives more troublesome won’t benefit the reputation of HA.
Well, and this is a purely personal observation: I think the whole concept of ‘data-bearing’ attributes (meaning entity attributes that contain additional sensor data for an entity, not functional/control attributes like last modified time, etc.) was flawed from the beginning. I think the devs share this opinion.
The rationale behind this is that it spreads data over multiple channels and there is a loss of orthogonality in how entity data is represented: some data is in the state, other data is in attributes. That’s confusing. Often attributes store data that isn’t even needed for most common uses. Then you need ugly type-dependent hacks to avoid writing it to the DB. And we all know how (in)efficient attribute handling is in the DB… (although it has admittedly improved a lot).
Service calls make sense from a programmatic point of view. An entity represents a sensor value, that value is represented by its current state. The entity might have access to additional information that can be queried on demand by using a service call. That’s a very common pattern for API design in general. Storing everything (as attributes or otherwise), regardless of whether it is actually used/needed or not, seems like a real waste.
Now, as I said, this is from the POV of a developer and I see how users can be upset about this (and I certainly am usually upset myself about unneeded changes in HA breaking stuff). But in this case I can understand the change.
Well yes, this is where the avoidance of breaking changes comes in. There should be a simple way to call the service from a template, as already discussed above. Technically it would also be possible to map attributes to service calls internally, like virtual on-demand attributes that are generated on the fly without storing anything. But I don’t know if HA has the structural backend to support something like this.
So is the above just invalid code, or are the attributes you find when pasting the two sensors into dev-tools/states an illusion?
Or do you mean they cache it, but still use/expose it directly on the entity?
Anyway, you are missing the whole point of the “suggestion” I provided for a simple solution: it’s for the end users, who might need to dig into and create service calls + automations + template sensors for a specific purpose.
And please note: I didn’t talk about cloud performance etc. Whether the integration, upon a get_forecast service call, delivers data it fetches from a file or from a cache, I couldn’t care less (though I think I would feel more comfortable/safer with the first option). I would simply prefer, as simply as possible, to be able to “call” specific data, e.g. precipitation/wind direction starting two days from now and for the following two days, to use in an automation, template or graph, with the fewest possible lines of code and failure points.
I think you are misunderstanding what a cache is. And I don’t really understand how storing the results in a file would help here?
Basically it boils down to this:
- Attributes: data is always generated/requested and stored (unless you use hacks), regardless of whether it ends up being used or not.
- (Service) calls: data is generated or requested on demand, only when actually needed, and is never stored, even without hacks.
And this is very much about cloud performance too. Keep in mind that the weather platform is implementation agnostic. It doesn’t actually do much itself. Other integrations do the actual work and present it to the weather platform. Some weather integration might need different cloud API calls to get the current weather and the forecast. With attributes, these two cloud API calls would always have to be performed, regardless of whether you use their results or not. With service calls, one or the other can be avoided if you don’t need it.
Writing something to a file always requires it to be in a memory cache first.
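To make the on-demand point concrete, here’s a rough sketch (the service name comes from this thread; the entity id, notify target and precipitation check are my own assumptions, adjust as needed). The forecast is only requested when the automation actually runs, not maintained around the clock as attributes:

```yaml
# Automation sketch: the daily forecast is fetched once a day,
# exactly when it is needed, instead of being stored as attributes all day.
automation:
  - alias: "Morning rain check (hypothetical)"
    trigger:
      - platform: time
        at: "07:00:00"
    action:
      - service: weather.get_forecast
        target:
          entity_id: weather.home  # hypothetical entity id
        data:
          type: daily
        response_variable: daily
      - condition: "{{ daily.forecast[0].precipitation > 0 }}"
      - service: notify.notify  # hypothetical notify target
        data:
          message: "Rain is expected today."
```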
Right. I think that having a good system to call the service call from a template will solve this.
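Something along those lines already works with trigger-based template sensors. A sketch, again assuming weather.get_forecast and a hypothetical weather.home entity:

```yaml
# Trigger-based template sensor sketch: poll the (cached) forecast hourly
# and expose one value as a regular sensor, replacing the old attribute access.
template:
  - trigger:
      - platform: time_pattern
        hours: /1
    action:
      - service: weather.get_forecast
        target:
          entity_id: weather.home  # hypothetical entity id
        data:
          type: hourly
        response_variable: hourly
    sensor:
      - name: "Temperature forecast next hour"
        unique_id: temperature_forecast_next_hour
        unit_of_measurement: "°C"
        state: "{{ hourly.forecast[0].temperature }}"
```

The time_pattern rate is also where you’d set your own upper bound on how stale the data can get.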
However, I still need the full forecast every hour and every day for my weather card (meaning I have already requested it in the weather card via a service call). Why not re-use this data? How is the stack/queue for service calls built?
That’s exactly what the cache does.
See the code I linked to above.
When it works! How much data is in scope to end up in a cache? I mean, the weather integration is hardly the only integration that will have to use a cache in the near future (yes, I know it’s supposed to be faster than reading from a file, but some integrations might benefit from this suggestion), specifically so as not to bust the cache with data and requests.
I’m sorry, but you really need to read up on what a memory cache is. It’s not what you think it is. There is no way to ‘stress’ or ‘bust’ a fixed-size memory cache. It’s not a database; there is no queue or anything like that. It’s just a bunch of variables (== memory) holding some data for faster reuse.
For the met.no integration, for example, the forecast data is requested from the cloud at some rate (completely independent of the service call rate) and stored in an array in memory (the cache). That data will then be returned when you call the service, directly from memory. That means it is reused every time you call the service, regardless of where from and how often. It is stored only once for that entity.
There is no need for files, that would just make things terribly slow.
Say what? What do you think that I think it is? Now you’ve said it twice, and I don’t intend to explain to you what I know or how long I’ve been around (the latter, I guess, you can figure out). But claiming to know what I think! Come on.
Alright then. I explained above how the new service call handles data. Alternatively, check the source I linked to yourself.
It’s not that I don’t understand how it works; it’s the coming issues I “see” when more and more integrations have to rely on various defined caches, among other things. Just as I “supported” the DB change to include BLOBs, I also “support” service calls to caches, though weather forecasts hardly belong under such a “feature”: they should be updated every hour and every day, when new data is available from the provider, and they’re hardly something one needs to fetch in the fastest way possible. They just have to be reliable, of course, and preferably the integration shouldn’t need extra API calls if the cached data somehow goes “missing”.
Good points.
To add to it, I think it’s useful to know the behavior of the end point data. It’s easy to think hourly forecast data will be updated at the top of each hour, but my observation of the integration I use is that forecasts seldom change at the top of the hour, and the time between forecast updates is seldom an hour. If I had control using service calls, I’d wind up doing close to what’s already being done; that is, I’d poll at a rate that provides an upper limit on the staleness of the data.
I can swear I’ve seen this mentioned on some architecture discussion, but can’t find it. I also haven’t seen it mentioned here, so here goes.
Why not a new forecast domain? Besides weather forecasts, we already have solar forecasts built in too. It will fit nicely and define a way forward for other kinds of forecasts too.
I find the service call replacement very clunky.
It’s a task for the developer of the weather integrations (or the user of the API), and it works kind of how you describe (regardless of whether it’s at the top of an hour or minutes before/after; the same goes for daily. When it comes to weather, some providers give 10 hours ahead, others 24, and they do “normally” change once an hour; 7 days ahead needs to be updated at least once a day, and some provide it e.g. twice a day). Electricity-price forecasts, solar forecasts, stock-price forecasts: all have varying behavior depending on the provider, which HA is basically not interested in; it’s the integration’s job.
The staleness of data in a cache is not set/controlled by a third-party integration (as far as I know, and it would worry me if it were); otherwise we would end up with a bunch of integrations controlling our server’s memory.