thanks. found where i need to look for the bitmasks.
i guess all i need to do is play around
you have played around with it.
so maybe you know if that is possible too:
is it possible to connect the created entity to a certain component from the outside?
I’m not sure what you mean by “connected component from the outside”. AppDaemon is not special in the way that it can set states and listen for events. Any application that connects to Hass’s websocket can do that. So, a NodeJS app, for instance, could do the same thing.
If you mean getting a switch that you made in AppDaemon to be handled automatically by HASS… no… that can’t be done. By setting the state of a non-existent entity, you circumvent anything automated that HASS would do for you. You can use MQTT Discovery and announce the new entity via MQTT and then home assistant will treat it like a more native switch/light/climate device.
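The MQTT Discovery route works from any MQTT client, including an AppDaemon app using the MQTT plugin. A minimal sketch of the retained config message HA listens for (the topic layout follows HA’s documented `homeassistant/<component>/<object_id>/config` scheme; the object_id, name and topics here are invented for illustration):

```python
import json

def discovery_message(component, object_id, name, state_topic, command_topic):
    """Build the topic and JSON payload for an MQTT Discovery announcement.
    Publishing this retained makes HA create a native entity of the given
    component type (switch, light, sensor, ...)."""
    topic = "homeassistant/{}/{}/config".format(component, object_id)
    payload = json.dumps({
        "name": name,
        "state_topic": state_topic,
        "command_topic": command_topic,
    })
    return topic, payload

topic, payload = discovery_message(
    "switch", "ad_demo", "AppDaemon demo switch", "ad_demo/state", "ad_demo/set")
# Publish with any MQTT client, retained so the entity survives restarts, e.g.:
#   import paho.mqtt.publish as publish
#   publish.single(topic, payload, retain=True, hostname="broker.local")
```

After the announcement, HA treats the entity like any other switch: it sends commands on the command_topic and tracks whatever you publish on the state_topic.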
i meant a component like limitlessled when its configured in HA.
would it be possible to connect the new entity to the already running limitlessled component?
i suspect not, but until now i didn't think it was possible to create other working entities either.
ahhh. no. not that I’m aware of. Internally, each “sensor” or “switch” from a specific component is regarded as a type of that component. But that can only be done from inside of HA.
I know it’s been a while since this thread was alive. But I built an AD app to dynamically change entity attributes, or even create totally new ones, that PERSIST through updates from their components or even a full restart of Home Assistant. Check it out: Entity Augmentation
That post also links to a github repo containing a lot of other automation services in addition to the entity augmentation.
there is a difference between creating a new entity, or changing an entity that is created by HA.
your app creates an enormous amount of overhead, because it listens to every state change.
a better way would be just to listen to entities that you want to change/control
I have something similar in my system. I create entities that exist only in AppDaemon and persist across restarts (namespace writeback set to “safe”), which I use in automations. In case I need the state of an entity in HA as well, I push it to a Home Assistant sensor.
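The “safe” writeback mentioned here is configured per user-defined namespace in appdaemon.yaml; a minimal sketch (the namespace name “persist” is arbitrary, not from the original post):

```yaml
appdaemon:
  namespaces:
    persist:
      writeback: safe   # entity states in this namespace survive restarts
```

An app then keeps its AD-only entities with `self.set_state("sensor.my_counter", state=5, namespace="persist")` and, only when HA needs the value, mirrors it with a plain `self.set_state` in the default namespace.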
FYI, there exists a draft for a HA component that will allow creating variables with attributes that persist across restarts; see here and here for more details.
I agree that the overhead is not optimal. However, this app is backwards compatible with all apps that use hass.set_state or that would fire the state_changed event, also outside AppDaemon using python_scripts, the frontend developer tools or the like.
The cost of listening to an event is a simple hash lookup; if the entity is new, we copy the data, and if it is different, there is some overhead to ensure the attributes are correct and updated on the Home Assistant side. But note that before releasing this, I had it running with and without the app active, tracking CPU and memory usage over a few days without changing my routines; the impact was minimal.
But I am definitely not opposed to improving my solution to fit a wider audience. Perhaps whitelist and blacklist constraints that could be used to get granular control over the entities we listen to would alleviate concerns about overhead?
The other solution I can think of is to listen to a custom event or use the app directly from other apps; this would either break general compatibility or, in the latter case, increase coupling between the apps.
or just not change entities created by HA from AD at all, but just use entities created by AD for such goals.
at least it would absolutely interfere with my apps and absolutely get me into trouble.
I have actually never come across the “saver” HACS integration before; I will check it out and come back with more information on it. It seems to do a lot of the same things that I would like. So far the only big drawback I see is how it changes interaction with entities in HA to get the desired functionality.
While the variable domain also offers a great deal of freedom to customize, it creates a non-changeable top-level domain for all custom entities. What happens if you want to connect the user_id of a third-party service to the relevant person entity already managed by HA?
I also read some of your posts on the last thread you linked, you mentioned snapshots of states, which I didn’t quite follow, would you like to elaborate a bit for me?
Ok, I am implementing a few custom constraints and parameters then, I quite liked the idea of white/black lists. But I will also make a “local_only” parameter that defaults to false. When it is true, states will only be recorded in the DB and not sent back to HA.
I cannot see how it would interfere with other apps; if you have a specific use case or the like which I could use to test, or comply with, I would appreciate it.
Home Assistant for me is a middleware that keeps current states of importance (not only for the house, but for fitness, studies and work life as well). The logic, or “smarts” if you will, all use HA for current states; historic analytics get their data from the time-series DB (InfluxDB) instead. So splitting the entity register across multiple services means my services become more strongly coupled to each other.
Edit: allowing me to group important data in the same entities also reduces the number of separate calls to HA from external services.
i use a lot of listen_states (which is the equivalent of listening for the state_changed event)
and i use set_states there as well to change attributes, state, add new entities, etc.
if i added your app, then i would have 2 listen_states at the same time for 1 entity, and they would both update attributes.
i am pretty sure that that is just asking for loops or other trouble.
an app like this is nice for entities created by HA, but i wouldn't advise it to anyone who creates entities with AD, or at least blacklist all entities that are created and managed by AD.
and about your edit: for most people it would just add calls to HA.
grouping important data in the same entities is better done with an app that manages those entities.
the only logical use case i see for your app is to manage attributes of entities that are NOT created by AD (so by HA), but in general i advise against that, because then you are interfering with HA.
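For anyone wiring such an app next to their own listen_state callbacks, one mitigation (my sketch, not part of the published app) is to gate the write-back so it can only fire when the stored data actually adds something new; the echoed state_changed event then finds nothing missing and the chain stops:

```python
def needs_writeback(db_attrib, event_attrib):
    """Return True only when the stored attributes hold keys missing from the
    incoming event. Writing back in any other case would re-fire
    state_changed with identical data and invite a feedback loop between two
    listeners updating the same entity."""
    return bool(set(db_attrib) - set(event_attrib))

# the augmenting app has extra keys -> write back once
assert needs_writeback({"friendly_name": "A", "grocy_id": 1},
                       {"friendly_name": "A"})
# the echoed event already carries everything -> stop
assert not needs_writeback({"friendly_name": "A"},
                           {"friendly_name": "A", "grocy_id": 1})
```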
I didn’t talk about the HACS “saver” integration, I don’t even know what that is. I talked about a PR that is in the works, which adds this functionality and the necessary services to Home Assistant.
I don’t fully understand. Can you please explain the specific use case in more detail?
Sure, what do you want to know?
I currently (may change, depending on the direction that Home Assistant is going) keep automations completely separate from Home Assistant. I solely use AppDaemon for all automations. Same as @ReneTode, I don’t change states or attributes of HA entities directly, I “only” use HA entities to get their state or turn them on/off etc.
Regarding the snapshot, I’ll give you an actual example from my system.
When someone calls me while I’m at home and awake, decrease volume on all running media players and increase brightness to 80% for all lights currently on. When the phone call ends, return to the state before the call. How I did this:
I have a sensor in Home Assistant, which shows “calling” when someone calls me on the phone.
So in AppDaemon, whenever the sensor changes to “calling”, I check the conditions (I’m at home and awake etc.). If the conditions match, I save the volume level and brightness to an attribute of an AppDaemon-only entity and change the brightness and volume. Once the call has ended, I read the attributes of the AppDaemon-only entity to set the volume and brightness back to the previous levels. I don’t need this information in Home Assistant; it’s only used for the automation, so it stays within AppDaemon.
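That save-and-restore flow can be sketched as follows. Since hassapi isn’t available outside AppDaemon, the Hass base class is replaced here with a tiny in-memory stub, and all entity names and levels are invented for the example:

```python
class FakeHass:
    """Minimal stand-in for hassapi.Hass so the sketch runs anywhere;
    it just keeps states in a dict and ignores namespaces."""

    def __init__(self):
        self._states = {}

    def set_state(self, entity, state=None, attributes=None, namespace=None):
        self._states[entity] = {"state": state, "attributes": attributes or {}}

    def get_state(self, entity, attribute=None, namespace=None):
        s = self._states.get(entity, {})
        if attribute == "all":
            return s
        return s.get("attributes", {}).get(attribute, s.get("state"))


class CallSnapshot(FakeHass):
    """In AppDaemon this would subclass hassapi.Hass, the two callbacks would
    be wired with listen_state on the phone sensor, and the AD-only snapshot
    entity would live in a writeback namespace."""

    def call_started(self):
        # snapshot the levels we are about to change to an AD-only entity
        volume = self.get_state("media_player.tv", attribute="volume_level")
        brightness = self.get_state("light.living", attribute="brightness")
        self.set_state("sensor.call_snapshot", state="saved",
                       attributes={"volume": volume, "brightness": brightness},
                       namespace="appdaemon")
        # lower the volume and raise the lights for the duration of the call
        self.set_state("media_player.tv", state="playing",
                       attributes={"volume_level": 0.2})
        self.set_state("light.living", state="on", attributes={"brightness": 204})

    def call_ended(self):
        # read the snapshot back and restore the previous levels
        snap = self.get_state("sensor.call_snapshot", attribute="all",
                              namespace="appdaemon")["attributes"]
        self.set_state("media_player.tv", state="playing",
                       attributes={"volume_level": snap["volume"]})
        self.set_state("light.living", state="on",
                       attributes={"brightness": snap["brightness"]})
```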
i understand @Burningstone
i spoke to ceiku when i said that his app, in my eyes, is only useful for HA entities.
but at least in the past that was not considered best practice.
HA changed a lot though, and i'm not keeping up enough anymore to say if it's still considered bad practice.
the way this app works, i think so.
i believe at some point HA added options to customise entities and add attributes, but i'm not sure.
in that case that would be a better way.
The saver integration was mentioned in one of the threads on custom entities; I haven’t had time to test it yet though, and if this functionality is going to be supported by default in HA, I might just wait to test that instead.
I totally agree with @ReneTode that getting the double-fired state_changed event is bad practice, which is why I want to move this from an AD app over to an integration; though my automations are stable so far, as the state is in new/old. I also keep all automations outside HA; Jinja and YAML never seemed scalable to me.
Ok, so a trick that I use for keeping the user context of who activates a routine/automation is mainly using a smart assistant (Google Assistant in my case) instead of Lovelace dashboards, for now. So when I say ‘hey google, I’m done cleaning’, I want it to call the service grocy.execute_chore with the correct grocy user id and the chore id.
One way is to just get the data list of all tracked chores from sensor.grocy_chores, iterate through and save all chore ids with their nested ‘next_assigned_user’ attributes, either when the automation is called or on changes to sensor.grocy_chores, and save them under new sensors.
The way I do it instead is that each person has the added attributes grocy_id and chores_left with a list of tracked chores for the user. If there is more than one chore, Google Assistant asks what I have done in a follow-up question and calls the service accordingly. When the chores are updated in grocy, the attributes for the relevant person are updated just like sensor.grocy_chores would be.
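The attribute merge described above could look roughly like this. The data shapes (chore_id, next_assigned_user, grocy_id) are assumptions modeled on the description in the post, not the real grocy integration output:

```python
def chores_per_person(chores, persons):
    """Fold a chore list (as a hypothetical sensor.grocy_chores attribute
    might expose it) into per-person attributes, mirroring the grocy_id /
    chores_left attributes described in the post."""
    for attrs in persons.values():
        attrs["chores_left"] = [
            chore["chore_id"] for chore in chores
            if chore.get("next_assigned_user") == attrs.get("grocy_id")
        ]
    return persons
```

In AppDaemon the result would then be written back with self.set_state("person.x", attributes=...) on changes to sensor.grocy_chores.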
That’s smart!
And I agree that in such a deployment there is no reason to reflect data back to HA. Do you persist the data from AD in a DB? The allure of using HA to contain the relevant states and attributes is that a lot of my work requires batching time-series data for data analytics like MapReduce and machine learning. Since InfluxDB has native support in HA (just adding `influxdb:` to the configuration), it made a lot of sense to let HA handle all the reporting to the DB, and a simple deployment diagram shows how it works now:
While from talking to you, @Burningstone and @ReneTode, I feel something like this might be more desirable, where everything from HA and AD is managed separately, and AD reports everything to InfluxDB instead. But when HA supports custom entities and attributes, the first deployment diagram seems to make more sense again; what do you think?
i work somewhat like your last pic.
not with influx, but i could.
AD to HA is only to make entities visible (so HA is still my frontend for now)
HA to AD is only for some integrations that are difficult to migrate.
AD controls everything, and AD saves my states from the entities i want to collect data from.
on its own it's not a bad practice, but it needs careful consideration.
a simple app like:
def initialize(self):
    self.listen_state(self.light_on, self.args["motion_entity"], new="on")

def light_on(self, entity, attribute, old, new, kwargs):
    self.turn_on(self.args["light"])
with yaml
a:
  module: ...
  class: ...
  motion_entity: binary_sensor.motion
  light: light.a

b:
  module: ...
  class: ...
  motion_entity: binary_sensor.motion
  light: light.b
is perfectly valid, and it will turn both lights on simultaneously when motion is detected.
and that also uses the same listen_state twice.
but when you have apps that feed info back to 1 single point, and you listen to that same point, you need to be careful.
I use the first method described in the first diagram. Everything is pushed from HA to InfluxDB. I only use the AppDaemon entities for automations and I don’t care about the history of these entities (why do I need to know what the volume level was before someone rang the doorbell?), if I do want history, I push the data from AD to HA by creating a sensor from AppDaemon and then include it in the entities pushed to InfluxDB.
the difference is whether you think the way HA records states and events fits you or not.
i want to save states over a long period (years).
with the way the HA DB always recorded, the result was that my DB was growing way too fast.
that's why i stopped the HA DB from the beginning and used AD to save it.
communication from HA with influx came long, long after that.
another difference is that most of my entities (well over 600) are created by AD anyway
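A stripped-down version of that “AD saves my states” idea might look like this; the CSV layout and path are my assumptions, not ReneTode’s actual setup:

```python
import csv
import time

def record_state(path, entity, state, ts=None):
    """Append one state change to a CSV file: a minimal stand-in for letting
    AD do the long-term recording instead of the HA database."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [ts if ts is not None else time.time(), entity, state])

# In an AppDaemon app this would be driven by a state callback, e.g.:
#   self.listen_state(self.save_it, "sensor.temperature")
#   def save_it(self, entity, attribute, old, new, kwargs):
#       record_state("/conf/history.csv", entity, new)
```

Because the app decides exactly which entities and which changes get written, the store only grows as fast as you let it, unlike the recorder's record-everything default.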
So I had a bit of time to work on it, as I must agree that I like the idea of using HA as a state reporter and generally a middleware, but granular control over domains is paramount. Since the HA team is also working on persistent entity changes, I want to keep my code in a way that doesn't incur breaking changes: e.g. using hass.set_state() should be considered safe from the AD perspective.
I recreated the entity_augment app so that you can supply a list of top-level domains under either a whitelist or blacklist; the whitelist is prioritized and listens to state changes rather than the state_changed event. Additionally, the boolean reflect parameter decides whether values are reflected back to HA; it defaults to true. Example apps.yaml entry:
entity_augment:
  module: entity_augment
  class: EntityAugment
  blacklist:
    - sensor
    - light
  db_path: /conf/states_db.json
  reflect: True
The app itself now looks like this:
import hassapi as hass
from tinydb import TinyDB, Query

"""
An AD app to create and manage data entities and their states and attributes. It also allows overwriting (reflecting) HA entities (e.g. with new attributes); optionally, new domains and entities can be created dynamically as well.
For automations and the like that depend on the state_changed event itself (or an aggregate of it), two optional parameters exist for black- and whitelisting, where the latter takes precedence.
"""

class EntityAugment(hass.Hass):

    def initialize(self):
        """Creates the database if not present and initializes based on the supplied parameters."""
        self.db = TinyDB(self.args['db_path'])
        self.query = Query()
        self.init_reflect()
        self.callback_delegation()

    def init_reflect(self):
        """Checks the reflect parameter and listens to the appropriate events."""
        self.reflect = self.args.get('reflect', True)
        if self.reflect:
            self.listen_event(self.populate_entities, 'homeassistant_start')

    def callback_delegation(self):
        """Checks whether the whitelist or blacklist parameter is set and chooses a callback for the 'state_changed' event listener."""
        if 'whitelist' in self.args:
            for item in self.args['whitelist']:
                self.listen_state(self.wupdate, item, attribute='all')
        elif 'blacklist' in self.args:
            self.list = self.args['blacklist']
            self.listen_event(self.blacklist_update, 'state_changed')
        else:
            self.log('Tracking all entities')
            self.listen_event(self.event_update, 'state_changed')

    def wupdate(self, entity, attribute, old, new, kwargs):
        """listen_state callback for whitelisted domains; forwards the full state object to entity_update."""
        self.entity_update(entity, new['state'], new['attributes'])

    def blacklist_update(self, event_name, data, kwargs):
        """
        A dynamic constraint on the entity domain; for now it only filters on top-level domains (light, sensor etc.).
        It is only used as a wrapper if any domain filters are active.
        """
        if data['entity_id'].split('.')[0] not in self.list:
            self.event_update(event_name, data, kwargs)

    def event_update(self, event_name, data, kwargs):
        """
        Every time a state_changed event that fulfills our constraints is fired, this function updates the database:
        - If the entity is new, insert a new entry into tinyDB.
        - If the attribute keys differ, the latest attributes are combined, updated and optionally reflected back into HA.
        """
        if data['new_state'] is None:  # entity was removed
            return
        id = data['entity_id']
        event_state = data['new_state']['state']
        event_attrib = data['new_state']['attributes']
        self.entity_update(id, event_state, event_attrib, data.get('remove_attributes', []))

    def entity_update(self, id, event_state, event_attrib, remove_attributes=None):
        db_entity = self.db.search(self.query.entity_id == id)
        if db_entity:
            db_attrib = db_entity[0]['attributes']
            keys = self.new_keys(db_attrib, event_attrib, remove_attributes or [])
            attrib = {"attributes": self.update_entity_attributes(event_attrib, db_attrib, keys), "state": event_state}
            self.db.update(attrib, self.query.entity_id == id)
            if event_attrib.get('siblings', False):
                self.update_siblings('.'.join(id.split('.')[:-1]))
            # only reflect when the DB holds attribute keys the event didn't carry
            if self.reflect and set(attrib['attributes'].keys()) - set(event_attrib.keys()):
                self.set_state(id, state=event_state, attributes=attrib['attributes'])
        else:
            self.db.insert({"entity_id": id, "state": event_state, "attributes": event_attrib})

    def new_keys(self, db_attrib, hass_attrib, remove_keys=None):
        """Finds the set of all unique attribute keys present in either tinyDB or the HA state object."""
        return (set(db_attrib.keys()) | set(hass_attrib.keys())) - set(remove_keys or [])

    def update_siblings(self, domain):
        """If the 'siblings' flag has been set in the event, updates the attributes of all entities under the same domain."""
        for entity_id, state in self.get_state(domain, attribute='all').items():
            self.entity_update(entity_id, state['state'], state['attributes'])

    def update_entity_attributes(self, hass_attrib, db_attrib, keys):
        """Uses the updated attribute keys to create a new entry, using the HA value if present."""
        attrib = {}
        for key in keys:
            attrib[key] = hass_attrib[key] if key in hass_attrib else db_attrib[key]
        return attrib

    def populate_entities(self, event_name, data, kwargs):
        """Only runs if the reflect parameter is true; ensures that after a HA restart the entities reflect all augmented attributes."""
        for entity in self.db.all():
            id = entity['entity_id']
            hass_state = self.get_state(id, attribute='all')
            if hass_state is None:
                continue
            add = set(entity['attributes'].keys()) - set(hass_state['attributes'].keys())
            for key in add:
                hass_state['attributes'][key] = entity['attributes'][key]
            self.set_state(id, state=hass_state['state'], attributes=hass_state['attributes'])
If our only use case is novel entities, then this strategy should let us use HA, with all its neat integrations and components, on our dynamically generated entities.