Putting Semantic Modeling (Tags, Nested Areas, Nested Groups) into action

After trying several other major local home automation platforms, I am now convinced that Home Assistant fits my needs best, but one feature I truly liked from a competitor is called semantic modeling. Their implementation was too cumbersome to be practically usable for me, but the concept was great.

In a nutshell, this is a way to OPTIONALLY organize the hundreds or thousands of entities we have. The resulting organization/semantic model is then used to automatically build the UI and to simplify automation targets.

In the big picture, this requires nestable groups/areas/devices/tags.

The concept has already been suggested in other feature requests on this forum as well.

The key for me, though, is a full system that utilizes such organization in the UI and in automations, for simplicity and elegance.

In an ideal world, here is how I think a proper, user-friendly semantic model could be implemented.

Nesting Groups
Tags, areas, and groups are, as I see it, all the same concept under different names. Since groups are the oldest of these features in Home Assistant (as far as I can tell), I treat them all as groups. In fact, even a device is a group:

  • Area: a group without input/output
  • Tag: nothing but a group
  • Device: a group of entities that share the same origin

So I think the missing feature here is simply the ability to organize groups hierarchically.
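
As a rough illustration, Home Assistant's legacy group integration can already list another group entity as a member, which approximates the nesting asked for here (the UI just doesn't exploit the hierarchy). A minimal sketch, with all entity IDs invented:

```yaml
# configuration.yaml — a sketch using the legacy group integration.
# A group may contain other group entities, giving a de-facto hierarchy.
group:
  speakers_living_room:
    name: Speakers
    entities:
      - media_player.sonos_arc
      - media_player.echo_dot
  living_room:
    name: Living Room
    entities:
      - group.speakers_living_room
      - light.window_side
      - light.wall_side
  ground_floor:
    name: Ground Floor
    entities:
      - group.living_room
      - group.kitchen
```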

Setup

A menu/setup screen where one can drag and drop nodes to create a hierarchical organization; whether it is area-based, device-type-based, or something else is one's own choice.

Example:
Area-driven organization

  • Ground Floor
    • Living room
      • Speakers
        • Sonos Arc
        • Echo Dot
      • Lights
        • Window Side
        • Wall Side
    • Kitchen

The key here is that the hierarchy information is not exclusive, i.e. it is tag-based, because one may want a device-type organization instead of, or in addition to, the area-based one.

Device-type-driven organization

  • Audio Devices
    • Sonos Speakers
      • Sonos Arc
    • Amazon Echos
      • Echo Dot
  • Lights
    • Living Room
      • Window Side
      • Wall Side

In the above case, I can drag and drop Speakers, Sonos Arc, etc. into Kitchen.

One can create a group (e.g. Speakers) or an area directly on this screen. Multiple organizations/semantic models are acceptable, as the sketch below illustrates.
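
To make the non-exclusive part concrete, here is a purely hypothetical notation (nothing like this exists in Home Assistant today) in which the same device carries a path in each of the two hierarchies above:

```yaml
# Hypothetical syntax — illustration only, not an existing HA feature.
# The same physical device is tagged with a node in each organization.
semantic_tags:
  media_player.sonos_arc:
    - ground_floor/living_room/speakers   # area-driven hierarchy
    - audio_devices/sonos_speakers        # device-type-driven hierarchy
  media_player.echo_dot:
    - ground_floor/living_room/speakers
    - audio_devices/amazon_echos
```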

UI

There are many ways UI can be implemented using semantic modeling data.

Filter-driven Lovelace menus/tabs/cards

When creating tabs, cards, or menus, one can optionally select a semantic model node. Everything under that node is then displayed accordingly.

For example:

New tab: I choose Ground Floor as the semantic context, and all entities/groups/areas under Living Room and Kitchen are auto-populated on the page.

If I instead set the semantic context to Living Room, the view starts from there.
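
A sketch of what such a view definition might look like; `semantic_context` is a hypothetical key, not part of today's Lovelace schema:

```yaml
# Hypothetical Lovelace config — `semantic_context` does not exist today.
views:
  - title: Ground Floor
    semantic_context: ground_floor                # auto-populate everything under this node
  - title: Living Room
    semantic_context: ground_floor/living_room    # start one level deeper
```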

Basically, the system uses the hierarchy, but the user decides where the display starts and how things are shown, per one's own preference. I can only imagine the strong HA community would keep creating custom display styles on top of the semantic modeling data.

Automation Target

Here, the semantic model looks at the greatest common factor of capabilities under each node, and that set becomes the node's target properties.

Example:
Using the example above, if I choose the area-based semantic model's Living Room > Speakers as the target, then volume, TTS, play, etc. can be set/manipulated, and both the Echo and the Sonos respond.

On the other hand, if I choose Living Room as the target, lights and speakers share only on/off, so that is the only element one can set.
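
For illustration, a hypothetical service call against a semantic node; `semantic_node` is an invented selector, and today's closest real equivalents are `target: {area_id: ...}` or calling a generic service such as `homeassistant.turn_off` on a mixed group:

```yaml
# Hypothetical: `semantic_node` is not a real target selector.
service: media_player.volume_set
target:
  semantic_node: ground_floor/living_room/speakers
data:
  volume_level: 0.4

# Targeting the parent node would expose only the shared capability,
# roughly what homeassistant.turn_off on a mixed group does today:
service: homeassistant.turn_off
target:
  semantic_node: ground_floor/living_room
```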

I know I am asking for drastic functionality here, but given Home Assistant's rapid development rate, especially the growing user-friendliness of the UI, I can only hope that someday…

Just found the term I was looking for: semantic modelling. I’ve made a couple of similar posts, but your wording makes more sense. I live in an open-plan three-level house, and this would be one of the best features; it would make life so much easier.

I’ve tried to work with a labeling standard for all my entities, which works to a point, but as much as I try to stick to it, it gets mucked up sometimes, and relabeling can be troublesome too. I know there are ways around it using templates and such, but they can be difficult to set up, and the minute you change something, they stop working.
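
For readers wondering what such a template workaround looks like, here is a minimal sketch that counts lights matching a naming convention (the `living_room` substring and sensor name are assumptions); it breaks as soon as an entity is renamed, which is exactly the fragility described above:

```yaml
# A fragile naming-convention workaround using a template sensor:
# counts every light whose entity_id contains "living_room" and is on.
template:
  - sensor:
      - name: "Living Room Lights On"
        state: >-
          {{ states.light
             | selectattr('entity_id', 'search', 'living_room')
             | selectattr('state', 'eq', 'on')
             | list | count }}
```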

I think if more people realized how beneficial this could be, it would get far more votes.


Now combine the semantic model with a voice assistant and an AI like ChatGPT, and you have the smart home assistant of your dreams. If you say, e.g., “Make it dark”, the assistant should know:

  • which room you are in (i.e. which Alexa you are talking to)
  • whether it is before or after sunset
  • whether the lights are on or off
  • whether the shutters are open or closed
  • the illuminance measured by a light sensor

The AI could then decide what to do:

  • turn off the lights
  • close the shutters
  • do both
  • do nothing (if it is dark already)

With that functionality, the “year of the voice” would be really interesting :wink:

So: how do we get the information from our smart home's semantic model combined with an AI?
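
One conceivable answer, sketched under assumptions: render the relevant state into the prompt of an LLM-backed conversation agent via templates. The entity IDs and the surrounding integration config are invented here; `area_name`, `is_state`, and `states` are real Home Assistant template functions:

```yaml
# Sketch of a prompt template for an LLM-backed conversation agent.
# Entity IDs are made up; the integration that consumes this is omitted.
prompt: >
  You control a smart home. Current context:
  - Room of the voice satellite: {{ area_name('media_player.kitchen_echo') }}
  - Sun below horizon: {{ is_state('sun.sun', 'below_horizon') }}
  - Lights: {{ states('light.kitchen') }}
  - Shutters: {{ states('cover.kitchen_shutter') }}
  - Illuminance: {{ states('sensor.kitchen_illuminance') }} lx
  When asked to "make it dark", decide whether to turn off the lights,
  close the shutters, do both, or do nothing.
```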

Here is a link to the semantic model and ontology of openHAB: Semantic Model | openHAB
