Feasibility of AI monitoring solution

Hi folks!

I am completely new to home automation and AI, but I am an experienced IT professional with extensive infrastructure, networking and coding experience. I have set up HA in our home and it is working perfectly so far.

Our house was wired for manual monitoring and future automation, including KNX as device backbone, cameras, speakers and microphones in every room. This was the single largest expense during the house renovation, but it is necessary due to my family members’ health conditions, which require close monitoring in certain situations. Such manual monitoring has already saved lives.

Having read lots of threads related to home automation and AI, I get an 'early ‘90s Unix’ vibe: lots of tinkering and fiddling with systems, and a huge time investment. That’s OK with me as long as the goal is potentially achievable.

I would be very grateful for your opinion on the feasibility of meeting my family’s requirements and wishes with the current state of locally run AI technology.

Here are the minimum requirements for what we would like to have:

  • Keep track of individual movement throughout the house. In other words, rather than analysing individual images in an attempt to recognise human shapes, connect those images together to create a coherent analysis of presence and movement paths. This requires spatial awareness.

  • Attempt to detect individuals based on their appearance, gait, movement, and contextual presence. For example, if someone stood up from a bed in room X or sat at a table in position Y, they are most likely family member Z.

  • Detect falls.

  • Detect the absence of movement for a certain time period, or irregular behaviour/gait.

  • Detect abnormal sounds or calls of distress.

Based on these triggers, I wish to initiate a series of events. For example:

If family member Z enters a room, then perform action X.

Or:

If a fall or irregularity is detected, broadcast a query via the internal speaker closest to the person in question asking if they are OK. Then act based on their response, which could be visual or audio.
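The trigger-and-response pattern described above can be sketched as a small rule dispatcher. This is only an illustration of the idea, not a real Home Assistant API: every name here (`Event`, `rules`, `dispatch`, the event kinds) is hypothetical, and in practice HA expresses this kind of logic in YAML automations or AppDaemon/pyscript.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Event:
    kind: str               # hypothetical, e.g. "room_entry" or "fall_detected"
    person: Optional[str]   # identified family member, if any
    room: str               # room where the event was observed

# Each rule pairs a predicate with an action to run when the predicate matches.
Rule = tuple[Callable[[Event], bool], Callable[[Event], str]]

rules: list[Rule] = [
    # "If family member Z enters a room, then perform action X."
    (lambda e: e.kind == "room_entry" and e.person == "Z",
     lambda e: f"perform action X in {e.room}"),
    # "If a fall or irregularity is detected, ask via the nearest speaker."
    (lambda e: e.kind == "fall_detected",
     lambda e: f"speaker({e.room}): 'Are you OK?'"),
]

def dispatch(event: Event) -> list[str]:
    """Run every action whose predicate matches the event; return the results."""
    return [action(event) for predicate, action in rules if predicate(event)]
```

For example, `dispatch(Event("fall_detected", None, "kitchen"))` would return the speaker query for the kitchen. The hard part of the project is of course producing reliable events in the first place; once they exist, the automation layer itself is straightforward.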

Thanks,

Forever lost

If you are asking these questions, you do not have the programming ability or hardware to pull this off. (Are you going to chip all the occupants so that something can monitor their every movement?)
Ask again in a couple of years.

Thank you, Sir_Goodenough. This is valuable insight. Knowing that someone with the right programming ability and hardware can do it is already very helpful.

Regarding your comment about the occupants, rest assured that everyone in my household clearly understands the consequences of the project and wishes for it to be realised (provided it is completely localised). There are situations in life where you wish you had a ‘god’s eye’ on you. I hope you never find yourself in such a situation.

God is a professional, not a hobbyist. Keeping his eye on you is a full-time job he does to perfection, consistently, reliably. I trust him far more than an anonymous open-source software update.

Using a hobbyist-style lash-up for life support will slash a few zeros off your support bill, but what if it doesn’t work and you have waited a month for the GitHub changes to be merged? Who do you call?

That is why the item your disability provider supplies costs $5,000 when you can buy it online for $75: certification and support.

What you get here for free comes from anonymous strangers of varying ability and validity. There is no recourse if it is wrong, or AI-generated slop.

Ask your folks: is this something they want to risk?

Then of course there are the weekly updates, along with a list of breaking changes…

Yeah/naah.