Motion AI - Detection and classification of entities

Motion 👁

This system is a demonstration and proof-of-concept for edge AI, providing improved situational awareness from a collection of network-accessible video and audio sources.

What is edge AI?

The edge of the network is where connectivity is lost and privacy is challenged.

Low-cost computing (e.g. RaspberryPi, nVidia Jetson Nano, Intel NUC, …) as well as hardware accelerators (e.g. Google Coral TPU, Intel Movidius Neural Compute Stick v2) provide the opportunity to utilize artificial intelligence in the privacy and safety of a home or business.

To provide for multiple operational scenarios and use-cases, e.g. an elder's activities of daily living (ADL), the platform is relatively agnostic toward AI models and hardware, and more dependent on system availability for development and testing.

An AI’s prediction quality is dependent on the variety, volume, and veracity of the training data (n.b. see Understanding AI); the underlying deep convolutional neural networks – and other algorithms – must be trained using information that represents the scenario, use-case, and environment; better predictions come from better information.

The Motion 👁 system provides a personal AI incorporating a wide variety of artificial intelligence, machine learning, and statistical models, as well as a closed-loop learning cycle (n.b. see Building a Better Bot), increasing the volume, variety, and veracity of the corpus of knowledge.

Composition

The motion-ai solution is composed of two primary components:

  • Home Assistant add-ons
  • Open Horizon AI services

Example

The system provides a default display of aggregated information sufficient to understand the level of activity.

A more detailed interface is provided to administrators only and includes both summary and detailed views of the system, with access to NetData and the motion add-on Web interface.

Data may be saved locally and processed to produce historical graphs, as well as exported for analysis using other tools, e.g. the time-series database InfluxDB and the analysis front-end Grafana. Data may also be processed using Jupyter notebooks.
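
For example, a minimal Python sketch (not part of the project) of pushing locally saved event data into InfluxDB; the influxdb 1.x client package is real, but the ./events directory, the device/camera/date keys, and the "motion" database name are assumptions for illustration only:

import glob
import json

from influxdb import InfluxDBClient  # 1.x Python client: pip install influxdb

# Assumptions: motion events saved as JSON files under ./events/, each with
# "device", "camera", and "date" (epoch seconds) keys, and a local InfluxDB
# 1.x server with a "motion" database already created.
client = InfluxDBClient(host="localhost", port=8086, database="motion")

points = []
for path in glob.glob("./events/*.json"):
    with open(path) as f:
        event = json.load(f)
    points.append({
        "measurement": "motion_event",
        "tags": {
            "device": event.get("device", "unknown"),
            "camera": event.get("camera", "unknown"),
        },
        "time": int(event.get("date", 0)),
        "fields": {"count": 1},
    })

if points:
    client.write_points(points, time_precision="s")

Once the points are in InfluxDB, Grafana can chart events per camera over time, or the same JSON files can be loaded into a pandas DataFrame inside a Jupyter notebook.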

Supported architectures include:

CPU only

  • Supports amd64 Architecture - amd64 - Intel/AMD 64-bit virtual machines and devices
  • Supports aarch64 Architecture - aarch64 - ARMv8 64-bit devices
  • Supports armv7 Architecture - armv7 - ARMv7 32-bit devices (e.g. RaspberryPi 3/4)

GPU accelerated

  • Supports tegra Architecture - aarch64 - with nVidia GPU
  • Supports cuda Architecture - amd64 - with nVidia GPU
  • Supports coral Architecture - armv7 - with Google Coral Tensor Processing Unit
  • Supports ncs2 Architecture - armv7 - with Intel/Movidius Neural Compute Stick v2

Installation

Installation is performed in five (5) steps; see detailed instructions.

Recommended hardware: nVidia Jetson Nano (aka tegra)

In addition to the nVidia Jetson Nano developer kit, the following components are also recommended:

  1. 4 amp power-supply
  2. High-endurance micro-SD card; minimum: 32 Gbyte; recommended: 64+ Gbyte
  3. Jumper or wire for enabling power-supply
  4. Fan; 40x20mm; to cool the heat-sink
  5. SSD disk; optional; recommended: 250+ Gbyte
  6. USB3/SATA cable and/or enclosure


Example: Age-At-Home

This system may be used to build solutions for various operational scenarios, e.g. monitoring the elderly to determine patterns of daily activity and alert care-givers and loved ones when aberrations occur; see the Age-At-Home project for more information.


Changelog & Releases

Releases are based on Semantic Versioning, and use the format
of MAJOR.MINOR.PATCH. In a nutshell, the version will be incremented
based on the following:

  • MAJOR: Incompatible or major changes.
  • MINOR: Backwards-compatible new features and enhancements.
  • PATCH: Backwards-compatible bugfixes and package updates.

Author

David C Martin ([email protected])


Contribute:

  • Let everyone know about this project
  • Test a netcam or local camera and let me know

Add motion-ai as upstream to your repository:

git remote add upstream git@github.com:dcmartin/motion-ai.git

Please make sure you keep your fork up to date by regularly pulling from upstream.

git fetch upstream master
git merge upstream/master


CLOC

Files  Language      blank   comment  code
1231   JSON          782     0        91110
459    YAML          9928    46482    90979
32     Bourne Shell  345     207      1789
9      Markdown      276     0        962
3      make          105     68       568
3      Python        11      17       96
1      HTML          19      1        90
-----  ------------  ------  -------  -------
1738   SUM           11466   46775    185594



Hey, this looks great! I am happy to test it tomorrow on my Pi4 (Hassio). I have a Coral - is there any way to use it with Motion AI? If not, will you support it in the future?

Wow! This is exhaustive. I have so many questions, but the most important ones:

  1. How many cameras can this reasonably support on a Pi4?
  2. How much latency for detection events have you noticed on a Pi4?
  3. Does it make sense to have an entire HA instance dedicated to this or should my regular installation support most of this?
  4. Do you use any kind of NVR separately from this?

No support for the Coral USB stick (yet); I don’t have one, so … Working on the Intel NCS2 w/ OpenVINO right now.

  1. I would not recommend more than 2 RTSP cameras on a Pi4; the important consideration is the resolution and frame rate of the camera. The Wyze cameras I use for testing produce 1920x1080 images at 15 FPS; downsampling to 640x480 is a good first step.
  2. For the Pi4 the latency is about 45 seconds, E2E, including generating the GIF from the MP4; a rough way to observe this yourself is sketched after this list.
  3. The simplest installation is all-in-one; you can modify and save through the GUI, but the make process rebuilds everything back to original form, so you would want to capture your edits and change the underlying JSON template files.
  4. I am using Plex to show the saved videos for the annotation events; it’s a complete hack right now…
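
As a rough way to observe that end-to-end latency, here is a minimal Python sketch (not part of motion-ai) using the paho-mqtt 1.x client; the broker host and the motion/+/+/event/end topic are taken from the yolo4motion log quoted later in this thread, and the epoch "date" field in the payload is an assumption:

import json
import time

import paho.mqtt.client as mqtt  # pip install paho-mqtt (1.x callback API)

BROKER = "core-mosquitto"        # assumption: the broker used by the add-ons
TOPIC = "motion/+/+/event/end"   # end-of-event topic seen in the yolo4motion log

def on_connect(client, userdata, flags, rc):
    print("connected rc=%d; subscribing to %s" % (rc, TOPIC))
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    received = time.time()
    try:
        payload = json.loads(msg.payload)
    except ValueError:
        payload = {}
    # If the payload carries an epoch "date" field (an assumption), the
    # difference is a rough end-to-end latency; otherwise just log arrival.
    sent = payload.get("date")
    if isinstance(sent, (int, float)):
        print("%s: ~%.1f s after event date" % (msg.topic, received - sent))
    else:
        print("%s: message received at %s" % (msg.topic, time.ctime(received)))

client = mqtt.Client()
# client.username_pw_set("username", "password")  # if the broker requires auth
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, 60)
client.loop_forever()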

Struggling to get the config for the addon. Even the default config doesn't validate.

Failed to save addon configuration, not a valid value for dictionary value @ data['options']. Got {'log_level': 'info', 'log_motion_level': 'info', 'log_motion_type': 'ALL', 'default': {'changes': 'on', 'event_gap': 30, 'framerate': 5, 'minimum_motion_frames': 25, 'post_pictures': 'best', 'text_scale': 2, 'threshold_percent': 2, 'username': '!secret motioncam-username', 'password': '!secret motioncam-password', 'width': 640, 'height': 480}, 'mqtt': {'host': '!secret mqtt-broker', 'port': '!secret mqtt-port', 'username': '!secret mqtt-username', 'password': '!secret mqtt-password'}, 'group...

OMG! Apologies! Let me check with a fresh VM; maybe I forgot something. I will post an update when I have checked through everything; probably EOD (Wednesday, 7/1).

What was the system? Ubuntu18 or Raspbian Buster? VM or host? AMD64, ARM64 (nVidia Jetson), or ARM7 (Pi3/4 w/ 32-bit kernel)?

The motion and motion-video0 addons have been updated to version 0.10.1 to fix a configuration error; the OOTB configuration should now work properly. For reference, the following is an operational configuration for a Pi4 (device: pi42) with two RTSP cameras defined.

log_level: info
log_motion_level: info
log_motion_type: ALL
default:
  changes: 'on'
  event_gap: 30
  framerate: 5
  minimum_motion_frames: 25
  post_pictures: best
  text_scale: 2
  threshold_percent: 2
  username: '!secret motioncam-username'
  password: '!secret motioncam-password'
  netcam_userpass: '!secret netcam-userpass'
  width: 640
  height: 480
mqtt:
  host: '!secret mqtt-broker'
  port: '!secret mqtt-port'
  username: '!secret mqtt-username'
  password: '!secret mqtt-password'
group: motion
device: pi42
client: pi42
timezone: America/Los_Angeles
cameras:
  - name: dogshed
    type: netcam
    netcam_url: 'rtsp://192.168.1.221/live'
  - name: sheshed
    type: netcam
    netcam_url: 'rtsp://192.168.1.223/live'

Another configuration setup bug: the configuration should refer to the secret netcam-userpass, not netcam_userpass. Now at version 0.10.2.

I’m getting an error as well trying to save the configuration:

Failed to save addon configuration, not a valid value for dictionary value @ data['options']. Got {'log_level': 'info', 'log_motion_level': 'info', 'log_motion_type': 'ALL', 'default': {'changes': 'on', 'event_gap': 30, 'framerate': 5, 'minimum_motion_frames': 25, 'post_pictures': 'best', 'text_scale': 2, 'threshold_percent': 2, 'username': '!secret motioncam-username', 'password': '!secret motioncam-password', 'netcam_userpass': '!secret netcam-userpass', 'width': 640, 'height': 480}, 'mqtt': {'host': '!secret mqtt-broker', 'port': '!secret mqtt-port', 'username': '!secret mqtt-username'...

Did you run make in the top-level directory? It could be that the secrets are not defined; if you don’t run make then the secrets.yaml file will not be created from the secrets.yaml.tmpl file.

I installed the motion-ai server addon, that’s all. My system is Ubuntu 18.04.4 LTS running supervised Home Assistant in Docker. Is there something else I need to do besides just installing the addon? I also tried not using secrets and putting the information directly in the configuration, and that didn’t work either.

Wow, excellent job!
I think I will give it a try. Currently on DeepStack; can you tell, from a performance aspect, which is faster/lighter?

Hi, did you manage to get it working?


I did not, though I do realize why it isn’t working. Based on the YouTube video, motion-ai needs a fresh install of Home Assistant using the bash scripts in the motion-ai GitHub repository, and it is beyond my knowledge how to add it to an existing install of Home Assistant.

Can you please advise how to proceed?
I’ve successfully installed and configured the motion server (for network cameras); I do see its stream on port 8090/1.
I’ve configured and run the yolo4motion docker container; it is running and I get output with curl on port 4662, but all I see is repetitive output:

Unable to connect (Lookup error.).
Unable to connect (Lookup error.).
yolo4motion.sh 43 [2020-07-14T21:41:32Z] NOTICE >>> Listening to MQTT host: core-mosquitto; topic: motion/+/+/event/end
yolo4motion.sh 43 [2020-07-14T21:41:32Z] NOTICE >>> Announcing on MQTT host: core-mosquitto; topic: service/yolo4motion/0673b3b355a0; message: {"config":{"timestamp":"2020-07-14T21:31:12Z","log_level":"info","debug":false,"group":"motion","client":"+","camera":"+","event":"event/end","old":500,"payload":"image/end","topic":"motion/+/+","services":[{"name":"mqtt","url":"http://mqtt"}],"mqtt":{"host":"core-mosquitto","port":1883,"username":"XXXXX","password":"XXXX"},"yolo":{"log_level":"info","debug":false,"timestamp":"2020-07-14T21:31:59Z","date":1594762319,"period":60,"entity":"all","scale":"none","config":"tiny-v2","services":[{"name":"mqtt","url":"http://mqtt"}],"darknet":{"threshold":0.25,"weights_url":"http://pjreddie.com/media/files/yolov2-tiny-voc.weights","weights":"/openyolo/darknet/yolov2-tiny-voc.weights","weights_md5":"fca33deaff44dec1750a34df42d2807e","cfg":"/openyolo/darknet/cfg/yolov2-tiny-voc.cfg","data":"/openyolo/darknet/cfg/voc.data","names":"/openyolo/darknet/data/voc.names"},"names":["aeroplane","bicycle","bird","boat","bottle","bus","car","cat","chair","cow","diningtable","dog","horse","motorbike","person","pottedplant","sheep","sofa","train","tvmonitor"]}},"service":{"label":"yolo4motion","version":"","port":80},"hostname":"0673b3b355a0"}
Unable to connect (Lookup error.).
Unable to connect (Lookup error.).

I added an MQTT camera:

- platform: mqtt
  name: yolo2msghub
  topic: 'yolo2msghub/image'

but I got nothing.

Can you please suggest how to get the processed image and what was detected?

Thanks!

Is there a walkthrough on how to add this to an existing hass.io installation?


I’d also like a walkthrough.

Me too; the addon doesn’t work on an existing install.