Motion Ã👁
This system is a demonstration and proof-of-concept for edge AI providing improved situational awareness from a collection of network-accessible video and audio sources.
What is edge AI?
The edge of the network is where connectivity is lost and privacy is challenged.
Low-cost computing (e.g. RaspberryPi, nVidia Jetson Nano, Intel NUC, …) as well as hardware accelerators (e.g. Google Coral TPU, Intel Movidius Neural Compute Stick v2) provide the opportunity to utilize artificial intelligence in the privacy and safety of a home or business.
To support multiple operational scenarios and use-cases, e.g. monitoring an elder’s activities of daily living (ADL), the platform is relatively agnostic toward AI models and hardware, and depends more on system availability for development and testing.
An AI’s prediction quality is dependent on the variety, volume, and veracity of the training data (n.b. see Understanding AI); the underlying deep convolutional neural networks – and other algorithms – must be trained using information that represents the scenario, use-case, and environment; better predictions come from better information.
The Motion Ã👁 system provides a personal AI incorporating a wide variety of artificial intelligence, machine learning, and statistical models, as well as a closed-loop learning cycle (n.b. see Building a Better Bot), increasing the volume, variety, and veracity of the corpus of knowledge.
Composition
The motion-ai solution is composed of two primary components:
- Home Assistant - open-source home automation system
- Open Horizon - edge AI platform
Home Assistant add-ons:
- motion - add-on for Home Assistant - captures images and video of motion (n.b. motion-project.github.io)
- MQTT - messaging broker (see the example below)
- FTP - optional; only required for ftpd type cameras
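Since the add-ons communicate through the MQTT broker, a quick way to confirm that the motion add-on is capturing and publishing events is to subscribe to the broker directly. This is a minimal sketch; the broker host, port, credentials, and wildcard topic are assumptions to be replaced with the values from your own configuration.

```bash
# Subscribe to every topic on the broker and print topic + payload (-v) to verify
# that the motion add-on is publishing events; the host, port, and credentials are
# assumptions for a typical local installation, not values from this repository.
mosquitto_sub -h localhost -p 1883 -u username -P password -t '#' -v
```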
Open Horizon AI services:
- yolo4motion - object detection and classification (see the sketch after this list)
- face4motion - face detection
- alpr4motion - license plate detection and classification
- pose4motion - human pose estimation
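These services consume the motion events over MQTT and publish their inferences back to the broker as JSON. As a sketch of inspecting that output (the topic pattern and payload field names below are assumptions, not the documented interface), the detections could be filtered with jq:

```bash
# Watch inference payloads and extract the detected entity names; the topic pattern
# and the ".detected[].entity" field names are assumptions - adjust them to match
# the topics and JSON structure your services are actually configured to publish.
mosquitto_sub -h localhost -t '+/+/+/event/end' | jq -r '.detected[]?.entity'
```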
Status
Videos
Example
The system provides a default display of aggregated information sufficient to understand the level of activity.
A more detailed interface is provided to administrators only, and includes both summary and detailed views for the system, including access to NetData and the motion add-on Web interface.
Data may be saved locally and processed to produce historical graphs as well as exported for analysis using other tools, e.g. time-series database InfluxDB and analysis front-end Grafana. Data may also be processed using Jupyter notebooks.
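For instance, if events are being exported to an InfluxDB 1.x instance, its HTTP API can be queried directly for an hourly activity count; the database and measurement names here are assumptions and should be replaced with those used in your export configuration.

```bash
# Count events per hour over the last day using the InfluxDB 1.x HTTP API.
# The database ("motion") and measurement ("events") names are assumptions.
curl -sG 'http://localhost:8086/query' \
  --data-urlencode 'db=motion' \
  --data-urlencode 'q=SELECT count(*) FROM "events" WHERE time > now() - 1d GROUP BY time(1h)'
```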
Supported architectures include:
CPU only
- amd64 - Intel/AMD 64-bit virtual machines and devices
- aarch64 - ARMv8 64-bit devices
- armv7 - ARMv7 32-bit devices (e.g. RaspberryPi 3/4)
GPU accelerated
- aarch64 - with nVidia GPU
- amd64 - with nVidia GPU
- armv7 - with Google Coral Tensor Processing Unit
- armv7 - with Intel/Movidius Neural Compute Stick v2
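To determine which of the targets above applies to a given device, the machine type reported by the kernel can be mapped onto the architecture names used here; this is a small sketch covering only the common cases.

```bash
# Map the kernel's reported machine type onto the architecture names listed above.
case "$(uname -m)" in
  x86_64)        echo "amd64"   ;;  # Intel/AMD 64-bit
  aarch64|arm64) echo "aarch64" ;;  # ARMv8 64-bit
  armv7l)        echo "armv7"   ;;  # ARMv7 32-bit (e.g. RaspberryPi 3/4)
  *)             echo "unknown: $(uname -m)" ;;
esac
```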
Installation
Installation is performed in five (5) steps; see detailed instructions.
Recommended hardware: nVidia Jetson Nano (aka tegra)
In addition to the nVidia Jetson Nano developer kit, there are also the following recommended components:
- 4 amp power-supply
- High-endurance micro-SD card; minimum: 32 Gbyte; recommended: 64+ Gbyte
- Jumper or wire for enabling power-supply
- Fan; 40x20mm; to cool the heat-sink
- SSD disk; optional; recommended: 250+ Gbyte
- USB3/SATA cable and/or enclosure
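Once the hardware is assembled, a typical starting point is to get the repository onto the device; the detailed instructions remain the authoritative reference for the five steps themselves, so this is only a starting sketch.

```bash
# Clone the repository onto the device; consult the detailed instructions for the
# remaining installation steps (this is a starting point, not the full procedure).
git clone https://github.com/dcmartin/motion-ai.git
cd motion-ai
```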
Example: Age-At-Home
This system may be used to build solutions for various operational scenarios, e.g. monitoring the elderly to determine patterns of daily activity and alerting care-givers and loved ones when aberrations occur; see the Age-At-Home project for more information; example below:
Changelog & Releases
Releases are based on Semantic Versioning, and use the format of MAJOR.MINOR.PATCH. In a nutshell, the version will be incremented based on the following:
- MAJOR: Incompatible or major changes.
- MINOR: Backwards-compatible new features and enhancements.
- PATCH: Backwards-compatible bugfixes and package updates.
Author
David C Martin ([email protected])
Contribute:
- Let everyone know about this project
- Test a netcam or local camera and let me know
Add motion-ai as upstream to your repository:
git remote add upstream git@github.com:dcmartin/motion-ai.git
Please make sure you keep your fork up to date by regularly pulling from upstream.
git fetch upstream master
git merge upstream/master
Stargazers
CLOC
| Files | Language | blank | comment | code |
|---|---|---|---|---|
| 1231 | JSON | 782 | 0 | 91110 |
| 459 | YAML | 9928 | 46482 | 90979 |
| 32 | Bourne Shell | 345 | 207 | 1789 |
| 9 | Markdown | 276 | 0 | 962 |
| 3 | make | 105 | 68 | 568 |
| 3 | Python | 11 | 17 | 96 |
| 1 | HTML | 19 | 1 | 90 |
| 1738 | SUM | 11466 | 46775 | 185594 |