Does anyone here use motioneye?
I have installed motionEye with snap on an Ubuntu server.
I have also added the motionEye integration and everything looks fine.
I can now connect the motion detection to a dashboard in Home Assistant, and later I'm going to try to connect it to Node-RED.
But I have noticed that motion detection does not work and I'm not sure why.
I have set the right settings as far as I know, but it still does not record anything and my Home Assistant does not report anything.
Frame rate and minimum motion frames.
If the camera frame rate is 20 fps and minimum motion frames is 10, that means motionEye must see changed frames for 1/2 second. But if the camera frame rate is 10 or 15 fps, motion must be seen for a longer period. A fast-moving object may be missed, or an object that moves and then sits still may not trigger detection.
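A quick back-of-the-envelope check (example numbers only, not anyone's actual settings):

```python
# How long motion must persist before it counts, using the example
# numbers from above -- swap in your own settings.
frame_rate = 20             # camera frames per second
minimum_motion_frames = 10  # changed frames required to trigger

min_motion_duration = minimum_motion_frames / frame_rate
print(f"Motion must persist for at least {min_motion_duration:.2f} s")  # 0.50 s
# At 10 fps the same setting needs a full second of continuous change.
```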
The object's size in the camera view relates to the frame change threshold. A far-away object will change only a few pixels and a close object will change many. For example, if the threshold is 5% and the resolution is 100 px x 100 px, the object must change 500 pixels (5% of 10,000).
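Same idea for the threshold, with the same toy numbers:

```python
# Pixels that must change per frame to cross the frame change threshold,
# using the toy resolution above -- a real camera has far more pixels.
width, height = 100, 100  # example resolution
threshold_percent = 5     # "Frame Change Threshold" in motionEye

total_pixels = width * height                        # 10,000
threshold_pixels = total_pixels * threshold_percent / 100
print(f"{threshold_pixels:.0f} of {total_pixels} pixels must change")  # 500
```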
Turn ON “Show Frame Changes” and use that to help determine the issue.
I presume the time overlay was causing false positives, so that's why the mask was added.
Turn on “record always” and verify it has write permission to the disk.
If there's no error, maybe it's just not detecting. Motion detection was never great for me, so I recorded always and then moved to Frigate since it's better all around (no slow-loading archive video).
Hi, and thanks for the reply.
This is a little new for me, but I do understand what you mean, and I have tried to change some settings that I found other people use.
I have changed the frame rate from 2 to 15 to see what happens.
It does have write permission. My other camera recorded a lot, but nothing was in the picture, so I guess it was too sensitive.
I have changed the video device to this to see what happens.
From what I can see in Home Assistant it is still too sensitive; motion is always active, so I think I have to find a good guide.
Motion is detected by looking at changes in individual pixels. Imagine looking at the camera on a TV. Let's say the image is 1920x1080 pixels. If the image starts out all black and suddenly a white ball rolls across the screen, motion is detected when black pixels become white. motionEye doesn't understand or care about the object; it just knows pixels went from black to white.
If the ball is far away, it will look smaller. Since it looks smaller, it takes up fewer pixels. If it rolls closer, it takes up more pixels. This is where the frame change threshold has meaning: how many pixels change from black to white between each frame of video.
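To make that concrete, here is a rough sketch of the idea in Python/NumPy (just an illustration of counting changed pixels, not motionEye's actual code; the noise level value is made up):

```python
import numpy as np

def changed_pixels(prev_frame: np.ndarray, frame: np.ndarray, noise_level: int = 32) -> int:
    """Count pixels whose grayscale value changed by more than noise_level."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return int(np.count_nonzero(diff > noise_level))

# Example: an all-black frame, then the same frame with a small white "ball".
prev_frame = np.zeros((1080, 1920), dtype=np.uint8)
frame = prev_frame.copy()
frame[500:540, 900:940] = 255            # 40x40 white square = 1600 changed pixels

threshold_pixels = 0.05 * frame.size      # 5% frame change threshold
print(changed_pixels(prev_frame, frame) > threshold_pixels)  # False: object too small/far away
```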
Motion gap: how much time must pass before a new motion event will be detected
Minimum motion frames: how many frames need to show change before motion is detected
And try turning on “Show Frame Changes”; it will show you an on-screen counter that represents the number of changed pixels, enabling you to fine-tune 😉
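If it helps, here is a very simplified sketch of how minimum motion frames and the motion gap could combine into events (my own simplification for illustration, not motion's real logic):

```python
def motion_events(changed_flags, fps, minimum_motion_frames=10, motion_gap_s=30):
    """Return (start, end) frame indices of motion events from a per-frame
    'enough pixels changed?' flag list. Simplified illustration only."""
    events, run, quiet, start = [], 0, 0, None
    for i, changed in enumerate(changed_flags):
        if changed:
            run += 1
            quiet = 0
            if start is None and run >= minimum_motion_frames:
                start = i - run + 1              # enough changed frames: event begins
        else:
            run = 0
            quiet += 1
            if start is not None and quiet >= motion_gap_s * fps:
                events.append((start, i))        # quiet for the whole gap: event ends
                start = None
    if start is not None:
        events.append((start, len(changed_flags) - 1))
    return events

# 2 seconds of motion at 20 fps followed by a long quiet period
flags = [True] * 40 + [False] * 700
print(motion_events(flags, fps=20))  # [(0, 639)] -> one motion event
```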
Thanks
Now I understand more about how it works.
I did not know that I needed to use a webhook so that Home Assistant understands something has happened in motionEye.
I just thought it sent the info externally on its own.
Yeah, no issues here:
It uses a webhook from motionEye.
The automation is a bit messy, as it was originally a blueprint that I adapted into an automation.
For clarification, the first device is an Android phone and the second is an iOS device; they differ a bit because they use a different URL for the message.
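For anyone setting this up, the webhook side is just an HTTP POST to Home Assistant. Here is a rough sketch of what that call looks like, roughly what motionEye's web hook motion notification does; the host and webhook ID below are placeholders, not the ones from my setup:

```python
import requests

# Placeholder values -- replace with your own Home Assistant host and the
# webhook ID used in the automation's webhook trigger.
HA_BASE_URL = "http://homeassistant.local:8123"
WEBHOOK_ID = "motioneye_camera1_motion"

def notify_motion(camera: str) -> None:
    """POST to a Home Assistant webhook endpoint when motion starts."""
    url = f"{HA_BASE_URL}/api/webhook/{WEBHOOK_ID}"
    requests.post(url, json={"camera": camera, "event": "motion_start"}, timeout=5)

if __name__ == "__main__":
    notify_motion("camera1")
```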