Yep, I had a chat with a Google engineer as well, but he is also telling me to ask VMware…
The error you pasted above seems to happen for empty/unused VM PCI slots in my case.
Could you double-check with the lspci command which PCI ID your Coral is installed at?
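On my system, something like this finds it (if I remember right, the Coral PCIe module reports vendor:device ID 1ac1:089a, Global Unichip Corp., though your slot address will likely differ from mine):

lspci -nn | grep 089a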
Mine fails with this in dmesg:
[5.589867] apex 0000:03:00.0: Page table init timed out
[5.590320] apex 0000:03:00.0: MSI-X table init timed out
[5.603491] apex: probe of 0000:03:00.0 failed with error -110
Doesn’t look like there’s much interest from Google in fixing the issue, and this is beyond my expertise, so we’re going to have to wait until more people get their hands on this module. Also, VM passthrough for a device like this shouldn’t be such a crazy idea…
I’m sure VMware is just going to refer me back to Google
Very little. The debug endpoint is efficient since it is just JSON data. Viewing the camera feed generates load because it creates a JPEG image every second.
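If you want to see what it returns, you can hit the endpoint directly (substituting your own host; 5000 is the default port):

curl http://<frigate-host>:5000/debug/stats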
Sure. I used the bottom-most example on the rest sensor page – the one that uses a rest sensor and 3 template sensors for 3 bedrooms. The benefit is that it only makes a single REST call to the debug endpoint and then uses the template sensors to extract the data; the sketch below shows the idea.
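It ends up looking roughly like this (a sketch, not my exact config – the camera names and JSON keys are placeholders, so check what your own /debug/stats endpoint actually returns):

sensor:
  - platform: rest
    name: frigate_debug
    resource: http://<frigate-host>:5000/debug/stats
    json_attributes:
      - front_door
      - driveway
      - coral
    value_template: "OK"
  - platform: template
    sensors:
      front_door_fps:
        value_template: "{{ state_attr('sensor.frigate_debug', 'front_door')['fps'] }}"
        unit_of_measurement: fps
      driveway_fps:
        value_template: "{{ state_attr('sensor.frigate_debug', 'driveway')['fps'] }}"
        unit_of_measurement: fps
      # …and one more template sensor per extra camera, plus Coral stats:
      coral_fps:
        value_template: "{{ state_attr('sensor.frigate_debug', 'coral')['fps'] }}"
        unit_of_measurement: fps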
I have 3 cameras and wanted to see the FPS for each of those, as well as some Coral stats.
It probably won’t help in Docker for Windows because the container will still be Linux. These seem to be for running Python directly on Windows or macOS.
How do you enable the debug page? I can’t find it anywhere on the GitHub. I have the latest installed, and if I go to ipaddress:5000/debug/stats I get nothing… everything is working, though.
Just got the 0.4.0 beta running on my rpi4 and it’s running beautifully, thanks Blake! Four low-res streams @ 6 fps with an average CPU load of 25%, compared to 70%+ before.
For anyone interested, here are the tweaks I made to the Dockerfile. This was a bit of trial and error, so there is probably a more elegant way.
I just got this set up today as a replacement for ZoneMinder and it is wonderful!
I have one camera that outputs MJPEG instead of H.264. Is there a way for me to use this camera? I have tried setting it up with the default options and, as I expected, it doesn’t work…
Thanks for sharing! I’m going to give it a go, because with my current setup, using the standard camera integration, I’m regularly getting hit by this:
@blakeblackshear - FYI, a few nights ago car thieves visited my front yard and frigate worked like a charm! I was calling the police seconds after they approached my car. Unfortunately, they escaped before the police arrived; sometimes I wish I was living in the US…
Thanks for developing it and sharing with the community!
Quick question @blakeblackshear: how feasible would it be for a future version to enable the use of multiple models with the Coral?
I ask because a use case that would be beneficial for me would be to use object/person detection on several of my cameras but use person AND face detection specifically at my doorbell camera. I’d love to be able to pass a best_face image to a facial recognition Docker container and have Home Assistant send me an alert. I already do facial recognition with the doorbell camera now, but since the Coral has been so fast and reliable at detecting objects/people, it might work better to detect a face as someone approaches the door rather than relying on an external trigger (like a motion sensor or doorbell button) to take a snapshot and pass it to my facial recognition container.
Maybe there’s an easy way to combine TensorFlow models that I don’t know about? Having “person” and “face” in one model would be great and probably wouldn’t require any changes to frigate.
That is already on my mental road map. Combined with object tracking, I can find the best face image associated with that person. The Coral can support multiple models, but the switching cost is high. I will need to be smart about when to use face detection and ultimately face recognition down the line.
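For anyone curious what the multi-model case looks like in code, here is a rough sketch – not frigate’s internals, and it uses Google’s newer pycoral library with made-up model file names and doorbell logic – of why alternating models is expensive: every switch forces the Edge TPU to reload the other model’s weights.

# Sketch only: two Edge TPU models behind a single Coral via pycoral.
# The model file names below are placeholders, not real frigate models.
from pycoral.utils.edgetpu import make_interpreter
from pycoral.adapters import common, detect

# Both models are loaded up front; the cost shows up at inference time,
# because invoking them alternately makes the TPU swap weights each time.
person_model = make_interpreter("person_detection_edgetpu.tflite")
face_model = make_interpreter("face_detection_edgetpu.tflite")
person_model.allocate_tensors()
face_model.allocate_tensors()

def run_detection(interpreter, frame, threshold=0.5):
    # frame must already match the model's input shape (e.g. 300x300x3)
    common.set_input(interpreter, frame)
    interpreter.invoke()
    return detect.get_objects(interpreter, score_threshold=threshold)

def process_frame(frame, is_doorbell):
    people = run_detection(person_model, frame)
    # Only pay the model-swap penalty when it matters:
    # a person detected at the doorbell camera.
    if is_doorbell and people:
        return run_detection(face_model, frame)
    return people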