Image processing with USB acceleration - all pi! ARCHIVED

Would I have to run “python3 coral-app.py” as a service?

Essentially, that’s the effect you want. What this means in the Linux world is that the process you start isn’t connected to an interactive session. If your SSH session times out or is disconnected, then all of the processes started by it (“in that terminal process group”) will get a hang-up (“SIGHUP”) UNIX signal sent to them, which will probably cause them to exit. Or the standard input/output is no longer valid and an error occurs the next time the process tries to write to it.

You can start it manually using a tool like daemon, which does the extra work to isolate it from your terminal session, or have the system start it as a service using systemd when it boots. Maybe start by looking at https://www.raspberrypi.org/documentation/linux/usage/systemd.md for example.
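For reference, a unit file along these lines should do it (the service name, user, and paths here are assumptions; adjust them to wherever coral-app.py actually lives), saved as /etc/systemd/system/coral-app.service:

[Unit]
Description=Coral REST API server
After=network.target

[Service]
# Adjust User, WorkingDirectory and ExecStart to match your install
User=pi
WorkingDirectory=/home/pi/coral-pi-rest-server
ExecStart=/usr/bin/python3 coral-app.py
Restart=on-failure

[Install]
WantedBy=multi-user.target

Then enable it with sudo systemctl enable --now coral-app.service so it starts on every boot.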

You can also use nohup.
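For example (the log file name is just a placeholder):

nohup python3 coral-app.py > coral-app.log 2>&1 &

The trailing & puts the process in the background, and nohup keeps it from receiving SIGHUP when you log out.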


Thank you for the advice on “nohup”; it solved my problem.


For completeness I’ll mention screen. You’ll find lots of how-tos, but essentially if your session disappears (e.g. you log out or your SSH session gets interrupted) then any process running under screen stays alive and can be reconnected to.
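A minimal sequence looks like this (the session name is arbitrary):

screen -S coral        # start a named session
python3 coral-app.py   # run the server inside it; press Ctrl-a then d to detach
screen -r coral        # reattach later, even from a new SSH session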

FYI this project is now merged with my Deepstack Object integration (forum thread).
Cheers

hi @robmarkcole,

so the HASS-Google-Coral custom component is not working anymore?
I see it does not work with the newer coral-rest-server.

Migration to Deepstack is great, but as far as I understand this requires a decent amount of RAM even when using the Coral USB stick, doesn’t it?

You can simply use the newer version of the Google Coral TPU API server (which responds to the same API requests as Deepstack) with the Deepstack Home Assistant component. The Coral API server is now mostly “plug compatible” with the Deepstack server, though you need to set it up separately with the model you want to use, etc. It’s working great for me!


Hi Louis,

Thanks for this.
So basically you mean: update to the latest version of this:
https://github.com/robmarkcole/coral-pi-rest-server
set up locally the model I want to use (which I already did, but still on version 0.4),

and use the Deepstack component, which will send requests to the above server (based on TensorFlow and running inference on the USB stick) instead of running Deepstack itself.

Correct?

I also have a question for @robmarkcole: if I understand correctly, TensorFlow Lite uses 300x300 images for processing, is that correct?
Or does it use full-resolution images from the cameras?

@mspinolo your understanding is correct. Also yes, images are resized; this is true of all models.
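To illustrate what that resize means in practice, here is a rough Python sketch (the file name is a placeholder, and this just mimics the scaling the detector applies to its input):

from PIL import Image

frame = Image.open("snapshot.jpg")     # e.g. a 1920x1080 camera frame
small = frame.resize((300, 300))       # roughly what the detector actually sees
small.save("model_input_preview.jpg")  # save a preview of the downscaled input

A person occupying a small corner of the frame ends up only a few pixels wide after this step, which is why distant objects can be missed.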

What are the advantages/disadvantages when compared to Frigate?

Frigate allows processing on a stream (i.e. high FPS), but this comes at the cost of a more complex setup.


Is there any way to increase the size of images?
Or does it require a new model?

You could crop the regions of interest you care about; this is what Frigate allows you to do. You should use the proxy camera for cropping.
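As a sketch of the idea (the file name and crop box are made-up values), cropping the region of interest before it is sent for detection means the 300x300 resize keeps far more detail on the objects you care about:

from PIL import Image

frame = Image.open("snapshot.jpg")       # full-resolution camera frame
roi = frame.crop((600, 300, 1200, 900))  # (left, upper, right, lower) around the area to watch
roi.save("roi.jpg")                      # send this crop to the detection API instead of the full frame

The proxy camera mentioned above is the Home Assistant way to get the same effect without extra scripting.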

OK, I published release v0.7 which works with Buster and the Pi 4, so now you can use any of Pi Zero/Pi 3/Pi 4. The setup process is also simplified by making use of a disk image released by Google. Inference times on the Pi 3 are around half a second (when queried from another computer), so this is suitable for 1 FPS image processing in Home Assistant. Surprisingly, inference times are not any faster on a Pi 4 despite the USB 3; I am chasing this up with Google but suspect USB 3 support is not yet implemented in the Coral library.
Cheers

Hi @robmarkcole, is facial recognition still planned? Or should I rather go with Deepstack and a Movidius stick?

Well, if you have a Movidius you should try out Deepstack. I am using Coral with Deepstack as performance is better (at the moment). I may implement face recognition, but there are already multiple solutions for that, so I may do something else instead.

This is some great work but I’m struggling to get things running, although I feel I’m pretty close!

I’ve got the coral-pi-rest-server running on a Pi and I can call it from my NUC running HA and get a response/result using the following command:

curl -X POST -F [email protected] 'http://192.168.1.149:5000/v1/vision/detection'

I’ve got a camera (Xiaofang with fanghacks) set up in Home Assistant and displayed on a card with the following configuration. I use Zoneminder to provide the jpg snapshot, but I get the stream directly from the camera:


camera:
  - platform: generic
    still_image_url: https://192.168.1.146/cgi-bin-zm/nph-zms?mode=single&monitor=3&user=xxxxxxxx&pass=xxxxxxxx
    stream_source: rtsp://192.168.1.148/unicast
    name: fang
    verify_ssl: false

The yaml for the deepstack custom component looks like:

image_processing:
  - platform: deepstack_object
    ip_address: 192.168.1.149
    port: 5000
    scan_interval: 5000
    save_file_folder: /var/www/deepstack_person_images
    target: person
    confidence: 50
    source:
      - entity_id: camera.fang
        name: person_detector

All I get in HA is the card with “person_detector Unknown” and the following when I click on the card:

target: person
target confidences:
all predictions: {}
save file folder: /var/www/deepstack_person_images/

And the only warning I get in the HA log is:

2019-09-08 00:57:46 WARNING (MainThread) [homeassistant.loader] You are using a custom integration for deepstack_object which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you do experience issues with Home Assistant.

I’ve been through the READMEs on both githubs, but can’t see where I might be going wrong.

You have set a 5000 second scan interval, so reduce that or call the scan service with an automation.
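A rough sketch of such an automation (the entity id follows from the name: person_detector in your config, and the once-a-minute interval is just an example):

automation:
  - alias: Scan camera for people
    trigger:
      - platform: time_pattern
        minutes: "/1"    # fire once a minute
    action:
      - service: image_processing.scan
        entity_id: image_processing.person_detector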

That did the trick! For some reason I had it in my head that the interval was in ms.
