Image processing with USB acceleration - all pi! ARCHIVED

Yes, this is the reason the core devs are not on the forums much.

I noticed that the Pi Zero is unofficially supported and this also applies to the USB accelerator. Is using a Pi Zero for this purpose worth it, or do I need to get the Coral stick?

I also have the Google AIY Vision Kit, which does have the "Vision Bonnet", but I currently have no idea if/how this is compatible with Coral. Also @robmarkcole, I saw that you made a REST backend for Coral and the Pi. Do you think it's worth testing this on a Pi Zero W via WiFi? I'd love to integrate this with the Google AIY Kit in any case, but on a Pi Zero I'm thinking it might be very slow. Thanks for your help :smiley:

Hi @codycodes I think the AIY board has a Movidius chip on it, and a completely different codebase. This project requires the Coral USB stick, and should work fine with a Pi Zero, as all the compute is done on the stick. Cheers

I was very excited to receive my Coral stick. Enough that, even though I'm away from home on business travel, I talked my wife through opening up the package and plugging it into the NUC at home so I could play with it remotely.

I'm really impressed at how fast this thing is! You don't get a sense of that just running their demo, since a significant amount of time seems to be spent initializing things, loading the model, etc. But when I ran the simple flask server and bounced images off it, it really flies! Thanks @robmarkcole for the work you did here! I can't wait to play with this further inside of Home Assistant.

I want to pair it up with some cheap RTSP cameras in some rooms around the house to do people detection for occupancy, where lack of movement means the PIR sensors don't work so well. Like when you fall asleep on the couch or something :slight_smile: This platform seems plenty fast enough to take images every few seconds from a bunch of cameras.
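In case it helps anyone setting up the same thing, here's a rough sketch of how a cheap RTSP camera could be defined in Home Assistant so the image processing platform has a source to scan. The generic camera platform and the URLs here are just placeholder examples; your camera's actual snapshot and stream endpoints will differ:

camera:
  - platform: generic
    name: couch_cam
    # Placeholder URLs - substitute your camera's real snapshot/stream endpoints
    still_image_url: http://192.168.1.20/snapshot.jpg
    stream_source: rtsp://192.168.1.20:554/stream1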

I wonder if the COCO model is trained on groundhogs. Would like to see who/what's eating the plants in the garden…


Big fan of all your work in this arena… thanks!!


@lmamakos you might need to add the groundhog :slight_smile: however, it probably looks very similar to a pig?

@robmarkcole,
do you know if the Raspberry Pi 2 is fine to use with the Google Coral TPU USB Accelerator?

Please check the Coral website for supported devices, but I would guess it's fine.

@robmarkcole, I was playing around a bit more, and I believe that the little flask server app returns the coordinates for the bounding boxes incorrectly. I opened an issue on GitHub with a diff of the changes I made that resolved the weird bounding boxes.

I noticed that in the GitHub repo for the Home Assistant component, there was a remark in the Jupyter Notebook about fixing something :slight_smile: I was using that notebook to test the installation of the flask app and probably noticed the same weird bounding box rectangles being drawn on the image.


Will one get a picture returned with bounding boxes when the analysis is done?

That is not implemented yet


Playing with this a bit more. I managed to smash the little flask-based server into a Docker container, just to add it to the zoo of all the random stuff I have running.

For the Home Assistant component, it would be useful to be able to specify a different "target" per image processing entity. For instance, the camera looking at the porch might want to count persons, but another camera pointing at the driveway might want to count cars instead.

I'm not sure if there's some trick here, maybe having this in two different packages, each with a different target?

Also, it may not be obvious to others, but this platform supports the scan_interval: specification in the configuration. I'm scanning 8 cameras every 10 seconds with no noticeable impact. I think I'll maybe take it down to once per second, though I may need to hack the daemon on the other end to stop logging each request, just to keep that under control.
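For reference, a rough sketch of where the scan_interval: option fits; it has the same shape as the config examples later in this thread, and 10 seconds is just the interval I happen to be using:

image_processing:
  - platform: google_coral
    ip_address: 127.0.0.1
    port: 5000
    scan_interval: 10        # poll the source cameras every 10 seconds
    confidence: 45
    target_object: person
    source:
      - entity_id: camera.cam1
        name: cam1_coral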


@lmamakos that's great, can you share the Dockerfile?
For the target, you can see the way this can be implemented here. The approach could be extended to support multiple targets, or a template sensor could be used.
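A rough sketch of the template sensor idea, assuming the component exposes the per-object counts as attributes named after the labels (worth confirming in Developer Tools; the entity and attribute names below are just examples):

sensor:
  - platform: template
    sensors:
      driveway_car_count:
        friendly_name: "Driveway car count"
        # The attribute name 'car' is an assumption - check the entity's
        # attributes in Developer Tools to confirm the actual key
        value_template: "{{ state_attr('image_processing.driveway_coral', 'car') }}"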

With multiple cameras, is there a way to see which camera saw what?

For example, can you tell that your "driveway camera" recognized a "car"?

The Dockerfile is sort of a hack right now - it presumes that you've already unpacked the zip file, and it copies files manually into the container being built. It also assumes that you've retrieved the model files manually. I need to put those steps into the container build to make it a bit more standalone. I've no real experience building Docker images, so I imagine this approach isn't "best practice".

This Dockerfile assumes you're running on an x86_64 architecture CPU, and copies that version of the shared library into the container being built (so it won't work as-is on the Raspberry Pi). Possibly there's some sort of conditional thing in the Dockerfile that allows you to figure that out? It also chooses the "throttled" version of the library to be safe.

I would have tried to just run the installer script, but it is interactive and asks which version of the library you want to install. I thought I could just pre-process the script with a sed command or something to edit that out… but it was easier in the short run to just reproduce its behavior. Which will probably stop working well when future versions of the script do something differently…

I also previously installed the package on the host, and noticed that it fiddles with installing a udev rule that changes permissions on the USB devices so that the plugdev group can access the device. As the container is running as root, I wonder if that's necessary or not?

#
#  Build a container to run the edgetpu flask daemon
#
#  Run it something like:
#
#  docker run -it --rm --name coral -p 5000:5000 --device /dev/bus/usb:/dev/bus/usb:rwm  coral
#
#  It's necessary to pass in the /dev/bus/usb device to communicate with the USB stick.
#

FROM python:3.6
WORKDIR /usr/src/app

# Choose carefully.  Per Google's installer script:
#
# Warning: During normal operation, the Edge TPU Accelerator may heat up, depending
# on the computation workloads and operating frequency. Touching the metal part of the
# device after it has been operating for an extended period of time may lead to discomfort
# and/or skin burns. As such, when running at the default operating frequency, the device is
# intended to safely operate at an ambient temperature of 35C or less. Or when running at
# the maximum operating frequency, it should be operated at an ambient temperature of
# 25C or less.
#
# Google does not accept any responsibility for any loss or damage if the device is operated
# outside of the recommended ambient temperature range.
#
#COPY edgetpu_api/libedgetpu/libedgetpu_x86_64.so   /usr/lib/x86_64-linux-gnu/libedgetpu.so.1.0
COPY edgetpu_api/libedgetpu/libedgetpu_x86_64_throttled.so   /usr/lib/x86_64-linux-gnu/libedgetpu.so.1.0

COPY models requirements.txt coral-app.py   ./

RUN pip install --no-cache-dir -r requirements.txt && \
  ldconfig && \
  apt-get update && \
  apt-get install -y \
    libusb-1.0-0 \
    libc++1 \
    libc++abi1 \
    libunwind8 \
    libgcc1

COPY edgetpu_api/edgetpu-*-py3-none-any.whl /tmp/
RUN pip install --no-deps /tmp/edgetpu-*-py3-none-any.whl

EXPOSE 5000
ENTRYPOINT [ "python", "coral-app.py" ]

@lmamakos Thanks for the post; it sounds like it could be quite complicated to make production-ready owing to the OS, hardware and model combinations. This reference might be useful.

@robmarkcole, yes that looks like a good starting point to adapt from. Thanks for the pointer.


Each camera has its own distinct image_processing entity created. The value of each entity is the count of the target objects detected; however, the attributes associated with that entity include a count of all the recognized objects. For example:

which indicates there are no people (state is a count of zero persons), but there is one potted plant visible on cam3, which looks like this right now

My configuration looks something like this:

image_processing:
  - platform: google_coral
    ip_address: 127.0.0.1
    port: 5000
    confidence: 45
    target_object: person
    source:
      - entity_id: camera.cam1
        name: cam1_coral

      - entity_id: camera.cam3
        name: cam3_coral

Of course, you have to have the small flask web server application running that actually talks to the Google Coral Edge TPU device.


Hi guys,

I set up the USB stick on my RPi 3 together with the flask server and integrated it with HA: works fine!

@robmarkcole: is it possible to specify more than one target, like "person" and "cat" or similar?
Is something like this going to work?

image_processing:
  - platform: google_coral
    ip_address: 127.0.0.1
    port: 5000
    confidence: 45
    target_object: person
    source:
      - entity_id: camera.mycam
        name: mycam_coral_person
  - platform: google_coral
    ip_address: 127.0.0.1
    port: 5000
    confidence: 45
    target_object: cat
    source:
      - entity_id: camera.mycam
        name: mycam_coral_cat

The target specification is what determines what the "state" of the sensor will be - a count of how many of those objects are discovered. But at least the model that I'm using still looks for the other types of objects and returns those as attributes associated with the sensor.
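So as an alternative to duplicating the platform entry, you could hang an automation off those attributes. A rough sketch, assuming the attribute is simply named after the label and that you have some notify service configured (both worth double-checking in your setup):

automation:
  - alias: "Notify when a cat is seen on mycam"
    trigger:
      - platform: template
        # 'cat' attribute name is an assumption - verify it in Developer Tools
        value_template: "{{ (state_attr('image_processing.mycam_coral_person', 'cat') or 0) | int > 0 }}"
    action:
      - service: notify.notify
        data:
          message: "Coral spotted a cat on mycam"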