Yes this is the reason the core devs are not on the forums much
I noticed that the Pi Zero is unofficially supported and this also applies to the USB accelerator. Is using a Pi Zero for this purpose worth it, or do I need to get the Coral stick?
I also have the Google AIY Vision Kit, which does have the "Vision Bonnet", but I have no idea if/how this is compatible with Coral. Also @robmarkcole, I saw that you made a REST backend for Coral and the Pi. Do you think it's worth testing this on a Pi Zero W via WiFi? I'd love to integrate this with the Google AIY kit in any case, but on a Pi Zero I'm thinking it might be very slow. Thanks for your help!
Hi @codycodes I think the AIY board has a Movidius chip on it, and a completely different codebase. This project requires the Coral USB stick, and should work fine with a pi zero, as all the compute is done on the stick. Cheers
I was very excited to receive my Coral stick. Excited enough that, even though I'm away from home on business travel, I talked my wife through opening up the package and plugging it into the NUC at home so I could play with it remotely.
I'm really impressed at how fast this thing is! You don't get a sense of that just running their demo, since a significant amount of time seems to be spent initializing things, loading the model, etc. But when I ran the simple flask server and bounced images off it, it really flies! Thanks @robmarkcole for the work you did here! I can't wait to play with this further inside of Home Assistant.
I want to pair it up with some cheap RTSP cameras in some rooms around the house to do person detection for occupancy, where lack of movement means the PIR sensors don't work so well. Like when you fall asleep on the couch or something. This platform seems plenty fast enough to take images every few seconds from a bunch of cameras.
I wonder if the COCO model is trained on groundhogs. Would like to see who/what's eating the plants in the garden…
Big fan of all your work in this arena… thanks!!
@lmamakos you might need to add the groundhog class yourself; however, it probably looks very similar to a pig?
@robmarkcole, do you know if the Raspberry Pi 2 is fine to use with the Google Coral TPU USB Accelerator?
Please check the Coral website for supported devices, but I would guess it's fine.
@robmarkcole, I was playing around a bit more, and I believe that the little flask server app returns the bounding box coordinates incorrectly. I opened an issue on GitHub with a diff of the changes I made which resolved the weird bounding boxes.
I noticed that in the GitHub repo for the Home Assistant component, there was a remark in the Jupyter notebook about fixing something. I was using that notebook to test the installation of the flask app and probably noticed the same weird bounding box rectangles being drawn on the image.
Will one get a picture returned with bounding boxes when the analysis is done?
That is not implemented yet
Playing with this a bit more. I managed to smash the little flask-based server into a Docker container, just to add it to the zoo of all the random stuff I have running.
For the Home Assistant component, it would be useful to be able to specify a different "target" per image processing entity. For instance, the camera looking at the porch might want to count persons, but another camera pointing at the driveway might want to count cars instead.
I'm not sure if there's some trick, maybe having this in two different packages, each with a different target?
Also, it may not be obvious to others, but this platform supports the scan_interval: specification in the configuration. I'm scanning 8 cameras every 10 seconds with no noticeable impact. I think I'll maybe take it down to once per second, though I may need to hack the daemon on the other end to stop logging each request just to keep that under control.
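For reference, here is a minimal sketch of where scan_interval sits, assuming the same google_coral platform and a hypothetical camera name; scan_interval is the standard Home Assistant polling option, given in seconds:

image_processing:
  - platform: google_coral
    ip_address: 127.0.0.1
    port: 5000
    scan_interval: 10        # poll each source camera every 10 seconds
    target_object: person
    source:
      - entity_id: camera.cam1
        name: cam1_coral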
@lmamakos that's great, can you share the Dockerfile?
For target, you can see the way this can be implemented here. The approach could be extended to support multiple targets, or a template sensor could be used.
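For example, a template sensor can pull the count of a second object class out of the attributes of an existing image_processing entity. A rough sketch, assuming a driveway entity named image_processing.driveway_coral and that car counts are exposed under a car attribute (both names are assumptions, not confirmed by the component):

sensor:
  - platform: template
    sensors:
      driveway_car_count:
        friendly_name: "Cars on the driveway"
        # read the 'car' attribute from the image_processing entity; fall back to 0 if absent
        value_template: "{{ state_attr('image_processing.driveway_coral', 'car') | default(0, true) }}"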
With multiple cameras, is there a way to see which camera saw what?
For example, can you tell that your "driveway camera" recognized a "car"?
The Dockerfile is sort of a hack right now - it presumes that you've already unpacked the zip file, and copies files manually into the container being built. It also assumes that you've retrieved the model files manually. I need to put that step into the container build to make it a bit more standalone. I've no real experience building Docker images, so I imagine this approach isn't "best practice".
This Dockerfile assumes you're running on an x86_64 architecture CPU, and copies that version of the shared library into the container being built (so it won't work as-is on the Raspberry Pi). Possibly there's some sort of conditional thing in the Dockerfile that would let you figure that out? It also chooses the "throttled" version of the library to be safe.
I would have tried to just run the installer script, but it is interactive and asks which version of the library you want to install. I thought I could pre-process the script with a sed command or something to edit that out… but it was easier in the short run to just reproduce its behavior. Which will probably stop working well when future versions of the script do something differently…
I also previously installed the package on the host, and noticed that it fiddles with installing a udev rule that changes permissions on the USB devices so that the plugdev group can access the device. As the container is running as root, I wonder if that's necessary or not?
#
# Build a container to run the edgetpu flask daemon
#
# Run it something like:
#
# docker run -it --rm --name coral -p 5000:5000 --device /dev/bus/usb:/dev/bus/usb:rwm coral
#
# It's necessary to pass in the /dev/bus/usb device to communicate with the USB stick.
#
FROM python:3.6
WORKDIR /usr/src/app
# Choose carefully. Per Google's installer script:
#
# Warning: During normal operation, the Edge TPU Accelerator may heat up, depending
# on the computation workloads and operating frequency. Touching the metal part of the
# device after it has been operating for an extended period of time may lead to discomfort
# and/or skin burns. As such, when running at the default operating frequency, the device is
# intended to safely operate at an ambient temperature of 35C or less. Or when running at
# the maximum operating frequency, it should be operated at an ambient temperature of
# 25C or less.
#
# Google does not accept any responsibility for any loss or damage if the device is operated
# outside of the recommended ambient temperature range.
#
#COPY edgetpu_api/libedgetpu/libedgetpu_x86_64.so /usr/lib/x86_64-linux-gnu/libedgetpu.so.1.0
COPY edgetpu_api/libedgetpu/libedgetpu_x86_64_throttled.so /usr/lib/x86_64-linux-gnu/libedgetpu.so.1.0
COPY models requirements.txt coral-app.py ./
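# Install the Python dependencies, refresh the linker cache so the copied
# libedgetpu shared library is found, and pull in the runtime libraries it needs.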
RUN pip install --no-cache-dir -r requirements.txt && \
ldconfig && \
apt-get update && \
apt-get install -y \
libusb-1.0-0 \
libc++1 \
libc++abi1 \
libunwind8 \
libgcc1
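# Install the Edge TPU Python API from the wheel shipped in the zip file.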
COPY edgetpu_api/edgetpu-*-py3-none-any.whl /tmp/
RUN pip install --no-deps /tmp/edgetpu-*-py3-none-any.whl
EXPOSE 5000
ENTRYPOINT [ "python", "coral-app.py" ]
@lmamakos Thanks for the post, sounds like it could be quite complicated to make production ready owing to the OS, hardware and model combinations. This reference might be useful.
@robmarkcole, yes that looks like a good starting point to adapt from. Thanks for the pointer.
Each camera has its own distinct image_processing entity created. The value of each entity is the count of the target objects detected; however, the attributes associated with that entity include counts of all the recognized objects. For example:
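(The attribute screenshot from the original post isn't reproduced here; roughly, and assuming the component exposes one attribute per recognized object label alongside the configured target, it would look something like this:)

cam3_coral:
  state: 0                  # count of the target object (person)
  attributes:
    target: person          # assumed attribute layout, not confirmed
    person: 0
    pottedplant: 1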
This indicates there are no people (the state is a count of zero persons), but there is one potted plant visible on cam3 right now.
My configuration looks something like this:
image_processing:
  - platform: google_coral
    ip_address: 127.0.0.1
    port: 5000
    confidence: 45
    target_object: person
    source:
      - entity_id: camera.cam1
        name: cam1_coral
      - entity_id: camera.cam3
        name: cam3_coral
Of course, you have to have the small flask web server application running that actually talks to the Google Coral edgetpu device.
Hi guys,
I set up the USB stick on my RPi3 together with the flask server and integrated it with HA: works fine!
@robmarkcole: is it possible to specify more than one target? Like "person" and "cat" or similar?
Is something like this going to work?
image_processing:
  - platform: google_coral
    ip_address: 127.0.0.1
    port: 5000
    confidence: 45
    target_object: person
    source:
      - entity_id: camera.mycam
        name: mycam_coral_person
  - platform: google_coral
    ip_address: 127.0.0.1
    port: 5000
    confidence: 45
    target_object: cat
    source:
      - entity_id: camera.mycam
        name: mycam_coral_cat
The target specification is what determines what the "state" of the sensor will be - a count of how many of those objects are discovered. But at least the model that I'm using still looks for the other types of objects and returns those as attributes associated with the sensor.
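So one way to react to a second object class without adding a second platform entry is to read that attribute directly, for instance from an automation. A rough sketch, assuming the entity from the config above (image_processing.mycam_coral_person) and that cat counts show up under a cat attribute; both are assumptions:

automation:
  - alias: Cat detected on mycam
    trigger:
      # numeric_state with a value_template reads the attribute rather than the state
      - platform: numeric_state
        entity_id: image_processing.mycam_coral_person
        value_template: "{{ state.attributes.get('cat', 0) }}"
        above: 0
    action:
      - service: persistent_notification.create
        data:
          message: "The Coral stick spotted a cat on mycam"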