Image processing with USB acceleration - all pi! ARCHIVED

OK, I’ll swap out the power supply and see if that helps; if not, I will create an issue. Thanks!!

Hi. Can this be used with the Coral M.2 accelerator?

I think someone on this thread might have tried that. In principle there should be no problem.

Hi,
Is the Hassio add-on released already?
I got my Coral stick yesterday. Now I want to get started. I’m running Hassio on a Raspberry Pi 4. Which way should I go?

There is no Hassio add-on currently; that requires some work on Hassio to support the hardware. Perhaps @frenck can provide some guidance on what needs to be done to support the Coral stick?

How would you set it up? On a separate Raspberry Pi, or can I use the one that Hassio is running on?

Hassio occupies the whole Pi, and getting hardware to work with it is a significant challenge, I understand (but I have not tried!).

I set it up and it is running with the default detector. As soon as I changed the detector, the service stopped and I got a message that the process uses more than 10% of system resources.

For me the default detector is fine at the moment.

Has anybody set up a notification service which sends the picture via html5_push to the connected devices?
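
A minimal sketch of what such a notification action could look like in YAML, assuming an html5 notify target and a camera snapshot already saved somewhere web-accessible; the notify target name, message, and image URL are all hypothetical and would need adapting:

```yaml
# Hypothetical action block; the notify target, message text,
# and image URL are illustrative only.
- service: notify.html5
  data:
    title: "Person detected"
    message: "The camera spotted a person"
    data:
      image: "https://example.duckdns.org/local/last_detection.jpg"
```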

Hi,
I finished the installation, but I get this error when I try to start the server.

Any suggestions?

pi@raspberrypi:~/coral-pi-rest-server $ python3 coral-app.py
Traceback (most recent call last):
  File "coral-app.py", line 103, in <module>
    engine = DetectionEngine(model_file)
  File "/usr/local/lib/python3.7/dist-packages/edgetpu/detection/engine.py", line 72, in __init__
    super().__init__(model_path)
  File "/usr/local/lib/python3.7/dist-packages/edgetpu/basic/basic_engine.py", line 40, in __init__
    self._engine = BasicEnginePythonWrapper.CreateFromFile(model_path)
RuntimeError: Could not open '~/Documents/GitHub/edgetpu/test_data/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite

I got the same error. I changed the path in coral-app.py to:
"--models_directory",
default="/home/pi/all_models/",
and it worked. Not sure if this is the right thing to do; I'm still setting things up.
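
For reference, that edit corresponds to something like the argparse block below. This is an illustrative sketch, not the exact coral-app.py source; only the flag name and the new default come from the post above. The likely root cause of the RuntimeError is that the original default started with `~`, which Python does not expand when a file path is passed straight to a loader.

```python
# Illustrative sketch of the relevant argparse section, assuming
# coral-app.py follows the standard argparse pattern.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--models_directory",
    # An absolute path avoids the unexpanded-'~' problem seen in the
    # traceback; os.path.expanduser() on the value would be an
    # alternative fix.
    default="/home/pi/all_models/",
    help="directory containing the .tflite model files",
)
args = parser.parse_args([])  # no CLI overrides in this illustration

print(args.models_directory)
```

Passing `--models_directory /some/other/path` on the command line would override the default without editing the file.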

Thanks, that fixed the issue.

I have a few questions on the next steps for integrating this with HA. I have 2 Pis: one for the coral-pi-rest-server and the other for the HA instance. Now, I am reading the HASS-Deepstack-object GitHub page and it looks like this needs to run on a different server. Is that true in my case?

Excuse me for my questions, as I am new to this 🙂

The coral pi API mirrors the deepstack API, so they are swappable
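
Because the endpoints follow the Deepstack convention, a client can target either server with the same request. A minimal sketch, assuming the server listens on port 5000 and exposes a `/v1/vision/detection` endpoint taking an `image` form field (worth confirming against the repo before relying on it):

```python
# Hedged sketch of building a deepstack-style detection request with
# only the standard library; host, port, and field name are assumptions
# based on the Deepstack API convention the post describes.
import urllib.request
import uuid

def build_detection_request(image_bytes: bytes, host="localhost", port=5000):
    """Build a multipart/form-data POST for the /v1/vision/detection endpoint."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        'Content-Disposition: form-data; name="image"; filename="frame.jpg"\r\n'
        "Content-Type: application/octet-stream\r\n\r\n"
    ).encode() + image_bytes + f"\r\n--{boundary}--\r\n".encode()
    return urllib.request.Request(
        f"http://{host}:{port}/v1/vision/detection",
        data=body,
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
    )

req = build_detection_request(b"\xff\xd8fake-jpeg-bytes")
print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` against either server should return JSON with the detections.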

Thanks Robin!

What would be the next steps for me to integrate this with HA without Deepstack, as I am using the Coral?

Once I integrate the Coral with HA, I plan to build an automation for a switch using Node-RED.
The automation would be something like turning on a switch when the camera sees a person.
I would appreciate it if you could forward any documentation which helps in implementing this.
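
For reference, a minimal sketch of such an automation in plain HA YAML (rather than Node-RED), assuming the integration exposes an image_processing entity whose state is the number of detected targets, as the Deepstack integration does; all entity names here are hypothetical:

```yaml
# Hypothetical automation; both entity_id values are illustrative.
automation:
  - alias: "Turn on switch when a person is seen"
    trigger:
      - platform: numeric_state
        entity_id: image_processing.front_camera_person_detector
        above: 0
    action:
      - service: switch.turn_on
        entity_id: switch.porch_light
```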

I am making some improvements to the Deepstack integration, then I will pick up the Coral Pi work again.

Great, thanks for the update !

Some people are hoping the Coral code I wrote will perform as well as the official Deepstack Docker container; that is not the case. The Coral code is very alpha and doesn’t handle multiple requests.

@robmarkcole
Yeah, that may be the case, but it still works great!

Agreed, it works great!

Has anyone tried training a custom model with Google Teachable Machine (https://teachablemachine.withgoogle.com/)?

I exported the Edge TPU tflite for a custom model. When I try to run the model with this script, I get the error below:
ValueError: Detection model should have 4 output tensors!This model has 1
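
That error points at an engine/model mismatch: the edgetpu `DetectionEngine` expects an SSD-style detection model with four output tensors (boxes, classes, scores, count), while Teachable Machine exports a classification model with a single output tensor. A tiny helper illustrating the distinction; `pick_engine` is hypothetical, not part of the real edgetpu API:

```python
# Hypothetical helper showing why the ValueError occurs; the mapping
# reflects the edgetpu library's expectations, but pick_engine() itself
# is illustrative, not a real API call.
def pick_engine(num_output_tensors: int) -> str:
    """Map a model's output tensor count to the matching engine class."""
    if num_output_tensors == 4:
        return "DetectionEngine"        # SSD detection: boxes, classes, scores, count
    if num_output_tensors == 1:
        return "ClassificationEngine"   # e.g. a Teachable Machine export
    raise ValueError(f"Unexpected number of output tensors: {num_output_tensors}")

print(pick_engine(1))
```

So a Teachable Machine export would need the classification engine (and classification-style handling in the server), not the detection path this script uses.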

I am working on support for custom models now! 🙂

@robmarkcole

re: Coral rest server

Is there any way to make the font larger and/or put a different color background on the boxes? I’m not able to read anything on any detections, possibly because I’ve reduced the camera image to VGA resolution (and the bitrate is now 96 kbps on the sub stream), as it’s a wifi camera and the link isn’t great. This solved some of my delay problems and timeouts.

I tried modifying this bit of code and restarted, but it doesn’t have any effect.

I’d like to move them to the bottom instead of the top of the image as well.

If you can just point me to the module that handles these drawing routines, I should be able to figure it out from there.

Jeff
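
While waiting for a pointer to the exact module, here is a hedged sketch of how larger labels with a filled background could be drawn, assuming the server uses Pillow's ImageDraw for its annotations (worth checking in the repo); the font path is a guess at a font that ships with Raspbian, and all values are illustrative:

```python
# Hedged sketch: draw a detection box with a large, readable label on a
# filled background, placed at the bottom of the box. Box coordinates,
# label text, and font path are illustrative assumptions.
from PIL import Image, ImageDraw, ImageFont

img = Image.new("RGB", (640, 480), "black")   # stand-in for a camera frame
draw = ImageDraw.Draw(img)

box = (100, 100, 300, 250)                    # (left, top, right, bottom)
label = "person: 0.92"

# DejaVuSans ships with most Raspbian images; the path is an assumption.
try:
    font = ImageFont.truetype(
        "/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", size=24)
except OSError:
    font = ImageFont.load_default()           # fallback, fixed small size

draw.rectangle(box, outline="red", width=2)

# Measure the label, then draw a filled background so the text stays
# readable even on a low-resolution, low-bitrate stream.
left, top, right, bottom = draw.textbbox((0, 0), label, font=font)
text_w, text_h = right - left, bottom - top
x, y = box[0], box[3]                         # bottom-left corner of the box
draw.rectangle((x, y, x + text_w + 4, y + text_h + 4), fill="red")
draw.text((x + 2, y + 2), label, fill="white", font=font)

img.save("annotated.jpg")
```

Moving the label from the top to the bottom of the box is then just the choice of `y = box[3]` instead of `y = box[1] - text_h`.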