Image processing with USB acceleration - all pi! ARCHIVED

Well, I am running HA in Docker, so I tried to install manually with pip install deepstack-python
and get the error:

  Could not find a version that satisfies the requirement deepstack-python (from versions: )
No matching distribution found for deepstack-python

Thanks for the help so far!!

Sounds like it did not install. Try again and share the result here.

This is the error I get on my NUC running Ubuntu:

Collecting deepstack-python
  Could not find a version that satisfies the requirement deepstack-python (from versions: )
No matching distribution found for deepstack-python

I even tried it on a separate Raspberry Pi and get:

Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Collecting deepstack-python
  Could not find a version that satisfies the requirement deepstack-python (from versions: )
No matching distribution found for deepstack-python

And I tried to install manually with pip install deepstack-python-0.4.tar.gz and get:

deepstack-python requires Python '>=3.7' but the running Python is 2.7.15

but I can’t seem to install Python 3.7.

It sounds like you are trying to install into the system Python (2.7) rather than the Docker environment’s Python (3.7).
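
If it helps, here is a quick sanity check you can run inside the container (entering it with something like docker exec -it <container> python3; the container name is whatever yours is called) to confirm which interpreter pip will target - just a sketch of the idea:

import sys

# If this prints 2.7.x you are in the system Python, not the container's 3.7.
print(sys.version)

# pip installs into whichever interpreter runs it, so running the install as
# `python3 -m pip install deepstack-python` with this same interpreter avoids
# landing the package in the wrong Python.
print(sys.executable)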

Well, I can’t get it going and don’t want to waste more of your time.

One last question: did you ever get this post working?

Maybe I can get the results myself through this…

I think you should work through the first issue, which is easier to resolve than creating a new component.

@robmarkcole - fantastic work!

Coral arrived yesterday and the Google SD card image worked great on a Pi 3 (though I plan to use an Ubuntu machine in my “production” version), and your REST API was a cinch to get working. It was the end of the day, so I only had a chance to throw a few random images at it, but everything “just worked” (once I uninstalled numpy; a version conflict was causing import errors, which is odd given it was a Google-supplied disk image).

If the stars align I’ll get the HA integration installed in the next 24 hours. I plan to use this for at least a few things:

  1. Right now, whenever BlueIris detects motion it throws an MQTT alert to HA. I get false positives, though, that I can’t get rid of. I’d like to feed all the cameras through this looking for “person”, and if any are detected when we don’t expect them, push an alert to us and/or trigger security. Basically an engaged motion detection system (a rough sketch of this idea follows this list).

  2. I’d been working on presence tracking with Bluetooth dongles to tell when the kids arrive in the area, to alert us when the school bus has arrived. This has been problematic with range, the kids taking the dongles apart and the battery falling out, them turning them off, etc. Now I plan to scan for “bus” on a few of the front-facing cameras and alert us that way, so we’re not constantly looking out the window or at the camera feeds for them.

  3. Delivery notification if “box” or something similar is detected on the front door camera. This one might need some tweaking.

  4. Enhancing presence detection, and as input into logic to close garage doors left open: detecting which cars, if any, are in the garage or approaching it.
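
To sketch what I mean by #1, outside HA for clarity - the broker address, topic, and snapshot URL below are all placeholders for whatever your setup uses, and the DeepStack endpoint is the standard /v1/vision/detection one:

import paho.mqtt.client as mqtt
import requests

DEEPSTACK_URL = "http://localhost:5000/v1/vision/detection"
SNAPSHOT_URL = "http://camera.local/snapshot.jpg"  # placeholder camera snapshot URL

def on_message(client, userdata, msg):
    # BlueIris published a motion alert: grab a frame and ask DeepStack.
    image = requests.get(SNAPSHOT_URL, timeout=10).content
    response = requests.post(DEEPSTACK_URL, files={"image": image}, timeout=10)
    predictions = response.json().get("predictions", [])
    if any(p["label"] == "person" for p in predictions):
        print("Person confirmed - push the alert / trigger security here")

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.local")          # placeholder MQTT broker
client.subscribe("blueiris/motion/#")   # placeholder BlueIris alert topic
client.loop_forever()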

The possibilities with what you’ve put together so far are vast. I recently completed a “nano degree” in AI programming with Python, and this is a relief compared to that; it took 16 hours to train my model in that class! I plan to make my own model for this as well, or perhaps augment a sample Google one so I don’t have to retrain for car, bus, person, etc., and just add the labels I want that are missing.

Thanks for your work and contributions. If I can help in some way please let me know.

-joni-

Interestingly, you can also do training using the USB stick, or there is a Docker image. Also, I don’t see any benefit to using Ubuntu, as the processing is done on the stick anyway. Re help and contributions, just keep us posted on how you get on with training your own models.

I’ll have to look into training using the stick directly. The web app/UI that Google has is the first thing I saw, and it looked ridiculously easy to train models that way; I think my sample set will be below their threshold for charging for cloud time/resources. Is there an app/UI for local training, or is it done via a labels file and a directory structure of classified images? That’s how my project assignment was done.

Re Ubuntu vs the RPi, the only advantage is that I don’t have to worry about the SD card deciding not to work one day (I’ve had this happen a few times, even with cards from SanDisk), and I could run it on a VM, which makes it automatically fit into my existing snapshot/backup schedule; the RPi doesn’t (currently) have an automated backup/recovery mechanism. I also use VM snapshots to save myself from myself. :slight_smile: In addition to backups internal to HA, I take snapshots of my VMs before I do a HA version upgrade or make any significant changes, so I can roll back easily and quickly. I troubleshoot enough at work; it’s important to me not to have to do a bunch more at home (guess I picked the wrong hobby for that!), as well as for the WAF if the house goes offline due to me fat-fingering something! :slight_smile:

Thanks again,

-joni-

For training, also check out Google Colab. You can arrange your images in Google Drive and train there for free (12 hrs max runtime).
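
For reference, getting the Drive folder visible to the Colab runtime is one line (the path under /content/drive depends on how you arrange your folders):

# Standard Colab Drive mount; after authorising, your images appear under
# /content/drive/My Drive/...
from google.colab import drive

drive.mount('/content/drive')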

UPDATE: Disregard this post. It turns out that the model seems to love the corner of that foundation and thinks it’s a person all the time, and it just so happened that the snapshot I was looking at actually had a person standing on the foundation near the bounding box. Odd that it didn’t pick up the actual PEOPLE in the image, but I’m chalking that up to a model in need of more training and/or the image being too high in resolution, such that when it’s scaled down for classification the people are nulled out, if that makes sense.

— Original post below:

I’ve got the custom component installed and set up a person detector on one camera, and on another camera a person and a car detector. The image that’s saved, though, seems to have the bounding box offset from the actual object, a person in this case. Is this something I need to reconfigure somehow?

[image: ha-object-detection]

Also, apologies in advance, but now that I’ve got this working in HA, I’m sure I’ll have more questions today/tomorrow as I get time to play around more. :slight_smile: Very cool stuff!

Thanks,

-joni-

YOLO will not always find small objects; this is a limitation of the model. There is always a tradeoff between speed and accuracy.
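
To put rough numbers on it (the figures here are illustrative, not the model’s actual input size): detectors resize each frame down to a fixed input resolution, so a distant object can shrink below a usable size:

# Illustrative arithmetic only: a 1920px-wide frame squeezed into a
# 300px-wide model input shrinks everything by a factor of ~6.4.
camera_width = 1920      # full-resolution frame width in pixels
model_input = 300        # assumed model input width
person_width = 40        # apparent width of a distant person in the frame

scaled = person_width * model_input / camera_width
print(f"A {person_width}px person becomes ~{scaled:.0f}px after resizing")
# ~6px is too little signal for the model to detect reliably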

I’m using a Coral TPU on a Raspberry Pi 4, and it is working great. I want to use this as a real-time sensor, but it throws some errors.
It is detecting almost all persons; because of the real-time detection I’m using a scan_interval: 1.
It is working OK, but it would be better with no errors…

2019-10-03 18:01:02 ERROR (MainThread) [homeassistant.helpers.entity] Update for image_processing.detectie_personen_tpu_achterdeur fails
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 261, in async_update_ha_state
    await self.async_device_update()
  File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 439, in async_device_update
    await self.async_update()
  File "/usr/src/homeassistant/homeassistant/components/image_processing/__init__.py", line 136, in async_update
    await self.async_process_image(image.content)
  File "/usr/local/lib/python3.7/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/config/custom_components/deepstack_object/image_processing.py", line 150, in process_image
    self._dsobject.detect(image)
  File "/usr/local/lib/python3.7/site-packages/deepstack/core.py", line 129, in detect
    error = response.json()["error"]
KeyError: 'error'
2019-10-03 18:01:06 ERROR (MainThread) [homeassistant.helpers.entity] Update for image_processing.detectie_persoon_tpu_voordeur fails
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 261, in async_update_ha_state
    await self.async_device_update()
  File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 439, in async_device_update
    await self.async_update()
  File "/usr/src/homeassistant/homeassistant/components/image_processing/__init__.py", line 136, in async_update
    await self.async_process_image(image.content)
  File "/usr/local/lib/python3.7/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/config/custom_components/deepstack_object/image_processing.py", line 150, in process_image
    self._dsobject.detect(image)
  File "/usr/local/lib/python3.7/site-packages/deepstack/core.py", line 129, in detect
    error = response.json()["error"]
KeyError: 'error'

Could anyone give me a hint to reduce this!?

@reneeetje I have created an issue for this. I am waiting on the DeepStack guys to update their API, which currently is not handling errors in a great way.
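
In the meantime, the failing lookup (the response.json()["error"] line in deepstack/core.py, per the traceback above) could be hardened along these lines - a sketch of the idea, not the actual library fix:

import requests

def detect(url: str, image: bytes) -> list:
    # url is the detection endpoint, e.g. http://localhost:5000/v1/vision/detection
    response = requests.post(url, files={"image": image}, timeout=10)
    data = response.json()
    if not data.get("success", False):
        # .get() tolerates error responses that omit the 'error' key,
        # which is exactly what raises the KeyError in the traceback
        raise RuntimeError(data.get("error", "DeepStack returned an unspecified error"))
    return data.get("predictions", [])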

Thanks Robin for your great work on this!

I’m using a Raspberry Pi 4 (2 GB) for real-time person detection. CPU is at a constant 18%. I have two cameras: one for car and person detection, the other for person detection only. For the car detection I have a scan_interval of 60 seconds; for person detection, a scan_interval of 1.

I’m using person recognition when I’m not at home, and for my outside light when I am at home.
Car detection triggers OpenALPR to find the licence plate and tells me whether it is a known licence plate or not!
The latest image is shown in Lovelace.

The Raspberry Pi was really hot, with only 18% CPU usage. I bought a PiCoolFAN4 because the temperature was above 80 degrees! The PiCoolFAN4 brings this down to 57 degrees; I could lower it further, but then the PiCoolFAN4 makes a lot of noise. I also drilled a few holes to get a lower temperature. The Coral TPU feels really warm, but not hot.
At night the confidence drops, but it is still usable.

[image: 20191004_211359]


Really good to see the community making a success of deep learning on a Pi + stick; I think this is the future :slight_smile:

Just a quick update: I now have two different types of doorbells - one that sends notifications when someone presses the doorbell (Dingdong) and one for when a person is detected at the front door even if they don’t ring the doorbell itself (Dongding).

A recurring theme with my posts, though: the model definitely needs work, to the point that I wouldn’t recommend the demo models to anyone for more than tinkering or proof of concept. Training my own model will be next.

I’ll admit I haven’t searched yet (am about to), but any advice on the best pictures to use to train new models? Say I wanted to detect school buses: should I train with images fairly well zoomed in and cropped on school buses, or with images that have a school bus somewhere in them but with lots of other “noise” around it, as things would appear during real-life use/detection?

Thanks,

-joni-

I like this term :slight_smile:

Re training, that is outside the scope of this thread, so I suggest starting another. There are quite a few tutorials online, but not many which are deployed in real life, so that would be interesting to see. Note that to detect buses you are talking about object detection, so you will need to annotate your images. I suggest a minimum of 30 annotated images, but clearly more images is better.
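
For a feel of what annotation means in practice, each image ends up paired with its boxes and labels, something like this (field names and values are illustrative; the exact format depends on the labelling tool you pick):

# One annotated training image; bbox is [x_min, y_min, x_max, y_max] in pixels.
annotation = {
    "image": "front_camera_2019-10-05.jpg",
    "objects": [
        {"label": "bus", "bbox": [120, 80, 460, 310]},
    ],
}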

Hope someone can offer insight into my error here: all was working fine until I updated my Hassbian venv to Python 3.7 and then Stretch to Buster.
The error mentions a ‘broken pipe’. I rolled back from 1.6 to 1.5 using HACS to see if that helped, and I got substantially more in the log.

2019-10-06 21:50:20 WARNING (MainThread) [homeassistant.components.image_processing] Updating deepstack_object image_processing took longer than the scheduled update interval 0:00:05
2019-10-06 21:50:21 ERROR (MainThread) [homeassistant.helpers.entity] Update for image_processing.kitchen_person_counter fails
Traceback (most recent call last):
  File "/srv/homeassistant/lib/python3.7/site-packages/urllib3/connectionpool.py", line 672, in urlopen
    chunked=chunked,
  File "/srv/homeassistant/lib/python3.7/site-packages/urllib3/connectionpool.py", line 387, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/usr/local/lib/python3.7/http/client.py", line 1244, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/usr/local/lib/python3.7/http/client.py", line 1290, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/usr/local/lib/python3.7/http/client.py", line 1239, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/local/lib/python3.7/http/client.py", line 1065, in _send_output
    self.send(chunk)
  File "/usr/local/lib/python3.7/http/client.py", line 987, in send
    self.sock.sendall(data)
BrokenPipeError: [Errno 32] Broken pipe

Really looking forward to this supporting my presence detection at home. Thanks @robmarkcole, you always make top components!