Well done @robmarkcole. Really excited about this new update.
Kindly correct the custom model doc link to https://docs.deepstack.cc/custom-models
Hi @Unthred, you can do that now with DeepStack's new custom model APIs and @robmarkcole's latest update to HASS-Deepstack-Object.
@robmarkcole thank you for the update.
I've upgraded via HACS from 3.4 to 3.5 and the component doesn't appear in HA, i.e. image_processing.object_xxxxxxx doesn't load.
Reverting to 3.4 goes back to normal.
oh wow cool I will look into it thanks
@OlafenwaMoses Thanks a lot, especially for the custom model!
And for the detailed video explaining how to prepare the dataset and train the model!
All went smoothly; tested and working perfectly so far.
However, @robmarkcole, the add-on (v3.7) is not working with the custom model (it works correctly for person detection on other cameras). Here's my HA config:
- platform: deepstack_object
  ip_address: 10.0.X.X
  port: 5000
  api_key: 30667XXXXXXXXXXXX
  custom_model: lock
  save_file_folder: /media/camera/bike_lock
  save_timestamped_file: True
  targets:
    - no-lock
  confidence: 85
  source:
    - entity_id: camera.bike_lock
      name: deepstack_bike_lock
Getting:
2020-12-21 23:56:35 ERROR (SyncWorker_3) [custom_components.deepstack_object.image_processing] Deepstack error : Error from Deepstack request, status code: 400
When testing DeepStack itself with:
import requests
image_data = open("/XXXX/open_2207-16-53-30.jpg", "rb").read()
response = requests.post("http://10.0.XXXXXX:5000/v1/vision/custom/lock", files={"image": image_data}).json()
It returns a good result:
>>response
{'success': True, 'predictions': [{'confidence': 0.95367974, 'label': 'no-lock', 'y_min': 0, 'x_min': 0, 'y_max': 250, 'x_max': 450}]}
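The filtering implied by the `targets` / `confidence: 85` config can be sketched in plain Python. The helper below is mine, not the integration's code; only the field names come from the response above. Note the API reports confidence as a 0-1 float while the HA config uses a percentage:

```python
# Hypothetical sketch: filter DeepStack predictions the way a
# `confidence: 85` / `targets: [no-lock]` config would.
# Field names match the API response above.

def filter_predictions(predictions, targets, confidence_pct):
    """Keep predictions whose label is targeted and whose confidence
    (a 0-1 float in the API response) meets the percentage threshold."""
    threshold = confidence_pct / 100.0
    return [
        p for p in predictions
        if p["label"] in targets and p["confidence"] >= threshold
    ]

response = {
    "success": True,
    "predictions": [
        {"confidence": 0.95367974, "label": "no-lock",
         "y_min": 0, "x_min": 0, "y_max": 250, "x_max": 450}
    ],
}

hits = filter_predictions(response["predictions"], ["no-lock"], 85)
print(len(hits))  # the 95.4% no-lock detection passes the 85% bar
```

So the detection above would survive an 85% threshold but be dropped by, say, a 96% one.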
Is the add-on using the wrong API endpoint?
…
Tested now with the deepstack-ui tool and it works OK (it detects no-lock when there's no chain over the wheel).
It's only not working from Home Assistant…
UPDATE:
Fixed in a recent add-on version…
This is probably caused by caching on the client side. It also appears if you display the latest picture in a picture entity card in Lovelace. Clear the client-side cache to see if this is your problem.
I created a custom component to handle the created snapshot files. It makes it easy to create an animated GIF or MP4 from the snapshots. Moving and deleting snapshot files is also supported.
For use cases see:
https://github.com/robmarkcole/HASS-Deepstack-object/issues/150
https://github.com/robmarkcole/HASS-Deepstack-object/discussions/163
The custom component can be downloaded at: https://github.com/jodur/snaptogif
OK, just released v3.9 of the object detection integration, which makes it much easier to include saved images in notifications. You can now have a simple automation like:
- action:
    - data_template:
        caption: "New person detection with confidence {{ trigger.event.data.confidence }}"
        file: "{{ trigger.event.data.saved_file }}"
      service: telegram_bot.send_photo
  alias: Object detection automation
  condition: []
  id: "1120092824622"
  trigger:
    - platform: event
      event_type: deepstack.object_detected
      event_data:
        name: person
In Telegram:
Hi @johnolafenwa. Just a question… Could you kindly tell us if the RPi Docker version (NCS and non-NCS versions) is still scheduled for the coming days/weeks? Thank you in advance.
Great stuff @robmarkcole. I can't believe I only just came across this custom component.
For sending Companion App notifications you will have to do a bit of post-processing on the saved_file key, I think, because if you save the DeepStack files in /media you have to reference the file as /media/local/… when sending the image in a Companion App notification.
The Companion App needs a link that is accessible from the outside, therefore the recommended place is the www folder.
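The /media to /media/local rewrite described above can be done in a notification template or a small helper. A minimal sketch (the function name is mine; only the path convention comes from the post):

```python
# Sketch of the saved_file post-processing described above: files saved
# under /media must be referenced as /media/local/... in a Companion App
# notification. Helper name is illustrative, not part of any integration.

def to_media_source_path(saved_file: str) -> str:
    """Rewrite a /media/... file path into the /media/local/... form;
    leave any other path unchanged."""
    prefix = "/media/"
    if saved_file.startswith(prefix):
        return "/media/local/" + saved_file[len(prefix):]
    return saved_file

print(to_media_source_path("/media/camera/bike_lock/latest.jpg"))
# -> /media/local/camera/bike_lock/latest.jpg
```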
Hi @jodur,
Yes, you are correct that the Companion App needs an Internet-facing, accessible link.
The media folder is, however, accessible externally; it's just that it's authenticated, and the Companion App provides that authentication for you by default.
It has the added benefit that your personal camera footage is not exposed via an unauthenticated endpoint, which the www folder is.
Give it a try. I use the media folder for notifications with an image already.
Hi!
Just started learning and using the DeepStack add-on…
Lovely piece of software!
A Google Coral is on its way to assist my kinda old 6th-gen i7 CPU.
Questions…
See pic:
"name": "truck",
"confidence": 58.838,
"entity_id": "image_processing.deepstack_object_mot_parkering",
Next event:
"name": "car",
"confidence": 42.285,
"entity_id": "image_processing.deepstack_object_mot_parkering",
And the third:
"name": "car",
"confidence": 83.301,
"entity_id": "image_processing.deepstack_object_mot_parkering",
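One way to make sense of detections flapping between labels across events like these is to keep only the highest-confidence one. This is hypothetical post-processing on the event payloads shown above, not something the integration does itself:

```python
# Sketch: given a series of deepstack.object_detected event payloads
# like the three above, keep the single highest-confidence detection.
# (Hypothetical post-processing, not part of the integration.)

events = [
    {"name": "truck", "confidence": 58.838},
    {"name": "car", "confidence": 42.285},
    {"name": "car", "confidence": 83.301},
]

best = max(events, key=lambda e: e["confidence"])
print(best["name"], best["confidence"])  # car 83.301
```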
I set up DeepStack with the correct face detection variable in Docker, on a separate computer from my HA. I can reach the DeepStack page fine and it says activated. I have the config added to my configuration.yaml and the entity shows up fine, with no errors on reboot. I tried serving an image to the service but it eventually says that it timed out. I extended the wait to 60 seconds and it still fails. Below are my config, my service call attempt, and then the error.
image_processing:
  - platform: deepstack_face
    ip_address: 192.168.50.230
    port: 82
    timeout: 5
    detect_only: False
    save_file_folder: /config/snapshots/
    save_timestamped_file: True
    save_faces: True
    save_faces_folder: /config/faces/
    show_boxes: True
    source:
      - entity_id: camera.local_file
        name: face_counter
Service:
image_processing.deepstack_teach_face
Service Data (YAML, optional)
{
"name": "JP",
"file_path": "/config/www/jpface.jpg"
}
ERROR:
Logger: homeassistant.components.websocket_api.http.connection
Source: custom_components/deepstack_face/image_processing.py:259
Integration: Home Assistant WebSocket API (documentation, issues)
First occurred: 5:42:03 PM (1 occurrences)
Last logged: 5:42:03 PM
[140537795396032] Timeout connecting to Deepstack, the current timeout is 5 seconds, try increasing this value
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 445, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 440, in _make_request
httplib_response = conn.getresponse()
File "/usr/local/lib/python3.8/http/client.py", line 1347, in getresponse
response.begin()
File "/usr/local/lib/python3.8/http/client.py", line 307, in begin
version, status, reason = self._read_status()
File "/usr/local/lib/python3.8/http/client.py", line 268, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "/usr/local/lib/python3.8/socket.py", line 669, in readinto
return self._sock.recv_into(b)
socket.timeout: timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/requests/adapters.py", line 439, in send
resp = conn.urlopen(
File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 755, in urlopen
retries = retries.increment(
File "/usr/local/lib/python3.8/site-packages/urllib3/util/retry.py", line 531, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/usr/local/lib/python3.8/site-packages/urllib3/packages/six.py", line 735, in reraise
raise value
File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 699, in urlopen
httplib_response = self._make_request(
File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 447, in _make_request
self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 336, in _raise_timeout
raise ReadTimeoutError(
urllib3.exceptions.ReadTimeoutError: HTTPConnectionPool(host='192.168.50.230', port=82): Read timed out. (read timeout=5)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/deepstack/core.py", line 100, in post_image
return requests.post(
File "/usr/local/lib/python3.8/site-packages/requests/api.py", line 119, in post
return request('post', url, data=data, json=json, **kwargs)
File "/usr/local/lib/python3.8/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python3.8/site-packages/requests/sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python3.8/site-packages/requests/sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python3.8/site-packages/requests/adapters.py", line 529, in send
raise ReadTimeout(e, request=request)
requests.exceptions.ReadTimeout: HTTPConnectionPool(host='192.168.50.230', port=82): Read timed out. (read timeout=5)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/components/websocket_api/commands.py", line 135, in handle_call_service
await hass.services.async_call(
File "/usr/src/homeassistant/homeassistant/core.py", line 1445, in async_call
task.result()
File "/usr/src/homeassistant/homeassistant/core.py", line 1484, in _execute_service
await self._hass.async_add_executor_job(handler.job.target, service_call)
File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/config/custom_components/deepstack_face/image_processing.py", line 161, in service_handle
classifier.teach(name, file_path)
File "/config/custom_components/deepstack_face/image_processing.py", line 259, in teach
self._dsface.register(name, image)
File "/usr/local/lib/python3.8/site-packages/deepstack/core.py", line 277, in register
response = process_image(
File "/usr/local/lib/python3.8/site-packages/deepstack/core.py", line 124, in process_image
response = post_image(url=url, image_bytes=image_bytes, timeout=timeout, data=data)
File "/usr/local/lib/python3.8/site-packages/deepstack/core.py", line 104, in post_image
raise DeepstackException(
deepstack.core.DeepstackException: Timeout connecting to Deepstack, the current timeout is 5 seconds, try increasing this value
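When teach_face times out like this, it can help to rule HA out by calling DeepStack's face registration endpoint directly. A sketch, assuming DeepStack's documented /v1/vision/face/register endpoint; host, port, and paths are placeholders taken from the config above, and the generous timeout reflects that the first call can be slow while models load:

```python
# Sketch: register a face against DeepStack directly, bypassing HA, to
# isolate where the timeout comes from. Assumes DeepStack's documented
# /v1/vision/face/register endpoint; host/port/paths are placeholders.

def face_register_url(host: str, port: int) -> str:
    """Build the face registration endpoint URL."""
    return f"http://{host}:{port}/v1/vision/face/register"

def register_face(host: str, port: int, name: str, image_path: str,
                  timeout: int = 120) -> dict:
    """POST an image to DeepStack's face registration endpoint."""
    import requests  # imported lazily; only needed for the actual call
    with open(image_path, "rb") as f:
        response = requests.post(
            face_register_url(host, port),
            files={"image": f.read()},
            data={"userid": name},
            timeout=timeout,  # first call can be slow while models load
        )
    return response.json()

# Example call (placeholders matching the config above):
# print(register_face("192.168.50.230", 82, "JP", "/config/www/jpface.jpg"))
```

If this direct call also times out, the problem is on the DeepStack side rather than in the HA integration.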
I want to do the same setup and it's not working in HA. Exactly the same problem.
EDIT: You must set at minimum timeout: 120 for the first activation. Now it's working for me. Just playing with automations now.
Hi all, does scan_interval work? I can manually call the image_processing.scan service and it works. Now I need to run image_processing.scan automatically every 2 seconds, but scan_interval seems to do nothing.
My config:
image_processing:
  - platform: deepstack_object
    ip_address: 10.10.xx.xx
    port: 80
    scan_interval: 2
    save_file_folder: /config/www/deepstack_person_images
    source:
      - entity_id: camera.mycamera
        name: person_detector
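If scan_interval is ignored, one workaround is an automation that calls the scan service on a time_pattern trigger. A sketch, assuming HA's time_pattern trigger and that the name above yields the entity image_processing.person_detector:

```yaml
# Hypothetical automation: call image_processing.scan every 2 seconds
# via a time_pattern trigger instead of relying on scan_interval.
- alias: Scan person_detector every 2 seconds
  trigger:
    - platform: time_pattern
      seconds: "/2"
  action:
    - service: image_processing.scan
      target:
        entity_id: image_processing.person_detector
```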
Hi all, the latest release of the deepstack-object integration adds object_type, which allows triggering and alerting on object types including vehicles and animals. This should simplify a few automations for people. Read about the types at https://github.com/robmarkcole/HASS-Deepstack-object#objects
@danyolgiax use an automation or script to trigger a scan
@jplivingston08 as the logs suggest, try increasing your timeout
@Minglarn I am aware of the issues with the image annotations, feel free to make comments on the open issues. For the events, it is up to the user to use them wisely, they are there for flexibility
@asknoone I enabled discussions on the repo, we have a section for Automations, if you have some good ones for notifications please share them
I've implemented a new feature on the master branch of deepstack-object that allows setting a confidence per object name/type. It is a breaking change, so I am looking for a couple of beta testers before making it an official release. Any volunteers?
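For anyone volunteering: the per-target confidence presumably changes the targets block from a plain list of names into a list of mappings. A guess at the new shape, based only on the feature description above; verify the exact keys against the repo README before upgrading:

```yaml
# Hypothetical per-target confidence config after the breaking change
# (key names are a guess; check the release notes for the real schema).
targets:
  - target: person
    confidence: 80
  - target: car
    confidence: 60
```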
@jplivingston08 anything in the logs? Might be an issue with the image you are posting
@robmarkcole
Thank you for the work! This integration is very helpful. I made a custom version based on it - please consider adding some of the features into your master: