Any developers on this thread might be excited to know that DeepStack is now a public repo and open for contributions.
I just started playing with DeepStack. Can anyone point me to the curl syntax for registering multiple pictures for one person? Thanks
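My best guess, assuming the register endpoint accepts numbered image fields plus a userid (I haven’t confirmed the exact field names in the docs, and the filenames and name below are just placeholders), would be something like:
# register several photos of the same person in one call
curl -X POST \
  -F image1=@photo_1.jpg \
  -F image2=@photo_2.jpg \
  -F userid="Jane Doe" \
  http://<deepstack-host>/v1/vision/face/register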
Coral USB Accelerator
or
Intel Neural Compute Stick 2
or
stick with my old Nvidia 960 graphics card
I would prefer to use one of these USB devices, because I would rather move to a server that fits in a rack, but if the graphics card is better I will stick with it.
I want to build a new NVR and am looking for recommendations for AI detection using the devices above.
Thank you for your time and any information you can give me about the two USB devices above.
Hi,
for test purposes I have the DeepStack object detection Docker image running on two different systems:
- Synology DS218+ NAS (Intel Celeron J3355, dual core, 6 GB RAM), image: noavx, non-GPU, container limited to 3 GB RAM
- Notebook with Windows 10, Core i3-4010U (2 cores with HT) and 4 GB RAM, Docker image: AVX, non-GPU version
I run curl -X POST -F image=@test.jpg 'http://192.168.1.x/v1/vision/detection' against both systems, changing only the IP address (same test.jpg file; it includes a person, garden plants and potted plants).
- The Synology NAS takes 6-10 seconds (!) to process the image and returns:
{"success":true,"predictions":[{"confidence":0.7821689,"label":"person","y_min":217,"x_min":945,"y_max":494,"x_max":1045},{"confidence":0.7728115,"label":"potted plant","y_min":361,"x_min":731,"y_max":489,"x_max":842},{"confidence":0.77145827,"label":"potted plant","y_min":410,"x_min":559,"y_max":522,"x_max":634},{"confidence":0.6516304,"label":"potted plant","y_min":376,"x_min":1068,"y_max":474,"x_max":1157},{"confidence":0.64147574,"label":"potted plant","y_min":426,"x_min":641,"y_max":504,"x_max":702},{"confidence":0.62971485,"label":"potted plant","y_min":413,"x_min":413,"y_max":526,"x_max":494},{"confidence":0.5387862,"label":"potted plant","y_min":350,"x_min":410,"y_max":535,"x_max":518},{"confidence":0.4051743,"label":"potted plant","y_min":329,"x_min":623,"y_max":477,"x_max":732},{"confidence":0.40153322,"label":"vase","y_min":400,"x_min":753,"y_max":489,"x_max":820}]}
- The Notebook replies in ~650 ms and returns:
{"success":true,"predictions":[{"confidence":0.46187398,"label":"potted plant","y_min":754,"x_min":166,"y_max":979,"x_max":470},{"confidence":0.46211874,"label":"potted plant","y_min":472,"x_min":1260,"y_max":920,"x_max":1637},{"confidence":0.4783537,"label":"potted plant","y_min":231,"x_min":351,"y_max":368,"x_max":454},{"confidence":0.60136586,"label":"potted plant","y_min":368,"x_min":1073,"y_max":473,"x_max":1156},{"confidence":0.62832135,"label":"potted plant","y_min":334,"x_min":617,"y_max":506,"x_max":729},{"confidence":0.7375296,"label":"potted plant","y_min":357,"x_min":740,"y_max":493,"x_max":844}]}p
So the NAS identifies a person, whereas the Notebook only identifies potted plants and no person.
Any ideas why the results can be so different?
One observation I made: on the NAS the datastore is used (it contains two files, activate and faceembedding.db); on the Notebook it is not used.
Regards
That’s awesome news! I am too far along with my own project, which runs RetinaFace + ArcFace and YOLOv4 + Mish-CUDA on PyTorch, integrated into HA. It is Docker-free and seems more accurate, but requires a GPU. Maybe I’ll give DeepStack another look. I am looking to improve efficiency to accommodate more inferences than my current 8 cameras (1536x2048) at 4 fps each.
Is the Raspberry Pi version moving to Docker? And will it still require the NCS2?
Hi,
in the meantime I figured out that the deepstack:noavx image is about 1 year old, and that the latest deepstack image includes both the AVX and noavx code. It seems to detect the CPU capabilities and use the appropriate code. This should be mentioned in the docs (https://github.com/robmarkcole/HASS-Deepstack-object).
Using deepstack:latest, the processing time on my NAS goes down from 6-10 seconds to about 1.6 seconds, and it seems to work fine now.
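For reference, this is roughly the command I am running now (the host port and volume name are just my choices; adjust as needed):
# run the latest CPU image with detection enabled and a persistent datastore
docker run -d \
  -e VISION-DETECTION=True \
  -v localstorage:/datastore \
  -p 80:5000 \
  deepquestai/deepstack:latest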
Hey all!
Has anyone run into an issue with this almost completely locking up a Jetson Nano 2GB? After a while it takes a very long time for commands to complete while the image is running. Weird, as tegrastats doesn’t seem to indicate that it is maxed out. I’m pretty sure the power supply is fine (I’m not getting the power supply warning) and I have changed SD cards etc.
I’m using the jetpack-x3-beta image, but it was the same on the x1 beta too.
When I do manage to get it going, detection times range from 45 seconds down to 5 seconds, sometimes a bit faster. Generally the first image takes around 45 seconds and subsequent detections are sometimes faster; is that normal?
Edit: Ah, I can see it pretty much maxing out the memory! Has anyone got any further optimisations perhaps?
Edit 2: Would these be normal times for a 1200x800 image?
[GIN] 2020/12/16 - 17:40:28 | 200 | 24.83118358s | 172.17.0.1 | POST /v1/vision/face/register
[GIN] 2020/12/16 - 17:41:03 | 200 | 22.153595992s | 172.17.0.1 | POST /v1/vision/face/register
[GIN] 2020/12/16 - 17:41:17 | 200 | 6.34755231s | 172.17.0.1 | POST /v1/vision/face/recognize
[GIN] 2020/12/16 - 17:42:57 | 200 | 39.857662424s | 172.17.0.1 | POST /v1/vision/face/register
[GIN] 2020/12/16 - 17:44:11 | 200 | 1m4s | 172.17.0.1 | POST /v1/vision/face/recognize
Hi Lewis, try reducing the image resolution down to 640x360; I find this speeds things up a little bit more (see the resize sketch after the timings below).
Image resolution 640x360
[GIN] 2020/12/16 - 19:34:01 | 200 | 64.780744ms | 192.168.1.87 | POST /v1/vision/detection
[GIN] 2020/12/16 - 19:34:06 | 200 | 67.543634ms | 192.168.1.87 | POST /v1/vision/detection
[GIN] 2020/12/16 - 19:34:11 | 200 | 69.684934ms | 192.168.1.87 | POST /v1/vision/detection
[GIN] 2020/12/16 - 19:34:16 | 200 | 69.160897ms | 192.168.1.87 | POST /v1/vision/detection
[GIN] 2020/12/16 - 19:35:25 | 200 | 64.513607ms | 192.168.1.90 | POST /v1/vision/detection
[GIN] 2020/12/16 - 19:35:25 | 200 | 60.024567ms | 192.168.1.90 | POST /v1/vision/detection
[GIN] 2020/12/16 - 19:35:26 | 200 | 60.532398ms | 192.168.1.90 | POST /v1/vision/detection
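If you are saving snapshots to disk before sending them, something like ImageMagick can shrink them first. A rough sketch, where the filenames and host are just placeholders:
# downscale the snapshot, then send the smaller copy to DeepStack
convert snapshot.jpg -resize 640x360 snapshot_small.jpg
curl -X POST -F image=@snapshot_small.jpg http://<deepstack-host>/v1/vision/detection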
Hey Andy, thanks, I will give that a go tomorrow. I’ve had enough for today, so hopefully some fresh eyes tomorrow will help.
Regarding the Deepstack version - I’m now just using the deepstack:jetpack image, which is about 2 weeks newer than the x3-beta …??
FYI - here’s my Jetson Nano 2G processing images off my Hikvision cam (2048x1536)
running the latest deepstack:jetpack image. I’m not intentionally processing such high-res images, but I’m not sure how to get a lower resolution.
[GIN] 2020/12/17 - 05:31:57 | 200 | 491.585899ms | 192.168.1.30 | POST /v1/vision/detection
[GIN] 2020/12/17 - 06:30:39 | 200 | 439.996844ms | 192.168.1.30 | POST /v1/vision/detection
[GIN] 2020/12/17 - 06:31:51 | 200 | 455.971786ms | 192.168.1.30 | POST /v1/vision/detection
[GIN] 2020/12/17 - 06:32:19 | 200 | 481.706564ms | 192.168.1.30 | POST /v1/vision/detection
Yeah, I’m also using that jetpack image since I posted last night; it didn’t seem to make a difference and I’m still getting really bad processing times. Yours are way better, especially at a much higher resolution!
Hi all
I’ve just released v3.5 of the DeepStack object integration, which adds support for custom object detection models. Note that custom models are not supplied; you must create your own or source one from the community.
Example mask detection using a custom model:
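If you want to query a custom model directly with curl (outside the integration), I believe each custom model gets its own endpoint, roughly like this for a model named mask (the model name and host are just examples):
# hit the custom-model endpoint directly with a test image
curl -X POST -F image=@test.jpg http://<deepstack-host>/v1/vision/custom/mask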
I’ve read about people removing all the GUI/X related packages to free up memory on the Jetson - I intend to do it once I find the article describing how to do it again!
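From memory, the gist was to boot to a text console instead of the desktop, roughly like this, but treat it as a sketch until I dig the article up again (it should be reversible by setting graphical.target):
# boot to a text console instead of the desktop on the next reboot
sudo systemctl set-default multi-user.target
sudo reboot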
Is it a RAM+swapfile+SD card speed issue maybe?
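Might be worth checking with free -h how much swap is actually configured; if there is none, something along these lines adds a 4 GB swapfile (the size is just a suggestion, and it won’t persist across reboots unless added to /etc/fstab):
# check current memory and swap usage
free -h
# create and enable a 4 GB swapfile
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile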
Yeah, in theory not connecting the HDMI cable should start it in console-only mode; at least it has for me so far. I will test that.
Yeah, it definitely could be that. I just wondered if anyone had experienced the same, but it seems to be as you described!