HA, Deepstack and variable confidence level

Hi All,

I would like my (fixed) Deepstack object detection confidence level to be variable so
that I can control the confidence level from a dashboard. An input_number would be the start, but
how do I connect that to the Deepstack confidence level in configuration.yaml?
Please advise,

Nongsung

Here is a sample of my settings:

  - platform: deepstack_object
    ip_address: [xyz]
    port: 8085
    # api_key: mysecretkey (optional)
    # custom_model: mask
    # confidence: 80
    scan_interval: 10000
    save_file_folder: /config/www/
    save_file_format: png
    save_timestamped_file: False
    always_save_latest_file: True
    scale: 0.75
    roi_x_min: 0.2
    roi_x_max: 0.99
    roi_y_min: 0.01
    roi_y_max: 0.99
    targets:
      - target: person
        confidence: 60
      - target: vehicle
        confidence: 60
      - target: car
        confidence: 40
      - target: cat
        confidence: 40
      - target: dog
        confidence: 40
    source:
      - entity_id: camera.tapo_camera_545d_hd
        name: deepstack_outdoor_live_cam

Yes, but that has a fixed confidence level as well.
I’m looking for something like: confidence: confidence_level_variable

OK, I’m not familiar with that one… could you maybe elaborate on your use case so others can possibly chime in?

OK, here it goes. I have 7 cameras running perfectly during the daytime. The only object I monitor is ‘person’ and that works well. At night, however, the detection is poor, so my thinking is that if I lower the confidence level at night, I’ll get more positive hits. Then I could automate it: let’s say confidence level 80 during the day and confidence level 70 at night (or whatever works best). I don’t know how to achieve this because my confidence level is fixed at 80…

Clear… about 6 (?) months ago I read something about a solution for Deepstack that helps identify objects better in a nighttime situation… sadly I can’t find it with a quick (!) search, but it should be out there… this is of course not what you asked for, but it might help.
As for making the Deepstack confidence levels flexible, I guess you would have to raise this with the owner(s); as you probably know, Deepstack is not HA-specific. https://forum.deepstack.cc/

Yeah, I know what you mean: the dark models. I’ve tried that but couldn’t get it to work.
Thanks anyway, I’ll sort it out eventually.

Could you please send me the link to the dark models, and… what did not work… no effect at all?

Hi

Got the container running with both object detection & the custom model detection, but it only performed the standard object detection; the custom model was never invoked. I’ll give it another try tonight.

btw… a very odd way of dealing with this is having multiple Deepstack cam sensors… e.g. one for the night and one for the day…
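Roughly, the idea would be something like this (untested sketch; the camera, the motion sensor and the sensor names are just placeholders to replace with your own): two deepstack_object sensors with different confidence values, and an automation that scans with one or the other depending on whether the sun is up.

  image_processing:
    # daytime sensor with a higher confidence threshold
    - platform: deepstack_object
      ip_address: [xyz]
      port: 8085
      scan_interval: 10000   # long interval, so scans mostly come from the automation
      targets:
        - target: person
          confidence: 80
      source:
        - entity_id: camera.tapo_camera_545d_hd
          name: deepstack_person_day
    # nighttime sensor with a lower confidence threshold
    - platform: deepstack_object
      ip_address: [xyz]
      port: 8085
      scan_interval: 10000
      targets:
        - target: person
          confidence: 70
      source:
        - entity_id: camera.tapo_camera_545d_hd
          name: deepstack_person_night

  automation:
    - alias: "Scan with the day or night Deepstack sensor on motion"
      trigger:
        - platform: state
          entity_id: binary_sensor.outdoor_motion   # placeholder motion sensor
          to: "on"
      action:
        - choose:
            - conditions:
                - condition: state
                  entity_id: sun.sun
                  state: above_horizon
              sequence:
                - service: image_processing.scan
                  entity_id: image_processing.deepstack_person_day
          default:
            - service: image_processing.scan
              entity_id: image_processing.deepstack_person_night

Not elegant (every camera is configured twice), but it would get around the fixed confidence value without templating configuration.yaml.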

I have kind of figured it out, because I now know why the dark model wasn’t working.
First, your Deepstack docker container (object detection) should point to the path of your custom model (I did this), and in configuration.yaml you have to specify that the object detection should use your custom model;
in this case ‘dark’.

image_processing:
  - platform: deepstack_object
    ip_address: 192.168.1.X
    port: 8183
    custom_model: dark
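
For the container side, if I remember correctly Deepstack picks up any .pt files from a folder mounted at /modelstore/detection and exposes each one by its filename; with docker-compose that would look roughly like this (the host path is just an example):

  services:
    deepstack:
      image: deepquestai/deepstack
      restart: unless-stopped
      ports:
        - "8183:5000"
      environment:
        - VISION-DETECTION=True
      volumes:
        # dark.pt in this folder becomes the custom model 'dark'
        - /path/to/custom-models:/modelstore/detection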

But that is also hard-coded, and I would lose or reduce the ability to detect objects during
the daytime. One step further, but still not what I want.
Then I stumbled on a custom model that contains models for daytime AND nighttime.

Downloaded the general.pt file, installed it, changed configuration.yaml, restarted Deepstack & Home Assistant, and it is running now; it’s pretty fast and pretty accurate, but it is midday here so I don’t know
yet how accurate it’s gonna be when it’s dark outside.
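
In case anyone wants to try the same: assuming the general.pt file sits in the same mounted model folder as the dark model, the only change in configuration.yaml is the custom_model line, e.g.:

  - platform: deepstack_object
    ip_address: 192.168.1.X
    port: 8183
    custom_model: general   # matches the general.pt file in the mounted model folder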

Keep you posted…

Thanks for this as I (and probably many others) have the same challenges :slight_smile:

Thanks for posting this.

I’ve had a brief play with the general.pt custom model for persons and vehicles only.
It is significantly faster than the stock model on Docker CPU Deepstack; however, I’ve noticed that the corresponding Deepstack detection events return “other” for trigger.event.data.object_type rather than the standard model’s person / vehicle object type.

Do you see a similar behaviour in your testing?

Hi TazUK,

I’m handling Deepstack from Node-RED; if one of my 7 cameras sees motion, the flow is
executed further. Since I explicitly check for ‘person’ (for notifying), I’m not getting other results (because I’m not checking for anything other than ‘person’).
Furthermore, in my configuration.yaml I have set the target to ‘person’.

So I guess my total configuration is different than yours…

But like you said, it’s way faster; I’m still not satisfied with the motion detection at night.
Working on that…

Regards,

NongSung

Yes, sounds like different use cases - I use the object type returned by the event to exclude certain object types within ROI boundaries for the camera on the front of our house (person on path, vehicle on road, etc.) from notifications. Other cameras have less filtering in the associated YAML automations, so I may need to use them to test the models first before I start modifying the logic.
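
For context, the filtering is along these lines (simplified sketch; the notify service is a placeholder and I’m going from memory on the exact event data fields): the deepstack.object_detected event triggers the automation and object_type decides whether to notify. Since general.pt reports ‘other’ for object_type, a condition like this stops matching, so I’d have to switch to the name field instead.

  automation:
    - alias: "Notify when Deepstack sees a person"
      trigger:
        - platform: event
          event_type: deepstack.object_detected
      condition:
        # stock model: object_type is 'person' / 'vehicle' / ...
        # general.pt custom model: object_type comes back as 'other'
        - condition: template
          value_template: "{{ trigger.event.data.object_type == 'person' }}"
      action:
        - service: notify.mobile_app_my_phone   # placeholder notify service
          data:
            message: >
              {{ trigger.event.data.name }} detected with
              {{ trigger.event.data.confidence }}% confidence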

Hi, I use the Deepstack HA add-on. Can you please tell me where you put these config lines? In the addon_config folder or in the HA configuration.yaml?

Deepstack works for me with Double Take and Frigate, all on the same Raspberry Pi. But the confidence and box sizes from Deepstack are too limited for me.

Thanks

These go in the configuration.yaml; I am running the Docker version.