There might be a problem reaching PyPI, because installation should not normally be an issue. Maybe try forcing the install and see what errors surface.
I see in the documentation the 'target' configuration is optional and defaults to person. I'm not finding what the other choices are, though. Are license plate or car or animal, etc. options? (I'm assuming this relies on whatever Google makes available.)
All of the objects seen in an image are listed in the summary attribute, so you can choose the target from there.
Cheers
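For example, here is a minimal sketch of pulling a count out of that attribute with a template sensor. The entity id is a placeholder for your own, and I am assuming the summary attribute is a mapping of label to count:
```
sensor:
  - platform: template
    sensors:
      cars_seen:
        friendly_name: "Cars seen"
        # Fall back to an empty dict if the attribute is not set yet
        value_template: >
          {{ (state_attr('image_processing.google_vision_driveway', 'summary') or {}).get('car', 0) }}
```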
Robin, do you have any approach you could share with regards to using multiple cloud services for image processing? A Bayesian sensor? I have a situation with images produced in low light, so the cloud services are returning low probabilities. I am wondering what the correct way is to aggregate results from multiple services…
Hi @ros
this is a hot topic in research, and doing anything fancy is probably going to require a lot of thought and effort. For an overview of the topic check out this article. My advice is not to leap into doing anything involving a Bayesian sensor, but to start as simple as possible. For example, if one model is doing well in daytime and another in low light, then use a simple time based rule to determine which model to use (a rough sketch is below). In the meantime, as you develop more sophisticated rules, keep a log of model results and try to spot any patterns that will be useful, e.g. which model does well and when. This would be very interesting to see and would help guide development for people like myself who spend more time developing and not enough time hands on with the tools.
Cheers
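To make the time based rule concrete, something like the two automations below would do it. This is only a sketch: the motion sensor and image_processing entity ids are placeholders for whatever you have set up for each service.
```
automation:
  - alias: "Daylight scan"
    trigger:
      platform: state
      entity_id: binary_sensor.driveway_motion
      to: "on"
    condition:
      condition: state
      entity_id: sun.sun
      state: "above_horizon"
    action:
      # Scan with the service that does well in daylight
      service: image_processing.scan
      entity_id: image_processing.google_vision_driveway
  - alias: "Low light scan"
    trigger:
      platform: state
      entity_id: binary_sensor.driveway_motion
      to: "on"
    condition:
      condition: state
      entity_id: sun.sun
      state: "below_horizon"
    action:
      # Scan with the service that copes better in low light
      service: image_processing.scan
      entity_id: image_processing.sighthound_driveway
```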
Can we use this system to recognize people in my household, or is it just generic persons?
Just generic. For recognising faces locally try dlib or facebox
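If you go the facebox route, the config is along these lines (a sketch only; the IP, port and camera entity are placeholders for your own facebox container and camera):
```
image_processing:
  - platform: facebox
    # Address and port of the machine running the facebox container
    ip_address: 192.168.1.10
    port: 8080
    source:
      - entity_id: camera.front_door
```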
To use it, do I need to do all the steps from https://www.home-assistant.io/integrations/google_cloud/? That is, enable billing for the project? I still get the error:
Cloud Vision API has not been used in project XXXXX before or it is disabled
I did enable it and I still get the error
I did, but Sighthound is less accurate. Lately it took a flower for a person; I sent the very same image to GCP and no mistake. But it has one big advantage over GCP -> no need to set up a billing card…
Yes that is an advantage
All of the services will make mistakes sometimes, and they are also probably making a different tradeoff of speed vs accuracy
Hi @robmarkcole, thanks for another great component. Been looking for this for a while. Working great except for one issue: sometimes it recognises me as "man" but not "person", hence not triggering the automation. Is a man not a person? (no pun intended)
Google chooses the labels - I noticed that and it's a bit weird. Please create a feature request on the repo explaining the solution you would like.
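In the meantime a workaround is to template on the summary attribute and treat either label as a hit. A sketch only: the entity id is a placeholder and I am assuming summary maps label to count:
```
binary_sensor:
  - platform: template
    sensors:
      person_detected:
        friendly_name: "Person detected"
        # True if either "person" or "man" was seen in the last scan
        value_template: >
          {% set s = state_attr('image_processing.google_vision_front_door', 'summary') or {} %}
          {{ (s.get('person', 0) | int) > 0 or (s.get('man', 0) | int) > 0 }}
```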
anyone able to get the targets attribute working?
have tried
```
target:
  - person
  - car
  - wildlife
```
but get this:
```
Invalid config for [image_processing.google_vision]: value should be a string for dictionary value @ data['target']. Got ['person', 'car', 'wildlife']. (See ?, line ?).
```
Only 1 target is possible with this integration. Check out my Amazon Rekognition integration instead.
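So the config wants a single string, roughly like this (the camera entity is a placeholder, and I have left out the other keys such as credentials):
```
image_processing:
  - platform: google_vision
    # A single string, not a list
    target: person
    source:
      - entity_id: camera.driveway
```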
Hi, can you tell me what kind of camera you use?
Regards
Giloris
You can use any kind of camera; personally I am using RPi cameras.
Hi, thank you. Your custom component is top. I built a connected chicken coop and I want to count the eggs produced every day with a camera.
But I use an ESP32-CAM and I don't know if it's compatible.
How does Google Vision find it? I don't see anywhere in the program where the camera ID is used?
Regards
You need to add your ESP32-CAM as a camera in Home Assistant.
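For example, if the ESP32-CAM is serving an MJPEG stream you can add it with the mjpeg camera platform. This is just a sketch - the URL below is a placeholder for whatever address and port your board streams on:
```
camera:
  - platform: mjpeg
    name: chicken_coop
    # Replace with the stream URL of your ESP32-CAM
    mjpeg_url: http://192.168.1.50:81/stream
```
The image processing component then points at that camera entity via its source config, so the camera is identified by its Home Assistant entity id rather than any ID inside Google Vision.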
Hi
How do I declare it in Home Assistant? I need an IP camera only, I suppose.
The camera integration? The RPi camera needs to be connected to the Raspberry Pi board, but the board is in my home and the camera is outdoors!
Regards
Giloris