Microsoft facial recognition, who is able to make it work?

Sorry, do you mean taking a picture using the same camera that I’m using to detect my face on?

I see that a face is detected, but I'm not sure if it identifies my face. Here is what the log said. Notice total_faces changed to 1, but known_faces is still 0:

INFO:homeassistant.core:Bus:Handling <Event state_changed[L]: entity_id=image_processing.microsoftface_basement_livingroom_camera, old_state= state image_processing.microsoftface_basement_livingroom_camera=unknown; total_faces=0, known_faces=, friendly_name=MicrosoftFace basement_livingroom_camera @ 2017-02-12T00:01:58.881868-05:00,

new_state= state image_processing.microsoftface_basement_livingroom_camera=unknown; total_faces=1, known_faces=, friendly_name=MicrosoftFace basement_livingroom_camera @ 2017-02-12T00:01:58.881868-05:00

Here is the API page that explains it. The image needs to meet these requirements: "JPEG, PNG, GIF (the first frame), and BMP are supported. The image file size should be larger than or equal to 1KB but no larger than 4MB."

https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b
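Before uploading, you can sanity-check a local image against those documented limits with a small script. This is just a sketch: the 1 KB / 4 MB bounds and the supported formats come from Microsoft's page quoted above, while the function name and magic-byte checks are my own illustration:

```python
import os

# Documented limits from the Face API "Add Face" page:
# JPEG, PNG, GIF (first frame), BMP; size between 1 KB and 4 MB.
MIN_BYTES = 1024
MAX_BYTES = 4 * 1024 * 1024

# Magic bytes that identify the supported formats.
SIGNATURES = {
    b"\xff\xd8\xff": "JPEG",
    b"\x89PNG\r\n\x1a\n": "PNG",
    b"GIF87a": "GIF",
    b"GIF89a": "GIF",
    b"BM": "BMP",
}

def check_face_image(path):
    """Return (ok, reason) for a candidate image file."""
    size = os.path.getsize(path)
    if size < MIN_BYTES:
        return False, "too small (under 1 KB)"
    if size > MAX_BYTES:
        return False, "too big (over 4 MB)"
    with open(path, "rb") as f:
        head = f.read(8)
    for magic, fmt in SIGNATURES.items():
        if head.startswith(magic):
            return True, fmt
    return False, "unknown format"
```

A file that fails either check here would likely be rejected by the API with a size or format error.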

That is not easy to understand if you have no coding experience. Looking at that link, where do you put the picture?

{
  "url": "http://home/homeassistant/.homeassistant/www/saraiyhw.jpg"
}

I added it like this in HA dev tools… it did not come back with an error, but again I don't know what to look for… Do I need to add a list, or will it create a face list…

How do you train the group?

Ok, got it working by turning down the confidence level to 50 (default 80). Maybe adding different pictures of my face and training again should help. Here is the log (if anyone wants to know):

INFO:homeassistant.core:Bus:Handling Event state_changed[L]: new_state= state image_processing.microsoftface_1stflr_livingroom_camera=daddy; friendly_name=MicrosoftFace 1stflr_livingroom_camera, known_faces=daddy=50.822, total_faces=1 @ 2017-02-12T00:52:45.786532-05:00,

old_state=state image_processing.microsoftface_1stflr_livingroom_camera=unknown; friendly_name=MicrosoftFace 1stflr_livingroom_camera, known_faces=, total_faces=0 @ 2017-02-12T00:50:03.799456-05:00, entity_id=image_processing.microsoftface_1stflr_livingroom_camera

Sorry, I'm just giving you a reference of what MS said about picture size. The page can add a picture, but you need to have the saraiyhw.jpg file accessible via a website. If you just have it on your computer, you need to use the curl command line as I posted before.

The curl command line is giving me 2 errors:

  1. too big or too small
  2. unknown format

here is my setup

microsoft_face.family 2 Bobby: b1cfa8ea-d0b4-4e66-942f-09628dc035a9
Saraiyh: c61fce65-e4d5-4fc3-a47a-123089b646fc

pic location

{
  "url": "http://home/homeassistant/.homeassistant/www/saraiyhw.jpg"
}

Can you tell me where I am going wrong?
and

curl -v -X POST "https://westus.api.cognitive.microsoft.com/face/v1.0/persongroups/family/persons/c61fce65-e4d5-4fc3-a47a-123089b646fc/persistedFaces" \
  -H "Ocp-Apim-Subscription-Key: 987f46852353491aa" \
  -H "Content-Type: application/octet-stream" \
  --data "@/home/homeassistant/.homeassistant/www/saraiyhw.jpg"
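For reference, the same upload can be expressed in Python with only the standard library. This is a hedged sketch: the group name, person ID, key, and file path are the placeholder values from this thread, and the function only builds the request object (a caller would pass it to urllib.request.urlopen to actually send it):

```python
import urllib.request

def build_add_face_request(group, person_id, key, image_path):
    """Build the persistedFaces POST; caller can urlopen() it to send."""
    url = (
        "https://westus.api.cognitive.microsoft.com/face/v1.0/"
        f"persongroups/{group}/persons/{person_id}/persistedFaces"
    )
    with open(image_path, "rb") as f:
        body = f.read()  # raw bytes, the equivalent of curl's --data-binary
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Ocp-Apim-Subscription-Key": key,
            "Content-Type": "application/octet-stream",
        },
        method="POST",
    )
```

Note that the image is read as raw bytes; how curl handles the file bytes turns out to matter, as discussed further down the thread.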

also

[
  {
    "personId": "b1cfa8ea-d0b4-4e66-942f-09628dc035a9",
    "persistedFaceIds": [],
    "name": "Bobby",
    "userData": null
  },
  {
    "personId": "c61fce65-e4d5-4fc3-a47a-123089b646fc",
    "persistedFaceIds": [],
    "name": "Saraiyh",
    "userData": null
  }
]

Can anyone tell me where the curl command goes… This is very confusing now. Can anyone who has it working make a step-by-step guide for noobs…

yes very confusing

Got it to work.
I created my own scripts that I run on my Windows machine. I basically created a folder for each group, then folders for persons, with each folder containing photos of that person.
My dirty code is here:
http://pastebin.com/T5MUfjqL

The problem is that we eventually burn through the Azure API quota. Is there a way to disable the image_processing component based on rules/conditions (time, motion sensor)?

The IP cam where I assigned image processing automatically turns itself off without motion; I'm not sure if the image processing stops when this happens.

Welcome to my first real post

Or at least the first one I felt would be useful to at least one other person...

After a bunch of plugging and chugging, along with countless rereadings of this entire thread, I was able to get MS Face to learn my face, train the group, and then, based off the automation, trigger some lights to come on in my house. However, within minutes of the success, I reached the limit for the free tier of MS Face on Azure.

I will post my config and the steps I took below, but I believe I identified where one of the major issues comes into play. And it didn't become clear to me until the latest update that saw microsoftface_detect added as a separate component.

You need to have both a microsoftface_identify and a microsoftface_detect as image processing components. The detect component sees if there is a face, and then identify identifies the face if it is known to your MS Face account as a person in a group.

I could also be wrong, and having both as separate components under image processing may have been superfluous. But that is how I had it configured when I finally got it to work.

Also, I set confidence down to 40 as well.

Steps I Took

  1. Firstly, I did "pip install cognitive_face" on my RPi3 while HA was not running, for good measure. While I am not sure if this was absolutely necessary, my gut says it was, hence I have listed it here.
  2. Creating the person and the group can be done relatively easily in the front end dev section. The JSON info needed to send needs to contain the parameters listed for each service call and only those parameters. For those of you (us) newer to this, your data (JSON) should look like this: > {"foo":"bar","derp":"a derpa"}
  3. For teaching the face to the image, you can use the front end dev section, similar to creating a person and group above, by specifying a camera entity_id to capture the face from. Note: if there are multiple faces in the camera's field of view when you call the service, MS Face will follow its documented protocols to determine which face is saved. And that might not be the one you want it to choose.
    See MS Face documentation on that if you want to try to specify which one to save but it would be incredibly tedious to do so through the HA front end. I recommend using the api demo's embedded curl which can be accessed [here](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b/console).
    1. The persongroupid is the name of the group the person is in. The personID can be found in the dev section of the front end under Microsoft_face.yourgroupname. Then delete the other query parameters (i.e. targetFace). content-type should be unchanged. Add your subscription key to O...
    2. In request body, change the url that is quoted to a url of an image of the person or face you are trying to ad... This can be done by using the URL of a social media profile, an image on a site or even an image on your HA server (if you have an externally accessible URL that correctly points to an image).
    3. Hit SEND at the bottom of the page and if it was successful it will say so under Response Status.
      • If it failed, it will say why. Troubleshoot accordingly.
    4. Once the face has been successfully added, you can verify it in the frontend of HA under dev tools. The image_processing.yourMSFaceIdentify_entityid should now read total_faces: 1 instead of 0.
  4. Next, train the MS Face Group. I was unable to get this to successfully run via HA or curl and therefore in my opinion must be done on the MS Face Train Person Group site [here](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395249).
  5. Once trained, any automation using the above components (as listed on the HA documentation [here](https://home-assistant.io/components/image_processing/)) should execute without error.
  6. Note: I am trigger happy with HA restarts and do them after every step of this, including anything done on the MS Face site. Also, to solve the issue with 401 on the camera image pull, you have to make sure your HA is fully accessible from external IPs, as the MS Face API has to reach it… and you need to have specified a base URL for your http component if you are using encryption. However, the base URL is not necessary if you run your HA server without any encryption. (I turned off my encryption during the testing and setup process of MS Face because I was trying to isolate the issues I was having. I would not recommend normally having an HA setup that is exposed externally without encryption. So be advised.)
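The steps above boil down to four REST calls against the Face API. Here is a minimal outline of the endpoints involved, following the base URL and group naming used in this thread. Treat it as a sketch of the call structure, not a tested client:

```python
BASE = "https://westus.api.cognitive.microsoft.com/face/v1.0"

def person_group_url(group_id):
    # PUT here creates the group itself.
    return f"{BASE}/persongroups/{group_id}"

def create_person_url(group_id):
    # POST with JSON body {"name": "..."} returns a personId.
    return f"{BASE}/persongroups/{group_id}/persons"

def add_face_url(group_id, person_id):
    # POST the raw image bytes (application/octet-stream) to attach a face.
    return f"{BASE}/persongroups/{group_id}/persons/{person_id}/persistedFaces"

def train_url(group_id):
    # POST with an empty body starts training; identify only works after this.
    return f"{BASE}/persongroups/{group_id}/train"
```

Each call needs the Ocp-Apim-Subscription-Key header, and training (step 4 above) has to finish before identification will return known faces.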

    My Configuration

As promised, here is my configuration. Mind you, I am still very much a NOOB at this, and I want to thank those of you who commented above me on this thread. Your failures and successes helped me in countless ways.

    Conclusion and My Next Move

All this being well and good, it felt really f-ing good to finally get this working, but due to the almost immediate maxing out of my API calls (and countless other good reasons listed above and in other threads), I cannot continue to use this in my system and will now be tackling the integration of openCV. When I get it working, I will post the steps I took (unless someone else has already done so).

Hi,

I tested the Microsoft component; it works well. Nevertheless, we need to contact Microsoft's server… so I began to integrate OpenCV to be able to do facial recognition offline.

Many bugs must still be resolved… for it to work fine…

I hear you. And I actually am currently bringing myself up to speed on your progress as mentioned in the above thread. I was planning on using that as a jumping-off point anyway. I am skipping installing openCV on one of my RPi's, though, and instead opting for a dedicated server that sits next to my main PC doing pretty much nothing. (I think the only always-on process it is currently running is a PLEX server. And it's got plenty of power, so I am curious to see how smoothly openCV can run.)

I was having similar problems using the Computer Vision API, which uses a similar curl command.

I think I solved the problem, and it is probably the same in your case. The manual page on the Microsoft site is misleading for the case where you want to upload a file from local storage (from your own laptop). It says that the last option should be --data-ascii "{body}". However, the last option of the curl command should be --data-binary "{body}".

So, for your case I guess the last option should be --data-binary "@/tmp/saraiyhw.jpg" instead of --data "@/tmp/saraiyhw.jpg".

The @ sign is necessary because it tells curl to find the file at that location and send its contents as raw binary.
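To see why this matters for a JPEG: per the curl manual, --data / --data-ascii strips carriage returns and newlines from a file read with @, and binary files almost always contain those bytes, so the image arrives corrupted. A small illustration (the byte string below is made up, not a real JPEG):

```python
# A fragment of fake "binary" data containing CR and LF bytes,
# as any real JPEG will.
payload = b"\xff\xd8\xff\xe0\r\n\x00\x10JFIF\n\x00"

# What --data / --data-ascii effectively does to @file input:
# strip CR and LF before sending.
ascii_mode = payload.replace(b"\r", b"").replace(b"\n", b"")

# What --data-binary does: send the bytes untouched.
binary_mode = payload

assert binary_mode == payload
assert ascii_mode != payload  # bytes were lost -> corrupted image
print(len(payload) - len(ascii_mode), "bytes would be stripped")
```

That silent byte loss is consistent with the "unknown format" error reported earlier in this thread.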

I hope this helps!

I think the Microsoft endpoint is incorrect even in their own documentation… Try this one: https://westus.api.cognitive.microsoft.com/face/v1.0/

Rob,
Did you get your Google tensorflow face recognition working? Can you share your code and experiences?

Thank you for this great post. What was the reason you ran out of API calls?
Were you processing images or video?

30 per minute sounds like a LOT of calls. My naive guess at this moment is that once you detect motion, you take 2-3 pics and process them. Most of the day, there is no motion in the house.

How did you get to those 30 calls per minute? Why do you need to process so many requests?

What are you using now? Did you do another post on OpenCV or another image processing API?
I am about to start down this path, so it would be great to get your feedback on where you wound up.