Doorbell with Pi camera and motion detection

That code takes a still image, then processes it, but I'm currently going through another code snippet trying to understand it so I can get the script to process the feed live.

Nice! I'm by far not a developer/programmer but a 'tinkerer'. I've got a Zero W coming from overseas as you still can't get one in Australia.
What guide did you use to get cv installed?

It was the OpenCV part of the "who is at the coffee machine" guide robmarkcole posted in this thread.

https://www.hackster.io/luisomoreau/who-is-at-the-coffee-machine-72f36c3

 sudo apt-get install python-dev
 sudo apt-get install python-opencv
 sudo apt-get install libopencv-dev
 sudo pip install imutils
 sudo pip install numpy
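For anyone curious what the processing involves before installing all of that, the core of a frame-differencing motion detector is quite small. This is my own simplified sketch, not the code from the guide; the pure-Python helpers show the idea, and `run_live` shows roughly how it maps onto OpenCV (so it needs a camera and cv2 to actually run):

```python
# Simplified motion-detection sketch (not the guide's exact code).

def motion_score(prev_pixels, curr_pixels):
    """Mean absolute pixel difference between two equal-length frames."""
    total = sum(abs(a - b) for a, b in zip(prev_pixels, curr_pixels))
    return total / float(len(curr_pixels))

def motion_detected(prev_pixels, curr_pixels, threshold=25.0):
    """True when the average change between frames exceeds the threshold."""
    return motion_score(prev_pixels, curr_pixels) > threshold

def run_live(threshold=25.0):
    """Same idea against a real camera; needs OpenCV, so it is not called here."""
    import cv2
    cap = cv2.VideoCapture(0)
    prev = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # cv2.absdiff(...).mean() is the fast equivalent of motion_score
        if prev is not None and cv2.absdiff(gray, prev).mean() > threshold:
            print("motion detected")
        prev = gray
```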

Yeah I was. The whole OpenCV thing was becoming a vendetta, and I was getting somewhere so I didn't want to sleep until I cracked it.

I posted those at 1:30am-ish.


Additional notes.
I found a way to detect faces directly from the camera feed, although when I apply it to the doorbell script it conflicts with the motion detection.

However, I managed to work around this by saving the image and then running the face detection on it, and that works well. It can take a couple of seconds to plough through the detection, though I don't think that would hurt the operation too much.
Despite this, tomorrow I will be doing more research on how to adapt the motion detection and face detection so they work together.
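The save-then-detect step looks roughly like this. It's only a sketch: `pad_rect` and `detect_faces_in_file` are my own made-up helper names, and the cascade XML is whichever one you downloaded:

```python
# Sketch of the save-then-detect workaround (hypothetical helper names).

def pad_rect(x, y, w, h, margin, img_w, img_h):
    """Grow a detection rectangle by `margin` px, clamped to the image."""
    x1 = max(0, x - margin)
    y1 = max(0, y - margin)
    x2 = min(img_w, x + w + margin)
    y2 = min(img_h, y + h + margin)
    return x1, y1, x2, y2

def detect_faces_in_file(path, cascade_xml="haarcascade_frontalface_alt.xml"):
    """Run Haar detection on a saved image; needs OpenCV installed on the Pi."""
    import cv2
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(cascade_xml)
    # returns a list of (x, y, w, h) rectangles, one per face found
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                    minSize=(20, 20))
```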

So your idea is to ring the doorbell when motion AND a face are detected? I suppose the next step is facial recognition. I like the idea of my door welcoming me home :slight_smile: Or, more usefully, unlocking the door.

It is my aim to get recognition working, mainly as backup people-tracking (household members), and it would be nice to be greeted and have automations set up based on whoever just walked through the door.

I don't think I would trust it to unlock the door without additional security (BLE tags or something similar), as anyone could gain access with a photo.


There was a suggestion about integrating voice into this project; I wonder if Telegram could also be used for that? https://telegram.org/blog/calls

I've spent the best part of this afternoon researching voice options for this project. Not found anything solid yet. Progress so far:

  • Mumble… Possible (not keen, as I want to make it as generic and simple as possible)

  • Asterisk… A lot of setup; this is VoIP-based and there are possible call costs involved, so again not preferable.

  • VoIP/SIP… See above.

  • GStreamer… Promising, but needs more research on syntax and usage. Possibly send and receive audio via RTSP, but I will need to find a method to send and receive at the device end (Android/iOS).

  • Telegram… Promising, but audio chat is not available for the Pi.

  • WhatsApp using yowsup… All good, but you need to register a working phone number (unused for WhatsApp).

Unexplored options

  • HTML5… Not sure how possible this is and it needs looking into, but it would be good as it wouldn't need any apps at the client end.

If anyone else has any suggestions I would be grateful. I just need a method to set up a two-way audio chat between the Pi and an Android or iOS device, Python 2 friendly.
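For the GStreamer option, the rough shape I have in mind is two one-way RTP streams, one in each direction. This is an untested sketch; the pipeline strings, host and ports are placeholder assumptions, and the phone end would still need an app that can send/play RTP Opus audio:

```python
import subprocess

# Placeholder pipelines: mic -> remote host, and remote -> speakers.
# Requires GStreamer 1.0 with the alsa and opus plugins installed.
SEND_PIPELINE = ("gst-launch-1.0 alsasrc ! audioconvert ! audioresample "
                 "! opusenc ! rtpopuspay ! udpsink host=192.168.1.50 port=5000")
RECV_PIPELINE = ("gst-launch-1.0 udpsrc port=5001 "
                 "caps=application/x-rtp,media=audio,encoding-name=OPUS,payload=96 "
                 "! rtpopusdepay ! opusdec ! audioconvert ! autoaudiosink")

def start_audio_link():
    """Launch both directions as child processes (needs real audio hardware)."""
    return [subprocess.Popen(cmd.split()) for cmd in (SEND_PIPELINE, RECV_PIPELINE)]
```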


I'm not clear why you're not suggesting Asterisk; there is also this Pi-specific version:
http://www.raspberry-asterisk.org

Everything I looked at for Asterisk and PBX-related software was either GUI-based and/or requires a complete reinstallation of the OS, and the setup also seems quite in-depth and generally overkill for what this project needs.
I could be wrong, as I only looked at 4 different sites and a couple of forums.
The audio communications only need to be fairly simple and lightweight, and essentially need to run headless, so GUI applications are a no-go.
I was hoping to find a solution already available that some light tinkering could adapt.
Another avenue is to develop something for the Pi and iOS/Android to communicate directly, which also seems quite a drastic measure, but at least it would be custom-made to work with the setup.
I have only spent a short time (2-3 hours) looking, so I shall continue searching for a solution. As said before, I would be grateful for any suggestions.


OK, I see.
I'm not a coder so I can't help you any further, sorry.

I would never consider myself a coder either hehe, more of a hobbyist who dabbles with it, hence the continuous need for advice and research.

Getting there with face recognition.
I've jigged around with some code snippets I've found on the web.
This is just a script I jigsawed together and adapted. It finds a face in the live camera stream, then recognises it based on the closest match (this is why it is important to have a large set of negative, non-matching faces), then displays who it has found, with the confidence rating.

import cv2
from picamera import PiCamera
from picamera.array import PiRGBArray
import time
# RandomizedPCA comes from older scikit-learn releases; newer versions
# provide PCA(svd_solver='randomized') instead
from sklearn.decomposition import RandomizedPCA
import numpy as np
import glob
import math
import os.path
import string

# Initialise camera (resolution set before the capture buffer is created)
camera = PiCamera()
camera.vflip = True
camera.resolution = (320, 240)
camera.framerate = 32
rawCapture = PiRGBArray(camera)
time.sleep(0.1)  # allow the camera to warm up

# Haar cascade reference file used to detect faces;
# detectMultiScale returns the location data of each face found
cascade = cv2.CascadeClassifier("haarcascade_frontalface_alt.xml")
y = []   # training labels, populated by loadtrainfaces()
ci = 1   # counter used when saving cropped face images

def reckognize(img, X_pca, pca):
    global y
    IMG_RES = 92 * 112      # img resolution
    NUM_EIGENFACES = 10     # images per training person
    NUM_TRAINIMAGES = 20    # total images in training set

    # Flatten the test image (usually just one) into a single-row array
    X = np.zeros([1, IMG_RES], dtype='int8')
    X[0, :] = prepare_image(img)

    # Project the test image into PCA space and find the closest
    # training image by Euclidean distance
    idname = "unknown"
    for j, ref_pca in enumerate(pca.transform(X)):
        distances = []
        for i, test_pca in enumerate(X_pca):
            dist = math.sqrt(sum([diff**2 for diff in (ref_pca - test_pca)]))
            distances.append((dist, y[i]))

        found_ID = min(distances)[1]
        if found_ID == "1":
            idname = "Chloe"
        if found_ID == "2":
            idname = "Lee"

    return idname, min(distances)[0]

# function to get the person's ID from the folder part of the filename
def ID_from_filename(filename):
    part = filename.split('/')   # works under both Python 2 and 3
    return part[1].replace("s", "")

# function to convert an image to the format the PCA step expects
def prepare_image(img):
    img_color = cv2.resize(img, (92, 112))
    # frames are captured as BGR, so convert BGR -> grayscale
    img_gray = cv2.cvtColor(img_color, cv2.COLOR_BGR2GRAY)
    img_gray = cv2.equalizeHist(img_gray)
    return img_gray.flat

def loadtrainfaces():
    global y
    print("loading train faces")
    IMG_RES = 92 * 112      # img resolution
    NUM_EIGENFACES = 10     # images per training person
    NUM_TRAINIMAGES = 20    # total images in training set

    # load training set from folder train_faces
    folders = glob.glob('train_faces/*')

    # Create an array with flattened images X
    # and an array with the ID of the person on each image y
    X = np.zeros([NUM_TRAINIMAGES, IMG_RES], dtype='int8')
    y = []

    # Populate training array with flattened images from subfolders of train_faces
    c = 0
    for x, folder in enumerate(folders):
        train_faces = glob.glob(folder + '/*')
        for i, face in enumerate(train_faces):
            img = cv2.imread(face)
            X[c, :] = prepare_image(img)
            y.append(ID_from_filename(face))
            c = c + 1

    # perform principal component analysis on the images
    pca = RandomizedPCA(n_components=NUM_EIGENFACES, whiten=True).fit(X)
    X_pca = pca.transform(X)
    print("Complete")
    return X_pca, pca

def detect(X_pca, pca):
    global ci

    # grab a frame and look for faces with the Haar cascade
    camera.capture(rawCapture, format="bgr")
    img = rawCapture.array
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    rects = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                     minSize=(20, 20))

    if len(rects) == 0:
        return [], img
    rects[:, 2:] += rects[:, :2]    # convert (x, y, w, h) to (x1, y1, x2, y2)

    # draw a rectangle around each detected face, run the recognition
    # and save the cropped face image
    ci = 1
    for x1, y1, x2, y2 in rects:
        cv2.rectangle(img, (x1-10, y1-10), (x2+10, y2+10), (127, 255, 0), 1)
        sub_face = gray[y1:y2, x1:x2]
        resized_image = cv2.resize(sub_face, (92, 112))
        face_file_name = "./" + str(ci) + ".jpg"
        person, confidence = reckognize(img, X_pca, pca)
        cv2.imwrite(face_file_name, resized_image)
        print("Found " + person + " with " + str(confidence) + " confidence")
        ci = ci + 1
    cv2.imwrite('./detected.jpg', img)
    return rects, img

X_pca, pca = loadtrainfaces()

while True:
    rects, img = detect(X_pca, pca)
    print("faces Detected = " + str(len(rects)))
    # clear the capture buffer ready for the next frame
    rawCapture.truncate(0)

This is the output:

Found Lee with 2.33284454765 confidence
faces Detected = 1
Found Lee with 2.33325777615 confidence
faces Detected = 1
Found Lee with 1.94089303853 confidence
faces Detected = 1
Found Lee with 2.26895103642 confidence
faces Detected = 1
Found Lee with 2.24063981916 confidence
faces Detected = 1
Found Lee with 2.34530201278 confidence
faces Detected = 1
Found Lee with 1.95130334247 confidence
faces Detected = 1
Found Lee with 1.91662331889 confidence
faces Detected = 1
Found Lee with 2.45193249705 confidence
faces Detected = 1
Found Lee with 2.20069181304 confidence
faces Detected = 1
Found Lee with 1.84620549768 confidence
faces Detected = 1
Found Lee with 1.87807848023 confidence
faces Detected = 1
Found Lee with 2.30698351619 confidence
faces Detected = 1
Found Lee with 2.3290739458 confidence
faces Detected = 1
Found Lee with 1.9275310061 confidence
faces Detected = 1
Found Lee with 2.30034177304 confidence
faces Detected = 1
Found Lee with 2.30437725477 confidence
faces Detected = 1
Found Lee with 2.26709627767 confidence

Apologies for not putting remarks in the code, it's getting late.
You need a folder named train_faces with subfolders named s1, s2, s3…
Each folder should contain 10 faces of a person you want to recognise, and, more importantly, you also need folders of faces of random people.
I have 10 folders of random people, downloaded from
http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html

and an additional 3 folders for me, my wife and my daughter, each with 10 various grayscale images at 92x112.
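In case the layout is unclear, a quick sanity check of the folder structure could look like this (my own helper, mirroring the ID_from_filename logic in the script above):

```python
import os

def id_from_path(path):
    """'train_faces/s3/7.pgm' -> '3' (same idea as ID_from_filename above)."""
    return path.split('/')[1].replace('s', '')

def check_train_faces(root='train_faces', faces_per_folder=10):
    """Return (folder, count) pairs for subfolders missing the expected count."""
    problems = []
    for name in sorted(os.listdir(root)):
        folder = os.path.join(root, name)
        if not os.path.isdir(folder):
            continue
        count = len(os.listdir(folder))
        if count != faces_per_folder:
            problems.append((name, count))
    return problems
```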

The guide I followed is here
https://www.raspberrypi.org/forums/viewtopic.php?f=38&t=85755

You have to play around with your training faces, but once you've got it, it's great.
I just need to tidy the code up and implement this in the doorbell now.


This is great progress, nice work! Is it fast enough that the doorbell could announce who is at the door?

I haven't tried it on the Zero yet. It was running well on the Pi 2, doing approximately 3-4 loops of the programme per second, which seemed fast enough.
But we will have to see what the Zero is like.
If the Zero is too slow, I could always turn this into a face recognition server and run it on, say, a Pi 2 or 3.
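That server idea could be as simple as a line-based TCP service: the Zero sends the path (or later, the bytes) of a saved image and gets a name back. A Python 3 sketch, with `recognise` as a placeholder for the PCA matching code above:

```python
import socketserver

def recognise(image_path):
    """Placeholder: the real version would run the PCA matching on the image."""
    return "unknown", 0.0

def format_reply(person, confidence):
    """One-line reply the doorbell Pi can parse: 'name confidence'."""
    return "%s %.4f\n" % (person, confidence)

class RecognitionHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # client sends a single line containing the path of the saved image
        path = self.rfile.readline().decode().strip()
        person, confidence = recognise(path)
        self.wfile.write(format_reply(person, confidence).encode())

# To run on the faster Pi (not executed here):
# socketserver.TCPServer(("0.0.0.0", 9999), RecognitionHandler).serve_forever()
```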

The issue I have at the moment is that the current motion detection script works really well within the doorbell script, but when I try to integrate the face detection and recognition code, it conflicts.

The way I understand it is:

  • first, the motion detection is still running while the recognition is being processed, and it throws a new image into the mix and crashes.
  • second, when I try to process the image, it is for some reason in the wrong format.
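The first failure mode (motion detection firing again mid-recognition) can probably be avoided by gating the motion loop with a non-blocking lock, so new frames are simply dropped while recognition is in flight. A sketch, with hypothetical function names:

```python
import threading

# Held while a recognition pass is running; motion events that arrive
# in the meantime are dropped instead of crashing the pipeline.
recognition_lock = threading.Lock()

def on_motion(frame, recognise):
    """Run recognition unless one is already in flight.

    Returns True if recognition ran, False if the frame was dropped.
    """
    if not recognition_lock.acquire(False):   # non-blocking: busy -> drop frame
        return False
    try:
        recognise(frame)
        return True
    finally:
        recognition_lock.release()
```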

I'm sure it's an easy fix, but while I'm still learning at the deep end and pulling scripts from all over the web, I suppose these problems are inevitable.

I'll keep at it and I'll get there at some point.

Still haven't found anything satisfactory for the audio communication yet, so I'm open to any suggestions for this.


OK, I have the RF codes for my doorbell ringer; surprisingly those £1 receivers work even with the bell at the other end of my flat. I want the receiver to be on the Pi Zero next to the door nevertheless. Do you have code for triggering the camera just on the RF signal of the doorbell? I suppose I could just modify https://github.com/milaq/rpi-rf/blob/master/scripts/rpi-rf_receive

Hi Rob
Try this tutorial.
http://www.securipi.co.uk/remote-433-receivers.pdf

I know it's Windows-based, but from what I remember it is easily adapted for the command line.

It's been a while, but I'll see if I can dig out my conversion.

Thanks Lee!

Rob
I have been hunting for the programme I used, and I realised I ended up using an Arduino Nano plugged into my Raspberry Pi to receive the RF packets.
It was a needless workaround really, as I could have done it via GPIO on the Pi; it was when I first started tinkering and didn't know much about it.
I'm sure you can do something with the snippets of code both you and I posted on here.
I'm willing to help if I can if you get stuck, and I should be able to find an RF receiver kicking about somewhere.

I played with the code in the link you posted and it works a treat, but it only works in Python 3.
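For reference, triggering only on the doorbell's own code with rpi-rf looks roughly like this (a Python 3 sketch modelled on the rpi-rf_receive script linked above; the GPIO pin and the code in DOORBELL_CODES are placeholders you'd fill in from your own decoded signal):

```python
import time

DOORBELL_CODES = {1234567}   # placeholder: your bell's decoded code(s)

def is_doorbell(code, known_codes=DOORBELL_CODES):
    """True only for codes we've learned belong to the doorbell button."""
    return code is not None and code in known_codes

def listen(gpio_pin=27, on_ring=None):
    """Blocking receive loop; needs rpi-rf and a wired 433 MHz receiver."""
    from rpi_rf import RFDevice   # imported here so the helpers work anywhere
    rfdevice = RFDevice(gpio_pin)
    rfdevice.enable_rx()
    last_timestamp = None
    try:
        while True:
            if rfdevice.rx_code_timestamp != last_timestamp:
                last_timestamp = rfdevice.rx_code_timestamp
                if is_doorbell(rfdevice.rx_code) and on_ring is not None:
                    on_ring()    # e.g. trigger the camera capture
            time.sleep(0.01)
    finally:
        rfdevice.cleanup()
```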