Doorbell with Pi camera and motion detection

Hi Lee
I see a couple of people have also modified that script, so it should be straightforward.

Re Python 2, is that required for OpenCV? You could always call it from within a Python 3 script :slight_smile:

Cheers

I've been using Python 2 for my projects, but the import libraries for the RF script are for Python 3.
OpenCV, I think, works in either.
I am not sure if the future compatibility module can be used in this case; I'm not 100% on what its limits and capabilities are. It's worth looking into.

Sorry, I meant calling the Python 2 script from the command line within Python 3.
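Something like this rough sketch, assuming Python 3.5+ for subprocess.run, and where rf_receive.py is just a placeholder name for the Python 2 script:

# Sketch: run a Python 2 script from inside a Python 3 script.
import subprocess

result = subprocess.run(
    ["python2", "rf_receive.py"],  # placeholder script name
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print(result.stdout.decode())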

I have been considering whether to:

  1. write a stand-alone script that is hosted on the Pi Zero, or even set up an HA instance on the Pi Zero
  2. use my primary HA instance on the Pi 3 to control the show, e.g. using https://home-assistant.io/components/switch.command_line/ (rough sketch below)

The advantage of the second approach is that once all of the components are configured, they could be used in other automations without needing to tinker with the Pi Zero.
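A rough sketch of the command_line switch for option 2 might look like this (the hostname, paths, and script name are placeholders, not a working setup):

switch:
  - platform: command_line
    switches:
      doorbell_camera:
        command_on: "ssh pi@pizero.local 'nohup python /home/pi/motion_detect.py &'"
        command_off: "ssh pi@pizero.local 'pkill -f motion_detect.py'"
        friendly_name: Doorbell Camera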

Personally I think option 2 would be better.
There's a lot going on with the little Pi with the image processing and so on; you need as much processing power as you can get.

Things to consider:
Would you need the Zero to know if the doorbell has been pressed?
If so, how would it be interfaced with the additional Pi?

I'm going to place my RF receiver on the Pi Zero, then send MQTT to HA.

Actually, I found that the cheap receiver picked up the signal from my doorbell even 30 feet away, so alternatively it could be placed on my main Pi in the living room.
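Either way, the glue script could stay very small. Something like this sketch using paho-mqtt, where wait_for_rf_code() stands in for whatever the RF receiver library provides, and the broker address and topic are examples only:

# Sketch: forward RF doorbell presses to HA over MQTT.
import paho.mqtt.client as mqtt

def wait_for_rf_code():
    # Block until the 433 MHz receiver decodes the doorbell code;
    # this depends entirely on your RF receiver library.
    raise NotImplementedError

client = mqtt.Client()
client.connect("192.168.1.10", 1883)  # your MQTT broker address
client.loop_start()

while True:
    wait_for_rf_code()
    client.publish("doorbell/pressed", "ON")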

You could always connect it to a NodeMCU as an independent device and have that send MQTT to HA.

Managed to brick my NodeMCU… Testing the WiPy platform now :slight_smile:

I'm using a Pi for my back garden, without the doorbell button installed (we had someone in our garden recently climbing over fences etc., probably kids).
It is good, with not many false positives, but it just takes cloud movement or a bird flying past and it is triggered. I think that is as good as the detection will get, though.

That Amazon Rekognition may be able to accurately differentiate clouds and people… :slight_smile:


I recently made a doorbell. I removed my old, dead doorbell ringer and transformer and hooked the wires to the pins on a NodeMCU, then flashed code that publishes to an MQTT topic whenever the button is pressed. HASS has an automation triggered on that topic: when the message is seen, it plays a doorbell sound over my multi-room audio system (pausing whatever is playing) and sends me a Pushbullet notification.
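For anyone wanting to replicate the HASS side, the automation could be roughly like this (the topic and entity names here are made up; swap in your own):

automation:
  - alias: Doorbell pressed
    trigger:
      platform: mqtt
      topic: doorbell/pressed
    action:
      - service: media_player.play_media
        entity_id: media_player.whole_house
        data:
          media_content_id: /local/doorbell.mp3
          media_content_type: music
      - service: notify.pushbullet
        data:
          message: Someone is at the door!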


I have yet to figure out a multi-room audio system; how did you go about it?

Snapcast is what I use. Basically lets you have DIY Sonos-like functionality.


I've found an OpenCV-based motion detection script: https://github.com/mowings/picam_motion/blob/master/motion_detect.py

I am going to scan through and salvage code from my existing script and merge it with this one.
It means I can run the face detection from the live stream.
Hopefully that will make things more fluid.
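The face detection hook could end up roughly like this sketch (using OpenCV's bundled Haar cascade; the cascade file path varies between installs):

# Sketch: Haar-cascade face detection on a grayscale frame from
# the motion loop. Cascade path varies by OpenCV install.
import cv2

face_cascade = cv2.CascadeClassifier(
    "/usr/share/opencv/haarcascades/haarcascade_frontalface_default.xml")

def detect_faces(gray_frame):
    # Returns a list of (x, y, w, h) face rectangles.
    return face_cascade.detectMultiScale(
        gray_frame, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))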


For voice, perhaps Skype is possible? https://blog.adafruit.com/2017/04/07/run-skype-on-raspberry-pi-raspberry_pi-piday-raspberrypi/


This runs a desktop UI; we need something command-line based, as it will be running headless.
Interesting though, it might do for something else I have in mind.


It seems the audio chat is more difficult than the image processing (which I am currently rewriting).
I can't seem to find any basic non-GUI voice chat solutions, other than VoIP, which appears to need 2-3 layers of software: client, server, interface. Then the phone needs to be set up with something similar.

I've tried a few SIP varieties and found they either just won't work, or they are GUI-driven. After a silly amount of time messing around and re-imaging the SD card countless times, I have not made any progress on this front.
None of this seems very simple or generic.

So I am going to hunt for an Android app, and hopefully create a Python script it can communicate with.
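As a starting point, the Pi end could be a bare TCP listener the app streams audio bytes to (purely a sketch; the port and chunk size are arbitrary, and it does nothing with the audio yet):

# Sketch: minimal TCP listener a phone app could stream audio to.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 5005))  # arbitrary port
server.listen(1)

conn, addr = server.accept()
while True:
    chunk = conn.recv(4096)  # raw audio bytes from the app
    if not chunk:
        break
    # TODO: feed chunk to the audio output (e.g. aplay or pyaudio)
conn.close()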


Think I will mod this design to mount my Pi on the door… http://www.thingiverse.com/thing:2023487 Any thoughts on mounting?


Sorry I haven't posted recently; I've been rather busy with the house and rewriting my scripts.

This code works quite nicely now. I've adapted it to use the Pi camera (it was originally written for a webcam), and I've adjusted the motion sensitivity area, which seems to be perfect for human detection without too many false positives.

It highlights the moving object in a rectangle, and it prints the area status and a time/date stamp onto the image.
It currently works as a stand-alone security camera.
I now just need to merge my original button-press and face detection subroutines with this code, then it's back to audio communication for the chat.

# import the necessary packages
import argparse
import datetime
import io
import time

import cv2
import imutils
import numpy as np
import picamera

# construct the argument parser and parse the arguments
# (the default matches the 2000 px^2 contour area used below)
ap = argparse.ArgumentParser()
ap.add_argument("-a", "--min-area", type=int, default=2000,
                help="minimum contour area to count as motion")
args = vars(ap.parse_args())

print "Initializing camera..."
with picamera.PiCamera() as camera:
    camera.rotation = 0
    camera.resolution = (800, 600)
    camera.exposure_mode = 'auto'
    camera.awb_mode = 'auto'
    camera.brightness = 50
    camera.framerate = 32
    print "Setting focus and light level on camera..."
    time.sleep(2)

    # initialize the detection counter and the first (reference) frame
    count = 1
    firstFrame = None

    # loop over the frames of the video
    while True:
        # initialize the occupied/unoccupied text
        text = "Unoccupied"

        # capture a frame from the camera into an in-memory stream
        stream = io.BytesIO()
        camera.capture(stream, use_video_port=True, format='jpeg')

        # construct a numpy array from the stream and decode it
        data = np.frombuffer(stream.getvalue(), dtype=np.uint8)
        frame = cv2.imdecode(data, 1)

        # resize the frame, convert it to grayscale, and blur it
        frame = imutils.resize(frame, width=500)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (21, 21), 0)

        # if the first frame is None, initialize it
        if firstFrame is None:
            firstFrame = gray
            continue

        # compute the absolute difference between the current frame
        # and the first frame, then threshold it
        frameDelta = cv2.absdiff(firstFrame, gray)
        thresh = cv2.threshold(frameDelta, 25, 255, cv2.THRESH_BINARY)[1]

        # dilate the thresholded image to fill in holes, then find
        # contours (OpenCV 2.x two-value return signature)
        thresh = cv2.dilate(thresh, None, iterations=2)
        (cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
                                     cv2.CHAIN_APPROX_SIMPLE)

        # loop over the contours
        for c in cnts:
            # if the contour is too small, ignore it
            if cv2.contourArea(c) < args["min_area"]:
                continue
            print cv2.contourArea(c)

            # compute the bounding box for the contour, draw it on
            # the frame, and update the text
            (x, y, w, h) = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            text = "Occupied"

        # draw the text and timestamp on the frame
        cv2.putText(frame, "Garden Area Status: {}".format(text), (10, 20),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (90, 100, 155), 2)
        cv2.putText(frame, datetime.datetime.now().strftime("%A %d %B %Y %I:%M:%S%p"),
                    (10, frame.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.4,
                    (0, 220, 255), 1)

        # write the latest frame out for the web server
        cv2.imwrite("/var/www/html/camera/recent.jpg", frame)
        #cv2.imwrite("./thresh.jpg", thresh)
        #cv2.imwrite("./delta.jpg", frameDelta)

        print text
        if text == "Occupied":
            # save a numbered copy plus the last-occupied frame
            dimage = "./detected" + str(count) + ".jpg"
            cv2.imwrite(dimage, frame)
            cv2.imwrite("/var/www/html/camera/lastoccupied.jpg", frame)
            count = count + 1
        time.sleep(1)

# the with-block closes the camera on exit (picamera has no release());
# running headless, stop the script with Ctrl-C
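Assuming you save it as motion_detect.py (the name is just mine), it runs headless with:

python motion_detect.py --min-area 2000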

Perfect, can't wait to try this out.
Are you going with RF for the button press? I'll code that up soon.


There is a free book today about Python and OpenCV that you might find helpful:

https://www.packtpub.com/packt/offers/free-learning
