Smart Security Camera (with attendance on Google Sheets)

Aabhas Senapati


Summary
The aim of this project is to make a smart camera that can monitor your house, office, etc., and give you valuable data: it keeps track of who is at the door and when each person arrived, takes photographs of unknown people, and opens the smart lock (electromagnet based) only when a known person is at the door.
Hardware Requirements
-Raspberry Pi (any version will do; a 4B (2 GB) was used for best results)
-Raspberry Pi CSI camera
-Power bank
-SD card (flashed with the latest Raspbian)
-Relay (for controlling the door lock)
Software Requirements
-PIP (install it along with Python)
-GIT
-The project repository (download it and unzip the file on the Raspberry Pi)
Bringing the Project into Real Life
There are two main parts to this project: one is face recognition using OpenCV, and the other is sending the data onto Google Sheets; the most difficult task is integrating the two together.
To start building this project, we first need to set up the requirements for OpenCV face recognition, so open a terminal and run the following commands one by one:
  1. $ sudo apt-get install libhdf5-dev libhdf5-serial-dev libhdf5-100  
  2. $ sudo apt-get install libqtgui4 libqtwebkit4 libqt4-test python3-pyqt5  
  3. $ sudo apt-get install libatlas-base-dev  
  4. $ sudo apt-get install libjasper-dev  
  5. $ wget https://bootstrap.pypa.io/get-pip.py  
  6. $ sudo python3 get-pip.py  
  7. $ pip install virtualenv virtualenvwrapper  
  8. $ mkvirtualenv cv -p python3  
  9. $ workon cv  
  10. $ pip install opencv-contrib-python  
  11. $ pip install dlib        
  12. $ pip install face_recognition  
  13. $ pip install imutils  
After that, we need to unzip the repository and change into its directory:
  1. $ workon cv  
  2. $ cd smart_security_camera  
  3. $ tree  
You then need to replace the dataset with your desired dataset (5 images per person is sufficient), and then run encode_faces.py:
  1. $ python encode_faces.py --dataset dataset --encodings encodings.pickle --detection-method hog  
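For reference, encode_faces.py essentially loops over the dataset images, detects each face, and pickles the resulting 128-d embeddings under the person's folder name. A minimal sketch of that step is shown below (the folder and file names follow the command above; the actual script in the repository may differ slightly):

    # minimal sketch of what encode_faces.py does (assumed; see the repository for the real script)
    import os
    import pickle
    import cv2
    import face_recognition
    from imutils import paths

    knownEncodings, knownNames = [], []
    for imagePath in paths.list_images("dataset"):
        # the person's name is taken from the folder the image sits in
        name = imagePath.split(os.path.sep)[-2]
        image = cv2.imread(imagePath)
        rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        # detect faces with HOG and compute a 128-d embedding for each one
        boxes = face_recognition.face_locations(rgb, model="hog")
        for encoding in face_recognition.face_encodings(rgb, boxes):
            knownEncodings.append(encoding)
            knownNames.append(name)

    # save the encodings so pi_face_recognition.py can load them later
    with open("encodings.pickle", "wb") as f:
        pickle.dump({"encodings": knownEncodings, "names": knownNames}, f)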
The next step is to get the client_secret.json file from the Google Sheets API so it can be used to upload data onto Google Sheets; then place the client_secret.json file in the smart_security_camera directory.
The next step is to get the spreadsheet ID and sheet ID of the Google Sheets worksheet where you want the data, and replace them in the code in the pi_face_recognition.py and sheet.py files.
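As a rough illustration, the spreadsheet ID is the long string in the sheet's URL, and in sheet.py it just needs to be assigned to the MY_SPREADSHEET_ID constant (the value below is a placeholder, not a real sheet):

    # the spreadsheet ID is the part of the URL between /d/ and /edit
    # https://docs.google.com/spreadsheets/d/<SPREADSHEET_ID>/edit#gid=<SHEET_ID>
    MY_SPREADSHEET_ID = "your-spreadsheet-id-here"   # placeholder, replace with your own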
The next step is to connect the relay's 5 V and GND pins to the 5 V and GND of the Raspberry Pi, and the relay's signal pin to GPIO pin 18; then wire the relay to your electromagnetic lock as a switch connection.
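To check the wiring before running the full project, a quick test of the relay on GPIO 18 could look like the sketch below (BCM numbering; this assumes an active-low relay module, where LOW energises the relay, so adjust if yours is active-high):

    # quick relay test on GPIO 18 (BCM) - assumes an active-low relay module
    import time
    import RPi.GPIO as GPIO

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(18, GPIO.OUT)
    GPIO.output(18, GPIO.HIGH)   # relay off, lock held closed
    time.sleep(1)
    GPIO.output(18, GPIO.LOW)    # relay on, lock released
    time.sleep(0.8)
    GPIO.output(18, GPIO.HIGH)   # lock closed again
    GPIO.cleanup()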
Now you are ready with all the required installations, and you need to run both Python files to get the smart security camera working. In a new terminal, type the following commands:
  1. $ workon cv  
  2. $ cd smart_security_camera  
  3. $ python pi_face_recognition.py --cascade haarcascade_frontalface_default.xml --encodings encodings.pickle  
Then, in a second terminal, run:
  1. $ workon cv  
  2. $ cd smart_security_camera  
  3. $ python sheet.py  

Working Video Demonstration

(Please bear with the bad quality of the video, as I had to record it with my phone since I don't have a camera.)



Code and Its Basic Concept of Working

(Earlier the code below had an error with tabs when I pasted it, so if you want the code, take it from GitHub, which has proper tabs.)
# USAGE
# python pi_face_recognition.py --cascade haarcascade_frontalface_default.xml --encodings encodings.pickle

# import the necessary packages
from __future__ import print_function
from xlrd import open_workbook
from imutils.video import VideoStream
from imutils.video import FPS
from xlutils.copy import copy
import face_recognition
import argparse
import imutils
import pickle
import time
import datetime
import cv2
import xlwt
import xlrd
import RPi.GPIO as GPIO

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-c", "--cascade", required=True,
    help="path to where the face cascade resides")
ap.add_argument("-e", "--encodings", required=True,
    help="path to serialized db of facial encodings")
args = vars(ap.parse_args())

# set up the GPIO pin that drives the relay (lock held closed by default)
GPIO.setmode(GPIO.BCM)
GPIO.setup(18, GPIO.OUT)
GPIO.output(18, GPIO.HIGH)

# load the known faces and embeddings along with OpenCV's Haar
# cascade for face detection
print("[INFO] loading encodings + face detector...")
data = pickle.loads(open(args["encodings"], "rb").read())
detector = cv2.CascadeClassifier(args["cascade"])

# initialize the video stream and allow the camera sensor to warm up
print("[INFO] starting video stream...")
vs = VideoStream(src=0).start()
# vs = VideoStream(usePiCamera=True).start()
time.sleep(2.0)

# start the FPS counter
fps = FPS().start()

# counters for unknown faces, total detections, and the local workbook row
count2 = 0
row = 1
count = 1
wb = xlwt.Workbook()
ws = wb.add_sheet("My Sheet")

# loop over frames from the video file stream
while True:
    # grab the frame from the threaded video stream and resize it
    # to 500px (to speedup processing)
    frame = vs.read()
    frame = imutils.resize(frame, width=500)
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF

    # convert the input frame from (1) BGR to grayscale (for face
    # detection) and (2) from BGR to RGB (for face recognition)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    # detect faces in the grayscale frame
    rects = detector.detectMultiScale(gray, scaleFactor=1.1,
        minNeighbors=5, minSize=(30, 30),
        flags=cv2.CASCADE_SCALE_IMAGE)

    # OpenCV returns bounding box coordinates in (x, y, w, h) order
    # but we need them in (top, right, bottom, left) order, so we
    # need to do a bit of reordering
    boxes = [(y, x + w, y + h, x) for (x, y, w, h) in rects]

    # compute the facial embeddings for each face bounding box
    encodings = face_recognition.face_encodings(rgb, boxes)
    names = []

    # loop over the facial embeddings
    for encoding in encodings:
        # attempt to match each face in the input image to our known
        # encodings
        cv2.imshow("Frame", frame)
        key = cv2.waitKey(1) & 0xFF
        matches = face_recognition.compare_faces(data["encodings"],
            encoding)
        name = "Unknown" + str(count)
        name1 = "Unknown" + str(count)

        # check to see if we have found a match
        if True in matches:
            # find the indexes of all matched faces then initialize a
            # dictionary to count the total number of times each face
            # was matched
            matchedIdxs = [i for (i, b) in enumerate(matches) if b]
            counts = {}

            # loop over the matched indexes and maintain a count for
            # each recognized face
            for i in matchedIdxs:
                name = data["names"][i]
                counts[name] = counts.get(name, 0) + 1
                cv2.imshow("Frame", frame)
                key = cv2.waitKey(1) & 0xFF

            # determine the recognized face with the largest number
            # of votes (note: in the event of an unlikely tie Python
            # will select first entry in the dictionary)
            name = max(counts, key=counts.get)

        # update the list of names
        names.append(name)

    # loop over the recognized faces
    for ((top, right, bottom, left), name) in zip(boxes, names):
        # draw the predicted face name on the image
        cv2.rectangle(frame, (left, top), (right, bottom),
            (0, 255, 0), 2)
        y = top - 15 if top - 15 > 15 else top + 15
        cv2.putText(frame, name, (left, y), cv2.FONT_HERSHEY_SIMPLEX,
            0.75, (0, 255, 0), 2)

        # display the image to our screen
        cv2.imshow("Frame", frame)
        key = cv2.waitKey(1) & 0xFF
        # update_sheet("Face_Recognition", name)

        # an unknown face: save the frame and bump the unknown counter
        if (name == name1):
            count = count + 1
            cv2.imwrite(name + ".jpg", frame)

        # a known face: pulse the relay to open the lock
        if (name != name1):
            GPIO.output(18, GPIO.LOW)
            time.sleep(.8)
            GPIO.output(18, GPIO.HIGH)

        # log the detection to the local Excel files
        count2 = count2 + 1
        row = row + 1
        x = copy(open_workbook("book2.xls"))
        ws.write(row, 0, name)
        x.get_sheet(0).write(1, 2, count2)
        ws.write(row, 1, str(datetime.datetime.now()))
        wb.save("myworkbook.xls")
        w = copy(open_workbook("myworkbook.xls"))
        w.get_sheet(0).write(row, 0, name)
        w.get_sheet(0).write(1, 2, count2)
        w.get_sheet(0).write(row, 1, str(datetime.datetime.now()))
        w.save("book2.xls")

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break

    # update the FPS counter
    fps.update()

# stop the timer and display FPS information
fps.stop()
print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))

# do a bit of cleanup
cv2.destroyAllWindows()
vs.stop()
# USAGE
# python sheet.py

# import the necessary packages
from __future__ import print_function
from xlrd import open_workbook
from googleapiclient.discovery import build
from httplib2 import Http
from oauth2client import file, client, tools
from oauth2client.service_account import ServiceAccountCredentials
import xlrd
import time
import datetime
import cv2
import xlwt

MY_SPREADSHEET_ID = "12EC6AmdEUOcwQgLIDZ64rMbR9pDLmvMdb6gjcN0w5mY"

def update_sheet(sheetname, name, time):
    """update_sheet method:
    appends a row of a sheet in the spreadsheet with the
    latest name and time of the person detected
    """
    # authentication, authorization step
    SCOPES = "https://www.googleapis.com/auth/spreadsheets"
    creds = ServiceAccountCredentials.from_json_keyfile_name(
        "client_secret.json", SCOPES)
    service = build("sheets", "v4", http=creds.authorize(Http()))

    # Call the Sheets API, append the next row of data
    # values is the array of rows we are updating, it's a single row
    values = [[time, "Person", name]]
    body = {"values": values}

    # call the append API to perform the operation
    result = service.spreadsheets().values().append(
        spreadsheetId=MY_SPREADSHEET_ID,
        range="Sheet1" + "!A1:C1",
        valueInputOption="USER_ENTERED",
        insertDataOption="INSERT_ROWS",
        body=body).execute()

row = 2
count = 2

while True:
    # read the locally saved workbook written by pi_face_recognition.py
    workbook = xlrd.open_workbook(r"book2.xls")
    sheet = workbook.sheet_by_index(0)
    # cell (1, 2) holds the running detection count; strip the cell-type prefix
    m = str(sheet.cell(1, 2))
    o = int(float(m[7:]))
    n = o + 2
    print(n)
    # only upload rows that have been written so far; wait for new ones otherwise
    if (count < n):
        data = str(sheet.cell(row, 0))
        time = str(sheet.cell(row, 1))
        print(count)
        update_sheet("Face_Recognition", data[5:], time[5:])
        count = count + 1
        row = row + 1
    # print(count)

A good frame rate of about 8.5 frames per second was achieved.
The Python script uploading data to Google Sheets.
The relay turns on when a known person is detected.

Data on Google Sheets.

The two essential parts of the entire project are the pi_face_recognition.py and sheet.py Python files. In the broader picture, pi_face_recognition.py does the face recognition and stores the results locally, while sheet.py takes this data and uploads it to Google Sheets.

Working of pi_face_recognition.py:-
The first task is to use the encodings of the faces to determine whether there is any face in the frame. The next task is to check each detected face against every known encoding to identify whose face it is, or whether it is an unknown face, and draw a bounding box around it; if an unknown face is detected, it saves that frame, numbering the unknown faces serially. The last task is to store this data locally in an Excel file.
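Distilled from the full script above, the per-face decision looks roughly like this (a simplified sketch, not the exact code):

    # simplified decision step, distilled from pi_face_recognition.py above
    import face_recognition

    def identify(encoding, data, unknown_count):
        """Return the best-matching known name, or a serial 'Unknown<N>' label."""
        matches = face_recognition.compare_faces(data["encodings"], encoding)
        if True not in matches:
            return "Unknown" + str(unknown_count)  # unknown visitor: the frame gets saved
        votes = {}
        for i, matched in enumerate(matches):
            if matched:
                name = data["names"][i]
                votes[name] = votes.get(name, 0) + 1
        return max(votes, key=votes.get)           # known visitor: the relay pulses and the lock opens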
Working of sheet.py:-
The task of this script is to take the data from the locally saved file and upload it onto Google Sheets, recording who was at the door and at what time and date.
Problems and errors faced during coding
-The first problem I faced was with the installation of the libraries, which I had to attempt many times; it only succeeded when I installed the libraries inside a virtual environment.
-The next problem was that, after I succeeded in using face recognition with Python, trying to use the Google Sheets API to upload the data gave an error about improper authentication; I found that the sheet ID had to be changed and that client_secret.json had to be obtained.
-I then succeeded in sending the data to Google Sheets, but I observed that the frame rate was very low because it depended on the internet connection: the script only moved to the next frame after the previous frame's data upload was complete. So I decided to store the data locally and run another script to upload it onto Google Sheets.
-The next error I faced was that, while storing the data locally in an Excel file, it was not possible to edit an already saved file, so I had to learn and use a method in which two files with the same data are kept, and new data is stored by copying one file and appending to it, repeating the process continuously (see the sketch after this list).
-The final error I faced was that the code terminated after some time when no person was detected, because it reached the end of the file while parsing; I therefore had to get the number of rows from the face recognition script and use that as a condition to wait until more rows are added. (Earlier I thought the termination problem could be solved by running the script periodically with crontab, but that was not a good approach as it caused the same data to be uploaded repeatedly.)
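As a small illustration of that copy-and-append workaround (a sketch only; the actual scripts above interleave this with the recognition loop), the xlrd/xlutils trick for adding a row to an already saved .xls file looks like:

    # appending a row to an existing .xls by copying it first (xlwt cannot edit a saved file in place)
    from xlrd import open_workbook
    from xlutils.copy import copy

    readable = open_workbook("book2.xls")          # read-only view of the saved data
    writable = copy(readable)                      # writable copy of the same workbook
    next_row = readable.sheet_by_index(0).nrows    # first empty row
    writable.get_sheet(0).write(next_row, 0, "ExampleName")        # placeholder values
    writable.get_sheet(0).write(next_row, 1, "2020-01-01 10:00:00")
    writable.save("book2.xls")                     # overwrite with the appended data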
I learned a lot about face recognition with OpenCV, using Google APIs, and working with Excel files in Python while making this project; overall, it was a great learning experience building and troubleshooting it.
ANY COMMENTS, IMPROVEMENTS, AND QUERIES ARE WELCOMED IN THE COMMENTS BELOW
I would also like to thank danzima for the Raspberry Pi 4 and tariq.ahmad for the Beaglebone AI
