Josephine and Elizabeth's CSC270 Spring 2019 Project



FaceOn: Project Description

We plan to create a facial recognition device. If a recognized person stands in front of the camera, the device will unlock (open) a box containing a secret message. The Raspberry Pi will handle the photography and facial recognition aspects, then send a Boolean signal to the Arduino through serial communication: if True, the Arduino will activate a motor to open the box.




We will be using Python 3 to implement facial recognition on the Raspberry Pi, and we will use C for the Arduino robotics programming. Check out our Raspberry Pi code and Arduino code!


The facial recognition library we are using is called OpenCV. It is widely used for hobbyist projects like ours, has interfaces for Java, Python, and C++, and is "designed for computational efficiency and with a strong focus on real-time applications." We will be using it in Python 3 on the Raspberry Pi. OpenCV provides several tutorials for getting started, listed here.



  • PiCam (Raspberry Pi camera module): datasheet

Building Plan

  1. We will start with the facial recognition using the Raspberry Pi which will be divided into the following parts:
    • Set up the camera and test it. To test it, we will have it capture an image and save it to a folder. We will also use the PiCam to capture the images that will be used for recognition later on, so that the recognition is more accurate.
    • Write the code for facial recognition. To test that the code works, we will use an LED that turns on or off if the face is recognized.
  2. We will then work on opening the box using the Arduino which will be divided into the following parts:
    • Build the box at the design clinic.
    • Install the motor by gluing it inside the box. To test it, we will have the motor open the box as soon as the code is run.
    • To test that the motor opens or closes the box depending on the input, we will use a push button. If the button is pressed, the box will open; if it is released, the box will close.
  3. Finally, we will marry the two parts of the project:
    • Have the Raspberry Pi recognize one of our faces and send data to the Arduino. We will have an LED go on or off depending on the data received by the Arduino.
    • Once we confirm that the data received is accurate, we will then test that the motor opens the box or closes the box depending on the data received from the Pi.

Presentation Slides

Elizabeth and Josephine's Slides


    Josephine Nyoike and Elizabeth Carney
    CSC 270 Spring 2019 Final Project: faceon
    Haar Cascade face detection and real-time face recognition with OpenCV
    Based on "Real-Time Face Recognition: An End-to-End Project" by Marcelo Rovai
from __future__ import print_function

import cv2
import glob
import numpy as np
import os, sys
from time import sleep
import serial

#Find the url to the serial port whenever it changes
def find_ports():
    ports = glob.glob('/dev/ttyACM[0-9]*')
    print( "ports = ", ports )
    res = ''
    for port in ports:
        try:
            s = serial.Serial(port)
            res = port
            s.close()
            print("res= ", res)
        except Exception as e:
            print('Failed: ' + str(e))
    return res

cascadePath = "/home/pi/270/faceon/Cascades/haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascadePath)
port = find_ports()
print( "port = ", port )

recognizer = cv2.face.LBPHFaceRecognizer_create()'/home/pi/270/faceon/trainer/trainer.yml')
font = cv2.FONT_HERSHEY_SIMPLEX
ser = serial.Serial(port, 9600)

# initiate id counter
id = 0

# serial value to send to Arduino
data = '0'
new_data = '0'

# names related to ids: example ==> Marcelo: id=1,  etc
names = ['None', 'Elizabeth', 'Joe'] 

# Initialize and start realtime video capture
cam = cv2.VideoCapture(0)
cam.set(3, 640) # set video width
cam.set(4, 640) # set video height

# Define min window size to be recognized as a face
minW = 0.1*cam.get(3)
minH = 0.1*cam.get(4)

while True:
    # send data to Arduino only when it changes
    if (new_data != data):
        data = new_data
        ser.write(data.encode())

    ret, img =

    gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor = 1.2,
        minNeighbors = 5,
        minSize = (int(minW), int(minH)),
        )

    # if no faces detected, close box
    if len(faces) == 0:
        new_data = '0'

    for (x,y,w,h) in faces:
        # display rectangle around face and perform recognition
        cv2.rectangle(img, (x,y), (x+w,y+h), (0,255,0), 2)
        id, confidence = recognizer.predict(gray[y:y+h,x:x+w])

        # if face recognized, open box
        if (confidence < 100):
            new_data = '1'
            id = names[id]
            confidence = "  {0}%".format(round(100 - confidence))

        # if face not recognized, keep box closed
        else:
            new_data = '0'
            id = "unknown"
            confidence = "  {0}%".format(round(100 - confidence))

        cv2.putText(img, str(id), (x+5,y-5), font, 1, (255,255,255), 2)
        cv2.putText(img, str(confidence), (x+5,y+h-5), font, 1, (255,255,0), 1)

    cv2.imshow('camera', img)
    k = cv2.waitKey(10) & 0xff # Press 'ESC' to exit program
    if k == 27:
        break

# exit and cleanup
print("\n Exiting Program.")
cam.release()
cv2.destroyAllWindows()


// Arduino Serial Receiver 
// Elizabeth Carney and Josephine Nyoike
// Receives data from Raspberry Pi and activates Servo accordingly.
// Setup: Arduino connected to Raspberry Pi via USB cable.
// Servo motor connections: brown to GND, red to battery +, orange to Arduino pin 8.

#include <Servo.h>

Servo myservo;
int servopin = 8;

void setup() {
  // setup Servo and serial link
  myservo.attach(servopin);
  Serial.begin(9600);
}

void loop() {
  if (Serial.available()) {
    int byter =;
    // if high signal, open box
    if (byter == '1') {
      myservo.write(90);   // example open angle
    // if low signal, close box
    } else if (byter == '0') {
      myservo.write(0);    // example closed angle
    }
  }
}

Progress Log

April 16

Getting started: Today we finished our project proposal and enabled the camera functions on the Raspberry Pi. Our next step will be experimenting with taking and saving images using the PiCam.

April 18

Camera testing: Today we learned that the camera's preview function only works with a monitor connected, not with SSH/VNC connections. We successfully previewed the camera's image and noted that it is "right-side up" when the red light is in the upper right corner. (However, it is possible to rotate the camera's image in the code.) We also took multiple images with dynamically-created file names and learned to save them at specific file paths.
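As a reference for the rotation note above: picamera exposes this as the camera's rotation attribute, and the same effect on an already-captured frame can be sketched with NumPy (rotate_frame_180 is a hypothetical helper; assumes frames are height x width x 3 arrays):

```python
import numpy as np

def rotate_frame_180(frame):
    """Rotate a captured frame 180 degrees (two 90-degree turns)."""
    return np.rot90(frame, 2).copy()

# With picamera itself the fix is one attribute on the camera object:
#   camera.rotation = 180   # picamera accepts 0, 90, 180, or 270

frame = np.arange(12).reshape(2, 2, 3)  # tiny stand-in for an image
flipped = rotate_frame_180(frame)       # pixel (0,0) is now the old (1,1)
```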

Input testing: Next, we triggered a photo capture with a button, which is what the user will do with the final device. Since we won't be able to have a monitor in the display case to preview the picture, we plan to mount the camera in a permanent spot and have a "frame" of some sort to tell the user where to put their face.

Bug: We want our program to run infinitely so that whoever walks up to it can take their photo and see the result. However, when we use while(True) in our code, the program stalls and cannot be stopped with Ctrl-C. While debugging, we plan to just use a for-loop that runs 1000 times. We will research other methods; one idea for a permanent solution is to have another button specifically for stopping the program.

Note: to kill a process (like a stalled program), open a new command line window and enter:

ps auxw | grep filename

This finds all running processes that contain the filename. Then, find the command you entered to run your program and look to its left to find the process ID (PID) associated with it (usually a 3-4 digit number). Enter:

kill ####

The process should stop.

Adding a stop-button: We now have a second button which kills the program cleanly, resolving our previous issue with stalling. It is important to remember that inside the while loop, if we use button.wait_for_press() the program will pause and wait for the button press, whereas checking if button.is_pressed immediately returns True or False and then continues with the rest of the loop. The latter is the one we want to use because we check both buttons' states in each loop.
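The polling pattern described above can be sketched as a pure function, with the gpiozero is_pressed checks replaced by plain booleans so the decision logic is visible on its own (loop_action is a hypothetical helper for illustration):

```python
def loop_action(capture_pressed, stop_pressed):
    """One pass of the polling loop: decide what to do from the two
    buttons' instantaneous states (as button.is_pressed would report)."""
    if stop_pressed:        # stop-button wins: exit the program cleanly
        return "stop"
    if capture_pressed:     # capture-button: take a photo this pass
        return "capture"
    return "idle"           # neither pressed: continue looping

# In the real program this runs inside while True with gpiozero Buttons,
# roughly: loop_action(capture_btn.is_pressed, stop_btn.is_pressed)
```

Because is_pressed returns immediately, every pass of the loop checks both buttons, which is exactly why we chose it over wait_for_press().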

Next time we plan to install the OpenCV library. We are on Step #2 of this tutorial.

April 21

OpenCV installation: Today we almost completed this tutorial, which included installing OpenCV as well as a virtual environment in Python. We will be using this virtual environment (called cv) for all of our work with OpenCV. To enter the environment, run the following commands:

source ~/.profile
workon cv
When you are in the environment, "(cv)" will precede your prompt at the command line. Today we finished running
make -j4
sudo make install
sudo ldconfig
so our next step will be executing Step #6.

Motor testing: While we waited for OpenCV to finish installing for a couple of hours, we connected the servo for the first time to experiment with its movement.

Serial communication testing: Next, we established serial communication between the Arduino and Raspberry Pi. We noted that the Arduino's serial device name is /dev/ttyACM0. Although this tutorial worked for us, its code sends data from the Arduino to the Raspberry Pi, and we need communication the other way. We tried searching for other tutorials, but have not found one that works yet. This will be our main focus for next time. We also plan on adding two LEDs to try to visualize the data sent from the Raspberry Pi and to the Arduino respectively.

April 23

Working with OpenCV: Today we completely finished installing OpenCV and verified that version 3.3.0 can be imported into Python on our Raspberry Pi! Prof. Thiebaut recommended that we change our plan to accommodate video stream recognition, so instead of the user pushing a button to take a picture, the Pi will be able to detect any face that appears in front of it and automatically test whether to accept or reject the person. Our plan for tomorrow is to get the real-time face detection program working (from this tutorial, written by Marcelo Rovai).

We also created a new directory for our project so that all of our relevant Python and C files will be in one place (and we'll be able to access the classifier files OpenCV needs). The new file structure is: pi --> 270 --> faceon, which holds our main program and other files.

April 24

Facial detection: Our goal today was to use the Raspberry Pi to successfully detect human faces in a camera stream, and we accomplished that goal! We had to download a pre-trained "classifier" file that recognizes faces specifically, from this GitHub repo. The file (called haarcascade_frontalface_default.xml) is located within the Cascades directory inside of our faceon project folder. It is referenced as the classifier file with the code below:

faceCascade = cv2.CascadeClassifier('Cascades/haarcascade_frontalface_default.xml')

Then, the classifier function is called as follows:

faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.2,
        minNeighbors=5,
        minSize=(20, 20)
        )

For our purposes, the most important parameter given in the classifier function is minSize, which specifies the smallest possible rectangle that can be considered a face. The program returns a rectangular region-of-interest (ROI) for each detected face, comprised of its coordinates, width, and height, and then uses that information to draw a blue rectangle outlining the detected face on the screen:

for (x,y,w,h) in faces:
    cv2.rectangle(img, (x,y), (x+w,y+h), (255,0,0), 2)
    roi_gray = gray[y:y+h, x:x+w]
    roi_color = img[y:y+h, x:x+w]

The program can be quit cleanly by pressing Escape. To recognize specific facial features like eyes or a smile, we would need to download a separate classifier file and adjust the code accordingly. The tutorial notes that processing multiple classifiers at once makes run-time very slow, so we will stick to simple face recognition.

Images: ElizabethsRecognizedFace.jpg, JosephinesRecognizedFace.jpg

Observations: The video feed lags about 3 seconds behind. Also, we were disappointed to find that the camera does not recognize darker skin as well as pale skin (this is unfortunately a long-documented issue in the fields of data science and computer science; see this Business Insider article and this Towards Data Science article). To counteract this effect, we plan on taking the photos in bright lighting and placing the final project in a very well-lit area.

The next step is data gathering, which is also outlined in this tutorial. After we take pictures of ourselves, we will be able to compare the faces detected in the video feed to those ground-truth photos and decide whether or not they match!

April 25

Today we created the file that will be our main program running on the Raspberry Pi (located in our faceon directory).

Data gathering: When we initially tried to run the tutorial's data gathering code, it did not work as expected. Josephine figured out that the camera had been too close to Elizabeth, so it hadn't successfully detected a face. Once we held it further away, we were each able to capture our 30 images for the dataset. Elizabeth is User 1 and Josephine is User 2.

Training: Then we moved on to the next step: downloading Pillow (a.k.a. PIL, the Python Imaging Library) and the trainer file and running it on our dataset of photos. We changed the array called names so that its elements' indices match the dataset labels: index 0 is unknown, index 1 is 'Elizabeth,' and index 2 is 'Joe.'

Facial recognition: Finally, we were able to run the last program, and it worked for the most part. The recognizer returns a "confidence" value of 0 on a perfect match and over 100 (usually) on a non-match. The recognition program we're using displays that confidence value as subtracted from 100, so for a very bad confidence level of 120, it displays -20. We like this transformation since it means a higher number represents higher confidence and is easier to understand.
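The transformation described above can be sketched as a one-line helper (display_confidence is a hypothetical name for illustration):

```python
def display_confidence(raw_confidence):
    """Convert the recognizer's raw distance-style score (0 = perfect
    match, over 100 = likely non-match) into the value we display."""
    return round(100 - raw_confidence)

# A perfect match (raw 0) displays as 100; a very bad match (raw 120)
# displays as -20, so a bigger displayed number means more confidence.
```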

We did observe that it matched Prof. Thiebaut with both Elizabeth and Josephine at different points, though. We'll try to combat that by retraining OpenCV with 100 photos of ourselves instead of just 30, but we suspect that the algorithm is the main source of the problem and more training will only go so far. Ultimately, this discovery means that the program will be great at recognizing known people, but will not be as good at classifying people as "unknown."

Images: EandJrecognized.jpg, Unrecognized.jpg

Serial communication: Josephine set up serial communication from the Raspberry Pi to the Arduino, which we had had trouble with previously. We worked together to edit the receiver code so that when a '0' is sent, the motor turns one way, and when a '1' is sent, it turns back to its original position. It did display some buggy behavior that we'll try to correct next time.

April 27

Re-training: Last time, we observed that the recognizer sometimes labeled other people as Elizabeth and Josephine, so today we re-trained the software with 100 photos of ourselves instead of only 30. The recognizer seems a little more accurate now, but it still has issues. We don't think that we can improve it any further.

Signifying recognition: Our next big step was to use serial communication to translate the facial recognition into a physical action, so we began with lighting an LED and worked up to activating the Servo.

Using an LED: First, we had the Raspberry Pi send a '1' over the serial link when a face was recognized, and '0' when a face was unknown. We then wrote an Arduino program that would light the LED after receiving a '1' from the Raspberry Pi and turn it off after a '0'. It was a success! However, while testing, we realized that there were three possible cases instead of two: recognizing a face, not recognizing a face, and not detecting a face at all. We wanted the LED to turn off when no faces were detected, so we edited by adding a conditional. Detected faces are added to an array (called faces) in the program, but when none are detected the array returns an empty tuple. So, we tested for whether or not it was an empty tuple. If so, we turn the LED off; if not, and if a face is recognized, we light the LED. At this point, all was well. Whenever the program recognized one of our faces, the LED lit up, and whenever it saw an unknown person or was not pointed at any faces, the LED stayed off.
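The three cases can be sketched as a pure function, with the serial write and detectMultiScale call left out so only the decision is shown (led_signal is a hypothetical helper; '1' lights the LED, '0' turns it off, matching the convention above):

```python
def led_signal(faces_detected, face_recognized):
    """Return the byte to send to the Arduino for the three cases:
    no face at all, an unknown face, or a recognized face."""
    if not faces_detected:   # detectMultiScale returned an empty tuple
        return '0'           # no face: LED off
    if face_recognized:
        return '1'           # known face: LED on
    return '0'               # unknown person: LED off
```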

Using a Servo: We moved on to adding the Servo's motion, which we thought would be relatively simple, but the program immediately stopped working. The LED didn't even work anymore despite not changing its wiring or the code related to it. After some searching, we finally figured out why our Servo had been displaying such random behavior in the past: an online forum user said that you should always connect it to a separate power source instead of the Arduino's 5V pin because it could easily overload the Arduino (this explains why the LED stopped working when we tried to add the Servo to the circuit). Note that you always have to connect the other power source's GND with the Arduino's. For the time being, we just connected it to the Digital Kit's power supply, but in the future we want to switch to a battery for portability. When using the Kit, the motor exhibited model behavior.

A note to remember: Josephine's dongle doesn't work when uploading code to the Arduino anymore for some reason (Elizabeth's works fine).

Problem-solving with serial: Mid-way through testing, everything broke. The LED and motor both stopped responding when we ran the program. In order to debug, we needed to know what data the Arduino was receiving over the serial connection, so we got a second Arduino, hooked it up to a laptop, and sent data from the first to the second with modified sender/receiver code from Lab 8. We immediately saw that instead of 1s and 0s, the Arduino was receiving empty strings. This was a big worry, obviously. We experimented with only using the LED and running different programs that used serial communication, some of which worked and some of which didn't. After maybe 30 minutes, it started working again like normal, sending 1s and 0s at the appropriate times. The only change we made to the code was adding a variable called "data" that was set to 0 or 1 instead of directly writing the values in the serial function. We still do not know what happened, but will continue trying to figure it out.

Next steps: Tomorrow we'll lasercut our box and prototype the Servo mechanism. Then we'll write a (hopefully-successful) script to automatically write the necessary commands to run our program at power-on for both the Arduino and Raspberry Pi. After those two things are finished, our project will be almost complete!

April 28

Script on start-up: Today we learned how to execute commands at start-up by editing the rc.local file. We started by running a simple test program, which worked perfectly. It does take a minute to get going, but that's fine for our purposes. Then we planned to replace it with the commands to run our full program. We decided to test our program from the command line first, just to make sure that any bugs were resolved, but unfortunately it displayed the same confusing behavior that it did yesterday. We spent several hours trying to figure out why the facial recognition program was sending empty strings instead of 1s and 0s, when our blink-over-serial test program worked perfectly with the same code. We could not figure it out and are meeting with Prof. Thiebaut tomorrow.

Lasercut box: We also designed and lasercut a prototype of our box today at the Design Thinking Initiative. It works well! Later on, we're considering making another out of a nicer material for the final product, perhaps with some engraving. We also want to add a "trick bottom" square to fit on top of the microprocessors.

April 29

Reducing serial traffic: Today we met with Prof. Thiebaut, which was very helpful. He saw that we were sending a continuous stream of bytes over the serial link and suggested that there might be too much traffic for the program to run as expected. Instead of sending a new byte every tenth of a second, we now only send the data if it will result in an action. For example, if the box is closed, we will not send any 0s (which would send the signal to close the box) since nothing would change; instead, we wait until the value changes to a 1, because that data is necessary to update the state of the box.
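The send-only-on-change idea can be sketched as a small wrapper around the serial port; ChangeOnlySender and the fake serial object are hypothetical names for illustration, standing in for pyserial's Serial:

```python
class ChangeOnlySender:
    """Write a byte over a serial-like object only when the value
    changes, so the link is not flooded with redundant traffic."""
    def __init__(self, ser, initial='0'):
        self.ser = ser          # anything with a write() method
        self.last = initial     # assume the box starts closed ('0')

    def send(self, value):
        if value == self.last:  # no state change: send nothing
            return False
        self.ser.write(value.encode())
        self.last = value
        return True

class _FakeSerial:
    """Stand-in for serial.Serial, for demonstration off the hardware."""
    def __init__(self):
        self.sent = []
    def write(self, data):
        self.sent.append(data)

fake = _FakeSerial()
sender = ChangeOnlySender(fake)
for value in ['0', '1', '1', '1', '0']:
    sender.send(value)
# only the two transitions were written: fake.sent == [b'1', b'0']
```

In the real program, fake would be replaced by serial.Serial(port, 9600) and send() would be called once per loop pass with the latest recognition result.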

For our demo tomorrow, we will be using the LED to signify when a face is recognized, because we don't have time to construct the box-opening mechanism before then. Our final project will still include the box controlled by the Servo, though.

May 10

Yesterday we went in for office hours after running into a few bugs that we had not anticipated. After some time debugging, we found two problems: our program was generating an error we couldn't see (since we were not logging errors), and rc.local was not working. With Dominique's help, we solved the first problem by logging the errors to a text file and fixing them as they came up. To solve the second, we wrote a bash script containing the following commands:


/bin/sleep 10
date &>> /home/pi/270/faceon/error_output.txt
sudo modprobe bcm2835-v4l2
/usr/bin/python /home/pi/270/faceon/ &>> /home/pi/270/faceon/error_output.txt
In order to run the bash script at boot, we ran the command
 sudo crontab -e
and added the following line at the end of the file:
 @reboot /home/pi/270/faceon/
Elizabeth cut out the box and put together the remaining hardware.

Final Project Demo


Resources

  1. Elegoo Arduino Mega2560 Tutorial Guide
    • Parts included in the Starter Kit as well as tutorials on how to use different sensors.
  2. Getting Started with PiCamera
    • Basic setup and commands for using the PiCam.
  3. Install OpenCV 3 + Python on your Raspberry Pi
    • Walkthrough of installing OpenCV for use with Python
  4. Real-Time Face Recognition
    • Includes OpenCV installation, camera testing, and face detection.
  5. Face Recognition with OpenCV, Python, and Deep Learning
    • Theory behind facial recognition.
  6. Raspberry Pi Face Recognition
    • Continuation of previous article, goes through configuration, gathering dataset, and face recognition.
  7. Raspberry Pi Face Recognition Treasure Box
    • A project similar to ours (but only using Raspberry Pi).
  8. RPi and Arduino
    • A tutorial on getting Arduino-Raspberry Pi serial communication to work.
  9. Face Recognition: Understanding LBPH Algorithm
    • A thorough explanation of the facial recognition algorithm our project uses.