How to Use Face Recognition in Python


When you look at an apple, your mind immediately tells you: that is an apple. 

This process is recognition in the simplest of terms. So, what’s facial recognition?
The same, but for faces, obviously.

But, the real question is:
How can a computer recognize a face?

Take a real-life example:
When you meet someone for the first time, you don’t know who that person is at once, right?

While he's talking to you or shaking your hand, you’re looking at his face:
eyes, nose, mouth, skin tone… This process is your mind gathering data and training for face recognition.

Next, that person tells you that his name is Kirill (yes, our All-Star Data Science Mentor). So, your brain has already gotten the face data, and now it has learned that this data belongs to Kirill.

The next time you see Kirill or see a picture of his face, your mind will follow this exact process:

  • Face Detection: Look at the picture and find a face in it.
  • Data Gathering: Extract unique characteristics of Kirill’s face that it can use to differentiate him from another person, like eyes, mouth, nose, etc.
  • Data Comparison: Despite variations in light or expression, it will compare those unique features to all the features of all the people you know.

Our mind's Face Recognition Process

Then, the more you meet Kirill, the more data you will collect about him, and the quicker your mind will be able to recognize him.
Or, at least it should. Whether or not you are good with names is another story.
Here is when it gets better:
Our human brains are wired to do all these things automatically. In fact, we are very good at detecting faces almost everywhere:

Meet my (Loser) crew!
Computers aren’t yet able to do this automatically, so we need to teach them how to do it step by step.
But you already knew that, which is why you’re reading this article (duh).
However, you probably assumed that it’s incredibly difficult to code your computer to recognize faces, right? Well, keep reading, my friend, because I am here to put that misconception to rest.

THEORY OF OPENCV FACE RECOGNIZERS

Thanks to OpenCV, coding facial recognition is now easier than ever.
Coding facial recognition takes three easy steps, similar to the ones our brains use to recognize faces. These steps are:

  • Data Gathering: Gather face data (face images in this case) of the persons you want to identify.
  • Train the Recognizer: Feed that face data and respective names of each face to the recognizer so that it can learn.
  • Recognition: Feed new faces of those people and see if the face recognizer you just trained recognizes them (there’s a small sketch of this after the detection helper below).
As a sneak peek, here is the face detection helper we will use to prepare our face data:

    import cv2

    def detect_face(img):
        # convert the test image to grayscale, as the OpenCV face detector expects gray images
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

        # load the OpenCV face detector; I am using LBP, which is fast
        # (there is also a more accurate but slower option: the Haar classifier)
        face_cascade = cv2.CascadeClassifier('opencv-files/lbpcascade_frontalface.xml')

        # detect faces at multiple scales (some faces may be closer to the camera than others);
        # the result is a list of face rectangles
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)

        # if no faces are detected, return None
        if len(faces) == 0:
            return None, None

        # under the assumption that there is only one face, extract the face area
        (x, y, w, h) = faces[0]

        # return only the face part of the image, plus its rectangle
        return gray[y:y+h, x:x+w], faces[0]
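
And to show where those three steps are headed, here is a minimal sketch of training and using the LBPH recognizer. This is just an illustration, not the article’s final code: faces, labels, and new_face are placeholder inputs you would build yourself, and on older OpenCV 3.0–3.2 builds the constructor is cv2.face.createLBPHFaceRecognizer() instead of the newer cv2.face.LBPHFaceRecognizer_create().

    import cv2
    import numpy as np

    def train_and_recognize(faces, labels, new_face):
        # faces: list of cropped grayscale face images (e.g. produced by detect_face above)
        # labels: one integer ID per face, e.g. 1 = Kirill
        # new_face: a cropped grayscale face we want to identify

        # Train the Recognizer: feed it the face data and the matching labels
        recognizer = cv2.face.LBPHFaceRecognizer_create()
        recognizer.train(faces, np.array(labels))

        # Recognition: predict whose face this is;
        # LBPH also returns a confidence value (lower means a closer match)
        label, confidence = recognizer.predict(new_face)
        return label, confidence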

It's that simple! And this is how our Face Recognizer will look once we finish coding it:
[Image: our finished Face Recognizer in action. Caption: “Hello, it’s me.”]

OpenCV has three built-in face recognizers, and thanks to its clean API you can use any of them by changing just a single line of code.

Here are the names of those face recognizers and their OpenCV calls:

  • EigenFaces – cv2.face.createEigenFaceRecognizer()
  • FisherFaces – cv2.face.createFisherFaceRecognizer()
  • Local Binary Patterns Histograms (LBPH) – cv2.face.createLBPHFaceRecognizer()
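
To see what that single-line swap looks like in practice, here is a rough sketch. Note that in newer OpenCV versions (roughly 3.3 and later) these factory functions were renamed to cv2.face.EigenFaceRecognizer_create(), cv2.face.FisherFaceRecognizer_create() and cv2.face.LBPHFaceRecognizer_create().

    import cv2

    # pick exactly one of the three built-in recognizers;
    # the rest of the training and prediction code stays the same
    face_recognizer = cv2.face.createEigenFaceRecognizer()
    # face_recognizer = cv2.face.createFisherFaceRecognizer()
    # face_recognizer = cv2.face.createLBPHFaceRecognizer()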

You might be wondering: “Which Face Recognizer should I use and when?”
Here is a summary of each one that will answer that question.
Let’s rock!

EIGENFACES FACE RECOGNIZER

This algorithm considers the fact that not all parts of a face are equally important or useful for face recognition.
Indeed, when you look at someone, you recognize that person by his distinct features, like the eyes, nose, cheeks or forehead, and by how they vary with respect to each other.
In that sense, you are focusing on the areas of maximum change.

For example, from the eyes to the nose there is a significant change, and same applies from the nose to the mouth.

When you look at multiple faces, you compare them by looking at these areas, because by catching the maximum variation among faces, they help you differentiate one face from the other.
This, in essence, is how the EigenFaces recognizer works.
It looks at all the training images of all the people as a whole and tries to extract the components which are relevant and useful and discards the rest.
These important features are called principal components.
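
To make “principal components” a little more concrete, here is a small numpy sketch of my own (not part of the article’s recognizer code) showing how such components could be extracted from a stack of equally sized grayscale face images:

    import numpy as np

    def principal_components(face_images, num_components=10):
        # flatten each face into one long row vector and stack them into a matrix
        data = np.array([face.flatten().astype(np.float64) for face in face_images])

        # subtract the mean face, so only the variation between faces remains
        mean_face = data.mean(axis=0)
        centered = data - mean_face

        # the right singular vectors of the centered data are the directions of
        # maximum variance across faces, i.e. the "eigenfaces"
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return mean_face, vt[:num_components]

Each returned component can be reshaped back to the original image size and viewed as a ghostly “eigenface”.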

Note: We will use the terms principal components, variance, areas of high change, and useful features interchangeably, as they all mean the same thing.
Below is an image showing the variance extracted from a list of faces.
