Real-time face landmark detection


Face landmark detection using OpenCV and dlib

First of all, install the OpenCV and dlib libraries.
OpenCV - used for computer vision and image processing.
dlib - used to detect the face and also works as a classifier.
Note: you need to have CMake and a C++ compiler installed for dlib to build.
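If you are unsure whether both libraries installed correctly, a quick sanity check (a minimal sketch; it only confirms the imports and prints whatever versions your setup reports) is:

import cv2
import dlib

# printing the versions confirms that both imports succeeded
print("OpenCV version:", cv2.__version__)
print("dlib version:", dlib.__version__)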

Code with explanation:

import cv2
import dlib


To capture real-time video through the webcam, use this one line of code:

webcapture = cv2.VideoCapture(0)

In cv2.VideoCapture(0), pass 0 if you use the internal webcam, otherwise pass 1 for an external one.
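If the camera index is wrong, VideoCapture still returns an object but no frames come through. A minimal sketch of an early check (the index 0 here is an assumption; use 1 for an external camera):

import cv2

webcapture = cv2.VideoCapture(0)
# isOpened() tells us whether the capture device was actually opened
if not webcapture.isOpened():
    raise RuntimeError("Could not open the webcam; try a different camera index")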

Now load the face detector and the landmark predictor from dlib:

face_detector = dlib.get_frontal_face_detector()
facial_landmarks = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

Here we are using the shape_predictor_68_face_landmarks model, available on GitHub, to predict the landmarks.
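dlib.shape_predictor raises an error if the .dat file cannot be found, so it helps to check for the model file before loading it. A minimal sketch (the file name and location are assumptions; point model_path at wherever you saved the extracted model):

import os
import dlib

model_path = "shape_predictor_68_face_landmarks.dat"
# fail with a clear message instead of a cryptic dlib error
if not os.path.exists(model_path):
    raise FileNotFoundError("Download shape_predictor_68_face_landmarks.dat and place it at " + model_path)

face_detector = dlib.get_frontal_face_detector()
facial_landmarks = dlib.shape_predictor(model_path)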

Until the webcam stops capturing, we repeat the following steps in a loop:

while True:
    # capture and read the video frame by frame (image by image) through the webcam
    _, frame = webcapture.read()
    # it's easier for the detector to detect the face in a grayscale image,
    # so convert the frame to grayscale
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    face_track = face_detector(gray)
    # x1, y1, x2, y2 are the coordinates of the face rectangle
    for face in face_track:
        x1 = face.left()
        y1 = face.top()
        x2 = face.right()
        y2 = face.bottom()
        # create a rectangular frame around the face
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 235, 0), 3)
        # identify the landmarks of the face in the gray frame
        landmarks = facial_landmarks(gray, face)
        # get the x and y values of the 68 points from shape_predictor_68_face_landmarks
        for i in range(0, 68):
            x = landmarks.part(i).x
            y = landmarks.part(i).y
            cv2.circle(frame, (x, y), 3, (0, 0, 255), -1)
    # display the image
    cv2.imshow("face frame", frame)
    key = cv2.waitKey(1)
    # q represents quit; here we compare against the ASCII value of q to stop showing the frames
    if key == ord('q'):
        break
    # we could also write this as: if key == 81 or key == 113: break
cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 235, 0), 3)
Where frame is the image object,
(x1, y1) and (x2, y2) are opposite vertices of the rectangle,
(0, 235, 0) is the color of the rectangle's outline (note that OpenCV stores color channels in BGR order, not RGB),
3 is the thickness of the rectangle's outline.
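The 68 points follow a fixed layout, so you can also work with one facial feature at a time instead of looping over all the points. A minimal sketch, assuming the landmarks object from the loop above and that points 36-41 outline one of the eyes in the 68-point model, which draws only those points:

# meant to run inside the `for face in face_track:` loop above
eye_points = [(landmarks.part(i).x, landmarks.part(i).y) for i in range(36, 42)]
for (x, y) in eye_points:
    cv2.circle(frame, (x, y), 3, (255, 0, 0), -1)  # draw the eye points in blue (BGR)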

waitKey is the most important call here; without it the functions above will not respond. waitKey(1) waits for one millisecond and then moves on to the next frame, which is what lets you see the video.
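On some systems waitKey returns a value with extra high bits set, so a common precaution (an optional variant, not required by the code above) is to mask the result down to one byte before comparing it with ord('q'):

key = cv2.waitKey(1) & 0xFF  # keep only the low 8 bits of the key code
if key == ord('q'):
    break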

Finally, we must release the capture and close all the windows.

webcapture.release()
cv2.destroyAllWindows()
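If an exception occurs inside the loop (for example, the camera is unplugged), the release call above never runs. A minimal sketch, with the detection code left out for brevity, that guarantees cleanup either way by using try/finally:

import cv2

webcapture = cv2.VideoCapture(0)
try:
    while True:
        ret, frame = webcapture.read()
        if not ret:
            break  # stop if the camera stops delivering frames
        cv2.imshow("face frame", frame)
        if cv2.waitKey(1) == ord('q'):
            break
finally:
    # always release the camera and close the windows, even after an error
    webcapture.release()
    cv2.destroyAllWindows()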

Output:




