Face recognition in the mobile journalism application

The Video Analytics microservice is part of the Media Use Case. It opens new opportunities for mobile journalism by enabling automatic tagging of video streams recorded by freelancers.

The work of the Video Analytics microservice is closely related to the image classification problem, since it searches for a particular person across all available video streams. One possible approach to face recognition is the use of Active Appearance Models (AAMs). An AAM is a statistical model of an image that can be fitted to a target image through geometrical transformations such as shifting, scaling, rotation, and skewing. The AAM includes two types of parameters: (1) shape parameters, which define the face contour, and (2) appearance parameters, which describe the image texture. To analyse the face, we use special markers (anthropometric points) attached to the key elements of the face, viz. the eyes, mouth, and nose. Our face recognition module uses 68 markers.

The Video Analytics microservice is developed in Python and uses the OpenCV library.

The microservice is built on a client-server architecture that communicates over asynchronous WebSockets, which makes it possible to recognize several video broadcasts simultaneously.
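How an asynchronous server can serve several broadcasts on a single event loop can be sketched with Python's asyncio. This is a minimal illustration, not the microservice's code: the `recognise` coroutine is a placeholder for the actual AAM-based recognition step, and the frame lists stand in for incoming WebSocket messages.

```python
import asyncio

async def recognise(frame):
    """Placeholder for the face recognition step on one frame."""
    await asyncio.sleep(0)          # yield control, simulating async I/O
    return f"tags for {frame}"

async def handle_stream(name, frames, results):
    # One handler per broadcast; all handlers share the same event loop,
    # so slow I/O on one stream does not block the others.
    for frame in frames:
        results.append((name, await recognise(frame)))

async def main():
    results = []
    await asyncio.gather(
        handle_stream("stream-1", ["f0", "f1"], results),
        handle_stream("stream-2", ["f0"], results),
    )
    return results

results = asyncio.run(main())
```

In the real service each handler would read frames from a WebSocket connection rather than from an in-memory list, but the concurrency pattern is the same.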

The Video Analytics microservice interacts with the web application of the Media Use Case as well as with the Kurento Media Server. The interaction between the components is shown in the figure below: