What is spoofing?

Face Anti-Spoofing


In recent times, as biometric recognition systems have improved their performance, IT managers have had to
give serious consideration to possible direct attacks: potential intruders may gain access to an IT system by
directly interacting with the system's input device, just like a normal user would. Such attempts are referred
to as spoofing attacks.
Our smartphones are generally unlocked through Face ID or a fingerprint, and both biometrics are potential
targets of attack.


Fingerprint and face are the easiest biometrics to spoof: a simple attack requires no specific technical skills
(some attacks can be performed by ordinary people) and the biometric data is very easy to steal (e.g. by taking
photos from online profiles).
In general, there are three possible ways to carry out a face spoofing attack:
– presenting a photo of a valid user
– replaying a video of a valid user
– presenting a 3D model of a valid user
These types of attacks can be detected with the help of specific hardware sensors (IR sensors, stereoscopic
cameras). However, a face recognition system should be built with very low-cost hardware and should also be
usable in consumer applications, so adding dedicated hardware or extra user interaction to ensure reliability
is not a convenient solution. This implies that even a "simple" photo spoofing attack can represent a real
security problem for a face recognition system. In fact, most papers in the literature treat the problem as a
photo attack detection task, since a photo is a cheap and effective way to perform an attack.
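As a purely illustrative sketch (not My-ID's actual method), a very naive software-only check against static photo attacks can look at how much the scene changes between consecutive frames: a printed photo held in front of the camera shows far less frame-to-frame variation than a live face. The frames, the threshold value and the whole heuristic below are assumptions made only to make the idea concrete.

```python
import numpy as np

def naive_liveness_check(prev_frame: np.ndarray, curr_frame: np.ndarray,
                         motion_threshold: float = 2.0) -> bool:
    """Very rough liveness heuristic: flag the capture as 'live' only if the
    mean absolute pixel difference between two consecutive grayscale frames
    exceeds a threshold. A flat printed photo tends to produce almost no
    frame-to-frame variation, while a live face shows micro-motion.

    Illustrative only; real anti-spoofing combines texture, depth,
    reflection and temporal cues.
    """
    diff = np.abs(prev_frame.astype(np.float32) - curr_frame.astype(np.float32))
    return float(diff.mean()) > motion_threshold

# Synthetic example: a 'static photo' (identical frames) vs. a 'live' capture
# (frames with small random variations standing in for micro-motion).
static_a = np.full((100, 100), 128, dtype=np.uint8)
static_b = static_a.copy()
live_a = static_a
live_b = np.clip(static_a + np.random.randint(-10, 10, (100, 100)), 0, 255).astype(np.uint8)

print(naive_liveness_check(static_a, static_b))  # False -> likely a photo
print(naive_liveness_check(live_a, live_b))      # True  -> some motion detected
```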
Our technology
My-ID offers a platform based on multi-factor biometric analysis: face, voice and passive liveness.
My-ID is therefore based on verification techniques that can be used either as single biometrics or in a
multimodal fusion mode, where, after the face and voice samples have been evaluated independently, a single
score is computed that takes both into account. The aim of this approach is to give the system robustness
and, whenever possible, greater accuracy.
During the enrollment stage, a snapshot of the user's face and some voice samples are taken, processed and
stored in an encrypted local database.
My-ID uses these data to compute two data structures, namely a face template and a voice template, containing
only the distinctive features extracted from the face and the voice. The templates are strictly linked to the
user through a user ID, which is also stored in the database along with the templates.
The device on which enrollment is performed must be authorized by a management console.
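A minimal sketch of what such an enrollment record could look like, assuming hypothetical feature extractors (extract_face_template and extract_voice_template are placeholders, not My-ID's API) and symmetric encryption with Fernet from the `cryptography` package to keep templates encrypted at rest.

```python
import json
import numpy as np
from cryptography.fernet import Fernet

# Placeholder extractors: in a real system these would run face/voice models.
def extract_face_template(face_image):
    return np.random.rand(128)   # hypothetical 128-D face embedding

def extract_voice_template(voice_samples):
    return np.random.rand(64)    # hypothetical 64-D voice embedding

def enroll(user_id: str, face_image, voice_samples, key: bytes) -> bytes:
    """Build an encrypted enrollment record linking both templates to a user ID."""
    record = {
        "user_id": user_id,
        "face_template": extract_face_template(face_image).tolist(),
        "voice_template": extract_voice_template(voice_samples).tolist(),
    }
    return Fernet(key).encrypt(json.dumps(record).encode("utf-8"))

key = Fernet.generate_key()                       # e.g. held by the management console
dummy_face = np.zeros((480, 640), dtype=np.uint8) # stand-in for a real snapshot
dummy_voice = [np.zeros(16000)]                   # stand-in for real voice samples
encrypted_record = enroll("alice", dummy_face, dummy_voice, key)  # stored in the local database
```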

During verification, the user has to declare an identity. New biometric samples are taken and processed to
produce a pair of patterns. The patterns are data structures containing the distinctive features extracted
from the new biometric samples, and they can therefore be compared with the templates belonging to the
declared identity. Two comparisons are made, and the results are two scores representing the degree of
similarity between the user and the declared identity. A fusion module combines the two scores into one. If
the fusion score is above a decision threshold, the user is accepted (genuine user); otherwise, the user is
rejected (impostor).
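A hedged sketch of this decision step: two similarity scores (face and voice) are computed independently, fused into one value, and compared with a threshold. The cosine similarity, the weighted-sum fusion and the 0.75 threshold are illustrative assumptions, not My-ID's actual scoring functions.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(face_pattern, face_template, voice_pattern, voice_template,
           face_weight: float = 0.6, threshold: float = 0.75) -> bool:
    """Compare fresh patterns against the declared identity's stored templates,
    fuse the two similarity scores, and accept only if the result clears the threshold."""
    face_score = cosine_similarity(face_pattern, face_template)
    voice_score = cosine_similarity(voice_pattern, voice_template)
    fused_score = face_weight * face_score + (1.0 - face_weight) * voice_score
    return fused_score >= threshold   # True: genuine user, False: impostor

# Toy example with random embeddings standing in for real templates and patterns.
rng = np.random.default_rng(0)
stored_face, stored_voice = rng.random(128), rng.random(64)
fresh_face = stored_face + rng.normal(0, 0.05, 128)   # same person, small capture noise
fresh_voice = stored_voice + rng.normal(0, 0.05, 64)

print(verify(fresh_face, stored_face, fresh_voice, stored_voice))  # expected: True
```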
My-ID is a face and voice detector that can also evaluate the orientation of the head in three-dimensional
space by measuring its Euler angles (roll, yaw, pitch).
3D head pose estimation is performed using information that until now was thought to be insufficient, namely
just the coordinates of the eyes and the nose tip.
This result is obtained with a novel geometric approach based on the definition of a semi-rigid model of the
frontal face, which uses the golden section to approximate the proportions among various facial points.
This kind of head pose estimation brings several advantages over other suggested approaches:

  1. It's hardware independent;
  2. It doesn't need calibration of the scene;
  3. It needs a minimal amount of information compared with other methods;
  4. Its output is a continuous value rather than a discrete orientation estimate, so it can be used as an
     index for face assessment operations based on frontality evaluation;
  5. It's accurate enough for a wide range of head rotations;
  6. It makes it possible to define heuristics that improve the performance of facial feature detectors.
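To make the geometric idea concrete, below is a rough sketch of how roll, yaw and pitch could be approximated from just the two eye centres and the nose tip. The specific formulas (roll from the inter-eye line, yaw from the horizontal offset of the nose tip, pitch from its vertical offset compared with a golden-ratio-derived expected proportion) are illustrative assumptions, not the exact semi-rigid model used by My-ID.

```python
import math

GOLDEN_RATIO = (1 + math.sqrt(5)) / 2  # sets the assumed frontal eye-to-nose proportion

def head_pose_from_landmarks(left_eye, right_eye, nose_tip):
    """Approximate (roll, yaw, pitch) in degrees from three 2-D image points.

    Assumptions (illustrative only):
      - roll: angle of the line joining the two eye centres;
      - yaw: horizontal offset of the nose tip from the eye midpoint,
             normalised by half the inter-ocular distance;
      - pitch: vertical eye-to-nose distance compared with the proportion a
               frontal face is expected to have (derived from the golden section).
    """
    dx, dy = right_eye[0] - left_eye[0], right_eye[1] - left_eye[1]
    inter_ocular = math.hypot(dx, dy)

    roll = math.degrees(math.atan2(dy, dx))

    mid_x = (left_eye[0] + right_eye[0]) / 2
    mid_y = (left_eye[1] + right_eye[1]) / 2
    yaw = math.degrees(math.asin(max(-1.0, min(1.0,
          (nose_tip[0] - mid_x) / (inter_ocular / 2)))))

    expected_eye_nose = inter_ocular / GOLDEN_RATIO   # assumed frontal-face proportion
    observed_eye_nose = nose_tip[1] - mid_y
    pitch = math.degrees(math.asin(max(-1.0, min(1.0,
            (observed_eye_nose - expected_eye_nose) / expected_eye_nose))))

    return roll, yaw, pitch

# Near-frontal example: eyes level, nose tip centred below the eye midpoint.
print(head_pose_from_landmarks((100, 100), (160, 100), (130, 137)))  # roughly (0, 0, 0)
```

Because the output is a continuous triple of angles, a simple frontality index (for example, how close yaw and pitch are to zero) can be derived directly from it, which is what advantage 4 above refers to.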
