
Thursday, August 6, 2020

McAfee finds security flaws in internal mechanics of facial recognition models

Security research should be at the heart of any programme when rolling out applications that use facial recognition technologies, McAfee’s chief scientist said.

“Adversarial machine learning needs to become an integral part of the rollout, and as you begin to develop these machine learning capabilities, it is important to understand how they can be misused or potentially misclassified,” Raj Samani, fellow and chief scientist at McAfee, told TechRadar Pro Middle East.

In its research, McAfee has found a way to bypass facial recognition technology using model hacking.

Model hacking, also known as adversarial machine learning, is the concept of exploiting weaknesses present in machine learning algorithms and evading artificial intelligence to achieve adverse results.
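
As a minimal illustration of the idea (not McAfee's specific technique), the fast gradient sign method below nudges each pixel of an input in the direction that most increases a classifier's loss, producing an image that looks unchanged to a human but can be misclassified. It assumes a PyTorch model and inputs scaled to the range 0 to 1.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, true_label, epsilon=0.03):
        """Return a copy of `image` nudged to raise the loss on `true_label`."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), true_label)
        loss.backward()
        # Step each pixel slightly in the direction that increases the loss.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0, 1).detach()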

Samani said that model hacking can cause a system to misclassify a person, and that an attacker who understands the technique can bypass facial recognition altogether.

“We did some work on model hacking on Tesla cars recently in a bid to cause misclassifications, and we took a lot from that research and applied it to current applications of facial recognition systems.

“We have an incredible opportunity to influence the awareness, understanding and development of more secure technologies before they are implemented in a way that has real value to the adversary. We, as an industry, can focus on getting ahead of the problem,” he said.

In research conducted on 2016 Tesla Model S and Model X cars equipped with MobilEye camera technology, McAfee showed that it could cause the system to misread speed-limit signs.

By making a tiny sticker-based modification to a speed-limit sign, the McAfee team was able to cause a targeted misclassification in the MobilEye camera on a Tesla and use it to make the vehicle autonomously accelerate to 85 mph when reading a 35-mph sign.
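
For illustration, a sticker attack of this kind is typically built by optimising a small image patch against a classifier. The sketch below is a generic adversarial-patch training loop; the patch size, placement and model are assumptions for demonstration, not details of McAfee's or MobilEye's actual setup.

    import torch
    import torch.nn.functional as F

    def train_patch(model, images, target_label, steps=500, lr=0.1):
        """Optimise a small 'sticker' so that pasting it onto sign images
        pushes the classifier toward target_label (e.g. a higher limit)."""
        patch = torch.rand(3, 16, 16, requires_grad=True)  # the sticker
        optimizer = torch.optim.Adam([patch], lr=lr)
        for _ in range(steps):
            stamped = images.clone()
            stamped[:, :, 8:24, 8:24] = patch.clamp(0, 1)  # paste the sticker
            loss = F.cross_entropy(model(stamped), target_label)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        return patch.clamp(0, 1).detach()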

Advances in technologies such as artificial intelligence and machine learning have enabled several novel applications for facial recognition.

Increasing the attack surface

Facial recognition can be used as a highly reliable authentication mechanism, and an outstanding example of this is the iPhone.

Beginning with the iPhone X in 2017, facial recognition became the de facto standard for authenticating a user to their mobile device.

While Apple uses advanced features such as depth-sensing technology to map the target face, many other mobile devices have implemented more standard methods based on the features of the target’s face.

Another emerging use case for facial recognition systems is for law enforcement at airports to aid or replace human interaction for passport and identity verification.

Meanwhile, Covid-19 has prompted an unprecedented rush to implement touchless solutions such as biometrics.

While this push may result in less physical contact and fewer infections, it may also have the side effect of dramatically expanding the attack surface available to adversaries.

In its new research on facial recognition, at the intersection of data science and security, McAfee used an advanced, deep learning-based morphing approach, categorically different from the more primitive “weighted averaging” technique, to create “adversarial images” in a passport-style format that would be incorrectly classified as a targeted individual.
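
For contrast, the primitive “weighted averaging” approach the researchers moved beyond amounts to little more than a pixel-wise blend of two photos. The sketch below assumes NumPy and Pillow, and it omits the facial-landmark alignment that real morphing tools perform; it is illustrative only, not McAfee's method.

    import numpy as np
    from PIL import Image

    def average_morph(path_a, path_b, alpha=0.5, size=(512, 512)):
        """Blend two face photos pixel by pixel; alpha sets the mix."""
        a = np.asarray(Image.open(path_a).convert("RGB").resize(size), dtype=float)
        b = np.asarray(Image.open(path_b).convert("RGB").resize(size), dtype=float)
        blended = (1 - alpha) * a + alpha * b
        return Image.fromarray(blended.astype(np.uint8))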

New playground for cybercriminals 

If a passport scanner were to replace a human being completely in this scenario, Samani said, the system would believe it had correctly validated that the attacker was the same person stored in the passport database as the accomplice, allowing the attacker to bypass the essential verification step and board the plane.
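
To make that verification step concrete: an automated matcher typically reduces each face to an embedding vector and accepts when the similarity clears a threshold. The sketch below is a generic illustration; the function name and the 0.6 cutoff are assumptions, not details of any real passport system. A well-crafted morph can land close enough to both the attacker and the accomplice to clear such a check.

    import numpy as np

    def verify(live_embedding, stored_embedding, threshold=0.6):
        """Cosine similarity between two face embeddings; True means 'same person'."""
        a = np.asarray(live_embedding, dtype=float)
        b = np.asarray(stored_embedding, dtype=float)
        score = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        return score >= threshold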

He said that reliance on automated systems and machine learning, without considering the inherent security flaws present in the mysterious internal mechanics of face-recognition models, could provide cybercriminals with unique capabilities to bypass critical systems such as automated passport enforcement.

“The attacker does not require a high degree of knowledge and does not need to get under the skin of how facial recognition works. This shows that more research needs to be done in facial recognition technology,” he said.

Moreover, he said that security researchers can find ways to mitigate such misclassification, but at the same time, “we know that facial recognition technology is not yet widely used, but as these systems become more common, security research should be at the heart of everything they do.”

He added that vendors and researchers need to accelerate discussion and awareness of these problems, and to identify and solve them in advance.



from TechRadar - All the latest technology news https://ift.tt/3gL1Np3