Facial recognition is used for two primary purposes: public surveillance and business access control. The two are very different, but both suffer from the same major drawback: people do not trust facial recognition.
Surveillance is the primary problem – it is intrusive and insufficiently secure. It is non-consensual and used in public spaces by unknown people for unknown purposes. Access authentication is more constrained. It is used in discrete buildings by individual known operators for a known purpose, and is consensual.
Facial recognition and surveillance
The primary concern over surveillance-focused facial recognition is the still-lingering memory of Edward Snowden’s revelations of widespread and hidden NSA and GCHQ surveillance. It is compounded today by the lack of user consent for the collection and use of personal images, their storage in unknown databases, and their use by unknown entities for unknown purposes.
Clearview and GDPR illustrate the latter concern. Clearview web-scraped a huge image database, access to which it sells to the FBI, DHS and local police departments in the US. Several European countries have levied fines (totaling more than €60 million) for breach of GDPR’s lawfulness, fairness, and transparency principles. Clearview hasn’t paid the fines, and with no formal establishment in the EU, those fines cannot be enforced. It did, however, settle a case in the US.
Surveillance by facial recognition is almost always in a public setting, so it’s one-to-many. There is a database and many cameras (usually a very large number – an estimated one million in London and more than 30,000 in New York). These cameras capture images of people and compare them to the database of known images to identify individuals. The database owner may maintain watchlists of ‘people of interest’, so those persons can be tracked from one camera to another.
But the process of capturing and using the images is almost always non-consensual. People don’t know when, where or how their facial image was first captured, and they don’t know where their data is going downstream or how it is used after initial capture. Nor are they usually aware of the facial recognition cameras that record their passage through the streets.
Furthermore, the surveillance process itself has repeatedly proven insecure. The iconic example dates to 2018 and Mexico City. A hacker working for the Sinaloa drug cartel got hold of an FBI agent’s phone records, hacked Mexico City’s surveillance system, and was able to track, threaten and kill the agent’s informants.
Although this dates to 2018, the DOJ OIG used it in a July 2025 report, noting that the basic security problem has not been solved but has in fact been exacerbated by modern technology. The Guardian commented (June 27, 2025), “The report said that recent technological advances ‘have made it easier than ever for less-sophisticated nations and criminal enterprises to identify and exploit vulnerabilities’ in the global surveillance economy.”
More recently, on November 3, 2025, lawmakers Ron Wyden and Raja Krishnamoorthi wrote to the FTC demanding an enquiry into Flock Safety (an operator of license plate-scanning cameras) for not requiring MFA. The letter notes, “A search by Congressional staff of a public tool operated by the cybersecurity company Hudson Rock documenting accounts compromised by a form of malware known as an ‘infostealer’ reveals that passwords for at least 35 Flock customer accounts have been stolen.”
In short, lax security practices within police departments allow criminals to gain access to LEA surveillance cameras. The same lax practices likely occur elsewhere and with other types of surveillance camera. As in the Mexico City incident, this could allow criminals to track the movements of individual vehicles.
But just as modern technology can automate this weakness, so can modern technology harden it.
Hardening the surveillance infrastructure
ZeroTier is a software-defined overlay network that extends beyond the data center: an end-to-end encrypted, peer-to-peer mesh. The encryption is rooted in cryptographic identities held by the end devices. Since it is software, the make or model of the device is irrelevant – in the video world, the cameras could come from any manufacturer.
In practice, a software agent installed in each device builds an encrypted tunnel to other specified devices. Operating at layer 2, it can multipath and hop between physical networks. It is always up, secure and robust.
On October 23, 2025, ZeroTier announced a partnership with Active Security, a firm that specializes in military C5ISR systems (the S is for Surveillance, including video surveillance of terrorist or criminal gangs ‘of interest’ on the street). Secure and flexible peer-to-peer networking has wide potential for military and federal application, but here we are focusing on the secure connection between camera and remote database. The adoption of ZeroTier by a defense contracting firm can be viewed as a vote of confidence in the technology.
(JP Rike, CTO at Active Security, told SecurityWeek, “I can’t specify who is using us, but we are used on both sides of the Atlantic by multiple different militaries.”)
In the view of Active Security, it is the current video surveillance architecture that is the risk (proven back in 2018 in Mexico City and still extant in the Flock incident), not the individual camera. With ZeroTier’s networking, the threat is not totally eliminated, but it is reduced to insignificance. In the Active Security use of ZeroTier, every single camera is cryptographically independent of all others. If any one camera is hacked, the attacker gets only that single feed, with no possibility of lateral movement to other cameras and other feeds – a repeat of Mexico City would be prevented.
This is important. Cameras are installed everywhere and in increasing numbers. They are installed for public safety, but the danger is that the systems could be hacked and accessed by criminals. The Active Security / ZeroTier solution minimizes this threat. If a camera is hacked, the criminal can access only that one video stream and cannot hop between cameras to follow a target. And even the stream itself is encrypted in transit.
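The isolation property described above can be illustrated with a short sketch. This is a toy model only – the XOR keystream below is not real cryptography, and ZeroTier’s actual protocol is entirely different – but it shows why giving each camera an independent key confines an attacker to a single feed.

```python
# Toy illustration of per-camera cryptographic independence.
# NOT real cryptography and NOT ZeroTier's protocol -- the point is only
# that stealing one camera's key exposes only that camera's stream.
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from a per-camera key (toy only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, frame: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(frame, keystream(key, len(frame))))

decrypt = encrypt  # an XOR stream cipher is symmetric

# Every camera gets its own independent random key.
cameras = {cam_id: secrets.token_bytes(32) for cam_id in ("cam-1", "cam-2")}

feed1 = encrypt(cameras["cam-1"], b"frame from cam-1")
feed2 = encrypt(cameras["cam-2"], b"frame from cam-2")

# An attacker holding cam-1's key can read cam-1's feed...
assert decrypt(cameras["cam-1"], feed1) == b"frame from cam-1"
# ...but cam-2's feed remains unreadable garbage to that attacker.
assert decrypt(cameras["cam-1"], feed2) != b"frame from cam-2"
```

Because no key is shared between cameras, there is no lateral movement: each compromised device is a dead end.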
This system doesn’t eliminate the public distrust of ‘surveillance’, but it does help ensure that only those eyes authorized to see the surveillance can do so.
Facial recognition for access authentication
The second use of facial recognition is identity authentication within secured spaces – primarily, but not limited to, offices, data centers and other discrete buildings; it could even be a private dwelling. It is the biometric credential (something you are, rather than something you have and can lose, or something you know and can forget) that allows easy access for those authorized to enter and move around within buildings.
Alcatraz.ai is one of the firms offering a facial biometric authentication solution, but with a difference.
“When we set out to start the company in 2016,” explains Tina D’Agostin, CEO of Alcatraz, “we knew the discussion around public surveillance had created privacy concerns. So, we set out to create a very privacy-first architecture.” In a nutshell, what she means by ‘privacy-first’ facial recognition is facial recognition with no facial images stored anywhere.
“When we enroll a new user,” she continued, “we take a facial representation, a map of the face, which basically becomes a digital blob simply comprising zeros and ones.” This blob can be likened to a cryptographic hash: each one is unique but meaningless on its own and cannot be reverse engineered to its original source. “We don’t store any image – it’s converted into this mathematical representation.”
This sets it apart from the facial recognition of public surveillance systems. It is also consensual (since the user chooses to work for the employer), it has a limited and known purpose (authentication only), and it is privacy-focused (no facial image is captured, stored or transmitted anywhere).
When a user, an employee or authorized visitor, needs authentication to enter a building or restricted area within the building, a camera rescans the face and recreates the same face-map-blob. If it matches the stored blob, the user is granted entry. Regardless of the individual’s physical identity (name), that person is authenticated.
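A toy sketch can make the enrollment-and-match flow concrete. This is an assumption of how template matching generally works, not Alcatraz’s actual algorithm; the four-element vectors and the 0.95 threshold are invented for illustration. The key point is that only the numeric “blob” is stored – never an image.

```python
# Toy sketch of template-based face matching (not Alcatraz's algorithm):
# enrollment stores only a numeric vector derived from the face; a fresh
# scan re-derives a vector and is accepted if similarity clears a threshold.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

THRESHOLD = 0.95  # assumed value for illustration

# Stored at enrollment: the "blob". No image is kept, and the vector
# cannot be reversed into a face.
enrolled_blob = [0.12, 0.80, 0.55, 0.33]

def authenticate(scan_blob, enrolled=enrolled_blob):
    return cosine_similarity(scan_blob, enrolled) >= THRESHOLD

# A fresh scan of the same face yields a near-identical vector: granted.
assert authenticate([0.13, 0.79, 0.56, 0.32])
# A different face yields a dissimilar vector: denied.
assert not authenticate([0.90, 0.10, 0.05, 0.70])
```

Real systems use high-dimensional embeddings from a trained model, but the privacy property is the same: the stored template is meaningless outside the matching process.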
Alcatraz is also well placed to support the predictive security of AI-driven smart buildings, and already includes basic elements. It detects tailgating (where a non-authenticated person attempts to slip through immediately behind an authenticated one), predicting a problem and immediately denying access to the second person.
Current capabilities could be enhanced in the future. The system could note the time and door accessed by an authenticated person, even though it knows only the blob and not the person’s physical identity. It could then use event-scoped pattern analysis to help security teams anticipate anomalies – perhaps repeated failed access attempts, or unusual (perhaps after-hours) access to a given door.
This would enable predictive access security. There is still no facial image recorded, nor any people-tracking, nor watch-lists involved – just a pattern of events that could warrant further investigation by the security team, combining both personal privacy and enhanced predictive building security.
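Such event-scoped analysis could be as simple as the following sketch. It is an assumption of how this kind of flagging might work, not a description of any product; the thresholds and event format are invented. Note that events are keyed only by an opaque blob ID – no name or image is involved.

```python
# Toy sketch of event-scoped anomaly flagging (illustrative only, not a
# real product's logic). Events carry an opaque blob ID, never an identity.
from collections import Counter
from datetime import datetime

FAILED_ATTEMPT_LIMIT = 3        # assumed threshold for illustration
BUSINESS_HOURS = range(8, 18)   # assumed: 08:00-17:59

def flag_anomalies(events):
    """events: iterable of (blob_id, door, timestamp, success) tuples."""
    failures = Counter()
    flags = []
    for blob_id, door, ts, success in events:
        if not success:
            failures[(blob_id, door)] += 1
            if failures[(blob_id, door)] == FAILED_ATTEMPT_LIMIT:
                flags.append((blob_id, door, "repeated failed attempts"))
        elif ts.hour not in BUSINESS_HOURS:
            flags.append((blob_id, door, "after-hours access"))
    return flags

events = [
    ("blob-7f3a", "server-room", datetime(2025, 11, 4, 2, 15), True),
    ("blob-9c1d", "lobby", datetime(2025, 11, 4, 9, 0), False),
    ("blob-9c1d", "lobby", datetime(2025, 11, 4, 9, 1), False),
    ("blob-9c1d", "lobby", datetime(2025, 11, 4, 9, 2), False),
]
for flag in flag_anomalies(events):
    print(flag)
```

The output is simply a list of blob-keyed events for a security team to investigate – no tracking of named individuals, and no watchlist.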
Summary
Most people are wary of facial recognition systems. They are considered personally intrusive and privacy invasive. Capturing a facial image and using it for unknown purposes is not something that is automatically trusted. And yet it is not something that can be ignored – it’s part of modern life and will continue to be so.
Of the two primary purposes of facial recognition – access authentication and the surveillance of public spaces – the latter is the less acceptable. It is used for the purpose of public safety but is fundamentally insecure. What exists now can be, and has been, hijacked by criminals for their own purposes. There is a possibility that it could be used by a future authoritarian government for dystopian purposes. All we can do is make it as secure as possible, so that only authorized people, whoever they are, can use it.
The access authentication purpose is easier to handle. Firms are striving to develop non-intrusive facial recognition systems for access control. It is a friction-free method of authentication, so is attractive for business. It is consensual and for a specified purpose. And Alcatraz has already combined these advantages with a privacy-focused method of facial recognition that requires no capture or storage of any facial image beyond an unintelligible blob of data.
Related: OneFlip: An Emerging Threat to AI that Could Make Vehicles Crash and Facial Recognition Fail
Related: Meta Agrees to $1.4B Settlement With Texas in Privacy Lawsuit Over Facial Recognition
Related: IRS to End Use of Facial Recognition to Identify Taxpayers
Related: EU Data Watchdogs Want Ban on AI Facial Recognition