Featured Article: Would You Be Filmed Working At Your Desk All Day?

Following a recent report in the Metro that BT is carrying out research into continuous authentication software, we look at some of the pros and cons and the issues around employees potentially being filmed all day at their desks … under the guise of cybersecurity. 

Why Use Continuous Authentication Technology? 

Businesses use continuous authentication technology to enhance security, i.e. to add an extra layer of protection. As the name suggests, this type of software continuously verifies users throughout their session, rather than relying solely on traditional one-time authentication methods like passwords or PINs. This approach is designed to mitigate risks such as session hijacking, whereby unauthorised users gain access after the initial login, or insider threats, where someone might misuse another’s logged-in session. Continuous authentication essentially helps detect abnormal behaviour in real time, flagging up potential breaches or fraud by monitoring unique patterns such as typing style, mouse movements, or facial features. By integrating this technology, businesses may hope to reduce security vulnerabilities, safeguard sensitive data, and improve compliance with industry regulations, all while maintaining a seamless user experience, i.e. it happens automatically in the background. 

BT Trialling Continuous Authentication Technology 

BT is reported to be trialling BehavioSec’s behavioural biometrics technology at its Adastral Park science campus near Ipswich. The software is used for continuous authentication, monitoring users’ unique behaviour patterns, such as how they type, move the mouse, or interact with their devices, to confirm their identity. Notably, BehavioSec’s technology doesn’t usually require a camera, i.e. the user doesn’t need to be filmed by a webcam all day. Instead, it can rely on analysis of a user’s behaviour patterns by looking at factors such as keystroke dynamics, mouse movements, touchscreen gestures, and device interaction patterns (e.g. how the user holds their phone, scrolls through pages, or interacts with specific applications). In the recent Metro story, however, the reporter witnessed a demonstration of the system that did use facial recognition and required continuous filming of the user with a webcam/front-facing camera to check that the user’s face was consistent with expected dimensions. 
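As a rough illustration of how keystroke-dynamics-based continuous authentication works, the sketch below enrols a profile of a user’s typing timings and scores fresh samples against it. This is a toy example for illustration only, not BehavioSec’s actual algorithm; the feature set, scoring function, and threshold are all assumptions.

```python
from statistics import mean

def keystroke_features(events):
    """Extract two simple timing features from (key, press_time, release_time) events:
    mean dwell time (how long each key is held) and mean flight time (the gap between
    releasing one key and pressing the next)."""
    dwells = [release - press for _, press, release in events]
    flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return mean(dwells), mean(flights)

def similarity(profile, sample):
    """Crude distance-based score in (0, 1]; higher means the sample's timings
    are closer to the enrolled profile."""
    distance = abs(profile[0] - sample[0]) + abs(profile[1] - sample[1])
    return 1 / (1 + distance)

def still_same_user(profile, sample, threshold=0.8):
    """Called repeatedly during a session: True keeps the session alive, while
    False would trigger a step-up check or lock the device."""
    return similarity(profile, sample) >= threshold
```

A real system would use far richer features (per-key timings, mouse trajectories, device handling) and a trained model rather than a fixed threshold, but the overall pattern, i.e. score each new burst of activity against an enrolled profile and lock on a low score, is the same.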

BT is exploring this technology as part of its broader efforts to improve cybersecurity, particularly in response to the growing threat of cyberattacks and data breaches. The trials of BehavioSec’s behavioural biometrics technology are part of BT’s research into how it can use innovative technology to better protect digital assets and infrastructure, especially in enterprise and government contexts. For example, back in 2022, BT said it would be taking security to a new level so that even if an attacker obtained a device, any ongoing work session would end and the device would lock, because the attacker’s biometrics wouldn’t match the known biometrics of the device’s legitimate user. 

Systems Using Cameras? 

There are, however, many continuous authentication systems now available that require a camera to be trained on a user’s face. A few prominent examples include: 

– FaceTec’s ZoOm. This is a 3D facial recognition solution that uses the front-facing camera of devices (it can use a webcam) to authenticate users, e.g. by carrying out “Liveness Checks, Face Matches & Photo ID Scans”. It’s often used in applications requiring high security, such as financial services or identity verification systems, and biometric security for remote digital identity. 

– FacePhi. This (Spanish) biometric solution for facial recognition is widely used in the banking, healthcare, and fintech sectors for secure access to mobile banking apps and fraud prevention. The software uses a camera to identify users and offers continuous authentication by tracking facial features during interactions. 

– IDEMIA’s VisionPass. This system combines 3D facial recognition with AI and uses cameras to recognise faces and continuously verify identities, even in challenging conditions like low light or with face masks. It’s generally deployed in secure facilities, airports, and government buildings for access control and ongoing authentication. 

– Trueface. This AI-powered facial recognition technology integrates with existing security systems, such as cameras in corporate offices, to provide continuous authentication. Trueface can recognise and track users in real-time, improving access security and is used in corporate offices, airports, and law enforcement for continuous identification and authentication. 

Other popular systems that use similar methods include Clearview AI, Neurotechnology’s Face Verification System, AnyVision, and ZKTeco’s FaceKiosk. 

It’s also worth noting here that the “big tech” companies’ versions, such as Apple’s Face ID, Google’s Face Unlock (on Pixel devices), and Microsoft’s Windows Hello, are also facial recognition-based authentication systems that are classed as continuous authentication technology. However, for the purposes of this overview, we’re focusing on the kinds of systems that businesses may use for their own employees. 

Issues 

The usage of facial recognition (e.g. by law enforcement) has had its share of criticism in recent years. However, the thought of businesses using a camera to continuously film an employee, even if it may be for security purposes, such as continuous authentication, raises several serious issues and concerns. For example: 

– An invasion of privacy. With constant surveillance, employees may feel that their privacy is being violated. Cameras can capture not only work-related activities but also personal moments, which may lead to discomfort and a sense of being micromanaged. Cameras might inadvertently record personal or sensitive information, such as confidential discussions, which could be accessed or potentially misused. 

– The effect on employee trust and morale. Continuous filming can create an atmosphere of distrust between employees and employers. Workers may feel they are being monitored for reasons beyond security, leading to an atmosphere of fear, plus a decrease in morale and engagement (and ‘quiet quitting’). 

– Psychological stress. Constant camera surveillance can lead to stress or anxiety among employees, affecting their overall well-being and productivity, which could obviously be counterproductive for the company. 

– Data security and misuse. For example, video recordings of employees can contain sensitive biometric data, which, if compromised through a data breach, could have serious consequences. Biometric data is immutable, i.e. unlike a password, it cannot be changed once stolen. There is also a risk of video footage being misused, either by internal parties or external hackers, and exploited for purposes other than security, such as inappropriate monitoring of behaviour or harassment. 

– Ethical concerns. These could arise if employees are not fully aware of the extent and purpose of the surveillance, or if they feel coerced into accepting it as a condition of employment. Also, filming employees all day can be viewed as excessive (overreach), especially if less invasive alternatives exist. Monitoring behaviour to this degree may cross the ethical boundaries of acceptable workplace practice. 

– Legal implications. Many regions have strict privacy laws (e.g. GDPR in Europe, CCPA in California) that require companies to obtain explicit consent for continuous surveillance and ensure the proportionality and necessity of such measures. Non-compliance could lead to legal consequences, fines, or lawsuits for a business. In some countries (or US states, for example) there are labour laws that protect employees from invasive workplace monitoring. Continuous surveillance may violate these protections if it is deemed too intrusive.  

– The potential for bias and discrimination. Among other things, this could include algorithmic bias. If the continuous authentication system relies on facial recognition, there is a risk of bias against certain groups, such as racial minorities or those with disabilities, due to known issues with facial recognition accuracy across diverse demographics. Also, employees may worry that the surveillance data could be used for purposes other than security, such as evaluating performance, which could lead to discrimination or unfair treatment. 

– Technical reliability, e.g. false positives/negatives. Continuous authentication systems relying on cameras may fail, leading to false positives (unauthorised users being granted access) or false negatives (legitimate users being denied access). This can disrupt work and erode trust in the system. 
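The false positive/negative trade-off described above is usually quantified as a false accept rate (FAR) and a false reject rate (FRR), which move in opposite directions as the decision threshold changes. A minimal sketch of that trade-off, using made-up illustrative scores rather than figures from any real system:

```python
def error_rates(genuine_scores, imposter_scores, threshold):
    """Fraction of genuine users wrongly rejected (FRR) and of imposters
    wrongly accepted (FAR) at a given similarity threshold."""
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in imposter_scores) / len(imposter_scores)
    return frr, far

genuine = [0.92, 0.85, 0.65]    # similarity scores for the legitimate user
imposters = [0.30, 0.55, 0.82]  # similarity scores for other people

# A lenient threshold rarely locks out the real user but lets an imposter through;
# a strict one does the opposite. Tuning this balance is the hard part.
lenient = error_rates(genuine, imposters, 0.6)
strict = error_rates(genuine, imposters, 0.9)
```

In a workplace deployment, every false reject is a worker locked out of their own session, which is one reason unreliable systems erode trust so quickly.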

While continuous authentication aims to enhance security, using cameras to film employees all day raises significant challenges. Companies need to carefully balance security needs with privacy rights, ethical considerations, and legal compliance to avoid potential negative consequences. For example, in 2020, H&M (the Swedish multinational clothing retailer) was fined €35.3 million by the Hamburg Data Protection Authority in Germany for violating GDPR due to excessive and invasive surveillance of employees. 

What Is ‘Emotional Analysis’ And Why Is It Causing Concern? 

Some continuous authentication software can now use ‘emotional analysis’. This refers to the use of AI to detect and interpret human emotions through cues like facial expressions, voice tones, or body language. Its purpose is to monitor and assess workers’ emotional states, such as stress, engagement, or satisfaction. It could help a business by providing insights into employee well-being and productivity, identifying signs of burnout or disengagement, and enabling management to respond proactively to improve workplace morale, increase efficiency, and enhance overall performance through better support and tailored interventions. 

However, its usage also raises significant concerns around privacy, accuracy, and bias. The technology is often inaccurate, particularly across different demographics, leading to misinterpretation of emotions. Its use in workplaces for employee monitoring can create a sense of invasion and stress, eroding trust and morale. There are also ethical and legal issues, with fears of misuse for micromanagement or even manipulation of behaviour, making its widespread deployment highly controversial. 

Susannah Copson, legal and policy officer with the civil liberties and privacy campaigning organisation Big Brother Watch, has described ‘emotion recognition technology’ as “pseudoscientific AI surveillance” and has called for it to be banned. 

What Do Rights Organisations Say? 

Big Brother Watch is strongly opposed to the unchecked growth of workplace surveillance tools, calling them an invasion of privacy, harmful to employee well-being, and in need of stricter regulation to protect workers’ rights. Big Brother Watch recently held an event at the Labour Party conference in the UK to launch its report on workplace surveillance, highlighting its increasing use by employers and its negative effects on employees.  

Big Brother Watch argues that workplace surveillance technologies, such as keystroke logging and AI-powered emotional analysis, invade employee privacy, erode trust, enable micromanagement, and harm mental health, potentially violating privacy laws like GDPR, while calling for stricter regulation to protect workers’ rights. 

How Much Has Workplace Surveillance Increased? 

A recent report by ExpressVPN, titled the “2023 State of Workplace Surveillance,” highlights a significant increase in workplace surveillance. Some key findings include: 

– 78 per cent of employers are using some form of employee monitoring tools in 2023, up from 60 per cent before the COVID-19 pandemic. 

– 57 per cent of employers implemented new surveillance tools specifically due to remote work conditions caused by the pandemic. 

– 41 per cent of companies now use software to track keystrokes, screenshots, or record the activity of employees’ screens. 

– 32 per cent of employers monitor employee emails and messages, while 25 per cent track employee location using GPS or IP data. 

A Growing Market 

This surge in monitoring reflects the growing reliance on digital surveillance tools to manage remote workforces. Regarding the market for identity and access management (IAM) and cybersecurity solutions, Gartner reported in its “Market Guide for User Authentication” that continuous authentication is gaining traction due to increasing concerns about cybersecurity and the limitations of traditional login methods.  

A MarketsandMarkets report has also noted that the global user authentication market, which includes continuous authentication solutions, is projected to grow from $13.9 billion in 2022 to $25.2 billion by 2027. The 2022 Verizon Data Breach Investigations Report also noted that 61 per cent of breaches involved stolen credentials, a finding that has pushed companies to adopt continuous authentication as a preventive measure. 
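As a quick back-of-the-envelope check on those market figures (not a number from the report itself), growth from $13.9 billion in 2022 to $25.2 billion in 2027 implies a compound annual growth rate of roughly 12 to 13 per cent:

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by a start value, an end value,
    and the number of years between them."""
    return (end / start) ** (1 / years) - 1

# $13.9bn (2022) to $25.2bn (2027) over five years: roughly 12.6% a year
growth = cagr(13.9, 25.2, 5)
```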

What Can Employees Do? 

If employees are concerned about continuous camera monitoring such as that used with some continuous authentication systems, the (realistic) options they have are to: 

– Review company policies to understand the purpose and limits of the surveillance. 

– Raise concerns with HR or management to request less invasive alternatives, like fingerprint or password-based methods. 

– Seek legal advice if monitoring violates privacy laws, or report it to a regulatory body like the ICO (in the UK).  

– Consult with a union to negotiate privacy protections, if applicable. 

– Document their issues for potential disputes and familiarise themselves with their rights under local privacy and employment laws. 

What Does This Mean For Your Business? 

The rise of continuous authentication software, particularly that using facial recognition and behavioural biometrics, highlights the tension between advancing cybersecurity and respecting employee privacy.

While the primary aim of these systems may be to offer ongoing, seamless security by monitoring users throughout their work sessions, the methods employed, such as continuous video surveillance or behavioural tracking, have raised significant ethical and privacy concerns. The promise of enhanced protection against cyberattacks, session hijacking, and insider threats is compelling, especially in industries where data security is paramount. However, the potential downsides of this technology can’t be ignored. 

One of the key concerns is the invasion of privacy. Employees may feel uncomfortable or even violated if they know that cameras or other tracking mechanisms are monitoring their every move. The potential for these systems to inadvertently capture non-work-related activities, or even sensitive personal interactions, adds to the unease. Continuous surveillance risks creating an atmosphere of distrust between employers and employees, fostering a sense of being constantly watched, which could have a detrimental effect on morale. In extreme cases, this might lead to disengagement, lower productivity, or even a rise in ‘quiet quitting,’ as employees withdraw emotionally from their work due to feeling over-monitored. 

Also, there are concerns about the psychological impact of constant surveillance. The knowledge that a camera or biometric system is perpetually tracking your behaviour can lead to stress, anxiety, and a feeling of being under perpetual scrutiny. This could, paradoxically, undermine the productivity gains that continuous authentication aims to protect. Employees working under these conditions might find it difficult to focus or perform optimally, especially if they perceive the surveillance as intrusive or excessive. 

In addition to these privacy and security concerns, there are ethical and legal considerations. In many jurisdictions, privacy laws require companies to obtain explicit consent for such monitoring and ensure that the measures are proportionate and necessary. Failure to comply with these regulations could lead to hefty fines or legal action (as seen in the case of H&M’s €35.3 million fine in Germany).  

There are also the issues of bias and discrimination. Facial recognition technologies have been shown to be less accurate across diverse demographic groups, potentially leading to unfair treatment of certain employees. If continuous authentication systems generate false positives or negatives due to these biases, it could create additional hurdles for employees from minority groups, further entrenching workplace inequalities. There is also the risk that the data gathered could be used for purposes beyond security, such as monitoring productivity or evaluating performance, which could lead to unfair assessments or discrimination. 

Despite these challenges, it is clear why businesses are keen to explore continuous authentication technology. The ever-present threat of cyberattacks, data breaches, and insider threats has made it essential for organisations to find new ways to secure their digital assets. Continuous authentication offers a promising solution by providing ongoing verification without disrupting the user experience. However, businesses must tread carefully, ensuring that these systems are deployed in ways that respect employee privacy, comply with legal requirements, and avoid creating a toxic work environment. 

As continuous authentication (seemingly inevitably) becomes more widespread, it will be crucial for businesses to engage in transparent communication with employees about how these systems work, why they are being implemented, and what safeguards are in place to protect their privacy. Offering alternative, less invasive methods, such as fingerprint recognition or password-based systems, may help alleviate some concerns. Ultimately, the successful adoption of continuous authentication will depend on striking the right balance between robust security measures and the protection of employee rights and well-being.