The age verification (AV) landscape appears to be on the verge of a seismic shift towards technology-supported decision-making and, in the longer term, fully automated AI decision-making. The market is now crowded with technology providers offering impressive solutions, including age estimation (AE) technologies and digital ID.
As the government announces an upcoming consultation on the Licensing Act 2003, it feels as though we are sitting on the precipice of change in the world of AV. We expect that, following the consultation, the acceptance of digital ID for alcohol sales will be ratified, and if so, large-scale adoption of digital ID is likely to commence. We know from our research that 94% of young people would prefer a digital ID solution and that the biggest driver for adopting the technology is acceptance across all use cases.
The Telegraph recently published a headline which we can only assume was aimed at stirring a degree of public hysteria: “PO scandal firm to sell face scanners”. But now is not the time for hysteria; the technology not only works, it also has huge potential to make our lives easier. Now is the time for learning, for reflection, and for ensuring that the actions we take today mean the future is not a repeat of the past.
For us, the Horizon scandal acts as a cautionary tale: new technologies can remove perceived accountability from people, and where those using the technology don’t understand the mechanisms that drive its results, there lies a risk.
We don’t believe that everyone involved in this sad event had malicious intentions; we do, however, believe that some simply weren’t educated enough about the technology being used to perceive the errors it might make.
The final text of the EU AI Act states that AI systems which would otherwise be classified as high risk may not be treated as such where "the task performed by the AI system is intended to improve the result of a previously completed human activity…Considering these characteristics, the AI system only provides an additional layer to a human activity with consequently lowered risk"; as such, the regulation applied to those systems may be significantly reduced.
In light of recent events, is it always true that adding an additional layer to a human activity lowers the risk? Or does it mean that humans apply less rigour to the task? And if that is so, what degree of scrutiny and accuracy must be applied to the additional layer before we rely on it? We want to define the benchmarks which will give the sector confidence. With 17 years of human AV performance data to draw on comparatively, we are perfectly placed to create a robust, data-driven performance metric which will clearly demonstrate whether the level of risk is reduced through the use of the technology. There are many benefits to the technology being considered low risk by regulators: it may remove a lot of red tape. Equally, if not self-regulated effectively, it has the potential to inadvertently increase risk for the licence holder, retailer, or content provider by removing accountability from people.
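To make that comparison concrete, here is a minimal sketch in Python of one way such a metric could be expressed: the relative reduction in a failure rate (for example, failure to challenge) when a technology-assisted process is measured against a human-only baseline. The figures used are purely illustrative placeholders, not Serve Legal data or results from any provider.

```python
# Illustrative sketch only: the rates below are hypothetical placeholders,
# not real audit data.

def relative_risk_reduction(human_failure_rate: float,
                            assisted_failure_rate: float) -> float:
    """Fraction by which the failure rate (e.g. failure-to-challenge) falls
    when a technology-assisted process replaces the human-only baseline."""
    if not 0 < human_failure_rate <= 1:
        raise ValueError("human_failure_rate must be in (0, 1]")
    return (human_failure_rate - assisted_failure_rate) / human_failure_rate

# Hypothetical example: a 20% human-only failure rate versus 5% with an
# age estimation check in place.
rrr = relative_risk_reduction(0.20, 0.05)
print(f"Relative risk reduction: {rrr:.0%}")  # -> 75%
```

A benchmark built along these lines would only show a genuinely lowered risk where the assisted rate, measured under the same test conditions as the human baseline, is reliably below it.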
If we have learnt anything, it must be that systems which impact a person’s rights must be rigorously and independently tested; we must trial them thoroughly, listen to the feedback and concerns of end users, and make clear from the outset where accountability for correct decision-making lies. It should be recognised that many of the big players in the space, such as Yoti, are already publishing both their internal testing results and independent results obtained through testing frameworks such as the one provided by the National Institute of Standards and Technology (NIST).
Here at Serve Legal, we are looking to further support the sector by addressing the questions posed in this article; we’re working with both the providers of new AV technology and the retailers implementing it to ensure potential pitfalls are avoided.