Scientific systems seen as too slow to address digital technology harms

Experts issue a warning
Two leading researchers state that the rapid evolution of digital technologies, including artificial intelligence and social media, is outpacing the scientific systems meant to evaluate their risks.
Writing in the journal Science, Dr Amy Orben of the University of Cambridge and Dr J. Nathan Matias of Cornell University argue that the current infrastructure for studying the public health effects of technology is no longer fit for purpose, leaving society vulnerable to unchecked harms.
The researchers say that while big tech companies continue to roll out new technologies to billions of users, the burden of assessing their safety is outsourced to under-resourced independent scientists. This, they contend, has created a broken system where companies evade accountability, governments struggle to regulate, and researchers cannot keep up.
Orben and Matias are calling for an urgent overhaul of how scientific evidence on digital harms is produced. They warn that the current system is moving too slowly to protect the public or influence meaningful policy.
“The scientific methods and resources we have for evidence creation at the moment simply cannot deal with the pace of digital technology development,” said Orben, who is based at Cambridge’s MRC Cognition and Brain Sciences Unit. “We must urgently fix this science and policy ecosystem to understand better and manage the potential risks posed by our evolving digital society.”
Users at risk
The pair argue that technology companies routinely launch new tools and platforms before completing safety evaluations, reflecting a ‘deploy first, test later’ culture that places users at risk. Generative AI, which has been released to millions without comprehensive safety checks, is cited as a prime example of this approach.
In contrast to other industries like pharmaceuticals or chemicals, where product safety is tested extensively before public release, tech companies place responsibility for safety research on academics and non-profit institutions. These researchers operate without adequate funding or access to internal company data, which further hampers their efforts to identify and respond to harms.
“Scientists like ourselves are committed to the public good, but we are asked to hold to account a billion-dollar industry without appropriate support for our research or the basic tools to produce good quality evidence quickly,” Orben said.
According to the researchers, this has created a damaging cycle. Tech companies deny researchers access to essential information, underfund studies, and then use the resulting lack of robust evidence to resist regulation. This, in turn, discourages further research and allows companies to continue operating with little oversight.
Matias, who heads Cornell’s Citizens and Technology Lab, said that many digital products are so complex and adaptive that even internal staff may not fully understand how they function. Because digital tools change rapidly in response to user interaction, a study’s findings may already be obsolete by the time it is completed.
“Technology products change daily or weekly, and adapt to individuals,” Matias said. “Scientific research can be out of date by the time it is completed, let alone published.”
This lag creates an opening for companies to challenge science-based regulation, using the absence of ‘causal evidence’ as a reason to delay or deflect responsibility. Matias likened this strategy to tactics used by the oil and chemical industries, which have historically exploited scientific uncertainty to avoid accountability.
‘Minimum viable evidence’ model
To counter this, the researchers propose developing a new ‘minimum viable evidence’ model. This would allow scientists and policymakers to act more quickly by lowering the threshold of proof needed to begin testing safety interventions. In this system, the bar for action would be based on preliminary data and the precautionary principle, especially when companies are unwilling to provide transparency or support research.
Crucially, Orben and Matias recommend shifting from a sequential approach, where evidence must be fully gathered before action, to a parallel model where harm mitigation strategies are tested alongside early data collection. This could include real-time trials of algorithmic changes or digital policy experiments to reduce user risks.
They also propose the creation of harm reporting registries, similar to those used in environmental protection and public health. These registries would collect user-submitted reports of digital harms, providing a more immediate way to track emerging risks from technologies such as AI, social platforms, and recommendation algorithms.
“We gain nothing when people are told to mistrust their lived experience due to an absence of evidence when that evidence is not being compiled,” said Matias.
Such registries could be maintained by public health bodies, academic institutions, or civil society organisations, drawing on models proven effective in toxicology and road safety. Existing systems, such as mortality databases or domestic abuse reports, could also be expanded to include digital factors where relevant.
Expert panels
Another proposed solution is the establishment of expert panels or ‘science courts’ to make rapid evidence assessments and set provisional thresholds for action. These panels could include affected communities and members of the public, ensuring that diverse perspectives are represented in the decision-making process.
The authors point to the success of the ‘Green Chemistry’ movement as a model to emulate. In that field, chemicals are ranked by risk, and the marketplace is encouraged to innovate towards safer alternatives. A similar framework could be applied to digital tools, enabling companies to compete on performance, safety, and ethical design.
“Causal evidence of technological harms is often required before designers and scientists are allowed to test interventions to build a safer digital society,” Orben said. “Yet intervention testing can be used to scope ways to help individuals and society, and pinpoint potential harms in the process.”
The researchers argue that a fundamental shift is needed in how digital safety science is structured and supported. This includes building agile, well-funded, and publicly accountable systems that can operate as quickly as the technologies they try to understand.
Orben and Matias stress that as technologies like AI become further embedded in everyday life, from healthcare to education, the cost of scientific inertia will only increase. Without change, they warn, the public will become increasingly exposed to untested tools and platforms, while policymakers will remain ill-equipped to intervene.
“When science about the impacts of new technologies is too slow, everyone loses,” Matias concluded.
The report underscores growing concern among academics and digital rights experts that the current model of technological governance is insufficient to keep up with the demands of a fast-moving digital age. As debate over AI safety, algorithmic bias, and mental health impacts continues, the need for reformed science-policy frameworks is becoming more urgent.
For now, Orben and Matias are urging policymakers to prioritise reforms that enable quicker, more transparent evaluation of digital harms before the gap between innovation and regulation widens further.
Image: Big tech companies introduce new technologies to billions of users, while the responsibility of evaluating their safety is passed on to under-resourced independent scientists. Credit: Nataliya Vaitkevich