Royal Society Warns Of Opaque AI Research Tools' Risks In Newly Released Report
January 12, 2025
Tech

The Royal Society has warned of the potential dangers of relying too heavily on opaque artificial intelligence (AI) systems in scientific research.

The comprehensive report, titled ‘Science in the Age of AI,’ delves into the transformative potential of machine learning and large language models while highlighting the risks they pose to the reliability and trustworthiness of scientific findings.

AI tools have already begun revolutionising various fields, from drug discovery to climate modelling, by facilitating tasks ranging from statistical analysis to generating insights from vast datasets.

However, the report, released on May 28, 2024, cautions that the complex and ‘black box’ nature of advanced machine-learning models often renders their outputs unexplainable to researchers, raising concerns about the reproducibility and accuracy of AI-based studies.

Vigilance Urged

While AI systems can offer valuable insights, the growing body of irreproducible research based on AI and machine learning underscores the need for vigilance.

The report underlines that unreliable or untrustworthy AI technologies jeopardise the integrity of scientific inquiry and erode public trust in research findings.

To address these challenges and harness the full potential of AI in research, the report proposes several recommendations:

  1. Establishing open science, environmental, and ethical frameworks for AI-based research to ensure that findings are accurate, reproducible, and of benefit to society.
  2. Investing in regional and cross-sector AI infrastructure, akin to CERN, to facilitate rigorous research across scientific disciplines and support non-industry researchers.
  3. Promoting AI literacy among researchers and fostering collaboration with developers to enhance the accessibility and usability of AI technologies.

The report, led by an expert working group comprising academics and industry figures, draws on evidence reviews, interviews, and workshops to examine emerging applications and trends in AI-supported research, safety risks, and the patent landscape.

Professor Alison Noble CBE FREng FRS, Vice President of the Royal Society and Technikos Professor of Biomedical Engineering at the University of Oxford, emphasised the importance of ensuring transparency in AI systems’ development, particularly in healthcare research.

Dr Peter Dayan, FRS, Director of the Max Planck Institute for Biological Cybernetics, highlighted the need for equitable access to high-quality data, processing power, and researcher skills in light of AI’s transformative potential.

The report also highlights the rapid evolution of the AI landscape, with an analysis of international patent filings revealing a significant surge in AI-related patents in recent years. While China and the US currently lead in patent filings, the UK emerges as a promising player in this domain, poised for growth and innovation.

As the scientific community grapples with the implications of AI-driven research, the Royal Society’s report serves as a timely call to action, urging stakeholders to navigate AI’s transformative power with caution and foresight.

Featured image: The growing body of irreproducible research based on AI and machine learning underscores the need for vigilance. Credit: Jason Goodman

News Desk 2

News Desk 2 produces the latest news for the Middle East region, with a key focus on the six GCC nations: UAE, Saudi Arabia, Qatar, Bahrain, Kuwait, and Oman. News Desk 2: press@menews247.com