
Are We Trusting AI Too Much? University of Surrey Study Calls for Greater Accountability

Published on: 24 Feb, 2025
Updated on: 26 Feb, 2025

As artificial intelligence becomes increasingly embedded in our daily lives—from banking and healthcare to crime detection—a new study from the University of Surrey warns that we may be placing too much trust in systems we don’t fully understand.

Researchers are calling for greater transparency and accountability in AI models, highlighting the potential risks of “black box” decision-making, where algorithms make crucial choices without clear explanations.

The study comes at a time when AI-driven systems are making high-stakes decisions that directly impact people’s lives. Misdiagnoses in healthcare, incorrect fraud alerts in banking, and errors in crime detection have already demonstrated the serious consequences of flawed AI decision-making.

One of the key challenges, researchers say, is that many AI models cannot explain why they make specific decisions. Fraud detection, for example, relies on datasets where only 0.01 per cent of transactions are fraudulent, making it difficult for AI to learn accurate fraud patterns. While these models can detect suspicious activity with high precision, they often fail to justify their conclusions, leaving users uncertain and potentially vulnerable.
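The scale of that imbalance is easy to see with a short, purely illustrative sketch (in Python; the dataset and the "never flag" model below are hypothetical examples, not material from the Surrey study). A system that simply never raises an alert still scores 99.99 per cent accuracy on data where 0.01 per cent of transactions are fraudulent, while catching no fraud at all:

# Illustrative only: the figures mirror the 0.01 per cent fraud rate cited
# above; the dataset and the "never flag" model are hypothetical.

def evaluate(labels, predictions):
    """Return overall accuracy and recall on the fraud (positive) class."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    fraud_total = sum(labels)
    fraud_caught = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 1)
    return correct / len(labels), (fraud_caught / fraud_total if fraud_total else 0.0)

# 1,000,000 transactions, of which 0.01 per cent (100) are fraudulent.
labels = [1] * 100 + [0] * 999_900

# A "model" that never flags anything still looks excellent on accuracy.
never_flag = [0] * len(labels)
accuracy, fraud_recall = evaluate(labels, never_flag)
print(f"accuracy = {accuracy:.4%}, fraud recall = {fraud_recall:.0%}")
# prints: accuracy = 99.9900%, fraud recall = 0%

Headline accuracy figures therefore say very little on their own, which is one reason the researchers argue that each individual alert needs an explanation the user can scrutinise.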

Dr Wolfgang Garn, Senior Lecturer in Analytics at the University of Surrey and co-author of the study, warns that AI systems must become more user-friendly and transparent:

“We must not forget that behind every algorithm’s solution, there are real people whose lives are affected by the determined decisions. Our aim is to create AI systems that are not only intelligent but also provide explanations to people—the users of technology—that they can trust and understand.”

The SAGE Framework: A New Approach to AI Transparency

To address these concerns, Surrey’s research team has proposed a new framework called SAGE (Settings, Audience, Goals, and Ethics), designed to ensure that AI-generated explanations are clear, meaningful, and relevant to users. The framework encourages AI developers to consider who will be using the system, what information they need, and how ethical considerations should shape AI decision-making.
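To make the framework concrete, a developer might attach SAGE-style context to every explanation a system produces. The sketch below is purely illustrative, written in Python, and the field names are assumptions drawn from the SAGE acronym rather than an implementation published by the Surrey team:

# Minimal sketch only: field names are assumptions based on the SAGE acronym
# (Settings, Audience, Goals, Ethics), not code from the University of Surrey paper.
from dataclasses import dataclass

@dataclass
class SageContext:
    settings: str   # where and under what conditions the system is deployed
    audience: str   # who will read the explanation (clinician, customer, auditor)
    goals: str      # what the explanation must help that audience do
    ethics: str     # constraints such as fairness, privacy or accountability

def explain_decision(decision: str, reason: str, context: SageContext) -> str:
    """Pair a model's decision with a plain-language reason framed for its audience."""
    return (f"Decision: {decision}\n"
            f"Why: {reason}\n"
            f"Written for: {context.audience}, to help them {context.goals}\n"
            f"Ethical constraint: {context.ethics}")

context = SageContext(
    settings="retail banking fraud screening",
    audience="account holder",
    goals="understand why a payment was declined and how to challenge it",
    ethics="explain without disclosing other customers' data",
)
print(explain_decision("payment declined",
                       "unusual merchant and location for this account",
                       context))

The point of the exercise is that the explanation is shaped by who receives it and what they need to do with it, rather than by whatever internal features the model happened to use.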

Alongside SAGE, the study also advocates for the use of Scenario-Based Design (SBD), a technique that examines real-world user interactions to determine what people truly need from AI explanations. By stepping into the shoes of end-users, researchers and developers can build AI models that prioritise human understanding and trust.

Dr Garn emphasises that AI development must evolve to focus on human-centric design principles. He believes that engaging with industry specialists, policymakers, and everyday users is essential to ensuring that AI systems are safe, ethical, and reliable.

“We also need to highlight the shortcomings of existing AI models, which often lack the contextual awareness necessary to provide meaningful explanations. By identifying and addressing these gaps, our paper advocates for an evolution in AI development that prioritises user-centric design principles.

“The path to a safer and more reliable AI landscape begins with a commitment to understanding the technology we create and the impact it has on our lives. The stakes are too high for us to ignore the call for change.”

The University of Surrey’s research underlines the urgent need for AI to provide explanations in a way that all users can understand, whether through text-based descriptions or graphical representations. Without this shift, AI risks alienating users and making critical errors that could have far-reaching consequences.

As AI continues to shape the modern world, researchers argue that ensuring transparency and accountability is not just a technological challenge but a moral obligation. With the implementation of frameworks like SAGE and the promotion of user-first AI development, the study suggests that the future of artificial intelligence could be both powerful and responsible—but only if urgent changes are made now.
