
The "black box" problem in Artificial Intelligence

Miena Amiri

10 Jul 2025

How to create transparency when working with AI

Ever gotten an AI answer to a complex question - and then didn't fully trust it and researched it again yourself? This experience points to a fundamental challenge in healthcare: the "black box" problem of AI.

 

This matters particularly in Medical Affairs, where transparency and traceability aren't just nice-to-haves, but essential requirements.

 


■ The Black Box Problem  

Most AI solutions operate as "black boxes" - we see the input and output, but not the reasoning process in between. In healthcare, understanding the "why" behind AI suggestions is as crucial as the suggestions themselves. Without visibility into the AI's decision-making process, we can't validate its scientific accuracy or regulatory compliance.

 

⛳️ The Single-Step Dilemma

Generic AI tools are designed for quick, one-shot answers, but communicating scientific evidence requires documented reasoning. Each step needs to be traceable and justified with sources - just like a human expert's analysis.

 

💡The Path Forward: Tailored Solutions

What we've learned is clear: while generic AI solutions impress with their speed and breadth, regulated environments like Medical Affairs require a different approach. Success lies in tailored systems that keep a human in the loop and embed AI carefully into multistep workflows, enabling transparency and scientific rigor.
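As a rough illustration of what such a multistep, traceable workflow could look like in code, here is a minimal sketch: each step must cite its sources, and a human reviewer explicitly signs off before the chain counts as complete. All names (`Step`, `Workflow`, the example step labels) are hypothetical, not part of any specific product described here.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One documented step in the workflow, with its evidence."""
    name: str
    output: str
    sources: list            # citations backing this step
    approved: bool = False   # human-in-the-loop sign-off

@dataclass
class Workflow:
    steps: list = field(default_factory=list)

    def add(self, name: str, output: str, sources: list) -> None:
        # every step must cite at least one source to stay traceable
        if not sources:
            raise ValueError(f"step '{name}' has no supporting sources")
        self.steps.append(Step(name, output, sources))

    def audit_trail(self) -> list:
        # the full reasoning chain: step name, evidence, review status
        return [(s.name, s.sources, s.approved) for s in self.steps]

    def fully_reviewed(self) -> bool:
        # only a chain where every step is human-approved is release-ready
        return all(s.approved for s in self.steps)

# Hypothetical usage: two documented steps, one expert sign-off so far
wf = Workflow()
wf.add("literature_screen", "12 relevant publications found", ["database query, Jul 2025"])
wf.add("evidence_summary", "summary drafted from screened publications", ["screened publication list"])
wf.steps[0].approved = True  # medical expert reviews the first step
```

The point of the sketch is the contrast with one-shot tools: the output here is not just an answer but an audit trail, and `fully_reviewed()` stays `False` until every step has been checked by a person.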


Want to dive deeper? Read our full position paper, co-authored with MILE and MSL Society.


