
New study: GenAI and Medical References

Miena Amiri

5 Apr 2024

Generative AI has shortcomings when dealing with medical content. We show you how MedPro addresses these challenges.

The adoption of large language models (LLMs) like ChatGPT in healthcare is growing rapidly, with one in ten doctors already using these tools in their daily work. However, a recent study from Stanford University raises concerns about the reliability and accuracy of the information these models provide, especially in the medical field.


The study found that even the most advanced models, such as GPT-4 with Retrieval Augmented Generation (RAG), struggle to support a significant portion of the medical statements they generate. This poses a serious challenge for Medical Affairs teams, who need to base their decisions on accurate, evidence-backed information.


At AVAYL, we’ve developed MedPro, an AI-based tool designed specifically to meet the needs of Medical Affairs professionals. Unlike general LLMs, MedPro is built to address the shortcomings highlighted in the Stanford study, offering a more reliable and trustworthy solution.


1. Relying on Trusted Sources

A major issue with general LLMs is their tendency to produce unsupported or misleading information. The Stanford study found that up to 30% of statements from GPT-4 with RAG were unsupported. MedPro tackles this problem by drawing exclusively on trusted, peer-reviewed medical sources and curated databases, so the information it provides is both accurate and verifiable.
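To make the idea concrete, here is a minimal sketch of what source-restricted retrieval can look like: unvetted material is excluded before any query runs, so the model can only draw on the curated index. The corpus, fields, and keyword scoring are illustrative assumptions, not MedPro's actual implementation:

```python
# Sketch: retrieval restricted to a curated, peer-reviewed corpus.
# All sources and the scoring function are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class Source:
    source_id: str
    title: str
    peer_reviewed: bool
    text: str

# Hypothetical corpus: only vetted entries will ever be indexed.
CURATED_CORPUS = [
    Source("pmid-0001", "Example RCT on drug X", True,
           "In a randomized controlled trial, drug X reduced symptoms."),
    Source("blog-0001", "Unvetted health blog post", False,
           "Some people say drug X cures everything."),
]

def build_index(corpus):
    """Index only peer-reviewed sources; everything else is excluded up front."""
    return [s for s in corpus if s.peer_reviewed]

def retrieve(query, index, top_k=3):
    """Naive keyword-overlap scoring; a real system would use embeddings."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(s.text.lower().split())), s) for s in index]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [s for score, s in scored[:top_k] if score > 0]

index = build_index(CURATED_CORPUS)
for source in retrieve("randomized trial drug X", index):
    print(source.source_id, source.title)
```

Because the filter happens at indexing time rather than at answer time, unsupported material simply cannot appear in the retrieval results.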


2. Ensuring Transparency and Traceability

Traditional LLMs often lack transparency in how they generate information. In contrast, MedPro provides clear, traceable references for every claim it makes. Users can click on footnotes in the generated text to directly access the original sources, with relevant sections highlighted. This transparency allows Medical Affairs teams to verify information easily, ensuring that their decisions are grounded in solid evidence.
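As an illustration of how a generated claim can stay linked to its evidence, here is a small sketch in which each footnote resolves to a character span inside the original source, so a UI can highlight the supporting passage. The data model, names, and offsets are hypothetical, not MedPro's actual design:

```python
# Sketch: claim-to-source traceability. Each citation points at a
# character span in the source document, which a UI can highlight.

from dataclasses import dataclass

@dataclass
class Citation:
    source_id: str
    start: int  # character offset of the supporting passage
    end: int

@dataclass
class Claim:
    text: str
    citations: list

# Hypothetical source store keyed by document ID.
SOURCES = {
    "pmid-0001": "In a randomized controlled trial, drug X reduced symptoms by 40%.",
}

claim = Claim(
    text="Drug X reduced symptoms by 40% in an RCT. [1]",
    citations=[Citation("pmid-0001", 34, 65)],
)

def resolve(citation):
    """Return the exact passage a footnote points to."""
    doc = SOURCES[citation.source_id]
    return doc[citation.start:citation.end]

for i, c in enumerate(claim.citations, start=1):
    print(f'[{i}] {c.source_id}: "{resolve(c)}"')
```

The key design point is that a citation is not just a document ID but a span, which is what makes one-click highlighting of the relevant section possible.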


3. Tailored for Medical Affairs

General-purpose LLMs can struggle with complex medical queries that require deep understanding and context. MedPro is designed specifically for Medical Affairs, ensuring that the information it generates is precise, relevant, and fully supported by credible sources. Whether you’re drafting a publication, answering a medical inquiry, or preparing a training session for your colleagues, MedPro provides reliable and accurate information.


4. Reducing the Risk of Misinformation

The Stanford study also highlights that LLMs are especially prone to error when handling less structured, layperson-style inquiries, which can lead to misinformation. MedPro mitigates this risk by focusing on high-quality medical sources and ensuring that all claims are backed by verifiable evidence. This reduces the chances of spreading incorrect information, protecting both healthcare providers and patients.
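One simple way such a safeguard can be enforced is a citation gate that withholds any generated statement lacking a resolvable reference. The sketch below is an assumption about how such a check might look, not a description of MedPro's actual pipeline:

```python
# Sketch: a citation gate. Statements without a resolvable reference
# are flagged instead of being presented as fact.

def gate_claims(claims, sources):
    """Split generated claims into supported and flagged lists."""
    supported, flagged = [], []
    for claim in claims:
        if claim["citations"] and all(c in sources for c in claim["citations"]):
            supported.append(claim)
        else:
            flagged.append(claim)  # withheld or routed to human review
    return supported, flagged

sources = {"pmid-0001", "pmid-0002"}
claims = [
    {"text": "Drug X reduced symptoms by 40%.", "citations": ["pmid-0001"]},
    {"text": "Drug X cures everything.", "citations": []},
]

supported, flagged = gate_claims(claims, sources)
print("supported:", [c["text"] for c in supported])
print("flagged:", [c["text"] for c in flagged])
```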


The Future of AI in Medical Affairs

While LLMs have made significant advances, the Stanford study makes it clear that there is still work to be done to ensure their reliability in the medical field. At AVAYL, we believe AI can revolutionize Medical Affairs, but only if it’s developed with a strong focus on evidence-based practices.


MedPro is our answer to these challenges—a tool that combines AI’s power with the rigor of medical science. By providing a reliable, transparent, and tailored solution for Medical Affairs, MedPro is helping to ensure that AI becomes a trusted partner in healthcare, supporting better outcomes for patients and professionals alike.
