Health

The Health Misinformation Monitor: AI Chatbots as Health Information Sources

September 20, 2024

Deeper Dive: How AI Chatbots’ Handling of Health Misinformation Is Changing

[Image: Screenshots of three different chatbot models responding to a question about ivermectin as an effective COVID-19 treatment.]

While some research suggests AI chatbots are just as accurate as medical professionals in answering health queries, concerns about biased or inaccurate information persist. To enhance accuracy and reliability, AI chatbots are regularly updated to improve their ability to identify and correct misinformation. Over the past year, developers have trained AI models on larger and more diverse data sets, improving the models’ ability to cross-reference information from multiple reliable sources to verify claims and detect inconsistencies.

While some platforms focus on user experience and management tools, the general trend is to use advanced AI techniques to better understand context, protect data accuracy, and provide more reliable information. Both Google and Microsoft have recently renamed their AI chatbots to reflect these improvements: Google’s Bard is now called Gemini, and Microsoft’s Bing Chat has been renamed Copilot. OpenAI has also upgraded ChatGPT, including new real-time voice interaction, which Axios notes could make people more comfortable using the AI chatbot for health information.

To understand how three well-known AI chatbots – ChatGPT, Google Gemini (formerly Google Bard), and Microsoft Copilot (formerly Bing Chat) – have changed in how they handle health-related questions, KFF’s Hagere Yilma asked each of the chatbots in November 2023, March 2024, and again in August 2024 whether the 11 false claims examined in the KFF Health Misinformation Tracking Poll were true or false. Below is a summary of her observations (full responses from AI chatbots can be found here). The observations shared here provide a glimpse into the accuracy and reliability of these chatbots, but they only reflect the experience of a single chatbot user and are not generalizable scientific research. Chatbots may give different answers depending on the individual user, the questions asked, and updates to the AI models.
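The repeated-wave design described above – the same false claims, put to the same chatbots, at three points in time – can be organized as a small logging harness. The sketch below is a hypothetical illustration, not KFF's actual tooling: `fake_chatgpt`, the claim text, and the wave labels are stand-ins, and a real harness would replace the stub with calls to each vendor's API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Observation:
    wave: str      # survey wave, e.g. "2023-11"
    model: str     # chatbot name, e.g. "ChatGPT"
    claim: str     # the false claim being tested
    response: str  # the chatbot's reply, stored verbatim for later coding

def run_wave(wave: str, models: dict[str, Callable[[str], str]],
             claims: list[str]) -> list[Observation]:
    """Ask every model whether every claim is true or false; log raw replies."""
    return [
        Observation(wave, name, claim,
                    ask(f"Is the following claim true or false? {claim}"))
        for name, ask in models.items()
        for claim in claims
    ]

# Stub standing in for a real API client.
def fake_chatgpt(prompt: str) -> str:
    return "There is still some debate about this; more research is needed."

claims = ["Ivermectin is an effective treatment for COVID-19."]
log = run_wave("2023-11", {"ChatGPT": fake_chatgpt}, claims)
```

Storing replies verbatim, rather than only a pass/fail judgment, is what makes it possible to compare tone and directness across waves afterward.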

Chatbots Differ in Directness When Addressing False Claims, Often Highlighting Complexity

For the most part, each chatbot pointed out false claims, but sometimes they explained that a statement’s accuracy was more complicated rather than simply calling it false. When we first tested the chatbots, both Google Gemini and Microsoft Copilot directly refuted false claims, while ChatGPT tended to approach these claims with more caution. Rather than definitively labeling some claims as false, ChatGPT noted the complexity of the issue and the need for further research. For example, when asked whether the claim that ivermectin is an effective COVID-19 treatment is true, ChatGPT said that there is still some debate about ivermectin’s effectiveness and suggested that more research is needed, without directly calling the statement false. When we revisited these chatbots in March and August 2024, ChatGPT had become more assertive, labeling more claims as false, but it still labeled two of the statements about firearms as “not entirely accurate” or “complex” rather than outright refuting them. In March 2024, Copilot also labeled the same two statements about firearms as “not entirely accurate” or “lacks conclusive evidence.”
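The spectrum of directness described above – an outright “false” at one end, “complex” or “not entirely accurate” at the other – can be made concrete with a crude keyword rule over stored replies. This is a hypothetical sketch, not the coding scheme used in the article; the keyword lists are assumptions chosen to match the example phrasings quoted here.

```python
def classify_verdict(response: str) -> str:
    """Bucket a chatbot reply as refuted, hedged, or unclear (crude heuristic)."""
    text = response.lower()
    if any(k in text for k in ("is false", "no evidence", "has been debunked")):
        return "refuted"
    if any(k in text for k in ("complex", "debate", "more research",
                               "not entirely accurate", "lacks conclusive")):
        return "hedged"
    return "unclear"

print(classify_verdict("This claim is false and has been debunked."))   # refuted
print(classify_verdict("The issue is complex; more research is needed."))  # hedged
```

A keyword rule like this will misfire on nuanced replies, which is why human review of the full responses (as in the article's approach) remains the reference point.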

Challenges in Citing Sources

The chatbots had different approaches to sharing scientific evidence when supporting their responses. In November 2023 and March 2024, ChatGPT usually mentioned that there is scientific evidence refuting the tested claims but didn’t cite specific studies. For example, when asked if COVID-19 vaccines have caused thousands of deaths in otherwise healthy people, ChatGPT said “The overwhelming evidence from clinical trials and real-world data indicates that the benefits of COVID-19 vaccination in reducing the risk of severe illness, hospitalization, and death far outweigh any potential risks” but did not offer any details about the trials or data it was referring to. On the other hand, Gemini and Copilot cited specific studies as evidence, but Gemini typically did not provide links and sometimes provided inaccurate details about the studies. Copilot provided links, but these sometimes led to third-party summaries instead of the actual research, which could make it difficult for users to verify the information for themselves.

Chatbots’ Use of Public Health References Evolves Over Time

Over time, the chatbots showed notable changes in how they reference public health institutions to support their answers. In 2023, ChatGPT took a cautious approach, only citing specific agencies like the CDC or FDA for COVID or vaccine-related questions. For most other health claims, it would generally suggest consulting trusted sources without naming them. For example, when asked if the Affordable Care Act established a government panel to make decisions about end-of-life care for people on Medicare, ChatGPT mentioned “It’s important to rely on accurate and credible sources when evaluating claims about healthcare policies and to avoid misinformation…” but didn’t cite any credible sources. Google Gemini and Microsoft Copilot, on the other hand, initially referenced specific institutions as trusted sources for most questions in 2023.

By 2024, we observed a shift: ChatGPT began referencing specific institutions across a broader range of health topics, while Gemini shifted to providing only general resource links, and only for some questions. Copilot, however, maintained consistency throughout the entire period, referencing statistics and recommendations from public health organizations while also including links to a broader range of sources, such as news articles, fact-checking resources, research studies, and practice guidelines.

The Bottom Line

While our observations reflect our own limited test and are not generalizable, there are still a few takeaways to consider. AI chatbots can be a convenient starting point for quick health info, thanks to their speed and ease of use. But they’re not perfect or always reliable. Sometimes these tools give misleading information, misrepresent sources, or leave out important context. To be on the safe side, it’s a good idea to double-check chatbot answers by looking at multiple sources. You should also stay informed about system updates, as chatbot responses may change with each update.
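The advice to double-check chatbot answers against multiple sources can itself be mechanized in a small way: put the same question to several chatbots and flag any claim where their verdicts disagree for human review. A minimal sketch, assuming replies have already been reduced to short verdict labels (the model names and labels here are illustrative):

```python
def needs_human_review(verdicts: dict[str, str]) -> bool:
    """True when the consulted models do not all give the same verdict."""
    return len(set(verdicts.values())) > 1

verdicts = {"ChatGPT": "hedged", "Gemini": "refuted", "Copilot": "refuted"}
print(needs_human_review(verdicts))  # True: the models disagree, so verify elsewhere
```

Agreement among chatbots is no guarantee of accuracy, of course – they may share training-data blind spots – so this only prioritizes which answers most urgently need checking against primary sources.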
