AI-generated Citations: Lessons from the White House’s MAHA report
✅ Paper Type: Free Essay | ✅ Subject: Referencing |
✅ Wordcount: 2399 words | ✅ Published: 03 Sep 2025 |

A recently published White House commission report—titled Make America Healthy Again (MAHA)—has come under scrutiny for including fabricated academic citations. The Department of Health and Human Services touted this report on U.S. children’s health as having a “clear, evidence-based foundation,” yet it cited fake publications that did not actually exist (Inside Higher Ed, 2025).
Investigative reporting revealed multiple references to studies and papers that no researcher ever wrote. For example, one citation named Columbia University professor Katherine Keyes as author of a study on adolescent mental health—but Keyes confirmed she never wrote such a paper, and no record of it could be found (Inside Higher Ed, 2025).
In total, at least seven citations in the MAHA report were flagged as problematic, including four referencing studies that do not exist at all and three that seriously misrepresented real research (PolitiFact, 2025). Experts quickly noted that these bizarre errors were no coincidence; the patterns of nonexistent titles and distorted findings were hallmarks of AI-generated references (PolitiFact, 2025). The incident has sparked a high-profile controversy, raising questions about the report’s credibility and the processes behind its creation (PolitiFact, 2025).
AI hallucinations: how false citations arise
The MAHA report highlights a known pitfall of generative AI language models: their tendency to “hallucinate” plausible-sounding content, including references. AI systems like ChatGPT are trained to predict fluent text, but they “often fail to ensure that what they’re saying is factual.” In practice, a chatbot may confidently produce a citation that looks scholarly even when no such source exists (University of Missouri Library, 2025; PolitiFact, 2025). Researchers explain that when an AI is prompted to generate an academic reference, it will “make something up” if it cannot find an exact match, especially if pressed to support a specific claim (PolitiFact, 2025).
These fabricated citations typically mimic the structure of real references – for instance, listing authentic-seeming journal names, authors, and Digital Object Identifiers (DOIs) (PolitiFact, 2025). In the MAHA report, many of the false references were formatted perfectly, complete with reputable journals and realistic DOIs (PolitiFact, 2025). Such polish made them look credible at a glance, masking the fact that the cited studies were imaginary. Investigators even spotted telltale technical markers: some broken URLs contained the string “oaicite”, a quirk known to appear in ChatGPT’s auto-generated citations (PolitiFact, 2025).
These clues strongly suggest that generative AI was used to compile the bibliography and that, in doing so, the AI invented citations out of thin air. In short, the MAHA report’s creators appear to have unwittingly relied on an AI tool that fabricated academic sources, illustrating how AI hallucination can translate directly into false citations in scholarly work (PolitiFact, 2025).
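As a small illustration of how such markers can be screened for, the sketch below scans a plain-text bibliography for the “oaicite” string mentioned above. It is a minimal example only: the filename references.txt is a placeholder, and the presence or absence of the marker is a clue worth investigating, not proof either way.

```python
# Minimal sketch: flag bibliography lines containing the "oaicite" marker
# reported in ChatGPT-generated citations. The filename is a placeholder.
import re
from pathlib import Path

OAICITE = re.compile(r"oaicite", re.IGNORECASE)

def flag_suspect_references(path: str) -> list[tuple[int, str]]:
    """Return (line number, text) pairs for lines containing 'oaicite'."""
    suspects = []
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    for lineno, line in enumerate(lines, start=1):
        if OAICITE.search(line):
            suspects.append((lineno, line.strip()))
    return suspects

if __name__ == "__main__":
    for lineno, text in flag_suspect_references("references.txt"):
        print(f"Line {lineno}: possible AI-generated citation -> {text}")
```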
Reference formatting tools vs. source verification
Modern scholars often use automated referencing tools to manage and format citations. Tools like these – including the citation formatter on our website – ensure correct styling of references in APA, MLA, Harvard, or other formats. The MAHA report, for instance, featured technically well-formatted but spurious references, which adhered to academic style and listed seemingly legitimate journals (PolitiFact, 2025).
However, it is critical to understand that formatting tools do not verify content. They diligently arrange whatever information you provide, but they cannot check whether a source actually exists or says what you claim it does. As AI researcher Oren Etzioni observes, AI-generated citations often “replicate the structure of academic references without linking to actual sources” (PolitiFact, 2025). In other words, a reference can look perfectly credible on paper while being completely fraudulent.
This means the onus is on the user to vet each reference’s authenticity. An automated tool might correctly italicise a journal title and place commas in all the right spots, but it will not alert you if the article is made up. Thus, while well-formatted citations are important for professionalism and clarity, no software can replace human judgment when it comes to confirming the validity of a source. Always treat a nicely formatted reference as the appearance of credibility, not proof of it.
To overcome this challenge, specialised tools like Uniwriter have emerged, distinguishing themselves from generic ChatGPT-based services. Uniwriter’s premium essay-writing service includes an integrated reference checker, specifically designed to detect and eliminate AI-generated hallucinations.
Unlike typical citation formatters, Uniwriter actively verifies each source’s authenticity, ensuring your references not only appear credible but truly exist and accurately support your arguments. Investing in such verification protects academic integrity, saving researchers and students from the pitfalls exemplified by the MAHA report controversy.
INFOGRAPHIC: Incidents of Fraudulent AI Citation (see the case summaries below)

Verifying sources and reference authenticity
Given these risks, students and researchers must adopt a habit of verifying every source they plan to cite. University guidelines strongly advise that students “double-check any output retrieved from AI tools, including citations, to verify accuracy” (University of Mary Washington, 2025). In practice, verifying a reference means ensuring the source exists, is accessible, and says what you think it says.
The MAHA incident vividly demonstrates why this step is essential. When journalists scrutinised one suspicious citation from the report, they found that searching for the cited title returned no results, that the supposed DOI led to a “DOI not found” error, and that the journal volume and issue cited actually corresponded to a completely different article (PolitiFact, 2025). These red flags instantly exposed the reference as fake.
Steps to take for reference verification:
To avoid ever being in a similar situation, consider the following verification steps before trusting a citation:
- Search for the source title and author – Use academic databases or Google Scholar to see if the cited work is indexed. If nothing shows up, that’s a major warning sign.
- Check the DOI or URL – Follow the DOI link or reference URL. If it fails to resolve to a valid article (for example, returning a “DOI not found” message), treat the citation as suspect (PolitiFact, 2025). A short script automating this check is sketched after this list.
- Cross-check publication details – Compare the citation’s journal name, publication year, volume, and issue with the journal’s official listings. Any mismatch (e.g. an issue number that doesn’t contain that title or authors) indicates a fabricated or erroneous reference.
- Obtain and read the source – If the reference is real, access the paper through your library or an online repository. Skimming the abstract or relevant sections will confirm whether its content actually supports the claim you are citing. This also helps catch instances where an author cites a real study but mischaracterises its findings.
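For illustration, here is a minimal Python sketch of how the first two checks might be automated, using the public doi.org resolver and the Crossref REST API. The DOI and title shown are placeholders rather than real citations, and a failed lookup is a red flag to investigate manually, not conclusive proof of fabrication.

```python
# Minimal sketch of automated reference checking, assuming the `requests`
# library is installed. The DOI and title below are placeholders.
import requests

def doi_resolves(doi: str) -> bool:
    """Return True if doi.org resolves the DOI to a live landing page."""
    resp = requests.get(f"https://doi.org/{doi}", allow_redirects=True, timeout=10)
    return resp.status_code < 400

def closest_crossref_titles(title: str, rows: int = 3) -> list[str]:
    """Return the closest-matching titles indexed by Crossref."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.title": title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [item["title"][0] for item in items if item.get("title")]

if __name__ == "__main__":
    doi = "10.1000/example-doi"        # placeholder: copy from the citation
    title = "Example study title"      # placeholder: copy from the citation
    print("DOI resolves:", doi_resolves(doi))
    print("Closest Crossref matches:", closest_crossref_titles(title))
```

If the DOI fails to resolve and no close title match appears in Crossref, treat the reference as suspect and verify it by hand before citing it.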
By following these steps, students can confidently authenticate their references. It may take extra time, but it is time well spent to ensure your work rests on genuine evidence. Remember that unverified citations can derail an assignment or argument – a lesson underscored by the MAHA report saga. Verifying sources is not just an academic formality; it is a safeguard of honesty and accuracy. (Notably, outside academia, even attorneys have faced sanctions for submitting legal briefs with nonexistent cases cited by ChatGPT (Washington Post, 2025), underscoring how serious the ramifications of fake citations can be.) Diligent source-checking will protect you from similar embarrassment and maintain the integrity of your scholarship (University of Mary Washington, 2025).
Well-formatted references and academic credibility
While factual accuracy is paramount, the presentation of references also plays a key role in academic credibility. In scholarly writing, a well-formatted bibliography and consistent in-text citations signal to readers that the author is conscientious and professional. Proper citations allow readers to trace ideas back to reliable sources, thereby building trust in the work.
Conversely, errors or inconsistencies in references can cast doubt on an otherwise sound paper. In the case of the MAHA report, the revelation of “garbled scientific citations” immediately undermined the report’s credibility (Washington Post, 2025). As Georges C. Benjamin, executive director of the American Public Health Association, noted, such flawed citations “betray subpar science” and erode confidence in the document’s findings (Washington Post, 2025). Indeed, a report containing bogus references (no matter how neatly formatted) quickly loses academic and public trust.
The importance of real evidence
That said, well-formatted references significantly boost perceived credibility when backed by real evidence. One reason the MAHA authors’ claims initially gained any traction was that the report looked heavily researched. Dozens of references were listed, giving an impression of scientific rigour. It was only upon closer inspection that this facade crumbled.
The lesson here is twofold. First, meticulous citation formatting does matter: sloppy or missing citations can undermine your work even if your facts are correct. Second, formatting alone is not enough: the references must be authentic and relevant to genuinely support your arguments. Students should strive to present references in a recognised style and double-check every detail, ensuring that author names, titles, and page numbers are correct. By doing so, you demonstrate attention to detail and respect for scholarly conventions, which bolsters your reader’s confidence. Yet you must also ensure those references withstand scrutiny. In sum, a strong academic work pairs impeccable reference formatting with verified, high-quality sources. Achieving both is essential to maintaining credibility in the eyes of your audience and mentors.
Conclusion: embracing tools while upholding integrity
The controversy surrounding the MAHA report serves as a cautionary tale in the digital age of research. It underscores that even at the highest levels of publication, blindly trusting AI-generated content can lead to serious lapses in quality and integrity. At the same time, we need not reject technology outright. Generative AI and reference management tools, used wisely, can greatly assist academic work. As an example, they can quickly summarise literature or format citations in seconds. In fact, AI can legitimately help researchers survey large bodies of knowledge at speed (Washington Post, 2025). The key is to use these tools as assistants, not authorities. Human judgment and expertise must remain at the core of scholarly writing.
Going forward, students and researchers should take advantage of reliable resources to improve their work without compromising integrity. This means always pairing the efficiency of tools with thoughtful verification. For example, you can use the reference formatting tool and style guides available on our website to ensure your citations meet the highest presentation standards. These tools will handle the tedious formatting details, freeing you to focus on content.
However, remember that it is ultimately your responsibility to ensure each reference is valid, accurate, and supports your point. By rigorously checking your sources and then using our guides and tools to present them correctly, you harness the best of both worlds. You will be able to streamline the mechanics of referencing while upholding the essential academic values of honesty and rigour. The result is work that not only looks credible but truly earns credibility – a combination that is indispensable in academic and professional writing. Use these tools, stay vigilant, and let the MAHA episode remind us all to “trust, but verify” in the era of AI-assisted research.
References for AI-generated Citations: Lessons from the White House’s MAHA report
- Inside Higher Ed (2025) ‘White House report on children’s health includes fabricated academic citations’. Available at: https://www.insidehighered.com (Accessed: 2 June 2025).
- PolitiFact (2025) ‘White House health report cited fake studies, experts say’. Available at: https://www.politifact.com (Accessed: 2 June 2025).
- University of Missouri Library (n.d.) ‘AI hallucinations: how false citations arise’. Available at: https://library.missouri.edu (Accessed: 2 June 2025).
- University of Mary Washington (n.d.) ‘Guidelines for verifying AI-generated citations’. Available at: https://academics.umw.edu (Accessed: 2 June 2025).
- Washington Post (2025) ‘Legal briefs with nonexistent cases cited by ChatGPT lead to sanctions’. Available at: https://www.washingtonpost.com (Accessed: 2 June 2025).
Infographic References
1. The Independent (2025) MAHA report AI false studies. Available at: https://www.independent.co.uk/news/world/americas/us-politics/maha-report-ai-false-studies-rfk-b2760764.html (Accessed: 2 June 2025).
2. Jones Walker LLP (2025) Court slams lawyers for AI-generated fake citations. Available at: https://www.joneswalker.com/en/insights/court-slams-lawyers-for-ai-generated-fake-citations.html?id=102k9h3 (Accessed: 2 June 2025).
3. Headline Club (2024) AI scandal hits Wyoming paper. Available at: https://headlineclub.org/2024/08/21/ai-scandal-hits-wyo-paper/ (Accessed: 2 June 2025).
4. White & Case LLP (2025) Evolution of AI washing enforcement: DOJ enters the picture. Available at: https://www.whitecase.com/insight-alert/evolution-ai-washing-enforcement-doj-enters-picture (Accessed: 2 June 2025).
5. PMC (2002) Article on AI and related topics. Available at: https://pmc.ncbi.nlm.nih.gov/articles/PMC12107892/ (Accessed: 2 June 2025).
6. Incode (2024) Top 5 cases of AI deepfake fraud from 2024 exposed. Available at: https://incode.com/blog/top-5-cases-of-ai-deepfake-fraud-from-2024-exposed/ (Accessed: 2 June 2025).
7. Duke University Libraries (2023) ChatGPT and fake citations. Available at: https://blogs.library.duke.edu/blog/2023/03/09/chatgpt-and-fake-citations/ (Accessed: 2 June 2025).
8. IT Governance (2025) AI scams: real-life examples and how to defend against them. Available at: https://www.itgovernance.eu/blog/en/ai-scams-real-life-examples-and-how-to-defend-against-them (Accessed: 2 June 2025).
9. Business Insider (2025) Increasing AI hallucinations, fake citations in court records and data. Available at: https://www.businessinsider.com/increasing-ai-hallucinations-fake-citations-court-records-data-2025-5 (Accessed: 2 June 2025).
10. FMG Law (2025) Was that real or an AI hallucination? The case for checking that research twice. Available at: https://www.fmglaw.com/financial-services-and-banking-litigation/was-that-real-or-an-ai-hallucination-the-case-for-checking-that-research-twice/ (Accessed: 2 June 2025).
Case Summaries
MAHA Report (2025, USA, Government/Public Health)
- Fraud: AI-generated citations referencing studies that did not exist, with URLs containing “oaicite” (an OpenAI marker). At least 21 dead links and multiple fabricated references were found in a White House-commissioned report on children’s health.
- Consequences: The report’s credibility was publicly questioned, leading to official corrections and undermining its use in policymaking. Experts called for the report to be “junked” as an evidence base [1].
Coomer v. Lindell Legal Brief (2025, USA, Legal)
- Fraud: Lawyers submitted a federal court filing containing nearly 30 defective citations, including entirely fabricated cases created by generative AI. One example merged elements from real cases to create a plausible but fake precedent.
- Consequences: The court issued a scathing order, sanctioned the attorneys, and used the case to warn the legal profession about unverified AI use [2].
GIJIR Predatory Journal Scandal (2025, Global, Academic Publishing)
- Fraud: The Global International Journal of Innovative Research published dozens of AI-generated articles with fake citations and misattributed authorship, including falsely attributing work to respected academics.
- Consequences: Reputational harm to named scholars, increased scrutiny of predatory journals, and calls for enhanced AI detection and verification in academic publishing [5].
Cody Enterprise Journalism Scandal (2024, USA, Journalism)
- Fraud: A reporter used AI to insert fabricated, yet believable, quotes and misattributed statements into published news stories. The AI also assigned incorrect roles and titles to individuals.
- Consequences: The reporter resigned, the editor issued a public apology, and the incident sparked industry-wide debate about the risks of AI in newsrooms [3].