Hallucifact
/həˈluːsɪfækt/
noun (plural: hallucifacts)
- A specific, discrete falsehood generated by an artificial intelligence system and presented as fact.
- An instance of fabricated data within an otherwise non-fictional context, produced by an AI to fill a gap in its knowledge.
"The summary was accurate overall, but it contained one glaring hallucifact about the treaty's date of signing."
"Researchers are developing methods to detect and flag potential hallucifacts in real-time before the user sees the output."
Etymology: A portmanteau of hallucination (AI context) + fact. Coined circa 2024–2025.
Usage Notes: A hallucifact is the tangible output of a hallucifactual process. As a noun, it lets users identify and label individual AI-generated falsehoods. The term is most often applied to incorrect names, dates, statistics, citations, or events that an AI asserts with confidence. Identifying a hallucifact requires external verification, which is a key challenge in ensuring the reliability of AI-driven research and information systems.
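The external-verification step described above can be sketched in a few lines of Python. This is a minimal illustration, not a real detection method: the claim keys, values, and the `flag_hallucifacts` function are all hypothetical, and the "trusted reference" is assumed to exist as structured data.

```python
def flag_hallucifacts(claims, verified_reference):
    """Label each claimed key/value pair a hallucifact when it contradicts
    an externally verified reference; keys absent from the reference
    simply remain unverified rather than being flagged."""
    flagged = []
    for key, value in claims.items():
        if key in verified_reference and verified_reference[key] != value:
            flagged.append(key)  # asserted with confidence, verifiably false
    return flagged

# Hypothetical AI-generated summary of a treaty, checked against a trusted record.
ai_claims = {"treaty_signed": "1921", "signatories": 5}
reference = {"treaty_signed": "1919", "signatories": 5}
print(flag_hallucifacts(ai_claims, reference))  # → ['treaty_signed']
```

The design mirrors the usage note: a hallucifact is only identifiable relative to an external source of truth, so claims with no matching reference entry cannot be classified either way.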
Related Terms: artifact, confabufact, illusofact, phantom citation, pseudofact, synthofact.
Hallucifactual
/həˌluːsɪˈfæktʃuəl/
adjective
- Of or relating to a statement, narrative, or piece of data generated by an artificial intelligence that is presented as factual but is verifiably false.
- Characterised by the seamless blending of fabricated information with genuine facts, creating a plausible but misleading output that appears authoritative.
"The AI-generated biography was dangerously hallucifactual, correctly listing the author's published works but inventing a fictional university degree and several awards."
Etymology: A portmanteau of hallucination (in its AI context) + factual. Coined circa 2024–2025.
Usage Notes: The term hallucifactual gained prominence in the mid-2020s with the widespread adoption of large language models (LLMs). It describes a specific type of AI error known as a "hallucination", in which the model produces confident falsehoods. Unlike simple errors or bugs, hallucifactual content is problematic because it mimics the structure and confidence of true statements, making it difficult for users to detect without rigorous fact-checking. The term is distinct from misinformation and disinformation because it denotes an automated, non-sentient origin rather than a human intent to deceive.
Related Terms: artificial intelligence (AI), confabulation, deepfake, disinformation, large language model (LLM), misinformation, synthetic media.