What is it about?
Without manual correction, AI analysis often produces accidental 'fake' information, known as hallucinations, making it unreliable. A new analysis process called LlamaParsing is intended to correct this problem. This article tests that process with a worked example: the social media discussion of Scottish independence on Twitter. The findings show that the process is viable, that it validates the unified model of activism, and that it helps make 'big data' analysis more theoretically useful.
Read the Original
This page is a summary of: Hallucination or vision? Establishing reliability and validity of AI-based qualitative analysis of the unified model of activism in the Scottish independence Twitter debate, Corporate Communications: An International Journal, October 2024, Emerald, DOI: 10.1108/ccij-08-2024-0139.