All Stories

  1. AIDME: A Scalable, Interpretable Framework for AI-Aided Scoping Reviews
  2. PILs of Knowledge: A Synthetic Benchmark for Evaluating Question Answering Systems in Healthcare
  3. The Magnitude of Truth: On Using Magnitude Estimation for Truthfulness Assessment
  4. Efficiency and Effectiveness of LLM-Based Summarization of Evidence in Crowdsourced Fact-Checking
  5. Agent-Based Healthcare Chatbots for Regional System Services: A Case Study
  6. Search Trajectory Networks Applied to a Real-World Parallel Batch Scheduling Problem
  7. Report on the 14th Italian Information Retrieval Workshop (IIR 2024)
  8. Hands-On PhD Course on Responsible AI from the Lens of an Information Access Researcher
  9. Crowdsourced Fact-checking: Does It Actually Work?
  10. Understanding the Barriers to Running Longitudinal Studies on Crowdsourcing Platforms
  11. Cognitive Biases in Fact-Checking and Their Countermeasures: A Review
  12. Crowdsourcing Statement Classification to Enhance Information Quality Prediction
  13. How Many Crowd Workers Do I Need? On Statistical Power When Crowdsourcing Relevance Judgments
  14. Transparent Assessment of Information Quality of Online Reviews Using Formal Argumentation Theory
  15. Using Computers to Fact-Check Text and Justify the Decision
  16. The Effects of Crowd Worker Biases in Fact-Checking Tasks
  17. Crowd_Frame: A Simple and Complete Framework to Deploy Complex Crowdsourcing Tasks Off-the-Shelf
  18. The Many Dimensions of Truthfulness: Crowdsourcing Misinformation Assessments on a Multidimensional Scale
  19. Can the Crowd Judge Truthfulness? A Longitudinal Study on Recent Misinformation About COVID-19
  20. Assessing the Quality of Online Reviews Using Formal Argumentation Theory
  21. The COVID-19 Infodemic
  22. Can the Crowd Identify Misinformation Objectively?
  23. Crowdsourcing Peer Review: As We May Do
  24. Reproduce and Improve
  25. Effectiveness Evaluation with a Subset of Topics