All Stories

  1. Large Language Models for Combinatorial Optimization: A Systematic Review
  2. Large Language Models as Assessors: On the Impact of Relevance Scales
  3. Analyzing AI Evaluation Benchmarks Through Information Retrieval and Network Science
  4. AIDME: A Scalable, Interpretable Framework for AI-Aided Scoping Reviews
  5. PILs of Knowledge: A Synthetic Benchmark for Evaluating Question Answering Systems in Healthcare
  6. The Magnitude of Truth: On Using Magnitude Estimation for Truthfulness Assessment
  7. Efficiency and Effectiveness of LLM-Based Summarization of Evidence in Crowdsourced Fact-Checking
  8. Agent-Based Healthcare Chatbots for Regional System Services: A Case Study
  9. Search Trajectory Networks Applied to a Real-World Parallel Batch Scheduling Problem
  10. Report on the 14th Italian Information Retrieval Workshop (IIR 2024)
  11. Hands-On PhD Course on Responsible AI from the Lens of an Information Access Researcher
  12. Crowdsourced Fact-Checking: Does It Actually Work?
  13. Understanding the Barriers to Running Longitudinal Studies on Crowdsourcing Platforms
  14. Cognitive Biases in Fact-Checking and Their Countermeasures: A Review
  15. Crowdsourcing Statement Classification to Enhance Information Quality Prediction
  16. How Many Crowd Workers Do I Need? On Statistical Power When Crowdsourcing Relevance Judgments
  17. Transparent Assessment of Information Quality of Online Reviews Using Formal Argumentation Theory
  18. Using Computers to Fact-Check Text and Justify the Decision
  19. The Effects of Crowd Worker Biases in Fact-Checking Tasks
  20. Crowd_Frame: A Simple and Complete Framework to Deploy Complex Crowdsourcing Tasks Off-the-Shelf
  21. The Many Dimensions of Truthfulness: Crowdsourcing Misinformation Assessments on a Multidimensional Scale
  22. Can the Crowd Judge Truthfulness? A Longitudinal Study on Recent Misinformation about COVID-19
  23. Assessing the Quality of Online Reviews Using Formal Argumentation Theory
  24. The COVID-19 Infodemic
  25. Can the Crowd Identify Misinformation Objectively?
  26. Crowdsourcing Peer Review: As We May Do
  27. Reproduce and Improve
  28. Effectiveness Evaluation with a Subset of Topics