What is it about?

The open nature of the Web enables users to produce and propagate any content without authentication, which has been exploited to spread thousands of unverified claims via millions of online documents. Maintenance of credible knowledge bases thus has to rely on fact checking that constructs a trusted set of facts through credibility assessment. Due to an inherent lack of ground-truth information and language ambiguity, fact checking cannot be done in a purely automated manner without compromising accuracy. However, state-of-the-art fact-checking services rely mostly on human validation, which is costly, slow, and non-transparent. This paper presents FactCatch, a human-in-the-loop system that guides users in fact checking while aiming to minimise the invested effort. It supports incremental quality estimation, mistake mitigation, and pay-as-you-go instantiation of a high-quality fact database.
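As a rough illustration of such a pay-as-you-go, human-in-the-loop loop, the sketch below maintains per-claim credibility estimates, spends a user-effort budget on the most uncertain claims first, and lets a trusted fact set be read off at any point. All names (`Claim`, `pick_next_claim`) and the 0.9 trust threshold are illustrative assumptions, not FactCatch's actual API.

```python
# Hypothetical sketch of a pay-as-you-go, human-in-the-loop checking loop.
# All names and thresholds are illustrative, not FactCatch's actual API.

from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    credibility: float = 0.5   # current estimate in [0, 1]
    validated: bool = False    # set once a user has judged the claim

def pick_next_claim(claims):
    """Pick the unvalidated claim whose estimate is most uncertain."""
    open_claims = [c for c in claims if not c.validated]
    # Uncertainty is highest at credibility 0.5, lowest at 0 or 1.
    return max(open_claims,
               key=lambda c: 1 - 2 * abs(c.credibility - 0.5),
               default=None)

def fact_check(claims, ask_user, budget):
    """Spend at most `budget` user judgements, most uncertain claims first."""
    for _ in range(budget):
        claim = pick_next_claim(claims)
        if claim is None:
            break  # nothing left to validate
        claim.credibility = 1.0 if ask_user(claim.text) else 0.0
        claim.validated = True
    # Pay-as-you-go: a trusted fact set can be read off at any point.
    return [c.text for c in claims if c.credibility >= 0.9]
```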


Why is it important?

Credibility assessment can use automated classification methods [10]. While these methods scale to the volume of Web data, they are hampered by the inherent ambiguity of natural language, deliberate deception, and domain-specific semantics [11]. Hence, algorithms often fail to decipher the complex contexts of claims. Automatic methods further require large amounts of curated data, which are typically not available since such data quickly become outdated. Moreover, having algorithmic models judge the truth of claims raises ethical concerns about fairness and transparency [5].

Against this background, several state-of-the-art fact-checking services, such as Snopes, PolitiFact, and FactCheck, rely on human feedback to validate claims [1]. However, eliciting user input is challenging. User input is expensive in terms of both time and cost, so a timely validation of controversial claims quickly becomes infeasible, even if one relies on a large number of users and ignores the overhead of achieving consensus among them. Also, claims published on the Web are typically not independent, so any user-based assessment of their credibility should be propagated between correlated claims. Finally, user input is commonly limited by some effort budget, which bounds the number of claims that can be validated.
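To make the propagation idea concrete, here is a minimal sketch of generic label propagation over a claim-correlation graph: user-validated scores stay fixed while each unvalidated claim blends its own estimate with its neighbours' average, so a single user judgement nudges the scores of every claim correlated with it. The function, its parameters, and the blending scheme are assumptions for illustration, not the paper's specific propagation model.

```python
# Illustrative label propagation over a claim-correlation graph; this is a
# generic technique, not the paper's specific model.

def propagate(credibility, edges, validated, iterations=20, alpha=0.8):
    """credibility: dict claim -> score in [0, 1]
    edges: dict claim -> list of correlated claims
    validated: set of claims whose scores are user-confirmed and stay fixed."""
    scores = dict(credibility)
    for _ in range(iterations):
        updated = {}
        for claim, score in scores.items():
            neighbours = edges.get(claim, [])
            if claim in validated or not neighbours:
                updated[claim] = score
                continue
            neighbour_avg = sum(scores[n] for n in neighbours) / len(neighbours)
            # Blend the claim's own estimate with its neighbours' average.
            updated[claim] = alpha * neighbour_avg + (1 - alpha) * score
        scores = updated
    return scores
```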

Perspectives

This paper presented FactCatch, a system that overcomes the limitations of existing methods for automatic and manual fact checking. The system is not limited to experts but enables any user to participate in incremental, pay-as-you-go fact checking in a transparent and guided manner. Highlights of FactCatch are: (i) claims are not analysed individually; rather, their complex network structure through documents and data sources is incorporated; (ii) claims are automatically ranked for validation to minimise user effort; (iii) a single dashboard gives full control over the fact-checking process; and (iv) a trusted set of facts may be instantiated at any time in the process. The system is further optimised for scalability through methods for early termination, batched validation, and online learning. In future work, we intend to extend FactCatch with crowdsourcing functionality. By relying on mass fact checking, controlled by a cost-profit model that prioritises highly contagious rumours, we strive for timely damage mitigation of emerging false claims.
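A minimal sketch of how ranked, batched validation with early termination might look is given below, assuming a hypothetical `benefit` scoring function that stands in for whatever utility model ranks claims; none of these names come from the paper.

```python
# Hypothetical sketch of budgeted, batched validation with early termination.
# `benefit` is an assumed stand-in for the system's claim-ranking model.

def select_batches(claims, benefit, budget, batch_size=5, min_benefit=1e-3):
    """Greedily emit batches of the highest-benefit claims until the effort
    budget is exhausted or the remaining benefit is negligible."""
    ranked = sorted(claims, key=benefit, reverse=True)[:budget]
    for start in range(0, len(ranked), batch_size):
        batch = ranked[start:start + batch_size]
        if benefit(batch[0]) < min_benefit:
            break  # early termination: further validation adds little
        yield batch
```

Batching amortises the overhead of eliciting each judgement, while the threshold check stops the process once further validation would no longer pay for itself.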

Thanh Tam Nguyen
Griffith University

Read the Original

This page is a summary of: FactCatch: Incremental Pay-as-You-Go Fact Checking with Minimal User Effort, July 2020, ACM (Association for Computing Machinery),
DOI: 10.1145/3397271.3401408.

