What is it about?

The success of neural methods for image captioning suggests that similar benefits can be reaped for generating captions for information visualizations. In this preliminary study, we focus on line charts, one of the most popular chart types. We propose a neural model that generates text from the same data used to create a line chart. Due to the lack of suitable training corpora, we collected a dataset through crowdsourcing. Experiments indicate that our model outperforms relatively simple non-neural baselines.
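To give a concrete sense of what a "relatively simple non-neural baseline" might look like, here is a minimal, hypothetical template-based captioner that describes a time series using basic statistics. The function name and the templates are illustrative assumptions, not the baselines actually used in the paper.

```python
# Hypothetical rule-based baseline: caption a line-chart series from
# simple statistics (overall trend, minimum, maximum). This only
# illustrates the idea of a non-neural, template-based captioner;
# the paper's actual baselines may differ.

def baseline_caption(labels, values):
    """Generate a template caption for one line-chart series."""
    lo, hi = min(values), max(values)
    trend = ("rises" if values[-1] > values[0]
             else "falls" if values[-1] < values[0]
             else "stays flat")
    return (f"The line {trend} from {values[0]} to {values[-1]}, "
            f"with a minimum of {lo} at {labels[values.index(lo)]} "
            f"and a maximum of {hi} at {labels[values.index(hi)]}.")

print(baseline_caption(["Jan", "Feb", "Mar", "Apr"], [10, 30, 25, 20]))
```

A neural model, by contrast, is trained to produce such descriptions directly from data-caption pairs, without hand-written templates.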

Why is it important?

Although this problem has been tackled in the past with engineered captioning techniques [16], data-driven InfoVis captioning has received limited attention. Furthermore, since no existing datasets were available to train and test a data-driven captioning approach, our first contribution was to create a seed version of such a dataset via crowdsourcing.

Perspectives

Even though the collected dataset is only a pilot, the results show that this kind of neural data-driven approach has great potential and value for the task of captioning data chart images.

Andrea Spreafico
Università degli Studi di Milano-Bicocca

Read the Original

This page is a summary of: Neural Data-Driven Captioning of Time-Series Line Charts, September 2020, ACM (Association for Computing Machinery),
DOI: 10.1145/3399715.3399829.
