What is it about?
We focus on real estate appraisal, that is, the task of estimating the price of a house. For this, we use multi-view neural networks to incorporate implicit information about houses into an appraisal that would otherwise rely solely on hard facts (e.g., size, age). The implicit information includes location and aesthetics and is derived from satellite images of the property. Although the additional implicit information increases the accuracy of our models by 34%, the neural networks used are black boxes. To better understand which visual features are important, we developed a new explainable AI (XAI) method called Grad-Ram, which is tailored to visual data and regression tasks.
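The multi-view idea above can be sketched in a few lines: each "view" (hard facts vs. image-derived features) gets its own weights, and the fused score becomes the price estimate. This is a minimal illustrative sketch, not the paper's architecture; all names, dimensions, and values are made up, and the image embedding stands in for the output of a CNN backbone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy inputs: hard facts (size in m^2, age in years) and a
# satellite-image feature vector, as if produced by a CNN backbone.
hard_facts = np.array([120.0, 15.0])
image_embedding = rng.normal(size=8)

def multi_view_forward(facts, img_feats, w_facts, w_img, bias):
    """Late-fusion regression head: each view has its own weight vector;
    the fused prediction is the sum of both views' scores plus a bias."""
    return facts @ w_facts + img_feats @ w_img + bias

# Untrained toy weights, only to show the forward pass end to end.
w_facts = rng.normal(size=2)
w_img = rng.normal(size=8)
price = multi_view_forward(hard_facts, image_embedding, w_facts, w_img, 0.5)
```

In a trained model the weights (and the CNN producing the embedding) would be learned jointly; the point here is only that both views contribute to a single scalar prediction.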
Why is it important?
Deep learning models are at the core of many decision support systems today because they provide high predictive performance. However, a downside of deep learning models is their opaque decision-making, which leads to a trade-off between predictive performance and interpretability. Thus, the applicability of such models is often limited in situations where justification is necessary. Providing suitable explainability methods can help to tackle the accuracy-interpretability trade-off and consequently enable the use of deep learning models in high-stakes decision scenarios.
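One common family of such explainability methods, which Grad-Ram builds on, weights a CNN's last convolutional activation maps by the gradient of the model's output and sums them into a heat map. The sketch below shows that generic Grad-CAM-style computation on toy arrays; it is only an illustration of the idea, not the paper's Grad-Ram method, and the activations and gradients are fabricated stand-ins for real CNN tensors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy activations A[c, h, w] from a CNN's last conv layer and the gradient
# of the scalar price prediction with respect to them (made-up values).
activations = rng.normal(size=(4, 5, 5))  # 4 channels, 5x5 spatial map
grads = rng.normal(size=(4, 5, 5))        # d(prediction)/d(activations)

def gradcam_style_map(acts, grads):
    """Grad-CAM-style heat map: weight each channel by its spatially
    averaged gradient, sum the weighted channels, and keep only positive
    evidence. Grad-Ram adapts this family of methods to regression
    outputs; the exact details differ from this generic sketch."""
    weights = grads.mean(axis=(1, 2))               # one weight per channel
    cam = np.tensordot(weights, acts, axes=(0, 0))  # weighted channel sum
    return np.maximum(cam, 0.0)                     # ReLU

heatmap = gradcam_style_map(activations, grads)
```

Upsampled to the input resolution, such a heat map highlights which image regions most influenced the predicted price, which is what makes the model's decision inspectable.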
Read the Original
This page is a summary of: Tackling the Accuracy-Interpretability Trade-off: Interpretable Deep Learning Models for Satellite Image-based Real Estate Appraisal, ACM Transactions on Management Information Systems, January 2023, ACM (Association for Computing Machinery), DOI: 10.1145/3567430.