What is it about?

Improving Floating-Point Numbers (IFN) is a numerical computation library for Haskell. It lets the user directly specify the accuracy of the result, making accurate computation easy. However, because its computations are extremely fine-grained, programs using the existing IFN library often suffer from high memory consumption and long execution times. This paper presents a mechanism for controlling computation granularity that resolves these problems through a program transformation technique called fusion. Experiments with numerical computation programs confirmed its effectiveness.


Why is it important?

Performing numerical computations with sufficient accuracy on a computer requires careful precision settings to account for numerical errors. In many cases, the user must search by trial and error for a necessary and sufficient precision that yields results of the required accuracy. The IFN library simplifies the programming of accurate numerical computation. Its computation process is based on adaptive accuracy control, which propagates demands for more accurate values from an expression to the appropriate subexpressions. The inefficiency of the existing IFN library lies in the fact that each demand it propagates, that is, the computation granularity, is too fine. Fusion transformation alleviates this performance problem by coarsening the computation granularity. Since the choice of fusion targets significantly affects a program's efficiency, this paper proposes two effective selection strategies. An automatic fusion system applying these two strategies has been implemented in Haskell. This allows the user to write programs that use the IFN library in the same way as ordinary numerical computations, without specifying the fusion targets by hand.
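The demand-propagation idea can be sketched in a few lines of Haskell. Note that the names and representation below are illustrative assumptions, not IFN's actual API: a value is modeled as a function from a requested error bound to an approximation within that bound, a binary addition splits the demand between its two operands, and a "fused" three-way addition asks all three operands at once instead of building two nested nodes, which is the granularity-coarsening effect of fusion.

```haskell
-- Hypothetical model of the demand-driven idea (not IFN's real API):
-- a number is a function that, given an error bound eps, returns an
-- approximation guaranteed to be within eps of the true value.
newtype IFN = IFN (Rational -> Rational)

-- Evaluate a value to the requested accuracy.
approx :: IFN -> Rational -> Rational
approx (IFN f) eps = f eps

-- An exact rational needs no refinement.
fromRat :: Rational -> IFN
fromRat = IFN . const

-- Fine-grained addition: each operand receives a demand of eps/2,
-- so the sum's total error stays within eps.
addIFN :: IFN -> IFN -> IFN
addIFN (IFN f) (IFN g) = IFN (\eps -> f (eps / 2) + g (eps / 2))

-- Fused three-way addition: one node asks all operands for eps/3
-- directly, avoiding the intermediate node that (a + b) + c builds.
add3Fused :: IFN -> IFN -> IFN -> IFN
add3Fused (IFN f) (IFN g) (IFN h) =
  IFN (\eps -> f (eps / 3) + g (eps / 3) + h (eps / 3))
```

In this toy model, `addIFN x (addIFN y z)` and `add3Fused x y z` compute the same result, but the fused version propagates one demand instead of two, which is the kind of overhead reduction the paper's fusion transformation targets at scale.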

Read the Original

This page is a summary of: Controlling Computation Granularity through Fusion in Improving Floating-Point Numbers, September 2024, ACM (Association for Computing Machinery).
DOI: 10.1145/3677999.3678281.
