This page is a summary of: Sustainable LLM Inference for Edge AI: Evaluating Quantized LLMs for Energy Efficiency, Output Accuracy, and Inference Latency, ACM Transactions on Internet of Things, November 2025, ACM (Association for Computing Machinery).
DOI: 10.1145/3767742.