What is it about?

This research provides a privacy-preserving framework for the Artificial Intelligence of Things (AIoT). It allows smart devices to learn and evolve without exposing personal user privacy to the cloud. In modern smart systems, devices at the "edge" (close to the user) often need to send data to powerful cloud servers to help the AI learn from new information. However, this raises a major privacy risk: how can the cloud "learn" from the data without actually "seeing" it? We developed a lightweight encryption method that hides recognizable details from humans but allows AI models to process the information directly. Specifically, AI can perform personalized context learning and incremental learning that is, AI can "learn" and "stay up-to-date" from encrypted data. Our results show that this method not only protects privacy during transmission and computation but actually improves AI performance by up to 18%, ensuring the model stays accurate even as the environment changes. Keywords: AIoT Privacy-Preserving Learning, Cloud-Edge Collaborative AI, Lightweight Image Encryption, Personalized Context Learning, Incremental Learning in IoT, Cybersecurity for Smart Devices, Secure Computation in the Cloud.
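To make the idea concrete, here is a minimal Python sketch of one common form of perceptual encryption: shuffling fixed-size image blocks with a secret key. This illustrates the general principle only, not the exact cryptosystem in the paper; the function name, block size, and key handling are all placeholders.

```python
# Illustrative sketch of block-based perceptual encryption (NOT the
# paper's exact cryptosystem): the image is split into fixed-size
# blocks whose positions are shuffled with a secret key, destroying
# human-recognizable structure while preserving block-level statistics
# that a model trained on similarly encrypted images can still use.
import numpy as np

def block_scramble(image: np.ndarray, key: int, block: int = 16) -> np.ndarray:
    """Permute non-overlapping blocks of `image` using a keyed RNG."""
    h, w = image.shape[:2]
    assert h % block == 0 and w % block == 0, "image must tile evenly"
    # Cut the image into a grid of (block x block) tiles.
    tiles = [image[r:r + block, c:c + block]
             for r in range(0, h, block)
             for c in range(0, w, block)]
    # A keyed permutation is the "encryption": without the key, the
    # original tile order (and thus the scene) is hard to recover.
    order = np.random.default_rng(key).permutation(len(tiles))
    out = np.empty_like(image)
    cols = w // block
    for dst, src in enumerate(order):
        r, c = divmod(dst, cols)
        out[r * block:(r + 1) * block, c * block:(c + 1) * block] = tiles[src]
    return out

# Example: scramble a dummy 224x224 RGB image with a shared secret key.
dummy = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
encrypted = block_scramble(dummy, key=42)
```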


Why is it important?

This work solves the "privacy-utility" paradox that currently hinders the growth of the Artificial Intelligence of Things (AIoT). While existing methods like Federated Learning or Homomorphic Encryption offer privacy, they are often too slow for small devices. This study is unique because:

1. Dual-Phase Protection: Our lightweight cryptosystem protects data during both transmission (from device to edge) and computation (on the cloud), a combination rarely addressed in current research.
2. Adaptability: It features "personalized context learning," allowing a general AI model to be tailored to specific local environments (like a specific smart home or factory) while improving accuracy by 13-18%.
3. Scalability: By keeping the encryption "lightweight," we ensure it can run on resource-constrained IoT devices without draining battery or processing power.
4. Future-Proof Learning: The use of incremental learning ensures the AI does not become "stale" as new data arrives, allowing for a "lifelong learning" system that remains private (a toy sketch of this update loop follows this list).
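To illustrate point 4, the sketch below shows the general shape of incremental learning: a model is updated batch-by-batch as new data arrives instead of being retrained from scratch. It uses scikit-learn's standard SGDClassifier.partial_fit API; the random "encrypted-feature" batches are placeholders, not the paper's actual pipeline.

```python
# Minimal sketch of incremental learning: the cloud model is updated
# batch-by-batch as new (encrypted) data arrives, so it never needs the
# full data history and never sees plaintext images.
# SGDClassifier / partial_fit are real scikit-learn APIs; the feature
# extraction step and the data itself are placeholders.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])  # must be declared up front for partial_fit

for day in range(5):  # each "day", a new batch of encrypted samples arrives
    # Placeholder: features computed from perceptually encrypted images.
    X_batch = rng.normal(size=(64, 128))
    y_batch = rng.integers(0, 2, size=64)
    # One pass over the new batch keeps the model up to date without
    # retraining from scratch or storing old user data.
    model.partial_fit(X_batch, y_batch, classes=classes)

print("accuracy on latest batch:", model.score(X_batch, y_batch))
```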

Perspectives

We were motivated by a gap in how we protect data: most researchers treat "secure transmission" and "secure computation" as two separate problems, but for a user, the data is at risk in both places. Our goal was to move beyond the idea that "smart" must mean "public." We believe we can build a world of collaborative intelligence that still respects the individual's right to digital privacy.

When developing AI for the Internet of Things, we often face a choice: strong privacy with slow performance, or strong performance with high privacy risks. We wanted to find a middle ground. By using perceptual encryption, we can hide the visual details that humans would recognize as private while keeping the "mathematical essence" of the image intact, so the AI can still do its job. By combining this with incremental learning and personalized context learning, we have created a system that grows and learns alongside the user but never "knows" more than it needs to. It's a human-centric approach to a very technical problem.

If you are working on privacy-preserving AIoT or secure cloud-edge collaboration, I would welcome a discussion on how this lightweight cryptosystem can be adapted to your specific use case.
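As a toy illustration of that "mathematical essence" point (not a test from the paper): a keyed permutation destroys the spatial layout that humans recognize, yet leaves value statistics such as the intensity histogram exactly unchanged. The key value below is a stand-in for a shared secret.

```python
# Toy check, purely illustrative: a keyed pixel permutation removes the
# human-recognizable spatial layout of an image while leaving its value
# distribution (intensity histogram) untouched, which is one reason a
# model can still learn from the encrypted form.
import numpy as np

rng = np.random.default_rng(7)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

# "Encrypt" by permuting pixel positions with a secret key.
key_rng = np.random.default_rng(12345)  # 12345 stands in for a shared key
perm = key_rng.permutation(image.size)
encrypted = image.ravel()[perm].reshape(image.shape)

# Spatial content differs, but the histogram is identical.
assert not np.array_equal(image, encrypted)
assert np.array_equal(np.bincount(image.ravel(), minlength=256),
                      np.bincount(encrypted.ravel(), minlength=256))
print("histograms match; spatial layout does not")
```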

Dr Ijaz Ahmad
Korea University

Read the Original

This page is a summary of: Privacy-Preserving Uncertainty Calibration Using Perceptual Encryption in Cloud–Edge Collaborative Artificial Intelligence of Things, IEEE Internet of Things Journal, July 2025, Institute of Electrical & Electronics Engineers (IEEE).
DOI: 10.1109/jiot.2025.3558289.
You can read the full text via the DOI above.
