All Stories

  1. EFT-LR: Benchmarking Learning Rate Policies in Parameter-Efficient Large Language Model Fine-tuning
  2. LATTICE: Efficient In-Memory DNN Model Versioning
  3. CE-CoLLM: Efficient and Adaptive Large Language Models Through Cloud-Edge Collaboration
  4. NeRF-APT: A New NeRF Framework for Wireless Channel Prediction
  5. Security and Privacy Challenges of Large Language Models: A Survey
  6. Accessible Health Screening Using Body Fat Estimation by Image Segmentation
  7. Boosting Imperceptibility of Stable Diffusion-based Adversarial Examples Generation with Momentum
  8. Effective Diversity Optimizations for High Accuracy Deep Ensembles
  9. On the Efficiency of Privacy Attacks in Federated Learning
  10. Backdoor Attacks Against Low-Earth Orbit Satellite Fingerprinting
  11. ZipZap: Efficient Training of Language Models for Large-Scale Fraud Detection on Blockchain
  12. Adaptive Deep Neural Network Inference Optimization with EENet
  13. Demystifying Data Poisoning Attacks in Distributed Learning as a Service
  14. Individual Fairness with Group Awareness Under Uncertainty
  15. Privacy Risks Analysis and Mitigation in Federated Learning for Medical Images
  16. Exploring Model Learning Heterogeneity for Boosting Ensemble Robustness
  17. Model Cloaking against Gradient Leakage
  18. Hierarchical Pruning of Deep Ensembles with Focal Diversity
  19. Amplifying Object Tracking Performance on Edge Devices
  20. Rethinking Learning Rate Tuning in the Era of Large Language Models
  21. Invisible Watermarking for Audio Generation Diffusion Models
  22. Securing Distributed SGD Against Gradient Leakage Threats
  23. STDLens: Model Hijacking-Resilient Federated Learning for Object Detection
  24. Selecting and Composing Learning Rate Policies for Deep Neural Networks
  25. Learning TFIDF Enhanced Joint Embedding for Recipe-Image Cross-Modal Retrieval Service
  26. Learning Text-image Joint Embedding for Efficient Cross-modal Retrieval with Deep Feature Engineering
  27. A Comparative Measurement Study of Deep Learning as a Service Framework
  28. Transparent Network Memory Storage for Efficient Container Execution in Big Data Clouds
  29. Boosting Deep Ensemble Performance with Hierarchical Pruning
  30. Parallel Detection for Efficient Video Analytics at the Edge
  31. RDMAbox: Optimizing RDMA for Memory Intensive Workloads
  32. Gradient-Leakage Resilient Federated Learning
  33. Boosting Ensemble Accuracy by Revisiting Ensemble Diversity Metrics
  34. Memory Abstraction and Optimization for Distributed Executors
  35. Adversarial Deception in Deep Learning: Analysis and Mitigation
  36. Adversarial Objectness Gradient Attacks in Real-time Object Detection Systems
  37. Cross-Modal Joint Embedding with Diverse Semantics
  38. Promoting High Diversity Ensemble Learning with EnsembleBench
  39. Efficient Orchestration of Host and Remote Shared Memory for Memory Intensive Workloads
  40. Cross-Layer Strategic Ensemble Defense Against Adversarial Examples
  41. A Framework for Evaluating Client Privacy Leakages in Federated Learning
  42. Understanding Object Detection Through an Adversarial Lens
  43. Demystifying Learning Rate Policies for High Accuracy Training of Deep Neural Networks
  44. Denoising and Verification Cross-Layer Ensemble Against Black-box Adversarial Attacks
  45. Deep Neural Network Ensembles Against Deception: Ensemble Diversity, Accuracy and Robustness
  46. Memory Disaggregation: Research Problems and Opportunities
  47. Experimental Characterizations and Analysis of Deep Learning Frameworks
  48. Benchmarking Deep Learning Frameworks: Design Considerations, Metrics and Beyond
  49. CCAligner