SPADE: 1st International Workshop on Scheduling & Parallelism in AI for Distributed Edges

11/19/2025 - 11/21/2025

Co-located with: 15th International Conference on the Internet of Things (IoT 2025).

Date: To be announced (within 19–21 November 2025)

Location: Vienna, Austria

1. Workshop Scope & Motivation

As cloud, edge, and mobile computing converge into a unified computing ecosystem, there is a growing opportunity for edge platforms to host computationally intensive AI models as services, much like today's cloud AI services. However, constraints on scheduling, distribution, and resource management slow large-scale adoption. To address these challenges, the workshop on Scheduling & Parallelism in AI for Distributed Edges (SPADE) solicits novel contributions on the architectural, algorithmic, and practical challenges of deploying computationally intensive applications (deep learning, generative AI, LLMs, etc.) across distributed edge infrastructure. While edge AI offers low-latency, privacy-preserving inference for the IoT, the inability to efficiently parallelize tasks often limits system performance.

This workshop targets the intersection of computationally intensive models with limited parallelism, AI computation and scheduling optimization, and resource-constrained, heterogeneous edge networks. SPADE will bring together researchers and practitioners working on model partitioning, decentralized scheduling, execution frameworks, hybrid AI pipelines, and benchmarking testbeds. It will also address bottlenecks in distributed inference and highlight new strategies for intelligent scheduling and task coordination. Contributions on model slicing, distributed inference, workload orchestration, task-to-node mapping, and scalable deployment of AI for IoT systems are welcome, in the form of presentations, discussion sessions, and short papers.

2. Topics of Interest

We welcome submissions on a wide range of topics in these domains, including (but not limited to):

  • Task distribution and scheduling for computationally intensive models on edge devices
  • Parallelism-aware model design for edge AI and TinyML
  • Resource-aware AI deployment strategies
  • Hybrid edge-cloud execution frameworks
  • Model partitioning and distributed inference
  • Federated and decentralized scheduling approaches
  • Real-time and latency-sensitive computing at the edge
  • Benchmarking and testbeds for edge-based AI
  • Energy-efficient task coordination in IoT systems
  • Middleware and OS-level support for distributed edge AI
  • Applications in industrial IoT, surveillance, transportation, health, and smart cities
  • Compiler support and hardware-aware optimizations for edge AI models
  • Scheduling algorithms for constrained and heterogeneous environments
  • Containerization and orchestration tools (e.g., Docker, Kubernetes) for edge deployments
  • Cross-layer design strategies for scheduling and data placement
  • Adaptive and real-time load balancing across edge nodes and TinyML devices
  • Distributed training and continual learning at the edge
  • Privacy-preserving computation and secure model execution
  • Communication-efficient algorithms for collaborative edge inference
  • Use of reinforcement learning for task offloading and scheduling

3. Important Dates

Paper submission opens: 10 September 2025
Paper submission deadline: 20 October 2025
Author notification: 1 October 2025
Camera-ready submission: 20 October 2025
Workshop date: TBD (19–21 November 2025)

4. Submission Guidelines

Submissions to SPADE 2025 must be in PDF format and use the official ACM conference style templates for MS Word and LaTeX.

We welcome two types of submissions:

  • Short papers (Work-in-progress): up to 3 pages (excluding references)
  • Full papers: up to 6 pages (excluding references)

Submit your paper via EasyChair: https://easychair.org/my/conference?conf=spade2025

5. Session Chairs

  • Dr. Dinesh Kumar Sah: Received his Ph.D. in Computer Science and Engineering from IIT (ISM), Dhanbad and is currently a researcher at the University of Oulu, Finland. His expertise includes edge computing, distributed systems, deep learning, and AI-driven systems for industrial and healthcare domains. More at dineshkumarsah.com.
  • Dr. Praveen Kumar Donta: Associate Professor at Stockholm University. His research spans distributed computing, continuum systems, and intelligent data protocols. He is an IEEE Senior Member and serves on editorial boards for journals like IEEE IoT, Computing, and Elsevier’s Computer Communications.
  • Dr. Lauri Lovén: Leads the Future Computing Group at the University of Oulu and is Vice-Director of the UBICOMP center. He coordinates distributed intelligence in the 6G Flagship program and is an Associate Editor of the Springer Nature journal Computing.
  • Dr. Priyanka Verma: Lecturer at University of Galway. She specializes in AI, cybersecurity, and smart systems and has received multiple grants and awards. She is a Senior Member of IEEE, Fellow of IETE, and ACM member.

6. Technical Program Committee (TPC) Members

    • Prof. Satish Srirama, University of Hyderabad, India
    • Prof. Sindri Magnússon, Stockholm University, Sweden
    • Prof. Dr. Chinmaya Dehury, University of Tartu, Estonia
    • Dr. Naser Hossein Motlagh, University of Helsinki, Finland
    • Prof. Qiang He, Northeastern University, Shenyang, China
    • Dr. Tri Nguyen, Aalto University, Finland
    • Dr. Mainak Adhikari, IIIT Lucknow, India
    • Dr. Andrea Morichetta, TU Wien, Austria
    • Prof. Ihsan Ali, Southeast Missouri State University, USA
    • Dr. Qiyang Zhang, Peking University, China
    • Prof. Pablo Fernandez, Universidad de Sevilla, Spain
    • Utsav Patel, Senior Data and Applied Scientist at Microsoft

Contact

For inquiries, please contact: dinesh.sah@oulu.fi
