Communication-Oriented Model Fine-Tuning for Packet-Loss Resilient Distributed Inference Under Highly Lossy IoT Networks

The distributed inference (DI) framework has gained traction as a technique for real-time applications empowered by cutting-edge deep machine learning (ML) on resource-constrained Internet of Things (IoT) devices. In DI, computational tasks are offloaded from the IoT device to an edge server via lossy IoT networks. However, there is generally a communication system-level trade-off between latency and reliability; thus, to obtain accurate DI results, a reliable but high-latency communication system must be adopted, which results in non-negligible end-to-end DI latency. This motivated us to improve the trade-off between communication latency and inference accuracy through ML techniques. Specifically, we propose communication-oriented model tuning (COMtune), which aims to achieve highly accurate DI over low-latency but unreliable communication links. The key idea of COMtune is to fine-tune the ML model while emulating the effect of unreliable communication links through the application of the dropout technique. This makes the DI system robust against unreliable links. Our ML experiments revealed that COMtune enables accurate prediction with low latency, even under lossy networks.
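The key idea above, emulating packet loss with a dropout-like mechanism during fine-tuning, can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function name, the packet granularity, and the loss-probability parameter are all assumptions introduced for the sketch.

```python
import numpy as np

def packet_loss_dropout(features, loss_prob, packet_size, rng, train=True):
    """Emulate a lossy IoT link at the DI split point.

    During fine-tuning (train=True), contiguous 'packets' of the
    intermediate feature vector are zeroed out independently with
    probability loss_prob, analogous to dropout applied per packet.
    Hypothetical sketch; COMtune's actual mechanism may differ.
    """
    if not train or loss_prob == 0.0:
        return features
    out = features.copy()
    n_packets = int(np.ceil(features.size / packet_size))
    # Draw an independent Bernoulli loss event for each packet.
    lost = rng.random(n_packets) < loss_prob
    for i in np.flatnonzero(lost):
        out[i * packet_size:(i + 1) * packet_size] = 0.0
    return out

# Example: a 64-dimensional feature vector split into 8-value packets,
# each lost with probability 0.3 during fine-tuning.
rng = np.random.default_rng(0)
feat = np.ones(64, dtype=np.float32)
noisy = packet_loss_dropout(feat, loss_prob=0.3, packet_size=8, rng=rng)
```

Training the server-side model on such perturbed features would expose it to the same erasure pattern it later sees at inference time over an unreliable link, which is how robustness to packet loss is obtained without changing the communication system itself.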