Performance and Efficiency Optimization of Multi-layer IoT Edge Architecture

Recent IoT applications set strict requirements in terms of latency, scalability, security, and privacy. Current IoT systems, in which computation is performed at data centers, typically provide very high computational and storage capacity, but the long routes between that capacity and the sensors/actuators make them unsuitable for latency-critical applications and services. Mobile Edge Computing (MEC) can address these problems by bringing computational capacity within or next to the base stations of access networks. Furthermore, to cope with access-network problems, the capability to run the most critical processes at the local network layer is also important. Therefore, in this paper, we compare the traditional cloud-IoT model, a MEC-based edge-cloud-IoT model, and a local edge-cloud-IoT model with respect to their performance and efficiency, using the iFogSim simulator. The results complement our previous findings that a three-tier edge-IoT architecture, capable of optimally utilizing the computational capacity of each of the three tiers, is an effective measure for reducing energy consumption, improving end-to-end latency, and lowering operational costs in latency-critical IoT applications.
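To make the comparison concrete, the sketch below is a minimal, purely illustrative latency model of the three placement options (cloud-IoT, MEC-based edge-cloud-IoT, local edge-cloud-IoT). It deliberately does not use the iFogSim API; the class `TierLatencySketch`, the `Placement` record, and every delay value in it are hypothetical assumptions for illustration, not figures from the paper or from the simulations.

```java
// Hypothetical, simplified end-to-end latency model for the three deployment
// options compared in the paper. All link delays and processing times below
// are illustrative assumptions, not measurements.
public class TierLatencySketch {

    /** One placement option: where the latency-critical application module runs. */
    record Placement(String name, double uplinkMs, double processingMs, double downlinkMs) {
        double endToEndMs() {
            // Sensor -> compute node -> actuator, plus processing at the node.
            return uplinkMs + processingMs + downlinkMs;
        }
    }

    public static void main(String[] args) {
        // Assumed delays (ms): the data center is fast but far away; the MEC host
        // sits at the base station; the local edge gateway is one LAN hop away.
        Placement cloud = new Placement("cloud-IoT",             60.0, 2.0, 60.0);
        Placement mec   = new Placement("MEC edge-cloud-IoT",    10.0, 5.0, 10.0);
        Placement local = new Placement("local edge-cloud-IoT",   2.0, 8.0,  2.0);

        for (Placement p : new Placement[]{cloud, mec, local}) {
            System.out.printf("%-22s end-to-end latency: %.1f ms%n", p.name(), p.endToEndMs());
        }
    }
}
```

Under these assumed values the toy model reproduces the qualitative trade-off discussed above: the cloud offers the fastest processing but the longest network path, whereas the MEC and local edge tiers trade some processing speed for much shorter routes, which dominates end-to-end latency for latency-critical IoT traffic.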