Quantifying the Latency Benefits of Near-Edge and In-Network FPGA Acceleration

Abstract

Transmitting data to cloud datacenters in distributed IoT applications introduces significant communication latency, but it is often the only feasible approach when source nodes are computationally limited. To address these latency concerns, cloudlets, in-network computing, and more capable edge nodes are all being explored as ways of moving processing capability towards the edge of the network. Hardware acceleration using Field Programmable Gate Arrays (FPGAs) is also seeing increased interest due to its reduced computation latency and improved efficiency. This paper evaluates the implications of these offloading approaches using a neural network based image classification application as a case study, quantifying both the computation and communication latency resulting from different platform choices. Our measurements of communication latency include the ingestion of packets for processing on the target platform, and we show that this cost varies significantly with the choice of platform. We demonstrate that emerging in-network accelerator approaches offer substantially improved and more predictable performance, as well as better scaling to support multiple data sources.
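
As a rough illustration of the latency decomposition the abstract describes, the sketch below models end-to-end offload latency as the sum of network transfer, packet ingestion, and computation time for each candidate platform. The Platform class and all numeric values are hypothetical placeholders for illustration only, not measurements or results from the paper.

from dataclasses import dataclass

@dataclass
class Platform:
    """Hypothetical latency model for one offload target."""
    name: str
    t_network_ms: float   # time to move data over the network
    t_ingest_ms: float    # time to ingest packets into the processing pipeline
    t_compute_ms: float   # time to classify one image

    def end_to_end_ms(self) -> float:
        # Total latency seen by the source node for one request.
        return self.t_network_ms + self.t_ingest_ms + self.t_compute_ms

# Illustrative placeholder values only; NOT data from the paper.
platforms = [
    Platform("cloud datacenter",  t_network_ms=40.0, t_ingest_ms=2.0, t_compute_ms=10.0),
    Platform("edge node FPGA",    t_network_ms=5.0,  t_ingest_ms=1.0, t_compute_ms=3.0),
    Platform("in-network FPGA",   t_network_ms=2.0,  t_ingest_ms=0.1, t_compute_ms=3.0),
]

for p in platforms:
    print(f"{p.name}: {p.end_to_end_ms():.1f} ms end-to-end")

Under such a model, the ingestion term is the one that differs most structurally between platforms: an in-network accelerator processes packets as they traverse the network, so its ingestion cost can be far smaller and more predictable than on a host-attached device.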

Publication
In International Workshop on Edge Systems, Analytics and Networking (EdgeSys), April 2020, pp. 7–12
Suhaib A. Fahmy
Associate Professor of Computer Science

Suhaib is Principal Investigator of the Accelerated Connected Computing Lab (ACCL) at KAUST. His research explores hardware acceleration of complex algorithms and the integration of these accelerators within wider computing infrastructure.
