Software packet-processing platforms--network devices running on general-purpose servers--are emerging as a compelling alternative to traditional high-end switches and routers built on specialized hardware. Their promise is to enable the fast deployment of new, sophisticated kinds of packet processing without the need to buy and deploy expensive new equipment. This would allow us to transform the current Internet into a programmable network, one that can evolve over time and provide better service to its users.

To become a credible alternative to hardware platforms, software packet processing needs to offer not just flexibility, but also a competitive level of performance and, equally important, predictability. Recent work has demonstrated high performance for software platforms, but only for simple, conventional workloads such as packet forwarding and IP routing, and only for systems in which all processing cores handle the same type and amount of traffic and run identical code--a critical simplifying assumption. One challenge is to achieve high and predictable performance in the context of software platforms running a diverse set of applications and serving multiple clients with different needs. Another challenge is to offer such flexibility without the risk of disrupting the network by introducing bugs, unpredictable performance, or security vulnerabilities.

In this thesis, we focus on how to design software packet-processing systems that achieve both high performance and predictability while remaining easy to program. First, we identify the main factors that affect packet-processing performance on a modern multicore server--cache misses, cache contention, and load balancing across processing cores--and show how to parallelize the functionality across the available cores in order to maximize throughput.
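The parallelization concern above can be illustrated with a minimal sketch. This is not code from the thesis: it only shows the common idea of hashing a packet's flow identifier to a core (RSS-style), so that all packets of one flow stay on one core, which preserves cache locality and avoids shared mutable state between cores. All names and constants here are hypothetical.

```python
# Sketch (illustrative, not from the thesis): hash-based load balancing
# of flows across processing cores. Keeping every packet of a flow on
# the same core avoids cross-core shared state and cache-line bouncing.

import zlib

NUM_CORES = 4  # hypothetical number of processing cores

def core_for_packet(src_ip, dst_ip, src_port, dst_port, proto):
    """Map a packet's 5-tuple to a core index, RSS-style."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return zlib.crc32(key) % NUM_CORES

# All packets of the same flow deterministically land on the same core:
a = core_for_packet("10.0.0.1", "10.0.0.2", 1234, 80, "tcp")
b = core_for_packet("10.0.0.1", "10.0.0.2", 1234, 80, "tcp")
assert a == b
```

Real systems do this in the NIC hardware (receive-side scaling) rather than in software; the sketch only conveys the load-balancing principle.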
Second, we analyze how contention for shared resources--caches, memory controllers, buses--affects performance in a system that runs a diverse set of packet-processing applications. The key observation is that contention for the last-level cache is the dominant contention factor, and that the performance drop suffered by a given application is mostly determined by the number of cache references per second performed by the competing applications. We leverage this observation to show that such a system can provide predictable performance in the face of resource contention.

Third, we present the results of working iteratively on two tasks: designing a domain-specific verification tool for packet-processing software, and identifying a minimal set of restrictions that this software must satisfy in order to be verification-friendly. The main insight is that packet-processing software is a good fit for verification because it typically consists of distinct pieces of code that share limited mutable state; we leverage this domain specificity to sidestep fundamental scalability challenges in software verification. We demonstrate that it is feasible to automatically prove useful properties of software dataplanes, helping to ensure smooth network operation.

Based on these results, we conclude that we can design software network equipment that combines flexibility and predictability.
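The contention observation can be sketched as a simple predictor. This is not the thesis's model; it is a hypothetical linear example showing how, if an application's slowdown depends mostly on the competitors' aggregate last-level-cache references per second, a per-application sensitivity calibrated offline suffices to predict the drop. All numbers below are made up for illustration.

```python
# Sketch (hypothetical numbers): predict an application's throughput
# drop from the competing applications' aggregate last-level-cache
# references/second, using a per-application sensitivity measured
# offline. A linear model capped at 100% loss.

def predict_drop(sensitivity, competing_refs_per_sec):
    """Estimate fractional throughput drop under cache contention."""
    total = sum(competing_refs_per_sec)
    return min(1.0, sensitivity * total)

# Hypothetical calibration: this app loses 2% throughput for every
# 10 million competing cache references/second (sensitivity = 2e-9).
drop = predict_drop(sensitivity=2e-9, competing_refs_per_sec=[50e6, 30e6])
# drop == 0.16, i.e. a predicted 16% throughput loss
```

The practical appeal of such a model is that the competitors' cache-reference rates are observable with standard hardware performance counters, so the prediction needs no knowledge of the competitors' code.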
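The structural property that makes dataplanes verification-friendly--distinct pieces of code sharing limited mutable state--can be illustrated with a toy pipeline. This is not the thesis's tool or code: the element names and packet representation are invented, and real dataplane elements operate on raw packet buffers, not dictionaries.

```python
# Sketch (illustrative): a dataplane as a pipeline of small elements
# whose only shared state is the packet passed between them. Because
# no element mutates another's state, each can be analyzed in
# isolation, which keeps verification tractable.

def decrement_ttl(packet):
    """Element 1: decrement TTL, dropping packets whose TTL expires."""
    if packet["ttl"] <= 1:
        return None  # drop
    return {**packet, "ttl": packet["ttl"] - 1}

def lookup_route(packet, routes):
    """Element 2: stateless next-hop lookup."""
    if packet is None:
        return None
    return {**packet, "next_hop": routes.get(packet["dst"], "default")}

def process(packet, routes):
    """Compose the elements; the packet is the only state they share."""
    return lookup_route(decrement_ttl(packet), routes)

routes = {"10.0.0.0/8": "eth1"}
out = process({"dst": "10.0.0.0/8", "ttl": 64}, routes)
# out["ttl"] == 63 and out["next_hop"] == "eth1"
```

A property such as "no packet is ever forwarded with TTL 0" can then be checked per element rather than over the whole program, which is the scalability benefit the abstract alludes to.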