Projects

The OpenBox Controller - Northbound API

By Dan Shmidt (Supervisor: Prof. Anat Bremler-Barr)

Most modern networks contain a massive number of appliances, each of which typically executes a single network function (NF), e.g., a firewall. Each such appliance is bought, configured, and administered separately. Most NFs perform some form of Deep Packet Inspection (DPI). OpenBox provides a framework for network-wide deployment and management of NFs that decouples the NFs' control plane from their data plane. OpenBox consists of three logical components: user-defined OpenBox Applications that provide NF specifications; a logically centralized OpenBox Controller (OBC) that serves as the control plane; and OpenBox Instances (OBIs) that constitute OpenBox's data plane.

This work presents a design and implementation [2] of the user-facing interface of the OpenBox Controller, which allows network administrators to efficiently create and manage their NFs. The implementation supplies users with a framework with which they can build and experiment with NFs, as well as a functioning OpenBox Controller that loads NFs and manages the OpenBox control plane. The design is extensible and allows future OpenBox developers to quickly add functionality and retrieve more data from the control plane.
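To illustrate the kind of user-facing interface described above, the following Python sketch shows how an NF might be declared as a chain of processing blocks for a controller to deploy. The class, block types, and parameters are illustrative assumptions, not the project's actual northbound API.

```python
# Hypothetical sketch of how an OpenBox Application might declare an NF
# as a chain of processing blocks for the controller to deploy. The
# class, block types, and parameters are illustrative assumptions, not
# the project's actual northbound API.

class NetworkFunction:
    def __init__(self, name):
        self.name = name
        self.blocks = []           # ordered packet-processing blocks

    def add_block(self, block_type, **params):
        self.blocks.append({"type": block_type, "params": params})
        return self                # allow chaining

firewall = (NetworkFunction("simple-firewall")
            .add_block("FromDevice", device="eth0")
            .add_block("HeaderClassifier", match="tcp dst port 80")
            .add_block("Drop"))
```

A controller could translate such a declarative block chain into instructions for the OBIs under its management.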

Project manuscript: click here

Project code: click here

Implementing a prototype for the Deep Packet Inspection as a Service Framework

By Lior Barak (Supervisor: Prof. Anat Bremler-Barr)

Today, most network traffic needs to traverse several middleboxes before it can reach its destination. A common operation among many of these middleboxes is Deep Packet Inspection (DPI), which allows different actions to be performed based on patterns in packet content.
DPI consumes a large share of a middlebox's resources. In addition, each packet usually traverses several middleboxes, so the same packet is scanned by different DPI engines over and over again. As a result, the network becomes less efficient, which directly affects its total bandwidth.
One solution to these issues is a system that provides DPI as a service: middleboxes in the network that need DPI can register with the service and expose their desired patterns. The system directs packets to designated DPI engine instances across the network and passes the pattern matches, if any exist, to the relevant middleboxes.
Such a system has many advantages, among them a single scan of every packet, the ability to upgrade to the latest DPI algorithms, better partitioning of packets between DPI engines, and increased middlebox development innovation. Developing such a system is simpler today than ever with the emergence of SDN, which allows dynamic routing of network traffic using a centralized controller.
The goal of this work is to implement a prototype of the DPI-as-a-service system and to provide as realistic an environment as possible in which to evaluate it. This paper documents the design and implementation of the system, as well as the other tools needed to deploy a functioning network that uses it.
Finally, the paper describes the experiments performed to demonstrate the system's correctness and effectiveness and discusses their results.
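The register-and-scan workflow described above can be sketched in Python. This toy service is purely illustrative: all names are hypothetical, and a real deployment would use a single combined pattern-matching automaton rather than repeated substring searches.

```python
# Toy sketch of the DPI-as-a-service registry: middleboxes register
# their pattern sets, every packet is scanned once, and each middlebox
# receives only its own matches. All names are hypothetical, and a real
# engine would use a single combined automaton, not substring search.

class DPIService:
    def __init__(self):
        self.patterns = {}         # middlebox name -> set of patterns

    def register(self, middlebox, patterns):
        # A middlebox exposes its desired patterns to the service.
        self.patterns[middlebox] = set(patterns)

    def scan(self, payload):
        # One scan of the packet; results dispatched per middlebox.
        report = {}
        for mb, pats in self.patterns.items():
            hits = [p for p in pats if p in payload]
            if hits:
                report[mb] = hits
        return report

svc = DPIService()
svc.register("firewall", [b"evil.com", b"cmd.exe"])
svc.register("ids", [b"SELECT *", b"cmd.exe"])
report = svc.scan(b"GET /cmd.exe HTTP/1.1")
```

Note that the shared pattern `b"cmd.exe"` is matched once against the packet yet reported to both registered middleboxes, which is the source of the "single scan of every packet" advantage.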

Project manuscript: click here
Project slides: click here
Project code: click here

Implementing Scalable URL Matching with Small Memory Footprint

By Daniel Krauthgamer (Supervisor: Prof. Anat Bremler-Barr)

URL matching lies at the core of many networking applications and Information Centric Networking architectures. For example, URL matching is extensively used by Layer 7 switches, ICN/NDN routers, load balancers, and security devices (1), (2), (3), (4). Modern URL matching is done by maintaining a rich database that often consists of millions of URLs and consumes a large amount of memory.
Reducing the URL matching algorithm's memory footprint enables these systems to handle larger sets of URLs. The paper (5) introduces a generic framework for accurate URL matching that aims to reduce the overall memory footprint while still achieving low matching latency.
The framework's input is a set of URLs, and its output is a DFA-like data structure that encodes any URL into a compressed form. The encoded form of the URL can then be used as a key into a database such as a hash table. The DFA-like data structure thus acts as a dictionary-based compression method that compresses the database by 60% while adding only a slight overhead in execution time.
The framework is very flexible: it allows hot updates and cloud-based deployments, and it can also handle strings that are not URLs.
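The dictionary-encoding idea can be illustrated with a toy Python sketch that maps URL path components to short integer codes; the actual framework builds a DFA-like structure, so this component-table version is only an assumption-laden simplification.

```python
# Toy sketch of the dictionary-compression idea using URL path
# components; the actual framework builds a DFA-like structure, so this
# component-table version is only an illustrative simplification.

def build_dict(urls):
    # Assign a short integer code to every distinct path component.
    comps = sorted({c for u in urls for c in u.split("/") if c})
    return {c: i for i, c in enumerate(comps)}

def encode(url, d):
    # The compact tuple of codes can serve as a hash-table key.
    return tuple(d[c] for c in url.split("/") if c)

urls = ["example.com/news/sports", "example.com/news/tech"]
d = build_dict(urls)
key = encode("example.com/news/tech", d)
```

Because repeated components such as `example.com` and `news` are stored once in the dictionary, the per-URL keys stay small, which is the intuition behind the memory savings reported above.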

Project manuscript: click here
Project slides: click here
Project code: click here

Design and Implementation of a Data Plane for the OpenBox Framework

By Pavel Lazar (Supervisor: Prof. Anat Bremler-Barr)

The OpenBox Framework effectively decouples the control plane of NFs from their data plane. Similarly to SDN solutions, which address only the network's forwarding plane (e.g., switching, routing), OpenBox provides a framework for network-wide deployment and management of NFs. The OpenBox framework is composed of three logical components: OpenBox Applications, the OpenBox Controller, and OpenBox Instances, which serve as the data plane.
This project presents a design for a general OpenBox Instance (OBI) that can be used as the data plane of the OpenBox Framework. The suggested architecture is modular in nature and allows easy replacement of its packet processing engine. This leaves much room for improvement and innovation in the way packets are processed within an OBI and between OBIs.
We also present a reference implementation of the suggested architecture, which demonstrates its usability as an OpenBox Instance and integrates it into a working OpenBox Framework. Our reference implementation uses Click as its packet processing engine, and we explain how that engine can easily be replaced.
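The "replaceable engine" design can be sketched as a single small interface through which the OBI talks to any packet processing engine, so that Click could be swapped for another engine. The interface and names below are hypothetical, not the project's actual code.

```python
from abc import ABC, abstractmethod

# Illustrative sketch of the "replaceable engine" design: the OBI talks
# to any packet processing engine through one small interface, so Click
# could be swapped for another engine. The interface and names are
# hypothetical, not the project's actual code.

class PacketProcessingEngine(ABC):
    @abstractmethod
    def install(self, processing_graph):
        ...                        # load a processing graph into the engine

    @abstractmethod
    def process(self, packet):
        ...                        # run one packet through the graph

class PassThroughEngine(PacketProcessingEngine):
    # Trivial stand-in engine: installs a graph but forwards unchanged.
    def install(self, processing_graph):
        self.graph = processing_graph

    def process(self, packet):
        return packet

engine = PassThroughEngine()
engine.install([])
result = engine.process(b"pkt")
```

Swapping engines then amounts to providing another implementation of the same two methods, leaving the rest of the OBI untouched.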

Project manuscript: click here
Project slides: click here
Project code: click here

Efficient Automated Signatures Extraction Implementation

By Golan Parashi (Supervisor: Prof. Anat Bremler-Barr)

This work describes a code implementation of a tool for zero-day attack signature extraction based on the work "Automated signature extraction for high volume attacks" [1]. The new implementation is faster and more correct than the code used to initially verify the work in [1]: it increases throughput and produces more accurate signatures.
Given two large sets of messages, P, captured in the network at peacetime (i.e., mostly legitimate traffic), and A, captured during attack time (i.e., containing many attack messages), the tool extracts a set S of strings that are frequently found in A but not in P. A message containing one of the strings in S is therefore likely to be an attack message. The tool finds popular strings of variable length in a set of messages using a modified implementation [4] of the Heavy Hitters (frequent items) algorithm [3], which serves as a building block for extracting the desired signatures.
Using the attack signatures found by the tool in conjunction with a network traffic-filtering device, a yet-unknown attack can be automatically detected and stopped within minutes of the attack's start.
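The core extraction idea can be shown with a naive, exhaustive Python sketch: count fixed-length substrings and keep those frequent in the attack set A but absent from the peacetime set P. The real tool streams variable-length strings through a Heavy Hitters algorithm instead, and the parameter names here are hypothetical.

```python
from collections import Counter

# Naive, exhaustive sketch of the signature-extraction idea: count
# fixed-length substrings and keep those frequent in the attack set A
# but absent from the peacetime set P. The real tool streams variable-
# length strings through a Heavy Hitters algorithm instead; parameter
# names here are hypothetical.

def substrings(msg, k):
    return (msg[i:i + k] for i in range(len(msg) - k + 1))

def extract_signatures(peace, attack, k=4, min_attack=2, max_peace=0):
    peace_cnt = Counter(s for m in peace for s in substrings(m, k))
    attack_cnt = Counter(s for m in attack for s in substrings(m, k))
    return {s for s, c in attack_cnt.items()
            if c >= min_attack and peace_cnt[s] <= max_peace}

sigs = extract_signatures(["hello world"], ["XattackX", "YattackY"], k=6)
```

Exhaustive counting like this is exactly what becomes infeasible at high traffic volumes, which is why the tool replaces it with a streaming Heavy Hitters building block.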
The development focused on creating a fast implementation in order to achieve high throughput, which is essential when operating in large, high-traffic networks. The development methodology included repeated inspection of code sections using CPU/memory profilers and static code analysis tools, which helped uncover issues in the code; the CPU profiler, in particular, helped locate code sections with high latency. Each issue found was then resolved, and performance evaluation was a major part of the development lifecycle.
The tool is offered as a command-line utility, and a website was created to make it accessible for testing.

Project manuscript: click here
Project slides: click here
Project website: click here

Mitigating Layer 2 Attacks: Re-Thinking the Division of Labor

By Nir Solomon (Supervisor: Prof. Anat Bremler-Barr)

We provide an overview of Layer 2 attacks in OpenFlow, ARP Poisoning and a new DDoS attack on the Controller, both implemented by us. We then describe our approach to mitigating these attacks, called Switch Reactive ARP-query. The key idea is to shift responsibilities back from the control plane to the data plane in order to reduce the load on the Controller.
ARP Poisoning is an attack in which an attacker alters a victim's ARP cache in order to leverage it into a Man-in-the-Middle (MitM) attack or a Denial-of-Service (DoS) attack. A Distributed Denial of Service (DDoS) is a form of attack in which the victim's resources are depleted by multiple adversaries.
Both of these attacks are relevant in an OpenFlow-managed SDN network, where the centralized Controller's network-wide view can be turned against it.
In this paper, we successfully mitigate ARP Poisoning attacks and dramatically decrease and bound the number of packet-in messages, the main cause of the DDoS on the Controller.
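The data-plane side of such a defense can be sketched as a switch-local binding check: the switch keeps its own IP-to-MAC table and drops contradicting ARP replies instead of sending every ARP packet to the Controller as a packet-in. The class and policy below are illustrative assumptions, not the paper's exact mechanism.

```python
# Hypothetical sketch of the switch-side binding check behind a
# "Switch Reactive ARP-query" style defense: the switch keeps its own
# IP-to-MAC table and drops ARP replies that contradict it, instead of
# sending every ARP packet to the Controller as a packet-in. The class
# and policy here are illustrative, not the paper's exact mechanism.

class ArpGuard:
    def __init__(self):
        self.bindings = {}         # ip -> MAC address learned so far

    def handle_arp_reply(self, ip, mac):
        known = self.bindings.get(ip)
        if known is None:
            self.bindings[ip] = mac    # first binding: learn it
            return "forward"
        if known != mac:
            return "drop"              # conflicting reply: possible poisoning
        return "forward"

g = ArpGuard()
first = g.handle_arp_reply("10.0.0.1", "aa:aa")
second = g.handle_arp_reply("10.0.0.1", "bb:bb")
```

Handling the check in the switch keeps poisoned or flooded ARP traffic from ever generating packet-in load on the Controller.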

Project manuscript: click here
Project slides: click here
Project code: click here

Aho-Corasick for Compressed HTTP in Snort

By Adir Gabai (Supervisor: Prof. Anat Bremler-Barr, Dr. Yaron Koral)

In this project we implemented the Aho-Corasick for Compressed HTTP (ACCH) algorithm as a pattern matching engine for Snort.
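For context, classical Aho-Corasick multi-pattern matching, which ACCH adapts to scan compressed HTTP traffic, can be sketched as follows; this is the standard textbook construction, not the project's code.

```python
from collections import deque

# Classical Aho-Corasick multi-pattern matching; ACCH adapts this
# algorithm to scan compressed HTTP, but the plain version shown here
# is the standard textbook construction, not the project's code.

def build_ac(patterns):
    goto, fail, out = [{}], [0], [set()]   # state 0 is the root
    for pat in patterns:                   # build the pattern trie
        s = 0
        for ch in pat:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(pat)
    q = deque(goto[0].values())            # depth-1 states fail to root
    while q:                               # BFS to set failure links
        s = q.popleft()
        for ch, t in goto[s].items():
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]         # inherit matches via failure
            q.append(t)
    return goto, fail, out

def search(text, goto, fail, out):
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        for pat in out[s]:
            hits.append((i - len(pat) + 1, pat))
    return hits

goto, fail, out = build_ac(["he", "she", "his", "hers"])
hits = search("ushers", goto, fail, out)
```

ACCH's contribution is making this kind of scan work efficiently on gzip-compressed HTTP, where the input is not plain text.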

Project manuscript: click here
Project slides: click here
Project code: click here

MCA^2 - Multi-Core Architecture for Mitigating Complexity Attacks

By Yehuda Afek, Anat Bremler-Barr, Yotam Harchol, David Hay, Yaron Koral

Abstract:
This work presents a system and a multi-core architecture to defend against complexity attacks, and applies the system to mitigate complexity attacks on DPI engines. We show how a simple, low-bandwidth cache-miss attack takes down the Aho-Corasick (AC) pattern matching algorithm that lies at the heart of most DPI engines. As a first step towards mitigating the attack, we have developed a variant of the AC algorithm that improves the worst-case performance (under an attack). Still, its running time under normal traffic is worse than that of classical AC implementations. To overcome this problem, we take advantage of a multi-core architecture. We introduce MCA^2 (Multi-Core Architecture for Mitigating Complexity Attacks), which dynamically combines the classical AC algorithm with our compressed implementation to provide a robust solution that mitigates this cache-miss attack. We demonstrate the effectiveness of our architecture by examining cache-miss complexity attacks against DPI engines and show a goodput boost of up to 73%. Finally, we show that our architecture may be generalized to provide a principled solution to a wide variety of complexity attacks.
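The dynamic combination of the two AC variants can be sketched as a simple dispatcher: traffic whose estimated cache-miss rate looks like an attack is routed to cores running the compressed, attack-resistant variant, while normal traffic stays on the fast classical cores. The names and threshold below are illustrative assumptions, not the paper's actual mechanism.

```python
# Hypothetical sketch of the MCA^2 dispatching idea: traffic whose
# estimated cache-miss rate looks like an attack is routed to cores
# running the compressed, attack-resistant AC variant, while normal
# traffic stays on the fast classical AC cores. Names and the threshold
# are illustrative assumptions, not the paper's actual mechanism.

def dispatch(flow_stats, threshold=0.5):
    # flow_stats maps a flow id to its estimated cache-miss rate [0, 1]
    assignment = {}
    for flow, miss_rate in flow_stats.items():
        engine = "compressed_ac" if miss_rate > threshold else "classical_ac"
        assignment[flow] = engine
    return assignment

stats = {"flow1": 0.05, "flow2": 0.92}
assignment = dispatch(stats)
```

The point of such a split is that attack traffic can no longer force cache misses on the fast path, while legitimate traffic keeps the classical algorithm's throughput.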

Project web page: click here

TCAMimic: A TCAM Software Simulation

By Adam Mor

Abstract:
TCAM is a fast ternary associative cache memory that is common in today's routers, mainly used to perform high-rate packet classification. Recently we have witnessed many works that use this powerful memory type to solve other hard problems that require high-speed solutions, for example pattern matching, regular expression matching, and heavy-hitters analysis. While TCAM is a very powerful off-the-shelf memory type, using it currently still requires hardware expertise, so research works are evaluated and analyzed using a synthetic model only.
We show that understanding the detailed design of TCAM is important in order to understand the limitations and power of TCAM, as many works ignore the fact that while TCAM has a high throughput, it also has, by design, a high latency. Thus, many of the previously-proposed works which assumed closed-loop lookups (namely, where the input of a TCAM lookup depends on the result of a previous TCAM lookup) cannot be efficiently implemented as is and require a modification to their algorithm.
We present TCAMimic, a TCAM simulator that addresses the need of an easy-to-use software simulator of TCAM hardware. Using the TCAMimic simulator, we run intra-flow interleaving, where queries from different parts of the same flow are interleaved, and show that this significantly reduces the latency with only a marginal reduction in the throughput.
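The ternary-matching semantics a TCAM provides can be illustrated with a toy software lookup: each entry stores a value and a mask, a key matches when `key & mask == value & mask`, and the lowest-index match wins, modeling entry priority. This illustrates ternary matching only and is not TCAMimic's actual interface.

```python
# Toy software TCAM lookup: each entry stores (value, mask), a key
# matches when key & mask == value & mask, and the lowest-index match
# wins, modeling entry priority. This illustrates ternary matching only
# and is not TCAMimic's actual interface.

class SoftTCAM:
    def __init__(self):
        self.entries = []          # index 0 has the highest priority

    def add(self, value, mask):
        self.entries.append((value & mask, mask))

    def lookup(self, key):
        for i, (value, mask) in enumerate(self.entries):
            if key & mask == value:
                return i           # first match = highest priority
        return None

t = SoftTCAM()
t.add(0b10100000, 0b11110000)      # keys whose high nibble is 1010
t.add(0b00000000, 0b00000000)      # wildcard entry: matches any key
specific = t.lookup(0b10101111)    # matches entry 0
fallback = t.lookup(0b01110000)    # only the wildcard matches
```

A hardware TCAM evaluates all entries in parallel in one cycle; a closed-loop algorithm that feeds each lookup's result into the next cannot hide the per-lookup latency, which is the pitfall discussed above.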

Project web page: click here