Tel-Aviv University
Advisors: Prof. Yehuda Afek and Prof. Anat Bremler-Barr
Graduation 2003
The objective of this study is to propose an efficient solution for Low-Rate Attacks (LRA), such as scraping attacks that aim to download all the Uniform Resource Identifiers (URIs) of a website. Attackers attempt to evade detection by behaving like regular users, browsing only a small set of distinct pages (URIs) at small time scales. However, at larger time scales, the attacker stands out as a distinct heavy hitter that requests numerous distinct URIs. Although there are several space-efficient and time-efficient methods to detect distinct heavy hitters, they still require excessive memory to track all users over a large time scale. In this research, we propose a novel streaming algorithm that detects such attackers.
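As a rough illustration of the distinct-heavy-hitter notion (not the streaming algorithm proposed in this work), the sketch below estimates each user's number of distinct URIs over a long window with a K-Minimum-Values sketch and flags users that cross a hypothetical threshold. Note that this baseline still keeps one sketch per user, which is exactly the memory cost the proposed algorithm aims to avoid.

```python
import hashlib

def uri_hash(x: str) -> float:
    """Hash a string to a (near-)uniform value in [0, 1)."""
    digest = hashlib.sha1(x.encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

class KMV:
    """K-Minimum-Values sketch: approximates the number of distinct items."""
    def __init__(self, k: int = 64):
        self.k = k
        self.mins = []                    # the k smallest hash values seen so far

    def add(self, item: str) -> None:
        v = uri_hash(item)
        if v in self.mins:                # duplicate item, nothing new to record
            return
        self.mins.append(v)
        self.mins.sort()
        del self.mins[self.k:]            # keep only the k smallest values

    def estimate(self) -> float:
        if len(self.mins) < self.k:
            return float(len(self.mins))  # exact while the sketch is not full
        return (self.k - 1) / self.mins[-1]

# Hypothetical detection loop: one sketch per user over the long time window.
DISTINCT_URI_THRESHOLD = 500              # illustrative threshold, not from the paper
sketches, suspects = {}, set()

def process_request(user: str, uri: str) -> None:
    sk = sketches.setdefault(user, KMV())
    sk.add(uri)
    if sk.estimate() > DISTINCT_URI_THRESHOLD:
        suspects.add(user)
```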
With the continuous increase in reported Common Vulnerabilities and Exposures (CVEs), security teams are overwhelmed by vast amounts of data, which are often analyzed manually, leading to a slow and inefficient process. To address cybersecurity threats effectively, it is essential to establish connections across multiple security entity databases, including CVEs, Common Weakness Enumeration (CWEs), and Common Attack Pattern Enumeration and Classification (CAPECs). In this study, we introduce a new approach that leverages the RotatE [4] knowledge graph embedding model, initialized with embeddings from the Ada language model developed by OpenAI [3]. Additionally, we extend this approach by initializing the relation embeddings as well.
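A minimal sketch of the initialization idea, assuming precomputed text embeddings for each entity (e.g., Ada vectors) and a plain PyTorch RotatE scorer; the projection layer, dimensions, and margin below are illustrative assumptions rather than the configuration used in this study. The relation extension would analogously seed the rotation phases from text embeddings of relation names.

```python
import torch
import torch.nn as nn

class RotatE(nn.Module):
    """Minimal RotatE scorer whose entity embeddings are seeded from text embeddings."""

    def __init__(self, text_emb: torch.Tensor, n_relations: int,
                 dim: int = 256, gamma: float = 12.0):
        super().__init__()
        self.gamma, self.dim = gamma, dim
        # Project the per-entity text embeddings (e.g., 1536-d Ada vectors) into the
        # real and imaginary parts of the complex entity embeddings, instead of
        # initializing them at random.
        proj = nn.Linear(text_emb.shape[1], 2 * dim, bias=False)
        with torch.no_grad():
            self.entity = nn.Parameter(proj(text_emb))        # [n_entities, 2*dim]
        # Relations are rotation phases in (-pi, pi]; random here, though the
        # extension described above would seed these from text as well.
        self.rel_phase = nn.Parameter((torch.rand(n_relations, dim) * 2 - 1) * torch.pi)

    def score(self, h_idx, r_idx, t_idx):
        h_re, h_im = self.entity[h_idx].split(self.dim, dim=-1)
        t_re, t_im = self.entity[t_idx].split(self.dim, dim=-1)
        phase = self.rel_phase[r_idx]
        r_re, r_im = torch.cos(phase), torch.sin(phase)
        # Rotate the head in the complex plane, then measure distance to the tail.
        rot_re = h_re * r_re - h_im * r_im
        rot_im = h_re * r_im + h_im * r_re
        dist = torch.sqrt((rot_re - t_re) ** 2 + (rot_im - t_im) ** 2 + 1e-9).sum(-1)
        return self.gamma - dist
```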
With the advent of cloud and container technologies, enterprises develop applications using a microservices architecture, managed by orchestration systems (e.g., Kubernetes) that group the microservices into clusters. As application deployments increasingly span multiple clusters and different clouds, technologies that enable communication and service discovery between clusters are emerging (mainly as part of the Cloud Native ecosystem).
In such a multi-cluster setting, copies of the same microservice may be deployed in different geo-locations, each with different cost and latency penalties. Yet, current service selection and load balancing mechanisms do not take into account these locations and corresponding penalties.
We present \emph{MCOSS}, a novel solution for optimizing the service selection, given a certain microservice deployment among clouds and clusters in the system. Our solution is agnostic to the different multi-cluster networking layers, cloud vendors, and discovery mechanisms used by the operators. Our simulations show a reduction of up to 72% in outbound traffic cost and up to 64% in response time, compared to the currently deployed service selection mechanisms.
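To give a flavor of the optimization target (a toy policy, not the MCOSS algorithm itself), the following sketch picks a replica by a weighted combination of measured latency and egress cost; all replica names, prices, and weights are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Replica:
    cluster: str
    latency_ms: float        # measured RTT from the calling cluster
    egress_cost: float       # $ per GB for traffic leaving the local cloud/region

def pick_replica(replicas, alpha=0.5):
    """Choose a replica by a weighted sum of latency and egress cost.

    alpha trades off response time against outbound-traffic cost; a
    location-agnostic load balancer effectively ignores both signals.
    The factor 1000 is only a crude normalization for this sketch.
    """
    def cost(r):
        return alpha * r.latency_ms + (1 - alpha) * (r.egress_cost * 1000)
    return min(replicas, key=cost)

replicas = [
    Replica("local-cluster",      latency_ms=2,  egress_cost=0.00),
    Replica("same-region-cloud",  latency_ms=15, egress_cost=0.01),
    Replica("cross-region-cloud", latency_ms=90, egress_cost=0.09),
]
print(pick_replica(replicas).cluster)   # -> "local-cluster" under these numbers
```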
To fully understand the root cause of the NRDelegationAttack and to analyze its amplification factor, we developed a mini-lab setup, disconnected from the Internet, that contains all the components of the DNS system: a client, a resolver, and authoritative name servers. This setup is built to analyze and examine the behavior of a resolver (or any other component) under the microscope. On the other hand, it is not useful for performance analysis (stress analysis).
Here we provide the code and details of this setup, enabling researchers to reproduce our analysis. Moreover, researchers may find it useful for further behavioral analysis and examination of different components in the DNS system.
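For instance, once the lab resolver is running locally, its behavior can be probed with a few lines of dnspython; the address, port, and query name below are placeholders rather than the setup's actual configuration.

```python
import dns.resolver  # pip install dnspython

# Point a stub resolver at the lab resolver (placeholder address and port).
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["127.0.0.1"]
resolver.port = 5300

# Issue a query against the lab's authoritative zone; the resolver-side behavior
# (cache state, retries, delegations followed) is then examined in the resolver's
# logs and traffic captures.
answer = resolver.resolve("attacker.example.", "A")
for rr in answer:
    print(rr.to_text())
```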
Today’s software development landscape has witnessed a shift towards microservices architectures. Using this approach, large software systems are composed of multiple separate microservices, each responsible for specific tasks. The breakdown into microservices is also reflected in the infrastructure, where individual microservices can be executed with different hardware configurations and scaling properties. As systems grow larger, incoming traffic can trigger multiple calls between different microservices to handle each request.
Auto-scaling is a technique widely used to adapt systems to fluctuating traffic loads by automatically increasing (scale-up) and decreasing (scale-down) the number of resources used.
Our work shows that when microservices with separate auto-scaling mechanisms work in tandem to process ingress traffic, they can overload each other. This overload results in throttling (DoS) or the over-provisioning of resources (EDoS).
In the lecture, we will demonstrate how an attacker can exploit the tandem behavior of microservices with different auto-scaling mechanisms to create an attack we denote as the Tandem Attack. We demonstrate the attack on a typical and recommended serverless architecture, using AWS Lambda for code execution and DynamoDB as the database. Part of the results will be presented as an IEEE INFOCOM’23 poster.
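The tandem effect itself can be conveyed with a toy simulation of two stages that scale independently: a fast-scaling front stage (e.g., a function runtime) forwards a burst to a slowly scaling back stage (e.g., a provisioned database), which either throttles requests (DoS) or accumulates paid-for over-capacity (EDoS). All constants below are illustrative and unrelated to the actual AWS configuration.

```python
# Toy simulation: two auto-scaled stages in tandem (all numbers illustrative).
FRONT_SCALE_STEP = 50      # front stage adds capacity quickly (per tick)
BACK_SCALE_STEP = 5        # back stage adds capacity slowly (per tick)
BACK_UNIT_COST = 1.0       # cost per unit of back-end capacity per tick

front_cap, back_cap = 10, 10
throttled, cost = 0, 0.0

for tick in range(60):
    demand = 10 if tick < 10 else 200             # burst of requests at tick 10
    # Each stage scales toward its own observed load, independently of the other.
    front_cap = max(front_cap, min(demand, front_cap + FRONT_SCALE_STEP))
    offered_to_back = min(demand, front_cap)      # front stage forwards what it can
    back_cap = max(back_cap, min(offered_to_back, back_cap + BACK_SCALE_STEP))
    throttled += max(0, offered_to_back - back_cap)   # DoS: requests throttled
    cost += back_cap * BACK_UNIT_COST                 # EDoS: capacity paid for

print(f"throttled requests: {throttled}, back-end capacity cost: {cost:.0f}")
```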