Flow-based Networking and Quality of Service
During the past two decades the Internet has been widely deployed and integrated into society, radically altering the way people communicate and exchange information. Through cheap, fast and reliable information transfer, the Internet interconnects billions of users covering practically the entire (developed) world. Although the Internet was intended as a research network between a few institutions in the United States, it has grown to take a central role in day-to-day communications and is by now considered indispensable. Over the last decade we have seen a migration of the classic telecommunication services, telephony, radio and television, to the Internet and the emergence of new services, such as online gaming and e-government. Hence, the Internet is replacing the existing, dedicated telecommunication networks and embodies a single medium for all (wired) communications. However, since these dedicated networks are tailored to one type of service, the migration to the Internet, whose design goals are primarily scalability and reliability, creates a field of tension. The low latencies and losses for which the dedicated networks were optimized cannot be guaranteed by the Internet, whose philosophy relies on best-effort routing. Packets in the Internet can literally travel across the entire network, making it very hard to predict where they will travel and yielding tremendous uncertainties regarding packet delays and loss. Consequently, in order to provide guarantees regarding the quality of a service, new techniques need to be implemented, which are commonly denoted by "Quality of Service" (QoS). Various implementations have been proposed in the literature that aim at providing QoS in the Internet. One of these implementations relies on flow-based communication between end-hosts. In flow-based communication, the packets from a source to a destination that belong to the same stream are forced along the same path, offering better control over the packets and improved quality control.
In this thesis we study the performance of networks using flow-based communication and we show by means of measurements that the current Internet can yield highly unpredictable behavior in packet delivery. We discuss measurements performed on the Internet that serve as a motivation for the flow-level approach used throughout the thesis. Based on traceroute measurements we show that the best-effort environment of the Internet can lead to highly unpredictable packet behavior. The measurements indicate that the routes that packets in the Internet follow change frequently due to dynamics in the routing plane. The lifetime of a route, i.e. the time between the first and last consecutive occasion that the particular route is used, appears to follow a power-law distribution. This unpredictable behavior may hamper the performance of real-time applications and serves as a motivation for the flow-based networking used in this thesis.
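To make the lifetime definition concrete, the following minimal sketch computes route lifetimes from a time-ordered list of (timestamp, route) traceroute observations for one source-destination pair. The data format and function name are hypothetical and only illustrate the bookkeeping, not the measurement setup or analysis used in the thesis.

```python
def route_lifetimes(observations):
    """Compute route lifetimes from time-ordered (timestamp, route) pairs.

    A lifetime is the span between the first and last consecutive
    observation of the same route; a new lifetime starts whenever the
    observed route changes.
    """
    lifetimes = []
    current_route, start, last = None, None, None
    for t, route in observations:
        if route != current_route:
            if current_route is not None:
                lifetimes.append(last - start)
            current_route, start = route, t
        last = t
    if current_route is not None:
        lifetimes.append(last - start)
    return lifetimes

# Example: a path that changes once between snapshots.
obs = [
    (0,  ("A", "B", "C")),
    (10, ("A", "B", "C")),
    (20, ("A", "D", "C")),
    (35, ("A", "D", "C")),
]
print(route_lifetimes(obs))  # -> [10, 15]
```

A heavy-tailed (power-law) fit to such lifetimes would then be checked on the resulting sample, e.g. via a log-log complementary CDF.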
We introduce a new model that describes the network performance by using an analogy with queueing theory. The analysis of network performance is complicated by the many dependencies between the properties of a network. The objective is to express the network performance in terms of these properties and reveal the root of the problem. The model makes it possible to express the emergent performance characteristics, such as the blocking rate of traffic flows and the maximum throughput of the network, in terms of network parameters that are given by design, such as the number of nodes. The model considers the network as a black box and minimizes the degrees of freedom in order to reduce the dependencies and improve the comprehensibility of the model. Due to the small number of parameters, we are able to discern the influence of these parameters on the performance. In addition, we address the difficulties of studying network dynamics, which can be traced back to the dependencies between the links and the impact of traffic fluctuations on the overall performance.
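The thesis model itself is not reproduced here; as a reminder of the kind of queueing-theoretic quantity involved, the classical Erlang-B formula gives the blocking probability of flows offered to a resource with a fixed number of parallel circuits. The sketch below implements only that standard recursion, treating the network as a black box with a given offered load, and is not the model introduced in the thesis.

```python
def erlang_b(offered_load, servers):
    """Blocking probability for Poisson flow arrivals offered to
    `servers` parallel circuits (classical Erlang-B recursion).

    offered_load: arrival rate * mean flow duration (in Erlang)
    servers:      number of circuits (e.g. link capacity in flow units)
    """
    b = 1.0  # blocking with zero circuits
    for m in range(1, servers + 1):
        b = offered_load * b / (m + offered_load * b)
    return b

# Example: 80 Erlang of flows offered to 100 circuits.
print(round(erlang_b(80.0, 100), 4))
```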
In a more static scenario, where flows are assumed to have an infinite duration, we study the maximum throughput of a network. Here, the network performance is measured in terms of the average number of flows that can be allocated in a network before rejection occurs. Through the use of the Erdős-Rényi random graph model [17] we can accurately compute the ratio of available links as a function of the number of allocated flows in the full mesh, and we present an upper bound on the maximum number of flows that can be allocated in the full mesh.
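The following Monte-Carlo sketch illustrates the quantity being studied: the number of infinite-duration flows that a full mesh with unit-capacity links can accept before the first rejection. The allocation rule (direct link if free, otherwise a free two-hop detour) and all parameters are assumptions for illustration only and do not reproduce the analytical model of the thesis; the fraction of free links per step could be tracked in the same loop.

```python
import random
from itertools import combinations

def flows_until_rejection(n, trials=1000, seed=0):
    """Average number of unit-capacity flows allocated in a full mesh
    of n nodes before the first rejection (each link holds one flow)."""
    rng = random.Random(seed)
    counts = []
    for _ in range(trials):
        free = {frozenset(e) for e in combinations(range(n), 2)}
        allocated = 0
        while True:
            s, d = rng.sample(range(n), 2)
            direct = frozenset((s, d))
            if direct in free:
                free.remove(direct)
            else:
                # try a two-hop detour via an intermediate node
                k = next((k for k in range(n) if k not in (s, d)
                          and frozenset((s, k)) in free
                          and frozenset((k, d)) in free), None)
                if k is None:
                    break  # rejection: stop this trial
                free.discard(frozenset((s, k)))
                free.discard(frozenset((k, d)))
            allocated += 1
        counts.append(allocated)
    return sum(counts) / trials

print(flows_until_rejection(10))
```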
For real networks, the performance analysis of network dynamics is often too complex to model. The heterogeneity of these networks and the complexity of the network protocols, including Quality-of-Service mechanisms, may prohibit the use of mathematical models, such that simulation seems the only viable option. We introduce DeSiNe, a flow-level network simulator with a special focus on Quality of Service, which can study and compare the performance of various Quality-of-Service implementations at a system level. In this thesis we detail the architectural and functional design and illustrate the use of DeSiNe by means of several examples. In particular, DeSiNe supports constraint-based routing and dynamic Quality-of-Service routing. The strength of DeSiNe lies in its ability to simulate classes of networks in an automated fashion, which is particularly useful when studying the performance of, for example, a new routing protocol and comparing it across different classes of networks.
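DeSiNe's own interfaces are not reproduced here; the sketch below only illustrates the core idea behind constraint-based routing as performed per flow in a flow-level simulator: prune the links that cannot satisfy the flow's bandwidth requirement and run a shortest-path search on the remaining topology. All names, the graph representation and the toy topology are hypothetical.

```python
import heapq

def constrained_shortest_path(graph, src, dst, bandwidth):
    """Constraint-based routing sketch: skip links with insufficient
    free bandwidth, then run Dijkstra on what remains.

    graph: {node: [(neighbor, cost, free_bandwidth), ...]}
    Returns the path as a list of nodes, or None if no feasible path.
    """
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, cost, bw in graph.get(u, []):
            if bw < bandwidth:  # constraint: link cannot carry the flow
                continue
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy topology: each link is (neighbor, cost, free bandwidth in Mb/s).
g = {
    "A": [("B", 1, 100), ("C", 1, 10)],
    "B": [("D", 1, 100)],
    "C": [("D", 1, 10)],
    "D": [],
}
print(constrained_shortest_path(g, "A", "D", bandwidth=50))  # ['A', 'B', 'D']
```

In a dynamic QoS-routing setting, the free-bandwidth values would be updated as flows are admitted and released, so successive flows see the residual topology.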