How Finely Do We Need to Control Internet Traffic?
==================================================

The Internet has grown tremendously, both in the capacity of traffic it can carry and in the traffic it actually does carry. At present, the cost of this capacity, measured either in the rates that ISPs charge or in the cost of leasing dark fiber, is the lowest it has ever been. With this increase in capacity and drop in cost, one would expect a corresponding lack of interest in controlling small aggregates of traffic. Yet a desire for higher performance, increased reliability, and new services is driving a curious trend toward controlling finer and finer amounts of traffic on the Internet.

One example is the almost religious debate between MPLS and traditional IP routing. MPLS offers fine-grained control over traffic, with the ability to dictate the specific path through a network for traffic from an ingress interface on one end to an egress interface on the other end of the network. IP routing proponents often cite the common practice of over-provisioning networks under current market conditions, which seems to run counter to any need for fine-grained traffic control.

A second example is the common practice of multihoming, which has led to de-aggregation. As more and more stub networks purchase connectivity from more than one ISP, they find they have a choice of multiple paths for sending and receiving traffic. Many companies, such as NetVMG, Opnix, Proficient Networks, Routescience, and Sockeye, provide devices that control the paths of egress traffic down to individual IP addresses. It is commonly believed that stub networks purposely de-aggregate their network block announcements to split ingress traffic between inter-AS paths, which has raised concerns about routing table growth and protocol overhead.

A more direct example of this phenomenon is a current topic in networking research: overlay networks.
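Before turning to overlay networks, the de-aggregation practice mentioned above can be made concrete. At the prefix level it amounts to announcing several more-specific prefixes in place of one aggregate, one per upstream path. A minimal sketch using Python's standard ipaddress module (the function name and example prefix are purely illustrative):

```python
import ipaddress

def deaggregate(prefix: str, new_prefixlen: int):
    """Split one announced block into more-specific prefixes.

    Announcing each more-specific via a different provider splits
    ingress traffic between inter-AS paths -- at the cost of extra
    routing-table entries for every other network on the Internet.
    """
    net = ipaddress.ip_network(prefix)
    return [str(sub) for sub in net.subnets(new_prefix=new_prefixlen)]

# A /24 split into two /25s, one announced to each upstream ISP:
print(deaggregate("192.0.2.0/24", 25))
# → ['192.0.2.0/25', '192.0.2.128/25']
```

The routing-table cost is the point: every such split doubles the entries that the rest of the Internet must carry for that block.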
Many overlay networks rely on application-layer forwarding to provide individual traffic flows with better, customized paths than the underlying Internet can offer. However, despite all the feverish research and industrial activity in these areas of networking, their advantages in terms of performance, reliability, and enabling new applications are still debated.

Future thrusts into finer control of Internet traffic will undoubtedly be influenced by the structure of the Internet. Certainly one possible future is one where nothing is different: since the introduction of SS7, the PSTN has not changed for over 20 years. Alternatively, overlay networking may become common, and/or the current ISP model of the Internet may change. Two core research issues need to be pursued in this area.

The first is to determine how overlay networks should evolve toward the future Internet. Many overlay networks cannot scale to all the hosts on the current Internet. But what happens when all the hosts on the Internet are part of multiple disjoint overlay networks? Peering agreements between nodes at the overlay level may become commonplace. Routing decisions by different overlay networks may interfere with each other by changing traffic patterns in the underlying network, leading to an unstable system or to one that is not much better than the underlying Internet. Can overlay networks co-exist, or will measurements and routing decisions have to be coordinated to still promise improvements over the current Internet? Or should we abandon overlay networking altogether and instead use the techniques developed for it to improve routing in the underlying IP network? A global, distributed measurement infrastructure could be built to detect the capacities and utilization of various Internet paths, with a control network that dynamically reconfigures IP routes based on these measurements.
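The kind of decision such a measurement-driven overlay makes can be sketched as one-hop detour selection: given recent pairwise latency measurements, forward directly or relay through the overlay node that yields the lowest measured total. The node names, measurements, and function below are hypothetical, not any particular system's API:

```python
INF = float("inf")

def best_overlay_path(latency, src, dst, nodes):
    """Pick the direct path or a one-hop detour with lowest measured latency.

    latency[(a, b)] holds the most recent measurement between overlay
    nodes a and b; missing pairs are treated as unreachable.
    """
    best_path, best_cost = [src, dst], latency.get((src, dst), INF)
    for via in nodes:
        if via in (src, dst):
            continue
        cost = latency.get((src, via), INF) + latency.get((via, dst), INF)
        if cost < best_cost:
            best_path, best_cost = [src, via, dst], cost
    return best_path, best_cost

# Hypothetical measurements (ms): the direct A-B path is congested,
# so the overlay routes around it through node C.
latency = {("A", "B"): 180.0, ("A", "C"): 40.0, ("C", "B"): 50.0}
path, cost = best_overlay_path(latency, "A", "B", ["A", "B", "C"])
print(path, cost)  # → ['A', 'C', 'B'] 90.0
```

The interference question raised above is visible even in this sketch: if every overlay independently shifts its flows onto the same lightly loaded detour, the measurements that justified the detour are invalidated.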
Such an approach can improve on what IP routing offers today, though by less than what overlay networks promise; however, it may be far more scalable than overlay-network forwarding.

The second, longer-term research issue considers Internet routing if the current hierarchical nature of the AS topology no longer holds. Given the turmoil that many ISPs are facing, one can imagine a future Internet without a core consisting of a few large ISPs. To send a packet from California to New York, a path traversing several small networks might be employed instead of a path through a single continental ISP's network. This Internet might be composed of a large number of small ASes that peer with each other for transit, with no clear hierarchy, and ASes might no longer determine peering tactics based on size or position in the AS hierarchy. Most traffic would no longer traverse a few global-scale, well-engineered ISPs. If instead the majority of traffic traverses the same few paths, and smaller, less-provisioned networks make up those paths, will congestion occur more rapidly?

We may need to rethink the fundamental decisions of the current wide-area routing architecture. Will fast routing convergence, or agility in re-routing around congestion, become even more critical? Will multi-path routing become a necessity, and hot-potato routing a more common occurrence? Perhaps the current two-level IGP/EGP hierarchy will not be sufficient; we may have to consider a third level, or current overlay networks may fill that need. A more traditional peering hierarchy might then appear at the overlay-network level.

Clearly, other issues will also be involved in determining how finely we need to control traffic in the future. New services may dictate stringent performance or security requirements that demand fine control.
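The multi-path question can be made concrete with a small sketch. In a flat topology of small peering ASes, one simple way to obtain two link-disjoint routes is to find one path, delete its links, and search again; a production scheme would use a flow-based algorithm such as Suurballe's, which this greedy version only approximates. The toy topology and names below are invented for illustration:

```python
from collections import deque

def bfs_path(adj, src, dst):
    """Shortest path by hop count, or None if unreachable."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj.get(u, ()):
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None

def two_disjoint_paths(adj, src, dst):
    """Greedy sketch: find one path, delete its links, find another."""
    first = bfs_path(adj, src, dst)
    if first is None:
        return None, None
    pruned = {u: set(vs) for u, vs in adj.items()}
    for u, v in zip(first, first[1:]):
        pruned[u].discard(v)
        pruned[v].discard(u)
    return first, bfs_path(pruned, src, dst)

# A toy "no-core" topology: small ASes peering for transit between
# a California network and a New York network.
adj = {
    "CA":  {"AS1", "AS2"},
    "AS1": {"CA", "NY"},
    "AS2": {"CA", "AS3"},
    "AS3": {"AS2", "NY"},
    "NY":  {"AS1", "AS3"},
}
print(two_disjoint_paths(adj, "CA", "NY"))
# → (['CA', 'AS1', 'NY'], ['CA', 'AS2', 'AS3', 'NY'])
```

Keeping such backup paths precomputed is one way re-routing agility could be bought in a topology with no well-provisioned core to fall back on.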
Without any current, compelling services with such requirements, we should be careful not to dismiss technologies for fine traffic control: we may have a chicken-and-egg problem.

Acknowledgements: I want to thank Supratik Bhattacharyya, Chen-Nee Chuah, Adam Costello and Gianluca Iannaccone for their feedback.