A Case for RIP (Re-architecting the Internet Protocols)
Tom Anderson
University of Washington
September 2003
This position paper starts from the premise that we are not in
control. The primary determining factors for how Internet
routing will evolve over the next decade are the long term
trends in the relative cost-performance of communication,
computation, and human brainpower. Academic research can help
optimize solutions to match these trends, but it can't buck
them. Even the tussles between competing vendors and interest
groups, issues that can have substantial impact in the short
term, are over the long term steamrollered by technology
trends.
What are these trends? Averaged over the past 30 years, wide
area communication has improved in cost-performance at roughly
60% per year. Prices are never simply a direct reflection of costs, given the ebb and flow of monopoly positions, but over the long term the two track fairly closely. And
it is this long term improvement in cost-performance, rather
than any intrinsic nature of the Internet, which drives the
long term trends in Internet usage and operations. For
example, the transmission bandwidth for an hour-long
TV-quality teleconference would have cost $500 a decade ago,
while 10 years from now it will cost a nickel. Of course this
difference will result in a vast increase in the amount of
multimedia content distributed over the Internet.
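As a rough sanity check on that arithmetic: a 60% annual improvement compounded over the twenty years the example spans (a decade back to a decade forward) cuts costs by a factor of roughly 12,000, which indeed turns $500 into a few cents. A minimal sketch of the calculation, assuming the 60% figure holds steadily:

```python
# Back-of-the-envelope check of the teleconference example, assuming a
# steady 60%/year improvement in WAN cost-performance (the paper's figure).
rate = 0.60          # annual cost-performance improvement
years = 20           # from a decade ago to a decade from now
cost_then = 500.0    # dollars for an hour of TV-quality transmission bandwidth

cost_later = cost_then / (1 + rate) ** years
print(f"projected cost after {years} years: ${cost_later:.2f}")
# -> roughly $0.04, i.e., about a nickel
```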
While the long term improvement in WAN cost-performance seems
impressive, it pales compared to computing, local area
communication, and DRAM (each of which has improved at between
80-100% per year for the past 30 years). Moore's Law gets the
publicity (the 60% per year improvement in circuit density),
but that figure misses a key factor - volume manufacturing.
Roughly ten billion microprocessors were manufactured last
year, compared to only a handful of wide area communication
line cards; thirty years ago, the numbers were closer to
parity. High volume technologies have a significant long term
edge in cost-performance. While a gap of 20-40% may not seem
like much in any given year, over the long term it adds up to
about an order of magnitude per decade. (To the extent that
prices diverge from costs, the divergence accentuates this effect: the Internet is a less efficient market than CPUs and DRAM, and so is scaling even less quickly in the near term.)
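To make the compounding concrete, the sketch below (using the paper's growth figures) shows how a 20-40 percentage point annual gap diverges over ten years; at the high end of the range the divergence is roughly an order of magnitude per decade.

```python
# How an annual cost-performance gap compounds over a decade, using the
# paper's figures: WAN at ~60%/yr vs. CPUs/DRAM/LANs at ~80-100%/yr.
wan = 1.60
for high_volume in (1.80, 2.00):
    decade_gap = (high_volume / wan) ** 10
    print(f"{high_volume:.2f}x/yr vs {wan:.2f}x/yr, after 10 years: "
          f"{decade_gap:.1f}x apart")
# -> roughly 3x at the low end of the gap, 9x at the high end
```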
One consequence is that the Internet was designed for a far
different world than the one we have today or will have in ten
years. Thirty years ago, human time was cheap, and
computation and communication were expensive. Today's
Internet, and increasingly so in the future, is one where
humans are expensive, wide area communication is cheap, and
computation is virtually free. Indeed, the Internet became
possible at the point that computation became cheap enough
that we could afford to put a computer at the end of every
wide area link - that is, at the point that computation and
communication reached parity. The Internet would not have
been feasible, purely from a cost standpoint, in 1960. Even
fifteen years ago, TCP congestion control was carefully
designed to minimize the cycles needed to process each packet;
few would claim that TCP packet processing overhead is the
limiting factor for practical wide area communication today.
Recall that firewalls were considered too slow a decade ago;
today they are still too slow, but only for LAN traffic. These trends
will continue - activities such as routing overlays, link
compression, and traffic shaping, considered perhaps too slow
to be practical today, will eventually become commonplace.
This suggests that we should answer two questions. How will
the Internet evolve in response to these trends, and what can
we do as researchers to leverage them to make the Internet
more efficient, more reliable, and more secure? We make
several observations:
Ubiquitous optimization of backbone hardware. BGP is
explicitly designed for scalability over performance, and thus
is ill-suited for the kinds of optimizations that are likely
in the future. It is often impossible even to express optimal
policies in BGP. Similar problems occur at the intradomain
level; it is idiotic to have an architecture that requires
humans in the back room to twiddle link weights for good
performance. The research challenge will be how to adapt our routing protocols to accommodate ubiquitous optimization.
Fortunately, networks will be run at the knee of the curve -
it makes no sense to run a network at high utilization if that
delays end users. The control theory problems of managing
traffic flows over large, heterogeneous networks become much
simpler at low to moderate utilization.
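To see why weight twiddling is such a blunt instrument, note that intradomain protocols like OSPF and IS-IS simply run shortest paths over operator-assigned link weights; shifting traffic means editing weights by hand and hoping nothing else moves. The sketch below, over a hypothetical four-node topology, illustrates the knob operators are actually turning:

```python
# Intradomain routing reduced to its essentials: shortest paths over
# operator-set link weights. The topology and weights are hypothetical.
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra over {node: {neighbor: weight}}; returns (cost, path)."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph[node].items():
            if nbr not in seen:
                heapq.heappush(queue, (cost + w, nbr, path + [nbr]))
    return float("inf"), []

weights = {
    "A": {"B": 1, "C": 1},
    "B": {"A": 1, "D": 1},
    "C": {"A": 1, "D": 3},
    "D": {"B": 1, "C": 3},
}
print(shortest_path(weights, "A", "D"))   # traffic flows A-B-D

weights["A"]["B"] = 5                     # an operator raises one weight...
print(shortest_path(weights, "A", "D"))   # ...and traffic shifts to A-C-D
```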
Cooperation as the common case. A widespread myth is that
Internet routing is dominated by competition - the "tussle"
between competing providers. In the short term, the tussle
seems paramount, but over the long term, delivering good
performance to end users matters, and that is only possible
when providers cooperate. Indeed, measurement studies have
shown that even today cooperation heavily influences the
selection of Internet routes. Unfortunately, BGP is
ill-designed for cooperation - even something as simple as
picking the best exit, as opposed to the earliest or latest,
is a management nightmare in BGP. How can we re-design our
protocols to make cooperation efficient, and unfriendly
behavior visible and penalized?
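To make the early-exit versus best-exit distinction concrete, the toy example below compares "hot potato" exit selection (hand the packet off at the peering point that is cheapest for us) with best-exit selection (minimize the end-to-end cost). The exit points and costs are hypothetical; the point is that best exit requires the neighbor to share its internal costs, which is exactly the cooperation BGP makes awkward.

```python
# Earliest exit ("hot potato") vs. best exit, with hypothetical costs.
exits = {
    # exit point: (cost inside our network, cost inside the neighbor's
    #              network from that exit to the destination)
    "seattle":  (2, 40),
    "chicago":  (15, 12),
    "new_york": (25, 5),
}

earliest_exit = min(exits, key=lambda e: exits[e][0])    # needs only our costs
best_exit     = min(exits, key=lambda e: sum(exits[e]))  # needs the neighbor's too

print("hot potato picks:", earliest_exit)   # seattle: cheapest for us, 42 total
print("best exit picks:", best_exit)        # chicago: 27 total end to end
```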
Accurate Internet weather. Many ISPs like to think of their
operations as proprietary, but information necessarily leaks
out about those operations along a number of channels. Recent
measurement work has shown that it is possible to infer almost
any property of interest, including latency, capacity,
workload, and policy. We believe an accurate hour-by-hour
(or even minute-by-minute) picture of the Internet can be
cost-effectively gathered from a network of vantage points.
Leveraging this information in routing and congestion control
design is a major research challenge.
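As a minimal sketch of what such a picture might look like, the snippet below aggregates periodic latency probes from a few vantage points into a per-prefix snapshot; the vantage points, prefixes, and samples are hypothetical, and a real system would infer capacity, loss, and policy from similar raw measurements.

```python
# Assembling a coarse "Internet weather" snapshot from vantage-point probes.
from statistics import median

# raw RTT samples in milliseconds, keyed by (vantage point, destination prefix)
samples = {
    ("vp-seattle",  "192.0.2.0/24"):    [18, 21, 19, 250, 20],
    ("vp-berkeley", "192.0.2.0/24"):    [35, 33, 36, 34],
    ("vp-seattle",  "198.51.100.0/24"): [95, 97, 410, 96],
}

def weather_snapshot(samples):
    """Summarize each (vantage point, prefix) pair; the median damps outliers."""
    return {key: {"rtt_ms": median(vals), "probes": len(vals)}
            for key, vals in samples.items()}

for key, summary in weather_snapshot(samples).items():
    print(key, summary)
```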
Sophisticated pricing models. Pricing models will become much
more complex, both because we'll be able to measure and
monitor traffic cost-effectively at the edges of networks, and
because the character of traffic affects how efficiently we
can run a network. Smoothed traffic will be charged less than
bursty traffic, since it allows for higher overall utilization
of expensive network hardware with less impact on other users.
Internet pricing already reflects these effects at a
coarse-grained level, as off-peak bandwidth is essentially
free. The trend will be to do this at a much more
fine-grained level. Smoother traffic makes routing
optimizations easier, but perhaps the more interesting
question is how traffic shapers interoperate across domains to
deliver the best performance to end users - in essence, how do
we take the lessons we've learned from interdomain policy
management in BGP and apply them to TCP?
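The mechanism this pricing argument presumes at the edge is an ordinary traffic shaper; a minimal token-bucket sketch follows (rates and bucket depth are hypothetical). A sender staying within such a profile presents smooth traffic to the network and, under the pricing model above, would be billed at the lower rate.

```python
# A token-bucket shaper: admit a smooth long-term rate, defer bursts.
class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0       # refill rate in bytes per second
        self.capacity = burst_bytes      # maximum burst the profile tolerates
        self.tokens = burst_bytes
        self.last = 0.0

    def allow(self, packet_bytes, now):
        """True if the packet fits the profile at time `now` (seconds)."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False                     # caller queues or drops the packet

shaper = TokenBucket(rate_bps=1_000_000, burst_bytes=3_000)
for t in (0.000, 0.001, 0.002, 0.003):   # a burst of 1500-byte packets
    print(t, shaper.allow(1500, t))      # the tail of the burst is deferred
```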
Interoperable boundary devices. Far from being "evil" and
contrary to the Internet architecture, boundary devices are a necessary
part of the evolution of the Internet, as the cost-performance
of computation scales better than that of wide area
communication. Even today, sending a byte into the Internet
costs about the same as executing 10,000 instructions (at least in the US; the ratio is even higher for foreign networks). The challenge is
making these edge devices interoperate and self-managing - the
only way to build a highly secure, highly reliable, and high
performance network is to get humans out of the loop. The
end-to-end principle is itself a catechism of a particular technology age - rather than thinking of how a huge number of poorly secured end devices can work together to manage the Internet, we will instead ask how a smaller number
of edge devices can cooperate among themselves to provide
better Internet service to their end users.
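The byte-versus-instructions figure above makes the economics of these boxes easy to see: under a 10,000-instructions-per-byte ratio, spending CPU at the boundary (compression, caching, filtering) pays for itself whenever it saves even a small fraction of the bytes sent. A back-of-the-envelope sketch, with the per-byte compression cost an assumed number for illustration:

```python
# When is edge compression worth it, given the paper's estimate that sending
# one byte costs about as much as executing 10,000 instructions?
INSTRUCTIONS_PER_BYTE_SENT = 10_000

def worth_compressing(payload_bytes, compressed_fraction, instr_per_byte):
    bytes_saved = payload_bytes * (1 - compressed_fraction)
    cpu_cost = payload_bytes * instr_per_byte               # instructions spent
    wan_saving = bytes_saved * INSTRUCTIONS_PER_BYTE_SENT   # same units
    return wan_saving > cpu_cost

# Even a modest 10% size reduction at ~500 instructions per input byte wins.
print(worth_compressing(1_500, compressed_fraction=0.9, instr_per_byte=500))
```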
High barriers to innovation. As we help evolve the Internet
to better cope with the challenges of the future, it is
important to remember that routers are a low volume product.
As is typical of niche software systems, this makes them
resistant to change, since engineering costs can dominate. As
researchers, we can help by redesigning protocols so that they
are radically easier to implement, manage, and evolve.
These observations and research challenges are animating our
work on RIP at UW.