From: Brian D Williams (talon57@well.com)
Date: Fri Sep 24 1999 - 08:33:44 MDT
From: "Robert J. Bradbury" <bradbury@www.aeiveos.com>
>The digital gridlock is primarily due to 2 things:
> (a) The end point downstream link (the last mile to the
>consumer)
> (b) The source upstream link (providers not having the bandwidth
> or computer power to feed their clients).
There is a third link, the Internet itself, which in a
tragedy-of-the-commons situation lacks sufficient bandwidth.
(Ameritech is an Internet NAP)
>Wavelength Division Multiplexing (WDM) that allows multiple
>frequencies to be sent over the same optical fiber means that the
>existing optical networks can be leveraged to a significant
>degree. You can take an existing optical fiber and transmit 2, 4,
>8, 16 times as much data over it simply by installing different
>transmitting/receiving equipment.
We use WDM now, but there isn't sufficient fiber in the ground, or
equipment installed, to begin to handle the bandwidth everyone's
talking about, and the BIG point is that it's currently too
expensive to install.
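To put rough numbers on it, here's a back-of-the-envelope sketch in
Python. The per-wavelength rate (OC-48, ~2.488 Gbps) and the
256 kbps per-subscriber figure are illustrative assumptions, not
numbers from any particular carrier's plant:

    # Illustrative WDM capacity arithmetic -- all figures are
    # assumptions, not measurements of any deployed system.
    OC48_GBPS = 2.488       # approximate OC-48 line rate per wavelength
    SUBSCRIBER_KBPS = 256   # assumed sustained per-subscriber demand

    for wavelengths in (2, 4, 8, 16):
        total_gbps = wavelengths * OC48_GBPS
        streams = total_gbps * 1e6 / SUBSCRIBER_KBPS
        print(f"{wavelengths:2d} wavelengths: {total_gbps:6.1f} Gbps "
              f"(~{streams:,.0f} sustained 256 kbps streams)")

Even the 16-wavelength case gets eaten quickly once you multiply
subscribers by full ADSL rates, which is the backbone problem I get
to below.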
>The only realistic solutions are optical fiber to the
>source site, or putting the source server at a site that has
>direct optical connections. Optical switches would help
>in decreasing delays across the net (which are already small)
>or increasing reliability (by providing rapid switching away
>from "broken" optical links) but they will do little for the
>overall perception of "digital gridlock".
All-optical is the way to go; unfortunately, no one's willing to
build it (too expensive).
>For example, I'm in Seattle, on an ADSL line to US-West (the local
>Telco). I can get to www.mci.net in Reston, Virginia over 13
>routers with an average packet transfer time of 98 milliseconds.
>The longest average segment of that is the ~67 milliseconds it
>takes to go from the Seattle Node to the Washington DC Node.
>In contrast, I can get to Anders' machine (nada.kth.se) in 18 hops
>with an average ~200 ms with the largest hop being the
>Teleglobe.net to nordu.net link of ~100 ms which is presumably the
>cable running from New York to Norway.
Don't confuse ping and traceroute times with bandwidth
availability; try running traffic at, say, T-1 speed, 1.536 Mbps
(framed), and you'll see what I mean...
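The difference is easy to see with a little arithmetic. Here's a
short Python sketch; the 98 ms round trip is the figure quoted
above, and the 10 MB transfer size is just an assumed example:

    # Latency vs. bandwidth: a short round-trip time says nothing
    # about how fast you can actually move data. Figures illustrative.
    RTT_S = 0.098               # round-trip time from ping/traceroute
    T1_BPS = 1.536e6            # framed T-1 payload rate
    FILE_BYTES = 10 * 1024**2   # hypothetical 10 MB transfer

    transfer_s = FILE_BYTES * 8 / T1_BPS
    print(f"Round trip: {RTT_S * 1000:.0f} ms")
    print(f"10 MB at T-1 speed: {transfer_s:.0f} s "
          f"({transfer_s / RTT_S:.0f}x the round-trip time)")

A fast traceroute just means the packets aren't queuing at that
moment; it doesn't tell you whether the trunks along the path can
carry a sustained load.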
>There is no "digital gridlock". It's a specious meme. There are
>only local gridlocks caused by too little upstream bandwidth from
>the servers or downstream bandwidth to your machine. Nanovation
>Tech can do little to solve this problem because the cost of
>installing fiber over the final mile to the customer is very high.
I can't agree; the inter-NAP trunks often run at 100% utilization,
and everything slows down. There just isn't anywhere near enough
backbone capacity to support millions of connections running at
ADSL bandwidth.
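The arithmetic is rough but telling. In this Python sketch the
per-subscriber rate, the subscriber count, and the OC-48 trunk rate
are all assumed figures, purely for illustration:

    # Aggregate demand if "millions" of subscribers actually ran at
    # ADSL rates -- all figures are illustrative assumptions.
    ADSL_DOWNSTREAM_MBPS = 1.5   # assumed per-subscriber downstream rate
    SUBSCRIBERS = 1_000_000      # a conservative "millions"
    OC48_TRUNK_GBPS = 2.488      # one OC-48 backbone trunk

    demand_gbps = SUBSCRIBERS * ADSL_DOWNSTREAM_MBPS / 1000
    trunks = demand_gbps / OC48_TRUNK_GBPS
    print(f"Aggregate demand: {demand_gbps:,.0f} Gbps")
    print(f"Equivalent OC-48 trunks needed: {trunks:,.0f}")

Even with generous oversubscription assumptions, that's a lot more
backbone than is in the ground today.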
I've argued for years that what we need are central-office
switches that can provide point-to-point ADSL connections, i.e.
circuit-switched packet-switching.
Brian
Member, Extropy Institute, www.extropy.org
Life Extension Foundation, www.lef.org
National Rifle Association, www.nra.org, 1.800.672.3888
Mars Society, www.marssociety.org
Ameritech Data Center Chicago, IL, Local 134 I.B.E.W
Reading: Lipstick Traces by Greil Marcus