[p2p-research] From the document web, via the data web, to the active web
Henrik Ingo
henrik.ingo at avoinelama.fi
Sat Mar 29 22:43:41 CET 2008
Michel,
It's in the middle of the night, but I just feel compelled to add some
points. This is not good enough for a blog, but may be freely
copypasted by anyone...
My impression of the article is that it was written by an academic non-programmer who has studied the HTTP protocol and some W3C standardisation efforts, but has no experience in actually producing web applications. As first impressions go I could of course be very wrong; I didn't even bother to read some of the middle parts of the article!
The Document web definition is fine; it is what anyone would consider "Web 1.0". What I strongly disagree with is the author's criticism, or belittlement, of the current "Web2.0". In my opinion a significant shift in the web happened with the maturation of the Firefox browser, which released an avalanche of web-based applications and portals that made heavy use of JavaScript and CSS. (If someone doesn't like the term "Web2.0", it may be better and clearer to call this "the advent of AJAX".)
Before Firefox there were two browsers, Internet Explorer and Netscape, that supported advanced JavaScript, but they supported totally different versions of it. (The standardised version today is the IE one, a testament to the fact that MS does employ some very good programmers, the ones who happened to work on IE 4.x through 6.x before 2001.) Therefore most pages that tried to do anything with JavaScript or advanced CSS supported only one of these browsers, or sometimes tried to support both, often with poor results. And many in the university and Open Source crowds, for instance, were still using text-based browsers, which is notable because at the time this group had significant mindshare in the web's development. For all of these reasons the use of JavaScript was considered evil by (in my opinion) a majority of web developers, and what was then called "Dynamic HTML" was mostly a phenomenon of the Microsoft camp. (Even today, if you use the web interface to Microsoft Exchange email, it is very nice on IE but barely usable on Firefox.)
With the advent of Firefox, which supported the by-then-standardised IE style of JavaScript, the situation started changing, since there now was a standard and a free multi-platform browser to support it. Quite soon very cool web-based apps were born, led by Google Maps, Google Mail and others. This was called AJAX programming, as in Asynchronous JavaScript and XML. Compared to Microsoft's DHTML evangelisation this was much cooler technology than anyone had ever dreamt of, and the availability of an Open Source browser to support it also made the opposition vanish. So imho this, and not IE 4.x with DHTML support, was the de facto next phase of the web.
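To make this concrete, here is roughly what an AJAX call looks like, as a minimal sketch; the URL, the <unread> tag and the element id are all invented for illustration. The point is that the page asks the server for data in the background and updates itself without a reload, which the old document web simply could not do:

    // Minimal AJAX sketch: fetch data in the background and update
    // the page without reloading it. URL and element id are made up.
    function refreshInbox(): void {
      const xhr = new XMLHttpRequest();
      xhr.open("GET", "/inbox.xml", true);  // true = asynchronous
      xhr.onreadystatechange = () => {
        // readyState 4 = request finished, status 200 = OK
        if (xhr.readyState === 4 && xhr.status === 200) {
          const unread = xhr.responseXML!
            .getElementsByTagName("unread")[0].textContent;
          document.getElementById("unread-count")!.textContent = unread;
        }
      };
      xhr.send();
    }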
At the same time we had developed some additional techniques; the most significant would perhaps be RSS and the family of XML markups used to provide blog feeds. This led to collaboration between websites beyond mere linking: you could provide parts of another blog or news site on your own page, for instance. Or, to take a very different example, BookMooch uses Amazon to provide data and cataloguing of books. Yet BookMooch is a site for free sharing of old books; you'd think Amazon wouldn't like "helping out" such a project. Not so: in reality lots of BookMooch users end up buying books on Amazon. In fact, BookMooch probably makes most of its income from the money Amazon pays it for these referrals.
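For anyone who hasn't looked inside a feed: an RSS document is just a short XML file listing a site's recent items, which is exactly what makes this kind of reuse between sites so easy. A made-up example (contents invented, structure per RSS 2.0):

    <?xml version="1.0"?>
    <rss version="2.0">
      <channel>
        <title>Some blog</title>
        <link>http://example.org/blog</link>
        <description>A hypothetical blog</description>
        <item>
          <title>A recent post</title>
          <link>http://example.org/blog/recent-post</link>
          <description>First paragraph of the post...</description>
        </item>
      </channel>
    </rss>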
AJAX combined with RSS and some other by-then-standard tools (the wiki is a significant one) is in my opinion rightly called Web2.0. This is very different from the original document-based web and rightly has been given its own name.
Web2.0 is NOT the social web (like FaceBook, LinkedIn). The social web is merely an application of Web2.0; technically it doesn't contribute anything new. (Well, apart from FaceBook's innovation of letting 3rd parties develop applications embedded in its own site; that is a great innovation, but it is not "THE social web".) Why the social web is so hyped is, in this context, actually a good question; I believe there is a little pyramid scheme to it all. I mean, Facebook is fun and all, but it isn't THAT fun; I think the effective inviting mechanism plays a part.
This is the point where we are now. Now for my own predictions:
Next we will see the advent of the single sign-on web, most likely embodied in the form of OpenID. (SSO means you don't have to create new logins for every site; you just use one main identity and password to log in to each site. Obviously the sites you log in to don't get to know your password; they just accept the referral from your ISP, mail provider, or whichever other OpenID provider you are using.) This imho will add further granularity to the web, in that users can come and go more fluidly than today, where you make a deliberate choice to register and join FaceBook but not something else. This in turn should foster a development where we can again have smaller sites, each providing one small funny little piece of the social web, instead of the monolithic FaceBooks of today. This would be in line with what Web2.0 was all about; Facebook et al are in fact a countertrend to the Web2.0 trend if seen in this light.
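To sketch the mechanics, much simplified: the site you log in to never sees your password, it just redirects your browser to your provider and waits for a signed answer. The URLs below are invented, and the parameter names are from the OpenID 1.1 spec as I recall it, so treat the details as approximate:

    // Simplified sketch of the OpenID 1.x redirect step. The relying
    // party (the site you log in to) sends the browser to the user's
    // provider; the provider authenticates the user and redirects back
    // with a signed assertion. All URLs here are hypothetical.
    function openidLoginUrl(providerUrl: string, identity: string): string {
      const params = new URLSearchParams({
        "openid.mode": "checkid_setup",  // interactive login
        "openid.identity": identity,     // the user's identity URL
        "openid.return_to": "http://example.org/login/return",
        "openid.trust_root": "http://example.org/",
      });
      return providerUrl + "?" + params.toString();
    }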
Whether a "decentralised social web" will arise from this is a good
question, and whether the Global Giant Graph will emerge from that is
an even better question. It might, but it might end up something
entirely different. The GGG is technically possible today, and how
OpenID works there are some similarities to the RDF used in GGG, so
once OpenID becomes popular, the next step might be to not just
externalise (or decentralise) your login credentials but also your
social connections. But we will know the answer to this in something
like 5 years.
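To give an idea of what externalised social connections could look like: the FOAF vocabulary, which is RDF, already lets you publish a machine-readable file saying who you are and whom you know. A small example with invented names and URLs (whether the GGG would actually settle on FOAF is of course an open question):

    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
             xmlns:foaf="http://xmlns.com/foaf/0.1/">
      <foaf:Person rdf:about="#me">
        <foaf:name>Alice Example</foaf:name>
        <foaf:knows>
          <foaf:Person>
            <foaf:name>Bob Example</foaf:name>
            <rdfs:seeAlso rdf:resource="http://example.org/bob/foaf.rdf"/>
          </foaf:Person>
        </foaf:knows>
      </foaf:Person>
    </rdf:RDF>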
The proposal at the end for new HTTP commands is just pure folly (HTTP is simply the wrong place to do it, period), which underlines that the author wasn't just slightly off with his Web2.0 comments, but in fact knows nothing at all about the technology he is talking about. Implementing such functionality by extending HTTP would imho be quite silly; a peer-to-peer protocol like SIP would probably be a better starting point in the first place, and even then you wouldn't do it with commands like those, but would develop an XML-based document language to transmit this kind of information.
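To illustrate the design point with an entirely hypothetical message (the path, namespace and elements are all made up): you keep the transport protocol as it is and put the new semantics in the payload, something like:

    POST /inbox HTTP/1.1
    Host: example.org
    Content-Type: application/xml

    <message xmlns="http://example.org/hypothetical-active-web">
      <from>http://example.net/users/alice</from>
      <subject>...</subject>
    </message>

That way every existing proxy, cache and server still understands the traffic, and the application-level vocabulary can evolve without touching the protocol.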
So, I guess it turned out to be a semi-good commentary after all. OTOH I think you stole my evening with this link, so tomorrow I'll have to do what I really meant to do tonight. Good night!
henrik
On Sat, Mar 29, 2008 at 8:29 AM, Michel Bauwens <michelsub2004 at gmail.com> wrote:
> This is a great article to understand the technical evolution of the web, see
> http://www.dur.ac.uk/j.r.c.geldart/essays/there_again/towards_the_active_web.html
>
> Any comments about this to our blog would be most appreciated,
>
> Michel
>
> --
> The P2P Foundation researches, documents and promotes peer to peer alternatives.
>
> Wiki and Encyclopedia, at http://p2pfoundation.net; Blog, at
> http://blog.p2pfoundation.net; Newsletter, at
> http://integralvisioning.org/index.php?topic=p2p
>
> Basic essay at http://www.ctheory.net/articles.aspx?id=499; interview
> at http://poynder.blogspot.com/2006/09/p2p-very-core-of-world-to-come.html
> BEST VIDEO ON P2P:
> http://video.google.com.au/videoplay?docid=4549818267592301968&hl=en-AU
>
> KEEP UP TO DATE through our Delicious tags at http://del.icio.us/mbauwens
>
> The work of the P2P Foundation is supported by SHIFTN, http://www.shiftn.com/
>
> _______________________________________________
> p2presearch mailing list
> p2presearch at listcultures.org
> http://listcultures.org/mailman/listinfo/p2presearch_listcultures.org
>
--
email: henrik.ingo at avoinelama.fi
tel: +358-40-5697354
www: www.avoinelama.fi/~hingo
book: www.openlife.cc