From: Eugene Leitl (eugene.leitl@lrz.uni-muenchen.de)
Date: Fri Dec 24 1999 - 03:12:21 MST
Robert J. Bradbury writes:
> In which case, I would only say
> it isn't a "programming" language. Now, if you want to look
What you said. I dropped "programming".
> at "programming" languages for typesetting you have to go *way*
> back to ROFF and NROFF. Those did a pretty good job too.
PostScript is a true all-purpose programming language, a Forth
derivative in fact.
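A minimal sketch, if you want to see plain computation instead of page
marking (should run under e.g. Ghostscript):

  /square { dup mul } def                         % n -> n*n
  /fact { dup 1 le { pop 1 } { dup 1 sub fact mul } ifelse } def
  5 fact =                                        % prints 120
  0 1 5 { square = } for                          % prints 0 1 4 9 16 25

Recursion, iteration, procedures as first-class objects; the imaging
model is just a library on top.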
> So, the popularity of HTML could only be due to perhaps 3 things
> (a) open source -- people could easily extend it
PostScript is/has been available in source form. PostScript can be easily
extended in PostScript itself; it is an environment for defining
problem-specific languages.
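For instance, a hypothetical domain word (untested, the name is mine):

  /box {                        % x y w h -> outlined rectangle
    4 2 roll moveto             % consume x y, leave w h
    1 index 0 rlineto           % right by w
    0 exch rlineto              % up by h
    neg 0 rlineto               % left by w
    closepath stroke
  } def
  72 72 144 72 box
  showpage

Once you have a vocabulary like this, documents shrink to data plus a
few calls.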
> (b) timing -- it got popularized right when the technology was
> becoming cheap enough for internet growth & client/server
> applications to really take off.
PostScript was here before the World Wide Web.
> (c) simplicity -- because it had no variables it was easy to learn
PostScript has a simple syntax. In a few lines you can define
something which is even easier for braindead webmonkeys to parse.
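An untested sketch of what I mean (the word names are mine):

  /Helvetica findfont 12 scalefont setfont
  /y 720 def
  /line { 72 y moveto show /y y 14 sub def } def  % show text, advance down
  /h1 { /Helvetica-Bold findfont 18 scalefont setfont line
        /Helvetica findfont 12 scalefont setfont } def
  /p { line } def
  (A heading) h1
  (Body text, one line per call.) p
  showpage

Three definitions and you write (Like this) p instead of angle-bracket
soup.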
> > What's the point in creating content which grows unreadable really
> > quick?
>
> Old content doesn't become "unreadable". Have you thrown out all
Yes, it does. An obsolete medium or an obsolete encoding renders it
effectively unreadable. Does "dusty deck" ring a bell? (I think I just
heard an archive librarian cursing before a stack of magnetic
tapes. Naw, must be a hallucination.)
> of your books? Conversion between languages (even ill-defined
Books are not an electronic medium. They rely only on the human visual
system and the language used, both of which remain essentially constant
over decades.
Digital media are nothing of the sort.
> human languages) is becoming easier all the time because there
> is an ever cheaper amount of processing power available to do the
> conversion. What *is* true would be a statement like:
The problem is 0) reading the damn thing in from a possibly obsolete
medium, formatted in a weird file system, 1) recognizing the encoding
utilized (provided it is documented, and you don't have to
reverse-engineer it), 2) obtaining (or writing) a piece of code that
translates it, and converting the file. The farther back you go, the
more difficult it gets. You might eventually end up running a hierarchy
of emulators to get at the content.
> "What's the point of creating content that *looks* really antiquated
> really quick?"
I would like to see whether an off-the-shelf browser can parse vanilla
HTML a decade from now. It might, but I doubt it.
> For my purposes I've got hundreds of (old) documents that are nothing
> but relatively flat HTML because its the *content* and the links that
> are important. These documents will always look "flat" because they
> aren't cartoons or movies or things that blink or jump around on the
> screen. On the other hand I'm not implementing a "purchasing" system
> where security, advertising and sales-volume are primary motivators.
A better solution seems to be writing stuff in a rigidly defined,
easily parsable format (keeping the written definition along with demo
parser code snippets in various languages), and generating the required
output format dynamically.
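As a sketch of the idea (the [tag text] structure is mine, of course),
you can even do it in PostScript, reusing the h1 and p words from the
snippet above: keep the content as inert, trivially parsable data, and
dispatch on the tag to generate the presentation:

  /content [ [(h1) (Archive notes)] [(p) (Flat text, one entry each.)] ] def
  content { aload pop exch cvn cvx exec } forall  % tag string -> word
  showpage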
> Vanilla HTML (with a few enhancements for table formatting) is enough
> for most of my needs; for others it may be very insufficient.
I think database-backed websites have their uses.
> Robert