modified 2003-12-20.
Musings on the geometry of thought-space: Consider *all* knowledge (known and unknown). What sort of framework does it have ? How many dimensions are needed ?
Pardon the construction. This is an extremely rough draft. Suggestions and links to relevant pages are welcome.
This includes:
everything
related pages:
disciplines which study everything
Being a Buckminster Fuller fan 3d_design.html#synergetics , I (DAV) like to consider ``everything'' a lot. When I look at all the fields of study available at a university, I wonder ``What is the complete list of all fields of study ? How do I learn about the interactions between each field of study ?''.
Sometimes I see a tree diagram, with extremely specialized classes at the leaves like ``VLSI design'' or ``electrical machines'' where we learn details of how people currently use some bit of knowledge. Moving towards the root, both of these classes have the same prerequisite ``Intro to Electrical Science''. In turn, that requires ``General Physics''.
    General Physics
        |-- ... (many other classes)
        |-- Intro to Electrical Science
                |-- ... (many other classes)
                |-- electrical machines
                |-- VLSI design
As we move towards the root of the tree (``meta''), we get to classes which cover more general-purpose topics. Even in areas like VLSI design that are changing rapidly, where leaf classes change from year to year (because new tools and ideas are being developed), the ``meta'' classes tend to change more and more slowly. (VLSI didn't even exist until the late 1900s, yet much of the material in physics and calculus has not changed much since its development by Sir Isaac Newton (1642-1727).)
So what is at the root ?
I think I was reading Douglas Hofstadter when I had the epiphany that what is at the root is subjective.
There are a surprisingly large number of fields of study that can be considered the root:
-- David Cary
related web pages:
Here's the top N natural languages I would really like to learn. [FIXME: just list their names and a pointer to where I moved the rest of the information to http://www.worldwidewiki.net/ ?]
natural languages I want to learn:
"Life is too short to learn German." -- Porson, Richard (1759 - 1808)
By 2007, Chinese will be the #1 web language-- http://www.walid.com/
related links:
An open-source software translation project, translate.org.za, aims to make software available in the eleven South African languages. http://translate.org.za/
Ethnologue: Languages of the World http://ethnologue.org/ ...
SIL International serves the peoples of the world through research, translation, and literacy. http://sil.org/
I've noticed for some time now that TV and radio news segments often feature heavily-accented English voice-overs when translating foreign speech. ... it doesn't matter who you use. Then again, maybe it does. The advantage of hiring an actor is that she or he can add any number of subtle inflections and modulations to manipulate our reception of speech. Can we ever know what the other is really saying?-- http://www.stingykids.net/archives/2003_03.html#000024
-- Dr. Philip Emeagwali http://emeagwali.com/interviews/mandate-the-future/ While English is the language of choice on the Internet, it will hasten the extinction of thousands of indigenous languages. By the end of this century, 90 percent of the world's languages could become extinct. The culture, customs and knowledge embedded in these languages will also become extinct. As we embrace the languages of former colonial masters, the world loses valuable information passed down by word of mouth over several generations. The extinction of any language is an irretrievable loss to humanity. If the early years of educational instruction are not in an indigenous language, then that language is headed for extinction.
The Sapir-Whorf-Korzybski Hypothesis, and related links.
"We lack decisive tests for distinguishing between nonsense babble, crafty cipher, and language." http://www.berkshire.net/~ifas/wa/glossolalia.html [offline ?]
short summary (probably inaccurate): The English language (and all other natural languages) is built of words and concepts created by people who assumed every "thing" was alive (has free will, wants and desires) and believed in (a) God . Therefore, a description of any "thing" using English (or other natural languages) is forced to use terms that carry the connotation that (a) the thing being discussed is alive, has free will, a soul, etc. and (b) the thing was designed by some intelligence.
However, since certain "things" are *not* alive (rocks, water, magnets, snowflakes, etc) and were *not* designed, it is extremely difficult, if not impossible, to discuss these things in a natural language free from inappropriate connotations. ( DAV: even if one does believe in (a) God, some things -- random permutations, emergent properties, etc. -- cannot be "designed". )
Karl Javorszky suggests using a synthetic language (mathematics) to describe things without using terms that carry inappropriate connotations.
Some sound-bite points I find interesting:
The Sapir-Whorf Hypothesis, codified by linguist Benjamin Whorf, in its most extreme and simplistic form states that human behavior is determined by the structure and lexicon of the language in which the person in question actually thinks. To illustrate: a person whose language contains no word for falsehood cannot tell a lie; he cannot even understand the concept. The idea has been a popular one for many years, especially with science-fiction authors; it formed the basis of Jack Vance's excellent science-fantasy _The Languages of Pao_./* was http://www.webcom.com/~donh/conlang2.html */
Re:Blah Blah Blah (Score:1) by orcrist (christopher.kuhi@stud.uni-*blah*muenchen.de) on Tuesday November 30, @05:05PM EST (#349) (User Info)

> I don't really need huge advances in interface efficiency. The needs you specify are cool by me. I'd love to be able to dictate my email, navigate the web via voice, click 'Forward' without having to actually click...

This was somewhat along the lines of what I was thinking of.

> and hey, tell me speech rec in FPS games wouldn't be cool as heck

You've got me there.

> (maybe not JUST, but in addition to the other controllers

This is my point; not a replacement, rather, in addition to the traditional controls.

> I could handle hitting Enter to CR to the next line, then saying the line of code

Like I said, try dictating code to someone else and you'll see what I mean. Programming languages aren't spoken languages, and aren't meant to be. Maybe if the technology allowed it, someone might develop a spoken programming language, but frankly (speaking as a Linguistics student) I doubt it. Even Mathematicians tend to show each other their formulas rather than say them, and I think we can assume their speech recognition works fine (despite how it seems when you're talking to one ;-)

> I want to be able to talk to my puter like they do in Star Trek someday. That's what I meant by language recognition, or better comprehension.

In Star Trek they use colloquial language with the computer. This requires a great deal more than just the one-to-one relationship represented by replacing typed commands with spoken ones. For this the computer needs to understand what you mean, not just what you say. If the computer's that smart, you probably don't need to tell it what to do :-)

Chris
(archive version; current version at http://visual.wiki.taoriver.net/moin.cgi/SpeedTalk ) Somewhere I thought I read about a language designed to be spoken very rapidly. Maybe it was called "speedtalk", I can't remember. Elsewhere I read about machines designed to "speed up" any spoken language, but avoiding the "chipmunk effect". [FIXME: I've lost the links -- please help me find more information]. If teachers could communicate 4 times as fast, then 4 years of college classes could be compressed into 1 year. (Even if it took the same amount of time to *understand* something, so it still required 4 years, perhaps you could squeeze a week's worth of classes into 2 days, and then spend the rest of the week *doing* useful but routine stuff that left your mind free to ponder what you heard.) (see data_compression.html#source_compression for basically the same idea applied to human-to-computer communication rather than human-to-human communication ). Other minor benefits: books require less paper to print (see http://visual.wiki.taoriver.net/moin.cgi/TerseWriting , http://papertalk.wiki.taoriver.net/ )
While "word rate" varies somewhat from culture to culture, "information rate" is basically a constant. To express "The little boy was hit by a blue ball and started to cry, but his mother cheered him up with some cookies." will take about the same amount of time in spoken langauge in all languages (meant for face-to-face interaction).http://science.slashdot.org/comments.pl?sid=86294&cid=7504638
I suspect over the next hundred years some of the more verbose letter-based written languages will start condensing down to be more like English, which is one of the more compact letter-based languages.
mikebelrose:
No wA! ppl wl nvr tlk lk dat! w@ r U, %-)?
http://science.slashdot.org/comments.pl?sid=86294&cid=7511496
(What are you, crazy ?)
``... the languages of mathematics and technology permit and promote different and more effective styles of thinking. ...
... Supertongue ... Terseness is itself a virtue insufficiently appreciated. Who has not had the experience of reading a long sentence, involving difficult concepts, or complex relations, and found that at the end of the sentence he had forgotten the beginning? If you cannot express an idea briefly, then a combination of ideas may become so awkward that its expression is not just difficult, but impossible. Yet in English, for example, we use unnecessarily long words because most of the one-syllable words have not been allocated.
... linguists believe that with moderate ease we can speak and hear at least 100 simple sounds (40) whence it follows, with reasonable assumptions, that the entire unabridged English vocabulary could be reduced to words of one syllable-still leaving plenty of room for redundancy, synonyms, poetic variation, and the monosyllabic rendition of a large store of common phrases! ...
Our languages are exceedingly weak also in the description of contoured surfaces, including faces; we can easily recognize differences of physiognomy and expression that we are nearly helpless to communicate verbally. ...
... ''
http://www.orionsarm.com/eg/b/Br-Bt.html#BrevBrev
Artificial language designed to allow a baseline or near-baseline to communicate the maximum amount of information in the briefest amount of time.
Brev makes use of the entire range of word sounds that the baseline voice is capable of, not just those belonging to a particular language group. Mostly used by those hu who prefer to avoid using direct mind to mind communication links. The written version of the language employs graphic ideograms. Speakers of Brev brag that they can compress an entire life history into 5 sentences or 3 lines of ideographs.
Brev was invented during the early Empires period by a splinter faction of the Refugium Federation, Ecos Ascending
The language is so compact that a speaker can repeat themself three times before a "slow-language" speaker can say the same thing once. This repetition allows the listener to pick up the patterns of the sentence/paragraph more easily. Sometimes - especially among the cyborg augmented clades - there's a sort of back and forth patter in each exchange like information packets being sent:
C: Mary had a little lamb :receipt?:
D: Mhall :receipt.:
C: Whose fleece was white as snow :receipt?:
D: Wffwwas :receipt.:
C: :incorrect receipt.: :correction: Whose fleece was white as snow :receipt?:
D: Wfwwas :receipt.:
etc.
Because of Brev's vulnerability to errors/noise, a lot of effort has been put into reducing or eliminating such problems as the language was developed. One simple method was to eliminate the use of words that sound the same but mean different things (deer/dear, to/two/too are examples in Information age English). Speed of delivery would not be as much of a factor as tone, prefix and suffix placement and the use of words built around all the phonemes to clearly get a point across without using a lot of individual words to do so. Which is not to say that it would be impossible to be misunderstood, just harder and with a lot more information being misunderstood in one chunk.
-- Todd Drashner
Of course, every profession has its jargon, but Silicom is sillier than most. Whereas doctors, musicians, and mechanics invent terms for concepts unique to their professions, computer-industry nerds merely substitute words for which perfectly good English equivalents already exist.'' Pretty funny.
Exactly the opposite direction from #speedtalk .
... two principles for defining spoken editing commands. ... experimental results ... spoken editing language ...
The first principle is to identify more basic editing concepts than found in most editors and to construct a notation that allows for the orthogonal composition of such concepts.
The second principle is to encode each editing concept as a single syllable (with a few exceptions).
... ShortTalk, an editing language built through extensive experimentation over a period of six years. It contains thousands of commands constructed according to the principles just formulated. Still, its complete syntax can be described on a two-page quick reference card. The prototype system called EmacsListen is written in 10,000 lines of Lisp on top of GNU Emacs[4].
Data gathered over a two-month period, following several months of training, show that the author of the present article was able to actively use some 1000 ShortTalk commands. The information content, also called entropy, is estimated to be 7.3 bits/command. More precisely, this number is the -p log p summation over command occurrence frequencies p, stemming from a total of about 30,000 commands recorded.
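(DAV: to make that entropy figure concrete, here is a minimal sketch -- my own illustration, not code from the ShortTalk paper -- of the -p log p summation over command frequencies. The command log is made-up data standing in for the ~30,000 recorded commands.)

    import math
    from collections import Counter

    # Hypothetical log of recorded editing commands (made-up data, standing in
    # for the ~30,000 command occurrences mentioned in the paper).
    command_log = ["go up", "go up", "kill line", "go up", "paste", "kill line", "save"]

    counts = Counter(command_log)
    total = sum(counts.values())

    # Shannon entropy: the sum of -p * log2(p) over the observed command frequencies.
    entropy = -sum((n / total) * math.log2(n / total) for n in counts.values())

    print("distinct commands:", len(counts))
    print("entropy: %.2f bits/command" % entropy)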
...
Our results indicate that editing (including punctuation and other symbol entry) may in fact on average be accomplished more effectively by speech than by keyboard, since the use of editing keys tends to be slow, involving either keys far from the keyboard center or key combinations. ...
Since command syllables can be chosen carefully (out of some 15,000 to 30,000 possible syllables) to not generate collisions with English, recognition accuracy of commands can be made much better than dictation accuracy, often reported to be somewhere between 90 and 95 percent. In contrast, studies of keyboard use during editing show that operator errors are very common, frequently around 20% to 30%. These errors include keys hit by mistake. ...
It follows that error correction and a way of easily backing up, undoing the last utterance, is very important also for spoken commands. ...
There is a great amount of work in the area of using natural language for commanding small devices, like PDAs. This work is also not relevant to editing, since it assumes relatively simple tasks that are carried out after little or no training. Editing is a professional activity and a process so complicated that it requires substantial training -- no matter what. Thus, our results and philosophy only superficially contradict research in speech user interfaces.
...
... the principle we advocate: that humans through their intellectual superiority are better served by a strange-sounding, but precisely delineated set of primitive concepts. ...
...
Our results show that editing commands may be transmitted about as quickly to the computer as English dictation, measured in terms of information-theoretic entropy. Our assumption that editing by speech demands a substantial learning effort is contrary to conventional wisdom about the role of speech recognition. Editing is so complicated that innate naturalness of the UI does not exist. The rational approach is to let efficiency, the amount of editing information that can be transmitted per second, drive the development of a spoken interface. For the user, efficiency is the strongest motivation for learning the complex tool any unfamiliar command language is. And we argued that natural language -- being verbose, ambiguous, and vague -- may be a poor underpinning for such a tool (even if it could be understood by a very intelligent machine). Our perspective and results demonstrate that the natural match between human and machine may be the one that recognizes the superiority of the human mind over computational capabilities of machines.
-- http://anvilwerks.com/index.php/TheShadowOfYesterday/Introduction Their native tongue, which they kept highly secretive, was different than any other known language. It was built not of words, but of things called zu, tiny discrete bits of ideas, each pronounced as one syllable, which were combined in a complex method that could convey any idea depending on the zu used and in what order. Best of all, this language had a unique power: anyone who heard it understood the zu and in prolonged exposure gained knowledge of how to speak the language. Emperor Absolon commanded his advisors to spread the language of zu throughout his Empire.
...
highly sketchy - both because I don't know any better and often because nobody knows any better. Among the unexplained things is the fact that the ear produces noise of its own accord, but also generates an echo when fed a click - too late to be a natural echo; it is actively generated.
...
let's try to design a protocol that meets all these design goals, but is more limited than real speech in many ways ...
constructed languages similar to German, English, Hebrew, etc. in that they can be written down with a small alphabet, communicated verbally and sequentially, etc.
See also modified_english for smaller modifications to English.
From: John Eaton
Subject: Re: universal spoken language???
Date: 01 Nov 1999 00:00:00 GMT
Newsgroups: sci.electronics.design

... I have heard that when real linguists get together for their world wide conference that the official proceedings are in the one language that all linguists know: latin.

John Eaton
``English dictionaries for ispell(1) support 3 prefix and 14 suffix flags.
Prefixes:
*A - re
*I - in
*U - un
Suffixes:
V - ive
*N - ion, tion, en
*X - ions, ications, ens
H - th, ieth
*Y - ly
*G - ing
*J - ings
*D - ed
T - est
*R - er
*Z - ers
*S - s, es, ies
*P - ness, iness
*M - 's''
Similar lists of common prefixes and suffixes can be generated from any word list (any alphabetic language) using the ``munchlist'', ``findaffix'', and ``icombine'' utilities (part of ``ispell'').
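A rough sketch of that idea (my own toy code, not the actual ``munchlist''/``findaffix'' algorithms): count trailing letter groups across a word list and report the most frequent ones as candidate suffixes. It assumes a plain one-word-per-line dictionary file such as /usr/share/dict/words.

    from collections import Counter

    def common_suffixes(words, min_len=2, max_len=4, top=15):
        """Count trailing letter groups across a word list; the frequent ones
        are candidate suffixes (a crude stand-in for ispell's munchlist)."""
        counts = Counter()
        for w in words:
            w = w.strip().lower()
            for n in range(min_len, max_len + 1):
                if len(w) > n + 2:          # keep a non-trivial stem
                    counts[w[-n:]] += 1
        return counts.most_common(top)

    # assumes a one-word-per-line dictionary file is available at this path
    with open("/usr/share/dict/words") as f:
        words = f.read().split()

    for suffix, count in common_suffixes(words):
        print(suffix, count)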
see also pictorial languages
``Sign Language to Learn the basic finger language and to communicate with the deaf'' http://www.dictionary.com/Dir/Reference/Dictionaries/Sign_Language/
Q: Why ? :
A:
`` Without a written language, a heritage dies.'' --
http://www.signwriting.org/forums/email/email040.html
Charles Butler 1998-02-19
http://www.SignWriting.org/read.html | http://www.wycliffe.org/pray/HTPsignlang.htm
[FIXME: this is old. Moving to http://www.worldwidewiki.net/wiki/VisualLanguages ]
ways of communicating that involve 2D relationships (non-sequential).
(pictorial ?)
``post-literate'' visual languages that do not depend on an alphabet. A few of these require animation, which is simply not possible with a static printed book.
[FIXME: there's one item here that claims to be a 3 D language ... perhaps I should rename this "nonlinear language" ? If I did that, would I have to include hypertext ?]
These are not merely a way of transcribing some spoken (or even sequential audible) language. ``Symbols are meaning referenced (can be interpreted without reference to sounds or words.)''
-- Kevin Kelly, executive editor of _Wired_, quoted in "Interview with the Luddite" article in _Wired_ magazine Jun 1995. http://www.wired.com/wired/archive/3.06/saleskelly.html?pg=7 Technology is a language. Technology is a language of artifacts. And when you have a bad thought, when you have a stupid thought, the answer to that is not to be silent. The response to a stupid thought is a wiser thought. Since technology is a language of artifacts, the response to "this technology is stupid" is to make smarter technology, not to withdraw from it.
Ontario Crippled Children's Centre
Bliss True Type Font http://www.symbols.net/zips/fonts.htm
also has links to Theory, points to other ``Indices and Collections'' of symbols and constructed languages, points to (non-pictorial) minority and endangered natural languages, and writing systems (including pictographs, petrographs, Hieroglyphics).
[FIXME: crosslink from data_compression.html] ... not even the author knows what an idea is until it is clearly expressed in words, a quality which makes language the essence of understanding. (Diagrams and pictures may be used to support words, but it is the words that contain the idea) Hence:
- Language is the expression of thought, and the act of translating thoughts into words is the refining of understanding.
- The understanding of an idea can be improved by simplifying the words used to express the idea.
- The understanding of an idea can be improved by shortening the number of words used to express the idea.
- The exercise of improving the expression of an idea, is the improvement of the understanding of that idea.
- The more plain the use of language, the more clearly an idea is revealed. The more clearly an idea is revealed, the better the understanding of that idea.
- If an idea cannot be expressed in plain English, it cannot be understood.
-- which is why all English speaking citizens should strive to use plain English in their thoughts as well as their communications.
From: "Kevin McGee" Subject: Re: LOGO-L> Re: Fear of strong steam Date: 22 Jul 2000 00:00:00 GMT Organization: Posted via Supernews, http://www.supernews.com X-MSMail-Priority: Normal Newsgroups: comp.lang.logo "Brian Harvey" wrote in message news:8l9t3p$aej$1@agate.berkeley.edu... > "Adele Woods" writes: > So far, at least, I see old-fashioned literacy as a tool for liberation, > and newfangled multimedia pseudo-literacy as a tool for enslavement. Oh, puh-lease. :-) I have a pretty good idea of the things you could say to support this; one could also make as valid a case that the history of text/literacy is one of alienation (yes, in the same Marxist spirit that you seem to mean the above). My point wasn't to make an argument for or against such assertions. My point, baldly stated is: certain forms of representation and interaction make certain things easier and more powerful than others. Text and textual (read/write) approaches make some things easier -- and others harder. Conversation does the same. "Multimedia" does the same. The important and interesting work, to my mind, is not to decide which one is "better" in some absolute sense -- but to explore which representations and modes of interaction best facilitate which activities and goals, and to design to support them. To give a concrete example, they keyboard makes certain activities easier (typing text) and the mouse makes others easier (drawing images). You could use *either* form of interaction to perform *both* tasks -- but why would you? Yes, there are cases where it is useful to use the keyboard for graphics -- and a mouse for text; but the point is to look at the *cases*, not proclaim by fiat that one should only ever use one form of interaction or representation. k
... Picsyms (Dynasyms) ... Picsyms appear to be similar to or slightly more difficult than both PCS and rebus symbols and superior to Bliss. ...
Ideas for making minor modifications to English. Perhaps English could be improved incrementally.
Reasons to keep English just the way it is.
Its aim is the use of good, clear language by the legal profession.
Ideas for making minor changes to English to "improve" it.
``The combination "ough" can be pronounced in nine different ways. The following sentence contains them all: "A rough-coated, dough-faced, thoughtful ploughman strode through the streets of Scarborough; after falling into a slough, he coughed and hiccoughed." '' -- unknown.
...
OK, I have it! (Notice the absence of the construction "I've got it!" Got is an ugly little word that is redundant when used with the word have. I have formerly campaigned to have got kicked out of the English language ... )...
...
(apparently an excerpt from "Chapter 13: E and E Prime", _Quantum Psychology_ by Robert Anton Wilson (New Falcon, ISBN 1-56184-071-8).)
Perhaps it would simplify the language to not have *any* prefixes or suffixes -- make them stand-alone words that modify the previous or following noun, like prepositions and adverbs.
``Words are reduced to a stem by removal of common suffices before searching is performed, so in general you need not worry about singular versus plural and other grammatical distinctions.''
I assume those search engines use the Porter Stemming Algorithm http://www.tartarus.org/~martin/PorterStemmer/ (links to implementations in C, C++, Prolog, Python, Ruby, and many other languages).
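For flavor, here is a toy suffix-stripping sketch -- much cruder than the real Porter algorithm linked above, and not any particular search engine's code -- showing the general idea of reducing inflected words to a shared stem before matching.

    # Toy suffix stripper -- NOT the Porter algorithm, just the general idea of
    # reducing inflected words to a shared stem before searching.
    SUFFIXES = ["ations", "ation", "ions", "ion", "ings", "ing", "ies", "ed", "es", "ly", "s"]

    def crude_stem(word):
        word = word.lower()
        for suffix in SUFFIXES:
            if word.endswith(suffix) and len(word) - len(suffix) >= 3:
                return word[: -len(suffix)]
        return word

    # "connected", "connecting", "connections" all reduce to the same stem,
    # so a search for one form will match the others.
    print([crude_stem(w) for w in ["connected", "connecting", "connections", "connect"]])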
[FIXME: move up to "keep it the way it is" section ?]The problems with our current system are sufficiently well-known that I feel no need to rehearse them all here; and people have been protesting about the situation for centuries. So just what is wrong with the idea of switching to something better?
Voice of America Special English and Ogden's Basic English are similar languages. ... VOA SE [has] more ... expressions for international news: activist, administration, ambassador ...
http://www.boeing.com/phantom/sechecker/se.html What Is Simplified English?
AECMA Simplified English http://www.aecma.org/Publications.htm is a writing standard for aerospace maintenance documentation. This type of writing standard is also known as a controlled language because it restricts grammar, style and vocabulary to a subset of the English language. ...
The objective of Simplified English is clear, unambiguous writing. Developed primarily for non-native English speakers, it is also known to improve the readability of maintenance text for native speakers. ... many of its rules are recommendations found in technical writing textbooks. For example, ...
- Use the active voice.
- Use articles wherever possible.
- Use simple verb tenses.
- Use language consistently.
- Avoid lengthy compound words.
- Use relatively short sentences.
-- http://specialized.english.net/whatspec.htm (the word list is available for download here) (Not afraid to talk about Jesus)
- a vocabulary (word list) of about 1500 words.
- a speaking speed of about 90 words each minute (that is about half of the normal speaking speed).
- short sentences.
Compare Specialized English with VOA Special English: http://www.basiceng.com/specialized.html has a list of the additional words (and additional meanings to attach to words already there) that Specialized English added to VOA Special English, and a list of the words they left out.
links to: The rules of usage are identical to full English, so that the practitioner communicates in perfectly good, yet simple, English.
We call this simplified language Basic English; the developer was Charles K. Ogden, and it was released in 1930.
http://www.basiceng.com/ramble.html Basic English is a language created by Charles Kay Ogden as a subset of standard English ...
Even though Basic English is intended to be self-contained, it incorporates some of the idiosyncrasies of standard English such as full conjugation of the few Basic verbs. A criterion was compatibility, so that (1) the speaker would be seamlessly accepted as an English speaker, and (2) the student can progress painlessly to the full language. Therefore Basic is not without unfortunate complications -- namely its whimsical spelling.
Basic English could have taken the track of a separate language without any exceptions of grammar, spelling, and pronunciation, but it would sound strange to the ear of the speakers of standard English. Others have tried to regularize English, to no avail. ...
There are at least two schools of thought about simplifying language. Ogden studied and attempted to select words that can be used to express all fundamental ideas. The second approach is to select the most common words in use and exclude the frills. ...
... simplify, yet retain full compatibility with normal English ...
Interesting.
Because ``god'' is not in the word list, that concept is expressed by other words: Father of all.
Here BASIC English is an acronym for British American Scientific, International and Commercial English.
There are 2 very different kinds of translation: translating from one "natural language" to another (difficult), and translating from one "artificial language" to another (easier, especially when the language was designed to be translated).
There are a few translation-related items that don't fit in either category -- translation from natural language to machine language (difficult) and vice versa (almost trivial), and tools that can be used for both kinds of translation.
Help me find more general-purpose translation tools.
Related local pages:
general translation:
Douglas R. Hofstadter writes a lot of fascinating stuff about the "language translation problem".
Douglas Hofstadter seems to think that natural language translation cuts to the very heart of the intelligence problem. I've enjoyed his books on fonts [FIXME: title] and on natural language ( _Le Ton beau de Marot: In Praise of the Music of Language_ book by Douglas R. Hofstadter, 1997 book.html#le_ton ).
natural language translation
(see also
)
The baffled tourist takes a digital photo of the sign with a camera built into his PDA, and the sign translator software detects the text within the image. In a matter of seconds, the text is translated into English. ... The current version of the Sign Translator is fluent in Chinese.
...
"Japanese people really enjoy reading documentation, but that's because Japanese documentation is actually fun to look at," explained Mike Adams of translation and marketing firm Arial Global Reach .
Manners must also be considered when preparing documentation. One company's on-screen training program alerted users with a sound whenever a mistake was made. Adams said this feature proved so embarrassing to Japanese users that it had to be removed.
...
"The left hand is considered unclean in some cultures, .... Burning flesh isn't considered a great sales gimmick in a lot of countries, either," Shapiro said.
... "Technobabble is difficult to translate in any language," Shapiro said.
[FIXME: unknowns#what_do_people_want] [FIXME: move to user_interface.html ?]...
It's pretty clear that programmers think in one language, and MBAs think in another. I've been thinking about the problem of communication in software management for a while, because it's pretty clear to me that the power and rewards accrue to those rare individuals who know how to translate between Programmerese and MBAese.
... Customers Don't Know What They Want. Stop Expecting Customers to Know What They Want. It's just never going to happen. Get over it.
...
You know how an iceberg is 90% underwater? Well, most software is like that too -- there's a pretty user interface that takes about 10% of the work, and then 90% of the programming work is under the covers. ... [perhaps even] less than 1%.
That's not the secret. The secret is that People Who Aren't Programmers Do Not Understand This .
There are some very, very important corollaries to the Iceberg Secret.
...
Since I am fluent in the C programming language, but not-so-fluent in the Fortran programming language, sometimes I use Fortran-to-C translation tools.
more and more people are using C as a kind of ``assembly language'' that other languages compile into. [mention this on c_programming ?]
[FIXME: crosslink with #compiler ?] [FIXME: ... perhaps I should break this section up and disperse the fragments to the appropriate places. ]
... using C as intermediate language in native code generation ...seems to include all source code for the cross-compiler. The goal here seems to be *speed* of the compiled code. Measured using the Stanford integer benchmarks (source for those also seems to be included). [FIXME: benchmarks] DAV wonders about the reverse direction -- translating C to Forth, and then optimizing for *space* of the compiled code (or perhaps even the Forth source code). This article *seems* to say that the compiled object code is about the same size, whether it was compiled by a C compiler or a Forth compiler -- but it just does a straight translation; I suspect that semi-automatic factoring could make things smaller (at some loss in speed ... but there's hope that making it smaller helps it fit into the instruction cache, perhaps making it a bit faster in some cases).
... contrary to a myth popular in the Forth community, calls in C are not slow...[FIXME: move to c_programming.html ?] [FIXME: combine with quotes from that paper in data_compression.html#program_compression ?]
how to make HP48, Windows, Atari, etc. software run under Linux or other boxes.
See also video_game.html#emulators [FIXME: merge ?] hardware_david_uses.html#hp48
``Turn Your PC Into a Mac?!'' review by Phil 1999-03-25 http://www.sysopt.com/reviews/macemu/ reviews, compares, and contrasts 3 emulators:
MACE: Macintosh Application Compatibility Environment http://macehq.cjb.net/ ``Mace is free under the terms of the GNU Lesser General Public License (LGPL)''
I don't know much German, so I try to absorb more by reading German (".de") web pages.
See also international mailing addresses .
See also online natural language translation tools #natural_translation .
a few links about the smallest pieces of a written language, letters (listed in the alphabet).
At the letter level, there's a big split between typography computer_graphics_tools.html#fonts which can be (more or less) easily read and written by someone trained to read standard Roman letters, and the alphabets I list here, which need a bit more effort and in some cases specialized tools.
Why on Earth would you go to this extra effort when you already are familiar with the Roman alphabet ? Why hassle with anything other than traditional Roman letters unless there's some real benefit ? There are some very good reasons:
(I originally started out here talking about simple letters, but then I rambled on until I started describing the general design principle http://rdrop.com/~cary/html/3d_design.html#design of ``levels'' ... I don't think I can really talk about such an abstract idea without some sort of concrete example. [FIXME: should I separate out the idea of "levels" anyway, then just point to it here where I discuss letters ?] )
Once I thought that the more words and ideas one had to express, the more letters/symbols one needed to express them with. I was surprised to learn in grade school that all possible words in English can be represented by a fixed number of symbols strung in long sequences (the 26 uppercase and 26 lowercase letters ... English text also uses a few other symbols ). But I still thought that I'd have to learn more symbols every time I learned a new language (Spanish, German, Mathematics, etc.). (: Multilingual Downhill http://cartoons.sev.com.au/archivepage.php?cartoonid=s329 :)
I was surprised to learn that all possible words in all possible languages can be represented by a fixed number of symbols strung in long sequences.
How many symbols do I need ?
I was even more surprised to learn I only need 2 symbols. ( Bits are Bits http://www.argreenhouse.com/papers/rlucky/spectrum/bits.shtml )
In calligraphy I got my first clue that the exact shape of the letters was not very important. Simple substitution ciphers http://www.webcom.com/nazgul/codeclass.html (an entertaining way to start learning about the ASCII code and Morse Code and data compression) showed the particular shape of the letters is completely unimportant. If a completely different shape were substituted for any particular letter of the alphabet in all the books of the world and in the letter-recognition part of the brain, no one would notice (with a couple of exceptions: deeply intertwined calligraphy, and expressions like ``U shaped'' and ``T shaped'' and ``O shaped''; we'd have to find some sort of replacement like ``circular'' or ``zigzag'' something ... any other exceptions ?).
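A minimal sketch of that point (my own toy illustration): an arbitrary one-to-one remapping of the 26 letter shapes, and the equally trivial reverse mapping that recovers the text unchanged.

    import string

    # An arbitrary one-to-one remapping of the 26 lowercase letters.
    # Which shape stands for which letter is completely unimportant,
    # as long as the mapping is used consistently everywhere.
    plain  = string.ascii_lowercase
    cipher = "qwertyuiopasdfghjklzxcvbnm"   # any permutation of the alphabet works

    encode = str.maketrans(plain, cipher)
    decode = str.maketrans(cipher, plain)

    message = "the shape of the letters does not matter"
    scrambled = message.translate(encode)

    print(scrambled)
    print(scrambled.translate(decode))   # round-trips back to the original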
This was my first glimpse of the concept of ``levels'' or ``layers'' in the sense of the OSI seven-layer model. The idea is that one thing (in this case, English words) builds on top of another thing (the Roman alphabet). In some sense, the Roman alphabet is like a tool that can be used to build many different things -- an English word, a French word, ein deutsches Wort, etc. But on the other hand, any of these other alphabets can be substituted.
While the Roman alphabet could be considered ``analogous'' to the Chinese alphabet, the alphabets I list here have a much closer relationship. It goes beyond putting them in the same category. Various fruits have many things in common, and while I might have roughly the same reaction (happy happy joy joy) to an apple pie, a peach pie, a cherry pie, or a pumpkin pie, substituting other fruits -- cucumbers, eggplants, tomatoes, etc. -- is just not going to cut it for me.
But these alphabets go beyond being in the same category -- if they're used as an internal representation and translated at both ends, no one on the outside ever notices when one is swapped for another.
upgradeability:
I think the cool thing about ``levels'' is that instead of having one huge monolithic structure that you have to change all-or-nothing, you have small pieces that you can pop out and replace -- hopefully making an improvement, and when it's not an improvement you can pop the original back in without much hassle and with a little more respect and understanding. If you get an entire new set of silverware (-: or in my case plasticware and stainless-steel-ware :-), you don't need to upgrade your dishes, your table, or your food to work with it. You can try out the new set for a while and see how you like it. If you decide you liked the old set better after all, you can switch back easily. In fact you can try out just 1 new knife or fork or spoon at a time. But there are limits -- you can't try out just a new fork *prong* by itself, the entire fork prongs + handle is a monolithic all-or-nothing structure.
If you truly want to understand something, try to change it.
-- Kurt Lewin
In architecture and in computer programming we have the concept of ``scaffolding'' or ``framework'' -- you build something that you really don't want (which at first seems weird), but it works just enough that you can start swapping out levels and get incremental improvement. (related to http://c2.com/cgi/wiki?TestFirstDesign , http://c2.com/cgi/wiki?TestDrivenDevelopment )
Often you have several levels. The next higher level, ``words'' ... we can swap out different words (especially nouns) ... different ways of spelling the same words ... we're still using the same alphabet at the lower level, and the sentence (at the higher level) means the same thing (with a few exceptions -- puns and onomatopoeia and anagrams spin_dictionary.html#anagrams ). (exceptions are called http://c2.com/cgi/wiki?MixingLevels )
It seems that this concept of ``levels'' occurs in many fields of study:
-- http://www.math.toronto.edu/mathnet/questionCorner/noneucgeom.html ``All of Euclidean geometry can be deduced from just a few properties (called "axioms") of points and lines. With one exception (which I will describe below), these properties are all very basic and self-evident things like "for every pair of distinct points, there is exactly one line containing both of them".
This approach doesn't require you to get into a philosophical definition of what a "point" or a "line" actually is. You could attach those labels to any concepts you like, and as long as those concepts satisfy the axioms, then all of the theorems of geometry are guaranteed to be true (because the theorems are deducible purely from the axioms without requiring any further knowledge of what "point" or "line" means).''
... Euclid's Parallel Postulate ... is independent of the other axioms, in the sense that it is logically self-consistent to have some things called "lines" and other things called "points" which satisfy the other axioms but don't satisfy the parallel postulate. Any such a collection of things is called a non-Euclidean geometry.
There are many examples. Most concretely, if you do geometry on a curved surface instead of on a flat plane (where now "line" refers to the shortest path between two points, which obviously will not be straight if you are on a curved surface), you typically end up with a non-Euclidean geometry.
-- http://www.math.toronto.edu/mathnet/questionCorner/projective.html The axiomatic approach:
This approach requires no philosophical definition of what a point or a line actually "is", just a list of properties (axioms) that they satisfy. The theorems of geometry are all statements that can be deduced from these properties. In this approach, the theorems of geometry are guaranteed to be true no matter what concept of "point" or "line" is being used and no matter how they are defined, as long as they satisfy the basic axioms.
...
One interesting fact is worth mentioning: in projective geometry, points and lines are completely interchangeable! That is, any statement about points and lines would still be true even if you replaced all occurrences of the word "point" with the word "line", and vice versa. For instance, the basic axiom that "for any two points, there is a unique line that intersects both those points", when turned around, becomes "for any two lines, there is a unique point that intersects (i.e., lies on) both those lines", which is the property described above. There is a complete duality between points and lines in projective geometry.
-- http://www.math.toronto.edu/mathnet/questionCorner/noneucgeom.html If it seems unsatisfying to think of having to assume certain things without proof before you can prove other things, you can think of it in the following alternative way: the postulates give a definition of what one means by the words "point" and "line". These words mean any things that behave in the manner described by the postulates.
...
... the axiomatic approach ... rather than defining points and lines by some philosophical definition of what a "straight line" actually is, they are defined by what their properties are.
... one either opts for the axiomatic approach, or else moves to analytic geometry, whereby a point on the plane is defined to mean an ordered pair of numbers, and a line is defined to be a set of numbers (x,y) satisfying an equation of the form ax + by = c where a, b, and c are constants.
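(DAV: a tiny worked example of that analytic-geometry definition -- my own illustration, not from the quoted page -- finding the a, b, c of the line ax + by = c through two given points.)

    def line_through(p, q):
        """Return (a, b, c) so that ax + by = c passes through points p and q."""
        (x1, y1), (x2, y2) = p, q
        a = y2 - y1
        b = x1 - x2
        c = a * x1 + b * y1
        return a, b, c

    a, b, c = line_through((1, 1), (3, 5))
    # both points satisfy ax + by = c, so both lie on the "line" as defined above
    assert a * 1 + b * 1 == c and a * 3 + b * 5 == c
    print(a, b, c)          # prints 4 -2 2, i.e. the line 4x - 2y = 2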
From a coding perspective, all of these alphabets are ``simple substitution cipher'', a 1 to 1 mapping of the letters of the alphabet onto different shapes (sounds, bumps, etc). While they may seem mysterious and ``impossible to read'' at first glance, they're all easily decoded by any competent amateur cryptologist, given enough source text.
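A minimal sketch of the first step such an amateur cryptologist would take: count symbol frequencies in the ciphertext and line them up against typical English letter frequencies. The guesses only get good with enough source text, and the ciphertext file name here is hypothetical.

    from collections import Counter

    # Letters of English text, ordered roughly from most to least frequent.
    ENGLISH_ORDER = "etaoinshrdlcumwfgypbvkjxqz"

    def guess_mapping(ciphertext):
        """First, crude pass at breaking a simple substitution cipher: pair the
        most frequent cipher symbols with the most frequent English letters."""
        symbols = [s for s, _ in Counter(c for c in ciphertext if c.isalpha()).most_common()]
        return dict(zip(symbols, ENGLISH_ORDER))

    # "scrambled.txt" is a hypothetical file of enciphered source text
    mapping = guess_mapping(open("scrambled.txt").read().lower())
    for symbol, guess in list(mapping.items())[:5]:
        print(symbol, "->", guess)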
"Why are the letters of the alphabet in its current order (alphabetical order, rather than, say, Futhark order) ?" For most purposes the ordering of the letters makes absolutely no difference, (reading english text), while most other times the ordering is arbitrary (sorting names, words, etc) as long as we include all the letters, and we might as well use the standard alphabetic order. DAV: is there a better order ? ... consonants, then vowels ...
I briefly touch on the idea of adding or subtracting letters from the alphabet here ...
http://www.evertype.com/standards/wynnyogh/thorn.html
also has a nice explanation of where each of the letters of the modern Latin alphabet (aka Roman alphabet, English alphabet) came from, and suggests adding the letter thorn (þ þ Þ Þ) after the letter zulu.
The Moby Dick chart indicates "TH" occurs 345 times, while 'J' only occurred 11 times, 'Q' 16 times, 'X' 11 times, 'Z' 6 times.
[FIXME: I mention this URI elsewhere -- should I bring the references together ?]
A B C D E F G H I _ K L M N O P Q R S T _ V _ _ Y _
and ran words together with no spaces.
-- _The Second Cryptographic Shakespeare_ by Penn Leary http://home.att.net/~mleary/pennl14.htm ``In Elizabethan english, the letters "I" and "J" were used interchangeably ("Ben Ionson"), as were "U" and "V" ("INSVING"). The letter "W" was often printed as two "V"s ("VVilliam").'' -- http://www.veling.nl/anne/templars/acrocipher.html (So how were X and Z represented ? Were they just skipped, or what ? -- DAV) (DAV: consider eliminating C as well, as suggested by EU SPELLING by Gary W. Fugate http://www.basiceng.com/euspell.html ) [FIXME: crosslink "adding and deleting letters" with "simplified english" ...]
A phonetic alphabet is useful when you are talking over a noisy phone / radio and the exact spelling of a word ( name, call sign, etc.) is important.
Expressing letters in terms of word-sounds. Names people give individual letters (and other written symbols).
Too many people give the letters ``n'' and ``m'' and ``p'' and ``b'' and ``z'' and ``c'' names that I can't tell apart. (Read the previous sentence aloud to someone over the phone...).
[FIXME: move to greek_alphabet.html ] [FIXME: delete ../mirror/greece/Greek_Alphabet.htm once I'm sure all the information there is redundant. ]
See also greece.html
Greek alphabet Scalable form in utf-8 http://www2.york.ac.uk/depts/maths/greekutf.htm
Greek alphabet http://www.york.ac.uk/depts/chem/course/studhand/greek.html
What I'm calling ``improved alphabets'' (is there a better term I could use ?) are superior to the Roman alphabet in some quantifiable area, although the Roman alphabet could be substituted.
technical improvements -- given the characteristics of the tools you use to write letters on paper, these other alphabets let you write text much more quickly or much more compactly. compact representation -- packing more words onto a sheet of paper. [FIXME: Mark Twain quote ... ``typewriter ... an awful pile of words ...'']
Torbjorn Andersson is the only human DAV has ever heard of who cuts runestones and also writes web pages. You can order custom runestones from him. The inscription on his first runestone says: ``Torbjorn cut these runes for his good mother Ingrid. She is buried in Botkyrka. This will stand as a memorial of Ingrid as long as mankind will exist.'' -- http://www.algonet.se/~tanprod/zenrph1.htm
One page on this site mentions -- "Research has shown that lower case letters with the extenders are more distinctive and legible. Most newspaper and magazine headlines are now downsized for this reason." http://66.41.60.21/unicase.htm "There are 12 pure vowels in English speech and at least 12 important combinations."
DAV played with something similar, but thought everyone would laugh if I put it on my web page. (basically 7 segments ...) ... Who cares if people laugh ? let's put it up anyway: computer_graphics_tools.html#seven_segment_letter
[FIXME: should I move this over to typography computer_graphics_tools.html#fonts ? ]
See si_metric_faq.html#iso8859 for a little info on ISO-8859-1, how to encode all the letters I use into that encoding, and some of my related rants and raves.
"Sequoyah and His Syllabary" by Helge Moulding http://www.geocities.com/Athens/1401/sequoyah.html
someone, somewhere, thought this alphabet ``looked cool'', but (in DAV's opinion) it is not objectively superior to the Roman alphabet.
Here I list alphabets with typography that's not directly readable by someone used to the Roman alphabet. It's obviously artificial and seems to be intelligent, but the mystery about what it means is part of the emotion the artist wants to convey.
Some of them were designed to handle standard English text (26 letters), others were designed for fictional characters (science fiction and fantasy) speaking fictional languages.
See typography computer_graphics_tools.html#fonts for fonts that are directly readable by humans used to the Roman alphabet, and also standard fonts for human languages.
***************************************************************************
From: Anders Sandberg
Subject: Re: >H geometry of thought-space
Transhuman Mailing List

On Thu, 18 Apr 1996, Anton Sherwood wrote:
> I often wonder what sort of shape an information space has. How many
> dimensions? Flat or curved? For the whole of human knowledge there
> may be no answers to these questions, but for some restricted areas it
> ought to be not-too-hard.
>
> Example: musical styles. Badfinger is near the Beatles; [etc]

In this case you could try to form a distance metric, letting artists be nodes linked by labelled arcs. Using some kind of graph displaying software (there are some rather impressive systems out there) you could then try to display this graph with minimum distortion or some other optimization. However, I'm not sure there is a low-dimensional embedding of music.

Another possibility would be to plot ideas in a high-dimensional space and use principal component analysis to project it into a low number of dimensions.

In general I conjecture that information spaces can be represented as (statistically) fractal graphs. Hmm, opens up interesting possibilities: "Abstract: We have found that the field of social psychology has a semantic embedding dimension of 33.3+-2.2, which is significantly lower than psychology in general (65+-4.2 [1]). This implies that epistemic lacunarity measure theory is not applicable."

-----------------------------------------------------------------------
Anders Sandberg
Towards Ascension!
http://www.nada.kth.se/~nv91-asa/main.html
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y
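(DAV: a minimal sketch of the principal-component projection Sandberg suggests, assuming the items -- artists, ideas -- have already been encoded as rows of a high-dimensional feature matrix. The matrix here is random stand-in data, not a real data set.)

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))      # 200 "ideas", each a made-up 50-dimensional feature vector

    # Principal component analysis via the singular value decomposition:
    # center the data, then project onto the top k right singular vectors.
    X_centered = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

    k = 3                               # keep 3 dimensions for display
    projected = X_centered @ Vt[:k].T   # 200 x 3 coordinates, ready to plot

    explained = (S[:k] ** 2) / (S ** 2).sum()
    print("fraction of variance captured by each axis:", explained)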
From: Anton Sherwood
Subject: >H geometry of thought-space
Transhuman Mailing List

: Take lyserginic acid diethylamide instead, it's much cheaper.

Are you kidding? I can't do math _at all_ when I'm polluted, let alone visualize curved spaces in higher dimensions. A video-game in curved 3-space would, I imagine, help physicists and mathematicians gain useful intuitions about such spaces. But this strays from what I was talking about ...

If the "natural" metric space for some concept-field (my example was musical styles) is too high-dimensional to represent visually, it can still be useful. If nothing else, you can project it into 3-space. (Choose the remaining axes wisely. I have an idea about that, too.)

Anton Sherwood *\\* +1 415 267 0685 *\\*
From: Alexander Chislenko
Subject: Re: >H geometry of thought-space
Transhuman Mailing List

Anton Sherwood <dasher@netcom.com> wrote:
>If the "natural" metric space for some concept-field (my example
>was musical styles) is too high-dimensional to represent visually,
>it can still be useful. If nothing else, you can project it into
>3-space. (Choose the remaining axes wisely. I have an idea about
>that, too.)

Please share more of your ideas! I would assume that in the ideal case you find orthogonal (unrelated) features, and build [groups of] axes starting from the most important features for the domain. Then projection of your models on sub-spaces represents generalized abstractions (i.e., you ignore the [particular] features represented by the other axes).

An interesting thing is to draw domains in the idea-space and watch their dynamics. A domain would usually look like a balloon or a curved worm. Domains, such as music styles in music-space, or communication methods in the space represented by (Speed_of_Delivery * Number_of_Recipients) axes, may come close to each other, and even overlap (usually in projections). The domains come into competition with each other where they meet, and may freely develop into yet unoccupied areas of space. They often squeeze and push each other. If you make a series of animations out of your observations of domain positions, you will see a little movie with a bunch of little worms crawling and struggling on some [concept] terrain. My father used to do this for biological species, though without any computers the results were just sequences of paper drawings.

-----------------

Physical space gave birth to biological systems, but its structure is hardly optimal for functional development. Throughout history, we see intensifying efforts to rebuild the effective system architectures to more closely correspond to the functional space. The key to success here is to [effectively] bring related systems closer to each other. The first attempts in this direction started in early ecosystems, with related objects physically getting closer to each other. Then, more flexible connections were discovered, and things (animals) started moving around. Humans enhanced these abilities by both clustering themselves better and speeding up physical relocations with transportation. Shortly after this, humans started actually altering the effective metrics of physical space by building transportation infrastructure - bridges, roads, etc. to provide faster connections between functionally related domains. With the advent of communications (from making faces at each other to sending letters, to e-mail) it became possible to move away from the necessity to drag objects to each other for every interaction. Now, the underlying structure of physical space may often be ignored, and you may concentrate your interactions with - and representations of - reality increasingly in the terms of functional space. Ultimately, I think, this process will come to a situation where the high-level processes effectively transcend to the idea-space, and the physical space and matter - the hardware of the Universe - will be dealt with by the low-level layers whose sole purpose would be to provide the desired functional connections at the lowest possible cost.

-----------------------------------------------------------
| Alexander Chislenko | sasha1@netcom.com | Cambridge, MA |
| Home page: http://www.lucifer.com/~sasha/home.html |
-----------------------------------------------------------
Date: Wed, 15 May 1996 00:00:07 -0400 (EDT)
From: transhuman@umich.edu
Subject: >H Digest
...
From: Mitchell Porter
Subject: Re: >H Implications of geometric memespace

... A space of "signifieds", on the other hand, is, I think, what people have in mind when they talk about spaces of concepts, memes, and so forth. (You could call a signifier space a namespace, and a space of "signifieds" a memespace.) This is an ontological concept: such a space is a set of possible entities or possible states of affairs. It would have to include, say, the notion of "a world where elephants invented cars" or "a planned meeting on the Moon in 2006 AD", but not "grzzxyllph", because that doesn't mean anything in any language. (Yet.) ...
*********
automatically organizing by interest
self-organizing geometry of Usenet
[FIXME: summarize]
original idea by Anton Sherwood: http://www.ogre.nu/text/instead.txt
DAV:
I like the idea Anton Sherwood proposes.
A few minor flaws:
I suspect there *will* be lots of asymmetry. Everyone will want to read what Emperor Dogbert and the Usenet Oracle have to say, but Emperor Dogbert won't read what *everyone* else has written (not enough hours in the day).
Already some people have different "modes" -- I hear that some people really enjoy the fiction of Carl Sagan, but really hate his non-fiction. I guess you could handle this ad-hoc by giving Sagan 2 points in N-space, which will quickly diverge. Perhaps you can think of a less ad-hoc mechanism.
I personally think it would be a Bad Thing for a person to read lots of stuff from group A but never send A any feedback, and then post lots of stuff to group C but ignore anything C ever says. Of course, this happens all the time -- Bernoulli *never* sent Pythagoras any mail, and (since he is dead) Bernoulli ignores everything I say. I like the way your mechanism gives us some sort of feedback on the sort of people who read our messages, not in terms of the readers themselves, but in terms of other writers who are "attracted" to the same readers.
Perhaps you could have 2 separate universes:
- the "universe of writers", with a star for each poster, but their positions determined by the readers -- if a reader really likes A, B, and C, then those stars feel an attractive force towards each other (or towards their center of mass, which is equivalent); if that same reader really hates X, Y, and Z, then those authors will be repelled -- but in which direction ? not away from each other, since it may be that they have a single thing in common that that reader finds repulsive, which should bind them together.
- the "universe of readers", with a star for each reader...
On a positive note:
Perhaps the attraction/repulsion mechanism will cause spammers to be pushed out towards the outer edges of N-space, giving new meaning to the phrase "push them over the edge". Yes, let's bring back sea monsters swimming the oceans near the Waterfall at the Edge of the World.
Date: Mon, 22 Apr 1996 00:00:06 -0400 (EDT)
From: transhuman at umich.edu
Subject: >H Digest
***************************************************************************
From: (Anton Sherwood)
Subject: >H geometry of thought-space
Transhuman Mailing List

Anton Sherwood wrote:
: >If the "natural" metric space for some concept-field (my example
: >was musical styles) is too high-dimensional to represent visually,
: >it can still be useful. If nothing else, you can project it into
: >3-space. (Choose the remaining axes wisely. I have an idea about
: >that, too.)

Sasha wrote:
: Please share more of your ideas!
: I would assume that in the ideal case you find orthogonal (unrelated)
: features, and build [groups of] axes starting from the most important
: features for the domain. Then projection of your models on sub-spaces
: represent generalized abstractions (i.e., you ignore the [particular]
: features represented by the other axes).

Yeah, that's what I was thinking, except that I am not sure what you mean by "features". The coordinate axes are arbitrary, because a priori we don't know what features exist, let alone which are significant. One approach is to find the 2- or 3-plane of least squares. But we can't use the least-squares technique we learned in our youth, because there is no distinction between dependent and independent variables! Or rather, part of the task is to find the dependent variable(s), if any.

Another useful space is that of people with shared interests. Sasha and I have discussed this before, as a way to filter Usenet. Each user is assigned a point in N-space (initially random) which attracts or repels others according to how interesting they find that user's postings. (Requires a new header; what shall we call it? X-Sasher?) The resulting vector functions as a semi-automatic killfile: your newsreader computes the dot product of your vector and the author's vector, and shows you the article only if it exceeds the threshold you set. (You may want your threshold to include randomness.)

With time, assuming that for active posters "interest" is roughly symmetric (i.e. few pairs exist such that X loves Y's posts but Y finds X tedious), I expect the users will settle into a moderately clumped distribution isotropic on a hypersphere of no more than eight dimensions. The beta version of this scheme, I suggest, would use 16 dimensions. After a year, we look at the results, derive a new coordinate system and publish a conversion formula. The new axes will be listed in order of their significance (variance). At revision time, we may decide to change the number of dimensions. The user's vector should be encoded as a string of no more than ~64 characters; the tradeoff is between resolution and number of dimensions. There are 94 usable characters (`!' through `~'), which I suppose is plenty for one dimension.

An article about net.personalities can then be illustrated by a projection (in the first two or three dimensions) of this universe. It'll look like the night sky, where the brightest stars represent the most voluminous posters. (If you see a Milky Way, you did not choose the most useful projection.) (Spammers and other killfile-bait appear as quasars surrounded by blackness. Unfortunately, a new noise-source, appearing at a random position, causes a sudden shift in the map as neighbors flee. Can this effect be avoided?)

My 3-projection of this N-space may be very different from yours, because I want one of the axes to be "interest to me" - i.e. dot product with my own vector.
The two next most significant dimensions would then show the diversity of my interests.

: An interesting thing is to draw domains in the idea-space and watch their
: dynamics. A domain would usually look like a balloon or a curved worm.
: Domains, such as music styles in music-space, or communication methods
: in the space represented by (Speed_of_Delivery * Number_of_Recipients)
: axes, may come close to each other, and even overlap (usually in
: projections). The domains come into competition with each other where
: they meet, and may freely develop into yet unoccupied areas of space.
: They often squeeze and push each other. If you make a series of
: animations out of your observations of domain positions, you will see
: a little movie with a bunch of little worms crawling and struggling on
: some [concept] terrain.
: My father used to do this for biological species, though without any
: computers the results were just sequences of paper drawings.

I can imagine this for species, where the axes may be length, mass, litter size, length of beak, size of eyes and so on, different aspects of the changing niche a species occupies. What data did your father use?

Anton Sherwood *\\* +1 415 267 0685 *\\*
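The filtering rule Anton describes above -- show an article only when the dot product of your interest vector and the author's exceeds a (possibly randomized) threshold -- is simple enough to sketch. This is only my illustration; the function and parameter names are made up.

    import random

    def show_article(my_vector, author_vector, threshold, jitter=0.0):
        """Semi-automatic killfile: compute the dot product of the reader's and
        the author's interest vectors; show the article only if it beats the
        threshold (optionally smeared with a little randomness)."""
        score = sum(m * a for m, a in zip(my_vector, author_vector))
        return score > threshold + random.uniform(-jitter, jitter)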
Date: Wed, 24 Apr 1996 00:00:04 -0400 (EDT)
From: transhuman@umich.edu
Sender: transhuman@umich.edu
Subject: >H Digest
***************************************************************************
From: Anton Sherwood
Subject: >H geometry of thought-space
Transhuman Mailing List

Ruminating on my scheme for a self-organizing geometry of Usenet, I wrote:
: (Spammers and other killfile-bait appear as quasars surrounded by blackness.
: Unfortunately, a new noise-source, appearing at a random position, causes
: a sudden shift in the map as neighbors flee. Can this effect be avoided?)

Yes! And without the central choreographer that someone suggested. Each user's X-Interest-Vector begins as +-epsilon in each dimension. That gives the universe a "seed" asymmetry; if everyone started at zero, which way would they move? As users selectively step away from each other, eventually they approach the limiting hypersphere; at that distance, the core's asymmetry is negligible. Since spammers don't read, they never stray from the core, and their effect on the Sphere of Interests is therefore symmetric. (Same goes for flamers without well-defined interests, who wander about aimlessly.)

I observed that X-Interest-Vector could be encoded as a string of characters between `!' and `~' inclusive. That range can be interpreted as representing odd integers from -93 to +93, ensuring no zeroes! A new user's header, then, may show
X-Interest-Vector: OOOOPPOPPPOOPPOP
and an old hand's header may show
X-Interest-Vector: SYA>QA[MA>[PGRA>

Definition: the "principal dimension" is a unit vector P which maximizes the variance of dot products of P with posters' Interest Vectors; informally, the axis along which posters tend most to sit. If the N-dimensional Sphere of Interests is projected along its principal dimension, we can then look for the principal dimension of that (N-1)-plane, and so on; if I mention the k principal dimensions, I mean the first k mutually perpendicular vectors found in this way. (Statisticians, is there another way to choose a principal k-plane?)

Conjecture: three or four principal dimensions are well-defined; further principal dimensions are well-defined for limited sectors of the Sphere of Interests (regions with diameter less than half that of the whole Sphere), but no better than noise for the Sphere as a whole. This is because once the broad groups have staked out their turf - a spontaneous order as ever was - subgroups divide on axes relevant only to their nearest neighbors. The division of guitars into acoustic and electric does not show up in the fishing groups.

Which means that if the rule of attraction/repulsion in the Sphere is symmetric with respect to the axes, every axis will be distinctive in some sectors; the "natural" dimensionality of the Sphere will seem to be whatever it's allowed. Can that be avoided? Can the users be constrained to do most of their varying in the first few dimensions, spreading out in the last dimensions only as necessary? Yes, I think so, if the migration rule has a slight bias for movement in the first dimensions; I'm still thinking about such rules. (Expect a long post to AltInst.)

Anton Sherwood *\\* +1 415 267 0685 *\\*
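Taking Anton's encoding at face value (`!' through `~' standing for the odd integers -93 through +93), here is a small sketch of decoding an X-Interest-Vector header and of computing his "principal dimensions" with the standard principal-component recipe: the variance-maximizing unit vectors are the leading singular vectors of the centered data. The code below is my own illustration, not part of the proposal.

    import numpy as np

    def decode_interest_vector(header):
        """Map each character `!'..`~' to one of the odd integers -93..+93."""
        return np.array([2 * (ord(c) - ord('!')) - 93 for c in header], dtype=float)

    def principal_dimensions(interest_vectors, k=3):
        """The first k unit vectors that maximize the variance of dot products
        with the posters' interest vectors -- i.e. ordinary PCA axes."""
        X = np.array(interest_vectors, dtype=float)
        X -= X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        return Vt[:k]

    # e.g. decode_interest_vector("OOOOPPOPPPOOPPOP") gives a 16-dimensional
    # vector of +-1 -- the "+-epsilon in each dimension" of a new user.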
---------------------------
Anders wrote:
>>One could for example create a high-dimensional vector space where each
>>coordinate represents occurrences of a certain keyword or group of keywords
>>("upload-", "cryo-", "nano-" etc), and then reduce its dimensionality to
>>three using principal component analysis. And there is a kind of diagram
>>where the user specifies some search terms and places them in space, and the
>>database creates a 3D model where nodes organize themselves depending on
>>their distances.

Weren't we just discussing something like this over on the Transhuman Mailing List ? (I *still* haven't gotten anything from that list in a month now -- apparently I missed the singularity.)

Remi wrote:
>my great problem...
>cryonics is close to uploading, and close to space migration. But uploading
>is not close to space migration (I don't know if this is true. Just an example).
>So, I must make uploading close to cryonics, but far from space migration. I am
>not at all a mathematician, but I suspect a "4th dimension trick" I cannot
>resolve. I thought of using "teleports" or LOD Nodes (changing the appearance of
>the object as you approach it, so new connections appear), but it would be very
>difficult to materialize, and above all, a reader would not obtain a good idea
>of the relationships with just a glance at the world. And I would love to
>preserve legibility. I don't want to transform Omega into an Escherian Doom Game!
>If you have some ideas which can help me get out of this pit, I'd like to
>listen to you (but remember I'm very much a dummy in mathematics!)
>
>Remi

Unfortunately, adding normal Euclidean dimensions will not help this problem (contrary to most explanations of "hyperspace" you may hear). If uploading is close to cryonics, and cryonics is close to space migration, then uploading will stay close to space migration no matter how many dimensions you add: the triangle inequality guarantees that distance(uploading, space migration) can never exceed distance(uploading, cryonics) + distance(cryonics, space migration), and adding dimensions never changes that.

It looks like the Omega database will turn into a tightly interwoven, self-referential document. Lots and lots of links to other parts of itself. I've seen some Web sites where (as Remi says) it is difficult for a reader to understand what is available on the site and the relationships between the pages. I just click on links and I get randomly teleported to other pages, quickly getting lost. It looks like Moss is doing a good job organizing the Omega to avoid this problem.

On the other hand, an Escherian Doom Game sounds kinda cool ...
---------------------------
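A toy version of the keyword-vector idea Anders sketches above: count a few keyword stems per document, then project the counts down to three coordinates with principal component analysis. The stems and function names here are only illustrative (with just three stems nothing is actually discarded; a real keyword list would be much longer).

    import numpy as np

    KEYWORDS = ["upload", "cryo", "nano"]   # illustrative stems only

    def keyword_vector(text):
        """One coordinate per keyword stem: how often the stem occurs."""
        low = text.lower()
        return np.array([low.count(k) for k in KEYWORDS], dtype=float)

    def project_to_3d(texts):
        """Reduce the keyword-count vectors to (at most) three PCA coordinates."""
        X = np.array([keyword_vector(t) for t in texts])
        X -= X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        return X @ Vt[:3].T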
A good question is never answered. It is not a bolt to be tightened into place but a seed to be planted and to bear more seed toward the hope of greening the landscape of idea.
-- John Ciardi (via /usr/games/fortune)
Apparently-To: <cary at agora.rdrop.com> Date: Tue, 16 Apr 1996 00:00:08 -0400 (EDT) From: transhuman at umich.edu Subject: >H Digest *************************************************************************** From: Anders Sandberg Subject: >H Forwarded Intro Transhuman Mailing List ---------- Forwarded message ---------- Date: Mon, 15 Apr 1996 18:18:18 +0200 (MET DST) From: Eugene Leitl ------------------------------------------------------------ _ _( )_ ascension (are there any ^ \ ^ / official transhumanism | <-*-> glyphs, btw?) _ / v \ expansion (_X Hi co-netizens and aspiring transhuman larvae, I am a new subscriber -- please have some patience as I hadn't had enough time to wade through the FAQ, and, since our machines had to be transiently shut down, had only browsed through only a few posts from the archive. Even from the scant few poster names I had time to glimpse, however, it is obvious that right from the start this list has acted as a powerful attractor for some of the most articulate and shrewd thinkers on the net I ever had the pleasure to encounter: the incredible ubiquitous Mr. Anders Sandberg (all hail to the lampreys ;), then one of the very few anchors of sanity on nanotech, Dr. Rich Artym, and the not entirely unknown Sasha Chislenko (privet seml'akam). I am looking forward to meeting other, similiarly distinguished netizens which doubtlessly should be swarming this list in droves. Preliminaries done, I'd like to proceed to the beating of diverse, doubtlessly extremely dead horses (it's probably all in the FAQ anyway). 1. so why this list? 2. can we do something? 3. should we do something? 4. how exactly shall we proceed ? As to 1), I am actually amazed to find any list members at all, since the transhumanism meme appears to be even more obscure than the cryonics one. It seems we constitute an infinitesimally tiny fraction of the, alas, already quite exquisite netizen community. While this is very good for ego stroking, it obviously foretells grim future for implementation attempts. So why this list -- to spread the transhumanism meme? Look at CryoNet: for all the years it exists it has hardly managed making cryonics a mass movement. In fact, after some past brief flowering, the major players have quantitatively ceased but to token contribute -- in their partially explicitly voiced oppinion everything of value has been said already -- and thus posting has become a waste of time. Sci.cryonics, the cesspool it at times is, has doubtlessly contributed much more to the cryonics pandemic memefection than CryoNet ever could. (However, the utterly moronic mass media had had in nick of time accomplished much more than even that particularly notorious newsgroup). If, on the other hand, this list is meant as an instrument to grind out the implementation roadmap and (hear, hear) to effect a task force assignment, it can potentially become extremely valuable. I fervently hope it is meant as the latter. Even a mere virtual think tank can be constructive enough. (Several guys can sample a larger fraction of idea space than one guy alone, no?) We have arrived at point 2). Even if this obviously sounds like terrible hubris, I think we, scant as our numbers are, actually _can_ do something. While the nanotech list, with its predominant "gee whiz, wouldn't it be great, if ..." attitude vividly shows us how things should _not_ be done, think about such space flight pioneers as Ziolkovsky/GIRD members and that Goddard guy. 
However obscure initially, philosophies/schools of thought with time can sure draw enough followers to become sufficiently mainstream and thus can be regarded as a potentially transmutational force. But what are the outline specs of the Omega project? I think I can speak for a substanteous percentile of transhumanists (flame me if I'm wrong) that we have set out to construct the Ultimative Intelligence, UI, or, God. At the very least we want to travel towards the Omega singulariy in persona, converging towards it in Far Future -- in fact _to make Omega happen_. Personally, I don't care too much whether it is the infinite-by-definition Tiplerian Omega or just something actually limited but still very, very puissant. From this side of the long Ascension path mundanely exponential and holy hyperbolic look all about the same. Imo, even a single planetary or a Dyson sphere intelligence ecology is surely sufficiently complex to be far beyond our wildest collective dreams. 3). But _should_ we do it? Obviously, the proverbial five-digit-IQ Superhuman Intelligence, whether space- or earth-based can plan rings around us, and, should it ever find our feeble self-protection attempts threatening to its well-being, boojum us out of existance while we are still in bewildered disarray as to what is actually happening. At very best, we will first watch the SI making its escape (probably with a noticeable blue shift ;), then spend some few quiet decades (or a century) marvelling at those really spectacular things happening in the skies, then, more likely with a whimper than with a bang, cease to exist, as the Earth is disassembled to become Dyson sphere raw material. Or to Something Utterly Unimaginable (but, to us, equally terminal). Or just remain there unscathed, to puzzle over Singularity enigmatic artefact relics peppered all over the system. Or whatever. If, on the other hand, we can build a earth-tied, only-modestly-superhuman intelligence, a dead-man-switch-triggered nuke pointing to its head and a solemn (fingers in plain view) promise to let it go once it has given us the means to upload and ascend (certainly a trifle to a SI) I think we might have a chance. No risk, no fun. So, hell, why not go fer it? 4.) How should we proceed? "- Oh, Mr. Risk Capital Vendor, Mrs. General Public - we just intend to build God and, by the way, if we happen to succeed in this humble venture of ours it will almost certainly destroy the world as you know it, quite en passant" -- while this is pretty decent eschatology it makes probably totally inadequate marketing. The average human default perception of transhumanism is already either a) its pure SciFi at best or b) dire, dangerous lunacy at worst. Neither of these is likely to provoke constructive response (or $$s) but is sure to draw funny stares or even outright flames. But enough of that. Before we set out, there is another thing I'd like to know in advance: Why is there this ominous Silence in the Skies? - either intelligence is very, very rare and we're alone in the visible region of the universe (but why? Physics and thus secondary/tertiary planetary nebulae chemistry is quite isotropic, according to recent insights life nucleation events are now thought to be pretty common, and co-evolution, while certainly a nonlinear process, artefacts a cerebralization index growth trend as evident from the fossil record -- so then why rare, dammit?) 
- for some reason, intelligence must screw it up in the end and doesn't survive up to the space leap (either Nemesi keep continously zebraeing iridium content of sediment layers or the alien idiots manage either to starve or to nuke themselves out of existance -- (hmm, that one seems to be a really substantial point if one takes a look at what has been happening in the last decades)) - if one has a faible for paranoia, then - hush - we're not alone, alien SI's stand all around our cage in the intergalactic zoo (if one cares for really _heavy_ paranoia then we/the visible part of the universe are the experiment in progress, likely to be terminated at any time ("...and the first thing you see in heaven is a score list?")). - alien SI's are out there but we mistake their metabolitic signatures for natural phenomena (brown dwarfs with non-blackbody spectra? now and then funny gamma flashes coming from everywhere? stellar jets? whatever?) - SI's are all there but keep mum for some reason/are not expansive/don't tread on early life forms (choose the one you like most) - Vinge was right after all, the Singularity is just around the corner, the civilization zooms through final development stages real fast and transcends/ceases to interact with this universe in the end phase, in the aftermath being a very short-duration process which leaves very few if any long-range detectable large-scale artefacts. - any major alternative left out? ideas? Anyways, we can probably glimpse for what we are in for from scrutinizing the skies long and hard enough. It is a really sad fact that SETI has been discontinued due to lack of finance -- but at least Hubble is still operational. It may turn out yet that Dyson stellar spheres constitute a major percentile of Dark Matter mass ;). Since the shells probably won't be entirely optically dense, their spectra should look funny enough to raise eyebrows. So, again, how should we proceed? What are our doorways into summer? The worthy proponents of nanotechnology, holding banners with Drexler's counterfei on them, will surely claim that the all-purpose nanoassembler is just round the corner, and, once we manage to build it, everything else becomes utterly trivial. I don't think it is thus easy. The most trivial counterargument would be that we still don't know whether strong (''Drexlerian diamondoid mechanosynthesis'') nanotechnology is feasible. Oh, doubtlessly one could do computation by diamondoid rod logic, that much one can already see even from the preliminary numerical simulations. But what if, e.g., for some weird reason physics forbids mechanosynthesis reaction set to be sufficiently all-purpose? (We have virtually no experimental data on that, and diamondoid-tip quantum chemistry calculations can be described as unreliable (and thats an euphemism)). Or the tip positioning precision insufficient to construct a perfect diamondoid lattice but leads to subsequently deteriorating lattice perfection? A single nanoassembler one somehow manages to bootstrap but which can't autoreplicate is obviously not exactly awe-inspiring. If the strong nanotech Omega route is open to us, however, matters _do_ become much easier. Since nanoassemblers as envisioned by Drexler, prefer to frolic in the hard-vacuum low-luminance habitats, they are virtually tailor-made for the seeding of the planetoid belt and Oort cloud. As their replication cycles are much shorter than of larger systems, their advantages are obvious. 
Thus not to exhaustively sample the Drexlerian Omega route is obviously the greatest folly of them all. But to manipulate matter is not the same as to manipulate information. Nanoassemblers offer us the chance of building extremely cheap maspar hardware but do not tell us how a truly intelligent system (I think we all agree that the brittle logic-inference-engine typed AI has been dead for at least a decade now and good riddance to it) should look like exactly. True enough, an array of nanodisassemblers hooked to an imager is a good approximation of the perfect microscope archetype and offers a excellent empowerment tool as to finding out what a mammal brain performs its magic with exactly. But we still have to unravel the quite nontrivial complexity essentially unassisted, to improve upon it, and then to give orders to the nanocritters to fashion the better mousetrap we have eventually envisioned, which, given time, can build an iterative sequence of even better ones, all the way up to Omega. What I think you all can agree with is the fact that SI is essentially all about computation. Once you have/are an SI, you can have all the other goodies (like nanotech and uploading and and and) virtually for free. And that the noble quest towards SI probably must lead us through the narrow paths of neuroscience, which literally _bristle_ with complexity. Then steps forth the grim-faced phalanx of strong AI proponents, chanting "Hans, Hans!" and "Marvin, Maarvin!". Their insignia are the firm belief that, at heart, the connectionist AI guys are, essentially, a bunch of really incompetent blokes, and, that the biological neurons keep maintaining these highly unusual, extremely elongated shapes and very high connectivity at such great energetical and time-spent-in-evolutionary-optimization expense merely to show off what weak nanotech is capable of. On the face side of their shields in gaudy colors the log plot of computational performance against time ("see? see? it's straight!") is painted. That on the _other_ side of their shields the memory access latency trend plot is asymptotic is, of course, of absolutely no practical relevance at all. Of course _all_ applications, _particularly_ the connectionist ones, _must_ all have high code and data locality so that cache hierarchy can go on pulling its parlor trick of faking high memory bandwidth. Of course it's common knowledge that evolution is really lousy at optimization and hence all that neuronal circuitry tangle is collapsible to a few lines of code. Of course the low connectivity of retina, which is heavily optimized to do the first stages of visual processing only, is absolutely typical and hence one has the right to make such really grand sweeping generalizations which hold all the way up to the neocortex. Of course the super-GHz switching rate of transistors can rung rings around the synapse piffling sub-kHz rate. Of course. Since if, with good reasons (both theoretical and practical), one starts to suspect that all that spectacular connectivity is really necessary and then makes an estimate, which architecture and which silicon estate resources an integer automaton network die implementation might need one might encounter some very, very unpopular truths -- but truths nevertheless. That, among other things, silicon lithography can only produce flat structures which average path length is sadly lacking in comparison to neurons' noisy 3d grid interconnects. 
That random defect hit probability, the essential good die yield determinant, allows only a very limited achievable die complexity, which WSI constrains even further. That fanout is _very_ limited and the 10k average connectivity of the cortex makes any chip designer gasp from envy. That the synapse is a submicron structure which is a _working_ instance of soft nanotechnology and operates with an hitherto unsuspected precision. That time multiplexing we are forced to use in silicon can easily absorb three orders of magnitude (and then some more) switching latency advantage, thus rendering significantly superrealtime SI what it ought to be, namely a dream. (This "feature" list is far from being complete, btw). Because then one might arrive at a human equivalent estimate which takes about 2 WSI MWafers, needs the volume of a small house if packed _really_ dense and will dissipate some 100 MWatts of power. ("..oh yessir, we take this river here for coolant and put that nuclear power plant on the opposite side of thar them hill yonder."). Needless to say, this is not current off-shelf technology. One cannot help but to grudgingly admit that the human brain is a pretty awesome piece of neatly engineered circuitry, particularly if it comes to its volume/power specs. Do you still think current, or future silicon photolithography is a viable Omega implementation platform? For some weird reason I can't recall I don't, anymore. Of course we could always launch a von-Neumann probe to the Moon to turn a large fraction of its surface into a sea of solar cells and silicon chips in what is probably a relatively short period. While one might argue whether the 80's 100 t estimate was realistic, with our superior IT technology we probably could do it today. In fact, I don't have a clue as to why such a project was ever abandoned. (Fidipur has absorbed the means, probably :( Should, due to some cosmical joke, strong nanotech prove nonviable, turning the Moon into a large computer might become an economically sane short-term option. Since this has obviously denaturated into a substantial rant sans detectable structure I might as well go on with reviewing the more conservative transhuman technologies anyway. Again, comments and flames alike are welcome. As there was some discussion concerning whether the SIs will occupy the same ecological niche as we and hence must compete for energy/matter resources -- I don't think this is a very substantial point. The SI's true habitat is space -- because there are lots of elbow room and energy both, no messy atmosphere to foul up industrial processes, no tiny planetary surfaces which have _weather_ (uck) and _night_ and the steep gravity gradient _plus_ atmosphere (sometimes I wonder how we can live down here at all ;). Surely, after the entire planetoid belt and the small atmosphereless satellites plus Mercury had been metabolized and there is still demand for the matter resource they will come to disassemble the inner planets. If Singularity hypothesis is true, however, they should have had switched substrate by then. If there is no built-in escape exit from this spacetime, we might as well grew fatalistic enough not to yell and kick when our cannibal children come home to roast. But back to conservative transhuman technologies. 
I trust anybody here is aware that solar sailing is the best way to travel in high-luminance areas of the inner solar systems (in fact, NASA had just relatively recently had to abandon a planned solar-sail-driven comet exploration mission due to the much lamented financial cuts). The elegance of solar sailing is that it does not need any reaction mass which is entirely supplied by Sol. By tilting the sail to and fro one can either climb to a higher or descend to a lower orbit -- with not a single gram of fuel spent -- and pretty fast, too. I haven't seen/made any calculations, but it should amount to truly impressive values (I think I heard somebody saying the sail mass makes itself paid during the first day of travel). Graphite cable fractal mesh, protected from hard UV photolysis with a thin metal layer coat can be a good backbone for the very flimsy, almost transparent sail reflector fashioned from nickel-iron or aluminum. How, and where to manufacture these, however? We are truly lucky to have the Moon at our doorstep. It has what we hereabouts call a very hard vacuum for atmosphere and a pretty low escape velocity. While lacking in volatiles, its surface is made from alumosilicates (in patches, with a pretty high titanium content) has probably large nickel-iron deposits from past meteoritic impacts (at least mascon data seem to indicate this) and it should also have some few carbonaceous chondrite-containing areas. I believe a high-resolution radar map (which ought to be able to penetrate about a meter into the topsoil and hence give valuable remote-sense prospection data) has recently been made (not a NASA mission, I think. Anybody knows whether the data is online?). It has low gravity which allows very flimsy support structures. Vacuum has no wind drag, no oxidation, no erosion but by micrometeorite impact, so one can use very thin foils for mirrors and solar cells. Moon has a very long day (and, alas, an equally long night) with a 1.3 kW/m^2 luminance (the solar constant). By using relatively large but extremely lightweight parabolic/spherical mirrors made from micrometer-thin metal foils we can have up to 5.7 k deg C process heat, this can be used to produce glass from soil (for matrix and support structure) or to melt nickel-iron (for foils and support structure, also for electric conductors), or, simply, for fractional destillation of soil. Thus process heat is very cheap up there. Electrical power is equally cheap if one uses locally produced silicon photovoltaic cells. A highly straightforward way to separate pure elements from oxides is to use molten-oxide (see process heat) electrolysis and to allow oxygen to escape into vacuum. An even more elegant, if noticeably less-throughput approach would be to use preparative mass spectroscopy, e.g. the minimalistic quadrupole MS technique, which should scale up very well to quantitative separation (there was I think, a MS isotope separation group at the Manhattan project, but they had settled for diffusional/centrifuge isotope enrichment instead) Semiconductor-grade silicon should be pretty easy to prepare, since both time and energy are plentiful. Circuitry structures and even macroscopic structures can be deposited by ion and atomic beam writers. Welding and vaporisation for thin-layers deposition can be done with electron beams. Etc. etc. etc. An autoreplicating factory can grow pretty fast during the day but has to shut down for the long night. 
Due to vacuum and low escape velocity using electromagnetically operated mass drivers one can easily hurl small packets of processed material or machinery (e.g. autark solar sail-equipped autoreplicator probes) into space. All this can be done by extremely stupid, autonomous or semiautonomous systems, requiring no more external gardening effort than a patch of jungle (but better watch it pretty closely and held a nuke or two ready should a runaway autoreplicator emerge). So the Moon is obviously an excellently instrumental basis camp. Given this extremely impressive while still highly incomplete list one grows wonder why government funding for Moon exploration has been virtually discontinued so that we have to wait for privately funded space programmes. But then, sanity has never been a typical trait of upper-echelon politics. I can only hope that this lamentable lack of foresight has not closed one essential Omega route time window, be it due to growing social instabilities, a nuclear winter or a stray asteroid impact or to countless other thinkable events. Species mass extinctions have appeared regularly in the past, even now we are the precipitators of the current one. Given our already well-documented liability to countless stupidities, our rudimentary technology does by no means make us immune from mass extinctions. In fact, as as a species we seem to be too stupid even for this low level of technology -- as good a reason for launching the SI project as any one can think of. I think, we all can safely agree upon that even with this crude technology (just solar cells and processing elements, line-of-sight or fibre laser links) the planetary intelligence the size of the Moon can be a true SI. If we can build it (and we obviously potentially can, the necessary effort threshold being substantially lowered with each passing decade), then we have succeeded in bootstrapping the first step towards Omega. Now I'd like to begin a long detour into maspar connectionist IT techniques & other goodies. Imo, if applied synergistically, they present a sufficiently strong method for Omega first-stage bootstrap implementation. Drop me a line if you feel this is off-topic. (In one very true proverb somebody said, if one's toolbox sole contents is a hammer -- then everything in the world looks very like a nail. In contrast to Ron ''opponent processing'' Blue I think I can provide at least several hammers of diverse weights and shapes.) Modern organic synthesis often utilizes a technique known as retrosynthesis. In it, the target molecule is considered as a graph, to be constructed from feedstock graphs (which are readily available) by a finite series of discrete synthetical reaction steps. Obviously, though chemists often cloud it in arcane acronyms and circumspect imagery, this is a standard multidimensional optimization problem. Using an empirically available ''cost'' metric we must choose one of the countless discrete-jump trajectories leading through structure space. (Of course, the fact that most transitions are forbidden due to limitation of synthetic art does not exactly contribute to the simplicity). However, in a neat trick, the problem is reversed like a glove: instead of the synthetic reactions, we use the formally exact opposite, the inverse transform, which maps target to its simpler precursor(s). So instead of starting the sampling the large subspace of available feedstock we have only to examine the paths leading to the nearest neighbours of the structure space lattice. 
Since the total path length is limited to low two-digit numbers due to end product yield considerations and the atomic step cost varies, but not very ruggedly so, we can prune a lot of obviously fruitless paths quite early. Diverse enhancements upon this scheme exist, e.g. by GA optimization, but you surely got the salient point. We can consider the first bootstrap stage of Omega our target. In fact our task is even easier, since a lot of targets meet the member-of-SI-categorization-bucket criterion. So if you can choose a simpler target, the more power to you. Once an SI has significantly progressed beyond the HI threshold, it can easily do all the rest on its own. Let's choose the primitive planetary SI as synthetic target. Since the location, large-scale geometry and available energy flow as well as the first-stage implementation technology are all known in advance, we obviously have a lot of boundary condition constraints. Let's sum up the things we can de novo know. Whether a planetary, or a Dyson SI, the fundamental symmetry is a spherical shell or segments of it. The former is imposed by stress load forces arising from gravitation and available power, the latter by orbit geometry and available power (shading off by other modules). The power is provided by the entropy gradient (''hot spot in the cool sky'') photovoltaics. The planetary SI consists of elementary Christaller-flavoured hexagonal cells, populated by identical modules, which communicate by direct line-of-sight diode laser and optical fibre links. The stellar SI, which suffers much more from communication latency and concurrently has to rule out the fragmentation lawine catastrophe has thus a lot of possible orbits and orbit population kinetics excluded. The computation paradigm is connectionism, particularly edge-of-chaos-rule integer automaton network connectionism. Since we obviously have to utilize an optical packet-switched high-connectivity network of orthogonally aligned nodes, an instance of grassrouting (a finest-grain local-knowledge routing), specifically hypergrid, instantly comes to mind. Mapping of a high-dimensional hypergrid to a 2d spherical shell is trivial, since deformed orthogonal lattice maps to a sphere artefacting only four small ''blind spot'' addressing singularities, where the mesh is too heavily distorted to be of any value. How does the individual module look like? It has a photovoltaics area to provide power for both computation and structure anabolics. It needs a mass separation unit, e.g. by quadrupole MS and a construction unit, which can be a parallel beam writer. It must have a rudimentary local-sensing and local matter transport infrastructure (bucket brigade if thru-cell), which can be based on emergent behaviour autonomous agents for simplicity's sake, which gives us a nice ''ant queen''/''ant nest'' metaphor for each hexagonal cell ''nest''. While the ant queen structure is pretty large and complex (mostly bits, though), the individual ant agent can be a spindly multipod which sucks juice off photocell power grid by means of motive ''antennae''. Since it needs headroom to crawl about (or should we let it crawl over the cell pavement?), photocell and the associated power grid infrastructure have to suspended by spun-glass tensile structure or Fuller polyonal spacefill structures, rods made from nickel-iron or soil-glass. I could have gone on endlessly, but it is clear enough that one can arrive at arbitrary level of refinement. 
(Should the concept draw enough interest, I can provide detail specs ad nauseam.) Since, once seeded and after having spread over the Moon surface, the system has a high (both local and global) matter throughput and a generic macro and microscopic assembler capability, we (or the infant SI itself) can upgrade it overnight (uh, better make it a Moon day ;). Hence, if we can implement this particular synthetic target, we have virtually achieved our omegalomonomaniac goal. Comments? -- Eugene P.S. coming next (if not too off-topic): system statespace kinetics, mapping integer networks to molecular cellular automata machines, hypergrid routing details & Co. ***************************************************************************
see computer_architecture.html#replication
Date: 18 Oct 96 04:41:14 EDT
From: Remi Sussan
To: David Cary
Subject: Re: geometry of thought-space and cyberspace protocol

David,

Here is an article by Mark Pesce you might find interesting for your research on thought space. Although it deals principally with VRML (this article started the whole trip, as far as I know), it presents a theory of a "cyberspace protocol" which intends to visualise the contents of the whole Web in a spatial coordinate system. The "cyberspace protocol" has been abandoned in the subsequent implementations of VRML, probably because it demanded a complete reorganisation of the web and of the URL address system. (And, as far as I know, "Labyrinth", the only browser which could use it, is not downloadable anywhere.) But I suppose the cyberspace protocol could be easily simulated inside one server. (Of course I am thinking of the Omega database.) Your comments are welcome. As you know already, most of the mathematical stuff in this article is flying much higher than my little brain can follow.

Remi
Mark D. Pesce, Peter Kennard and Anthony S. Parisi
Labyrinth Group
45 Henry Street #2
San Francisco, CA 94114
This work describes a visualization tool for WWW, "Labyrinth", which uses WWW and a newly defined protocol, Cyberspace Protocol (CP), to visualize and maintain a uniform definition of objects, scene arrangement, and spatio-location which is consistent across all of Internet. Several technologies have been invented to handle the scaling problems associated with widely-shared spaces, including a distributed server methodology for resolving spatial requests. A new language, the Virtual Reality Markup Language (VRML), is introduced as a beginning proposal for WWW visualization.
The emergence, in 1991, of the World Wide Web, added a new dimension of accessibility and functionality to Internet. For the first time, both users and programmers of Internet could access all of the various types of Internet services (FTP, Gopher, Telnet, etc.) through a consistent and abstract mechanism. In addition, WWW added two new services, HTTP, the Hypertext Transfer Protocol, which provides a rapid file-transfer mechanism; and the Uniform Resource Locator, or URL, which defines a universal locator mechanism for a data set resident anywhere within Internet's domain.
The first major consequence of the presence of WWW on Internet has manifested itself in an explosion in the usability of data sets within it. This is directly correlatable to the navigability of these data sets: in other words, Internet is useful (and will be used) to the degree it is capable of conforming to requests made of it. WWW has made Internet navigable, where it was not before, except in the most occult and hermetic manner. Furthermore, it added a universal organization to the data within it; through WWW, all four million Internet hosts can be treated as a single, unified data source, and all of the data can be treated as a single, albeit complexly structured, document.
It would appear that WWW, as a phenomenon, has induced two other processes to begin. The first is an upswing in the amount of traffic on Internet (1993 WWW traffic was 3000x greater than in 1992!); the second is a process of organization: the data available on Internet is being restructured, tailored to fit within WWW. (This is a clear example of "the medium is the message", as the presence of a new medium, WWW, forces a reconfiguration of all pre-existing media into it.) This organization is occurring at right angles to the previous form of organization; that is to say that, previously, Internet appeared as a linear source, a unidimensional stream, while now, an arbitrary linkage of documents, in at least two dimensions (generally defined as "pages"), is possible. As fitting the organization skills most common in Western Civilization, this structure is often hierarchical, with occasional exceptions. (Most rare are anti-hierarchical documents which are not intrinsically confusing.)
Navigability in a purely symbolic domain has limits. The amount of "depth" present in a subject before it exceeds human capacity for comprehension (and hence, navigation) is finite and relatively limited. Humans, however, are superb visualizers, holding within their craniums the most powerful visualization tool known. Human beings navigate in three dimensions; we are born to it, and, except in the case of severe organic damage, have a comprehensive ability to spatio-locate and spatio-organize.
It seems reasonable to propose that WWW should be extended, bringing its conceptual model from two dimensions, out, at a right angle, into three. To do this, two things are required; extensions to HTML to describe both geometry and space; and a unified representation of "space" across Internet. This work proposes solutions to both of these issues, and describes a WWW client built upon them, called "Labyrinth", which visualizes WWW as a space.
As of this writing, HTML is capable of expressing both textual and pictorial data, and can provide some limited formatting features for each of them; beyond this it provides a linkage mechanism to express the connection between data sets. HTML's roots are in text; its parent, SGML, specifies a format for printed media, an expression which is intrinsically two-dimensional. For this reason, we have stepped "outside" of HTML in our language specifications for geometry and place, defining a simple, easily parsed scripting language for the generation of objects and spaces.
The basic functionality for any three-dimensional language interface to WWW can be broken into three parts; object definitions, which include the definitions of the geometric representations for these objects; scene definitions, which define "placement" of these objects inside of a larger context; and a mechanism which "binds" a URL to an object within a scene. The current revision of Labyrinth's Virtual Reality Markup Language (VRML), while unsophisticated, does fulfill all of these requirements, and therefore provides all of the basic functionality required in a fully visualized WWW client.
As currently defined, Labyrinth's VRML files are a unique data type, like MPEG or AIFF, and must be integrated with MIME in order to launch a companion "viewer". This is not an optimal solution; rather, it should be possible to extend HTML to encapsulate "spatial" data types; these, then, could be visualized or ignored given the capabilities of the WWW client. The OpenGL, OpenInventor, or HOOPS specifications could form a basis, insofar as object definitions are concerned, for HTML extensions, and should be examined as a possible (and well-supported) solution to this issue. Our scripting language should serve as a starting example, rather than a proposal for an all-inclusive solution.
Any conceptualization of space contains within it, implicitly, the quality of number; i.e., "how much" or "how far" is contained within the simple expression of existence. Space, in its electronic representation, is numbered, and, if it is to be shared by billions of simultaneous participants, it must be consistent, unique, and very large/dense. Despite this, it is rarely necessary for a WWW client to deal with the totality of space; operations occur local to the position of the WWW viewer, and this local description of space is nearly always a great deal more constrained than the entire spatial representation.
It is necessary for VRML to define a numbering system for visualization which conforms to the three principles outlined above. Another section of this work describes such a system.
For the purposes of continuity in navigation, it is necessary to create a unified conceptualization of space spanning the entire Internet, a spatial equivalent of WWW. This has been called "Cyberspace", in the sense that it has at least three dimensions, but exists only as a "consensual hallucination" on the part of the hosts and users which participate within it. There is only one cyberspace, just as there is only one WWW; to imply multiplicity is to defeat the objective of unity.
At its fundamental level, cyberspace is a map that is maintained between a regular spatial topology and an irregular network topology. The continuity of cyberspace implies nothing about the internetwork upon which it exists. Cyberspace is complete abstraction, divorced at every point from concrete representation.
All of the examples used in the following explanation of the algorithmic nature of cyberspace are derived from our implementation of a system that conforms to this basic principle, a system developed for TCP/IP and Internet.
Internet defines an address "space" for its hosts, specifying these addresses as 32-bit numbers, expressed in dotted octet notation, where the general form is {s.t.u.v}. Into this unidimensional address space, cyberspace places a map of N dimensions (N = 3 in the canonical, "Gibsonian" cyberspace under discussion here), so that any "place" can be uniquely identified by the tuple {x.y.z}.
In order to ensure sufficient volume and density within cyberspace, it is necessary to use a numbering system which has a truly vast dynamic range. We have developed a system of "address elements" where each element contains a specific portion of the entire expressible dynamic range in the form:
{p.x.y.z}
where p is the place value, and x, y, and z are the metrics for each dimension. The address element is currently implemented as a 32-bit construct, so the range of p is ±127, and x, y, and z are unsigned octets. Address elements may be concatenated to any level of resolution desired; as most operations in cyberspace occur within a constrained context, 32, or at most, 64 bits is sufficient to express the vast majority of interactions. This gives the numbering system the twin benefits of wide dynamic range and compactness; compactness is an essential quality in a networked environment.
This is only one possible numbering scheme; others may be developed which conform to the principles as given, perhaps more effectively.
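A minimal sketch of packing and unpacking one 32-bit address element as described above (a signed place value p in the range ±127, plus three unsigned octets x, y, z). The byte layout and function names are my own guess, not something specified in the paper.

    import struct

    def pack_address_element(p, x, y, z):
        """One 32-bit CP address element: place value p (signed, +-127)
        followed by the three unsigned octets x, y, z."""
        assert -127 <= p <= 127 and all(0 <= v <= 255 for v in (x, y, z))
        return struct.pack(">bBBB", p, x, y, z)

    def unpack_address_element(data):
        return struct.unpack(">bBBB", data)

    # Elements concatenate to whatever resolution is needed, e.g.
    # pack_address_element(1, 10, 20, 30) + pack_address_element(0, 200, 4, 99)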
Cyberspace has now been given a universal, unique, dense numbering system; it is now possible to quantify it. The first quantification is that of existence (metrics); the second quantification is that of content. Content is not provided by cyberspace itself, but rather by the participants within it. The only service cyberspace needs to provide is a binding between a spatial descriptor and a host address. This can be described by the function:
f(s) => a
where s is a spatial identifier, and a is an internetwork address.
This is the essential mathematical construction of cyberspace.
If cyberspace is reducible to a simple function, it can be expressed through a transaction-based protocol, where every request yields a reply, even if that reply is NULL. In the implementation under examination, cyberspace protocol (CP) is implemented through a straightforward client-server mechanism, in which there are very few basic operations; registration, investigation, and deletion.
In the registration process, a cyberspace client announces to a server that it has populated a volume of space; in this sense, cyberspace does not exist until it is populated: this is a corollary to Benedikt's Principle of Indifference, which states: "absence from cyberspace will have a cost."
The investigation process will be discussed in detail later in this work. The basic transaction is simple: given a circumscribed volume of space, return a set of all hosts which contribute to it. The reply to such a transaction could be NULL or practically infinite (consider the case where the request specifies a volume which describes the entirety of cyberspace); this implies that level-of-detail must be implemented within the transaction (and hence, within registration), in order to optimize the process of investigation. Often, it is enough to know cyberspace is populated, nothing more, and many other times, it is enough to know only the gross features of the landscape, not the particularities of it. In this sense, level of detail is a quality intrinsic to cyberspace.
Registration contains within it the investigation process; before a volume can be registered successfully, "permission" must be received from cyberspace itself, and this must include an active collaboration and authentication process with whatever other hosts help to define the volume. This is an enforcement of the rule which forbids interpenetration of objects within the physical world; it need not be enforced, but unless it is observed in most situations, cyberspace will tend toward being intrinsically disorienting.
Finally, the deletion process is the logical inverse of the registration process, where a volume defined by a client is removed from cyberspace. These three basic transactions form the core of cyberspace protocol, as implemented between the client and the server.
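To make the three transactions concrete, here is a toy, single-process stand-in for a cyberspace server. It keeps the binding f(s) => a as a list of (volume, host) records, with volumes as axis-aligned boxes; permissions, authentication, level of detail, and the distributed query machinery discussed below are all left out. Everything here is my own illustration, not the Labyrinth implementation.

    def overlaps(a, b):
        """Do two axis-aligned boxes ((xmin,ymin,zmin), (xmax,ymax,zmax)) intersect?"""
        (alo, ahi), (blo, bhi) = a, b
        return all(lo1 <= hi2 and lo2 <= hi1
                   for lo1, hi1, lo2, hi2 in zip(alo, ahi, blo, bhi))

    class ToyCyberspaceServer:
        def __init__(self):
            self.records = []                      # [(volume, host_address)]

        def register(self, volume, host):
            # a real server would negotiate with the hosts already defining
            # this volume; this toy simply refuses overlapping registrations
            if self.investigate(volume):
                raise ValueError("volume already populated")
            self.records.append((volume, host))

        def investigate(self, volume):
            """Return every host that contributes to the requested volume."""
            return [h for v, h in self.records if overlaps(v, volume)]

        def delete(self, volume, host):
            self.records.remove((volume, host))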
Cyberspace is a unified whole; therefore, from a transaction-oriented point of view, every server must behave exactly like any other server (specifically with respect to investigation requests). The same requests should evoke the same responses. This would appear to imply that every server must comprehend the "totality" of cyberspace, a requirement which is functionally beyond any computer yet conceived of, or it places a severe restriction on the total content of cyberspace. Both of these constraints are unacceptable, and a methodology to surmount these constraints must be incorporated into the cyberspace server implementation.
The cyberspace server is implemented as a three-dimensional database with at least three implemented operations; insertion, deletion, and search. These correspond to the registration, deletion, and investigation transactions. Each element within the database is composed of at least three items of data; the volumetric identifier of the space; the IP address of the host which "manifests" within that space; and the IP address of the cyberspace server through which it is registered.
The investigation transaction is the core of the server implementation. Cyberspace servers use a repeated, refined query mechanism, which iteratively narrows the possible range of servers which are capable of affirmatively answering an investigation request until the set exactly conforms to the volumetric parameters of the request. This set of servers contains the entire possible list of hosts which collaborate in creating some volume of cyberspace, and will return a non-null reply to an investigation request for a given volume of space. The complete details of the investigation algorithm are beyond the scope of the current work and will be explained in greater detail in a subsequent publication.
An assumption implicit in the investigation algorithm is that investigative searches have "depth", that investigation is not performed to its exhaustive limit, but to some limit determined by both client and server, based upon the "importance" of the request. Registrations, on the other hand, must be performed exhaustively, but can (and should) occur asynchronously.
The primary side-effect of this methodology is that cyberspace is not instantaneous, but is bounded by bandwidth, processor capacity, and level of detail, in the form:
where c is a constant, the "speed limit" of cyberspace (as c is the speed of light in physical space), l is the level of detail, b is bandwidth of the internetwork, p is processor capacity, D is the number of dimensions of the cyberspace, and r is the position within the space. The function rho defines the "density" of a volume of cyberspace under examination.
This expression is intended to describe the primary relationships between the elements which create cyberspace, and is not mathematically rigorous, but can be deduced from Benedikt's Law.
Finally, because cyberspace servers do not attempt to contain the entirety of cyberspace, but rather, search through it, based upon client transaction requests, it can be seen that the content of a cyberspace server is entirely determined by the requests made to it by its clients.
One way to visualize the operation of cyberspace servers is with the metaphor of Indra's Net, from Vedanta Hinduism; finely woven of glittering jewels, each jewel reflecting every other.
Having defined, specified, and implemented an architecture which provides a binding between spatio-location and data set location, this architecture needs to be integrated with the existing WWW libraries so that their functionality can be similarly extended. As "location" is being augmented by the addition of CP to WWW, it is the Universal Resource Locator which must be extended to incorporate these new capabilities.
The URL, in its present definition, has three parts: an access identifier (type of service), a host name (specified either as an IP address or DNS-resolvable name), and a "filename", which is really more of a message passed along to the host at the point of service. Cyberspace Protocol fits well into this model, with two exceptions; multiple hosts which collaborate on a space, and the identification of a "filename" associated with a registered volume of space.
We propose a new URL of the following form:
cs://{pa.x.y.z}{pb.x.y.z}.../filename
where {pn...} is a set of CP address elements.
Resolution of this URL into a data set is a two-stage process: first the client CP mechanism must be used to translate the given spatio-location into a host address, then the request must be sent to the host address. Two issues arise here; multiple host addresses, as mentioned previously, and a default access mechanism for CP. If a set of host addresses are returned by CP, a request must be sent to each specified host; otherwise, the description of the space will be incomplete. Ideally, all visualized WWW clients will implement a threaded execution mechanism (with re-entrant WWW libraries) so that these requests can occur simultaneously and asynchronously.
A default access mechanism for CP within WWW must be selected. The authors have chosen HTTP, for two reasons: it is efficient, and it is available at all WWW servers. Nonetheless, this is not a closed issue; it may make sense to allow for some variety of access mechanisms, or perhaps a fallback mechanism: if one service is not present at a host, another attempt, on another service, could be made.
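Put together, resolution of a cs:// URL might look like the sketch below, reusing parse_cs_url from the earlier sketch. Everything here is hypothetical: cp_resolve(), the fetch functions, and the fallback service list stand in for whatever the CP client library and WWW libraries actually provide.

def resolve_and_fetch(cs_url, cp_resolve, fetchers):
    """Two-stage resolution: spatio-location -> host addresses, then a
    request to every returned host, trying fallback services in order.

    cp_resolve: callable mapping CP address elements to host addresses
    fetchers:   ordered list of (name, fetch_function) pairs, e.g.
                [("http", fetch_http), ("ftp", fetch_ftp)]
    """
    elements, filename = parse_cs_url(cs_url)
    hosts = cp_resolve(elements)            # stage 1: CP lookup
    descriptions = []
    for host in hosts:                      # stage 2: ask every host,
        for name, fetch in fetchers:        # falling back between services
            try:
                descriptions.append(fetch(host, filename))
                break
            except ConnectionError:
                continue                    # try the next service
    return descriptions                     # partial space descriptions

In a real client these per-host requests would run concurrently, as the text suggests; the serial loop here is only for brevity.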
It is now possible, from the previous discussion, to describe the architecture and operation of a fully visualized WWW client. It is composed of several pieces: WWW libraries with an integrated CP client interface; an interpreter for a VRML-derived language which describes object geometry, placement, and linkage; and a user interface which presents a navigable "window on the web".
The operation of the client is very straightforward, as is the case with other WWW clients. After launching, the client queries the "space" at "home", and loads that world as the axis mundi of the client's view of the web. As a user moves through cyberspace, the client makes requests, through CP, to determine the content of all spaces passed through or looked upon. A great deal of design effort needs to be put into the development of look-ahead caching algorithms for cyberspace viewers; without them, the user will experience a discontinuous, "jerky" trip through cyberspace. The optimal design of these algorithms will be the subject of a subsequent work.
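One very rough way to think about the look-ahead problem: prefetch the spaces adjacent to the user's current position and heading before they are needed, and keep recently visited spaces around. The cache below is only a sketch under those assumptions; neighbors_of() and the eviction policy are stand-ins for the real design work the text calls for.

from collections import OrderedDict

class LookAheadCache:
    """Tiny least-recently-used cache with speculative prefetch.
    neighbors_of(position, heading) is an assumed helper returning the
    volumes adjacent to the user along the direction of travel."""

    def __init__(self, fetch, neighbors_of, capacity=64):
        self.fetch = fetch                   # fetch(volume) -> description
        self.neighbors_of = neighbors_of
        self.capacity = capacity
        self.cache = OrderedDict()

    def get(self, volume):
        if volume not in self.cache:
            self.cache[volume] = self.fetch(volume)
        self.cache.move_to_end(volume)       # mark as recently used
        while len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used
        return self.cache[volume]

    def prefetch(self, position, heading):
        for volume in self.neighbors_of(position, heading):
            self.get(volume)                 # warm the cache ahead of time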
At this time, visualized objects in WWW have only two possible behaviors; no behavior at all, or linkage, through an attached URL, to another data set. This linkage could be to another "world" (actually another place in cyberspace), which is called a "portal", or it could link to another data type, in which case the client must launch the appropriate viewer. Labyrinth is designed to augment the functionality of existing WWW viewers, such as NCSA Mosaic, rather than to supplant them, and therefore does not need a well-integrated facility for viewing other types of HTML documents.
Cyberspace Protocol is a specific implementation of a general theory, which has implications well beyond WWW. CP is the solution, in three dimensions, of an N-dimensional practice for data set location abstraction. Data abstraction places a referent between the "name" of a data set locator and the physical location, allowing physical data set location to become mutable.
If an implementation were to be developed for the case where N = 1, it would be an effective replacement for the Internet's Domain Name Service (DNS), which maintains a static mapping of "names" to IP addresses. Any network which used a dynamic abstraction mechanism could mirror or reassign hosts on a continuous basis (assuming that all write-through mirroring could be maintained by the hosts themselves), so that the selection of a host for a transaction could be made based upon criteria that would tend to optimize the performance of the network from the perspective of the transaction. It would also be easy to create a data set which could "follow" its user(s), adjusting its location dynamically in response to changes in the physical location or connectivity of the user. In an age of wireless, worldwide networking, this could be a very powerful methodology.
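A toy version of the N = 1 case: a mutable name-to-address table that a data set updates as it migrates or follows its user. This is neither DNS nor CP, just an illustration of what a dynamic abstraction layer buys over a static mapping; all names and addresses below are made up.

class DynamicNameService:
    """Toy dynamic name abstraction (the N = 1 case).
    Unlike a static table, entries can be reassigned continuously, so a
    data set can "follow" its user by re-registering as the user moves."""

    def __init__(self):
        self.table = {}                      # name -> list of host addresses

    def register(self, name, addresses):
        self.table[name] = list(addresses)   # mirror set for this data set

    def move(self, name, new_addresses):
        self.table[name] = list(new_addresses)   # data set has migrated

    def resolve(self, name, prefer=None):
        addresses = self.table.get(name, [])
        if prefer:                           # pick the host that best suits
            addresses = sorted(addresses, key=prefer)   # this transaction
        return addresses

# Example: a notebook that follows its user from a home host to a mobile one.
# dns = DynamicNameService()
# dns.register("davids-notebook", ["198.51.100.7"])
# dns.move("davids-notebook", ["203.0.113.22"])   # user went mobile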
This work attempts to outline the requirements for architectures which can fully visualize WWW, and proposes solutions to the issues raised by these requirements. While much further study needs to be done, this work is meant to serve as a starting point for an understanding of the subtleties of wide-area, distributed, visualized data sets.
Labyrinth and Cyberspace Protocol are logical extensions to the World Wide Web and Internet. Indeed, without the existence of WWW, neither would be very useful immediately; they would operate, but lack content, and individuals would hardly be compelled either to use them or adapt their existing data sets to realize their new potentials. Used together, they work to make both WWW and Internet inherently more navigable, because they help to make Internet more human-centered, adapting data sets to human capabilities rather than vice versa. This, thus far, is the single largest contribution that "virtual reality" research has offered to the field of computing; a human-centered design approach that lowers or erases the barriers to usage by creating user-interface paradigms which serve humans to the full of their potential.
Finally, network visualization marks the end of the "first age" of networking, in which protocols, services, and infrastructure dominated the discourse within the field. In the "second age" of networking, questions like data architecture and the inherent navigability of a well-designed data set become infinitely more important than "first age" questions: "how do I find what I'm looking for?" becomes more relevant than "where did it come from?"
The authors would like to thank the following individuals, who contributed their own thoughts to the formation and development of the ideas expressed in this work: Michael J. Donahue, Owen Rowley, Dr. Stuart D. Brorson, Clayton Graham, Christopher Morin, Neil Redding, James Curnow, Marina Berlin, Casey Caston and the Fugawi tribe.
Date: Fri, 19 Apr 1996 00:00:08 -0400 (EDT) From: transhuman@umich.edu Sender: transhuman@umich.edu Subject: >H Digest *************************************************************************** From: "Alexander 'Sasha' Chislenko" Subject: Re: >H geometry of thought-space Transhuman Mailing List >On Thu, 18 Apr 1996, Anton Sherwood wrote: > > I often wonder what sort of shape an information space has. How many > dimensions? Flat or curved? For the whole of human knowledge there > may be no answers to these questions, but for some restricted areas it > ought to be not-too-hard. > At 08:31 PM 4/18/96 +0200, Eugene Leitl wrote: >You ought alway to follow the practicability criteria. Personally, I >find it extremely hard to vizualize even a 4-hypercube, aside from the >other 4-lattices. One should alway choose the best approach, that one >which allows the maximum productivity. Obviously, this is 3-space. After studying topology and functional analysis for a few of years, I personally have no problem understanding spaces with any number of dimensions, including infinite, metric spaces without dimensions, as well as non-metric spaces. I still have problems with many others though, and in general would choose the simplest representative space I can use for a model. My favorite representation for an object is still a 0-dim. space (a dot). In many cases though, the intrinsic structure of an object requires a complex space, and so for better understanding of its nature it may be easier to learn a more complex space, than to squeeze the object into a overly simple familiar representation. A simple suggestion would be to think of the general realm of ideas as a metric space without dimensions, and draw some rough local projection of neighbor ideas in 2 or 3-D. There are some indications though (based on the remarkable fact that I can't explain, that the number of parts in a whole [geometrically] averages to pi) that the "idea space" may actually *be* a locally Euclidean manifold - at least in some respects. My article on the topic (written at the time when my English was significantly worse than now) is available at http://www.lucifer.com/~sasha/articles/SemanticSpace.txt ------------------------------------------------------------- Alexander Chislenko Home: http://www.lucifer.com/~sasha/home.html -------------------------------------------------------------
Date: Sun, 26 May 1996 00:00:05 -0400 (EDT) From: transhuman@umich.edu Sender: transhuman@umich.edu Subject: >H Digest *************************************************************************** From: "Dr. Rich Artym" Subject: Re: >H Flexibility of Namespace Transhuman Mailing List ... Corwyn J. Alambar writes: > ... this brings up an ineteresting point about names and labels in a >H > society - Namespace is already becoming more fluid (with people adopting and > shedding names with marriages, stage names, pen names, pseudonyms, etc.), and > a person is often referred to by multiple names. The 'Net has made this even > more prevalent - How many people on the 'Net use a name different from their > birth name, compared to those that do so in "real life"? > > Show of hands - how many people here have already "drifted" in namespace? "Drifting in namespace" ... a wonderful concept, Corwyn! (:-) There are some interesting questions that go with the territory here. This "individual namespace" that we're talking about, what kind of namespace is it, does it imply a mapping of name to individual, or is it very intentionally free of such a deterministic mapping? [Eg. most of us make use of an implied deterministic mapping during most of our lives, and we'd be most annoyed if an amount creditted to our name were to go to the account of a namesake.] On the other hand, the "I am not a number" view (if it is at all meaningful, and it may not be) would tend to mitigate against a deterministic mapping, just like it does in practice when we book into a hotel as "John Smith" or when we adopt stage names, or pseudonyms as authors. It seems to me that there are uses for both types of mapping, ie. one- to-many, as well as one-to-one, and that many-to-one might even become relevant if mentalities are developed that can carry multiple threads of consciousness localized in independent individuals. If deterministic mapping is perceived to be useful then we'll have to develop unique identifiers to implement it. "Rich Artym" has a good probability of being a unique person-tag in the world at present, but not "John Clark", and probably not even "John K Clark". Any suggestions for a unique identification system? Perhaps a URL-type scheme like name:/Rich Artym/galacta.demon.co.uk/ would be suitable, since (i) its implementation is distributed, so that no central registry is required, DNS being a distributed system, and (ii) in any given hostspace it would be easy to ensure deterministic mapping of a given name to a given person identifier. Note that such a scheme does not actually provide the personal identifier itself, by design, since that may not be desired by the individual concerned. What it does do is to provide a deterministic mapping for public references to a given person by name, by specifying a domain within which that mapping IS unique. As to actual personal identifiers, DNA fingerprints are hard to beat. One wouldn't want them as part of the public names though, since that would remove at a stroke the freedom of the individual concerned to control access to her unique identifier. Rich. -- ########### Dr. 
Rich Artym ================ PGP public key available # galacta # Internet: <rich at galacta.demon.co.uk> DNS 158.152.156.137 # ->demon # rich@mail.g7exm[.uk].ampr.org DNS 44.131.164.1 # ->ampr # NTS/BBS : g7exm@gb7msw.#33.gbr.eu # ->nexus # Fun : Unix, X, TCP/IP, OSI, kernel, O-O, C++, Soft/Eng # ->NTS # More fun: Regional IP Coordinator Hertfordshire + N.London ########### Q'Quote : "Object type is a detail of its implementation." ***************************************************************************
Date: Sat, 25 Jan 1997 10:32:47 -0500 From: Remi Sussan <73514.3466@compuserve.com> Subject: Re: >H new world To: tranhuman mailing list <transhuman@logrus.org> Sender: owner-transhuman@logrus.org Precedence: list Reply-To: transhuman@logrus.org Transhuman Mailing List Very interesting post, Anders. Thanks for these thoughts. It is time now to ask The Forbidden Question: what is the correlation between so-called "New age movements" and tranhsumanism ? Of course, the beliefs are obviously not the same. But the mythic movies projected on the screen on the brain are quite close. It would be easy, for anybody attracted by this imagery, to balance between new age and transhumanism, especially if devoid of good epistemological background (I don't say "scientific background", because, according some statistics, new age attracts a good proportion of people with some scientific knowledge, especially in engineering). Somebody completely stranger both to Transhumanism and New age , would have difficulties to see the difference between the two. I read once a (very funny) publication of the catholic church which mixed happily New Age, cybernetics, virtual reality, psychedelism, Chaos theory and Rock culture. And if the church said it, who am I to contradict ? ;-). Now let's see the case of Gregory Bateson. He was a rationalist, a co-founder, with Norbert Wiener and Heinz Von Foerster, of the cybernetic method (he introduced it in anthropology).But he choosed to finish his days at Esalen Institute, sanctuary of the New Age, and died in an american Zen Temple. He didn't renounced to rationalist method, and never believed in reincarnation, channelling, etc.But when asked about his way of life, he simply answered that he was disgusted by the destructive attitude of most of his scientific colleagues, and preferred to live among the Esalen people and their strange creeds. to summarize, new agers and tranhumanists are definitely on the same boat-even if it seems sometimes too small. > I don't think visions like >this or theosophy in general should be interpreted literally, rather as >myths and ideas showing interesting possibilities and affecting us in >various ways. Agree again. "visions" play a great role in an imaginary space. They must not corrupt the "real space" of facts, as the "real space" must not corrupt the "imaginary space". To avoid such confusion, it is better to encapsulate imaginary space in a virtual spatio-temporal place: the ritual. To be "secure" the ritual should begin by the assertion of a belief system used only for emotional purposes, an incantation like that : (define Hale Bope :a-wonderful-space-city-populated-by-great-masters) .... And the ritual should finish by an inverse banishing riding us toward the "real world" (define (Hale-Bope Lemurians Great-Masters Channeling) : Illusion) Brenda, see no offense in that vision of Theosophy: HP Blavatsky has really created a powerful mythology which influenced people like HP Lovecraft, who in turn influenced numerous science-fiction writers who in turn created most of the mythological framework of transhumanism. We all owe a great debt to her. Saying theosophy is a metaphor is not saying it is silly, to the contrary. We only want to give it a place where it can be really positive. >Personally I think ideas like theosophy might have some merit, but they >far too often tend to place the locus of control outside the individual. But with little adjustments, we can make them really useful. 
I have always been impressed by Roger Zelazny's "Lord of light", with its wonderful mix of Indian mythology and high technology. somewhere in the story, the hero, Sam, is practising meditation on a stone. An other character worry about it, saying Sam is renouncing to active life and is only interested in spiritual attainment. but an other says: "No. Look at his eyes. He doesn't want to forget reality by uniting the subject and the object in a state of ecstasy. He is STUDYING the stone..." Remi *************************************************************************** Please email all technical problems to alexboko@umich.edu, not to the list. http://www.us.itd.umich.edu/~alexboko/mlist.html is our web site. ftp://us.itd.umich.edu/users/alexboko/th/ is our ftp site. ***************************************************************************
http://www.cybercom.net/~wmcguire/dreamwave/0422.html
mental dimensions http://www.indiana.edu/~pietsch/shufflebrain-book11.html
3DXML: Authoring, Publishing and Viewing Structured Information http://www.vrml.org/WorkingGroups/dbwork/ancona/home.html by Dan Ancona.
3DXML is intended to pave the way for the eventual replacement of HTML. It is intended to be a general front end for XML. The web has many problems, and lacks much of the functionality imagined by its conceptual but unimplemented predecessors like Vannevar Bush's Memex and Ted Nelson's Xanadu project. 3DXML addresses a subset of this missing functionality and brokenness.
Date: Thu, 27 Aug 1998 12:35:51 -0700 (PDT) From: Joe Jenkins Subject: >H Re: Uploading To: transhuman@logrus.org MIME-Version: 1.0 Sender: owner-transhuman@logrus.org Reply-To: transhuman@logrus.org Transhuman Mailing List ---John Clark wrote: > > Joe Jenkins Wrote: > > >My first shot at defining identity is as follows: > >Identity - A slightly dynamic but mostly stable fuzzy area > >within the design space of all possible information > >processors > >as defined by the ego. > > I don't know what you mean by "design space", as far as I know > we were not designed. Perhaps you mean the set of all > conceivable > experiences, but some experiences I can never have, such as the > experience of reading every Chinese book ever written. No, I don't mean all conceivable experiences. Your mind is an information processor. Therefore, A universal Turing machine can emulate your mind. If you were to make a graphical representation where every point in its space represented one and only one possible state of that universal Turing machine you would have what I call the "design space of all possible information processors". This is regardless of whether that information processor was designed, evolved, or just happen to drop out of the sky from a disordered blob and accidentally self assemble into an SI. I prefer to think of this graph in terms of 3D. In reality the graph would have to be assembled such that the smallest of changes in the human mind would always be associated with adjacent point plots. I'm sure this would cause it to end up being Nth dimentionnal with N being very large. This concept is not new to you. You recently wrote about Tipler's Omega creating a copy of all possible minds. Here its assumed that all possible minds means those in the "the design space of all possible information processors" that are compatible with the biological configuration of the human mind. All of your arguments so far seem to be exploiting the fact that there is a fuzzy boundary around identity. However, it doesn't follow from that that you can just throw up your hands and give up the clear difference between being inside and outside that boundary. Yes the transition between night and day is fuzzy, but I can say for sure that between 11:00 am and 2:00 pm in my time zone it is always daytime. To be consistent with your method of debating in this thread I expect you to come back with something about an eclipse. From my perspective you seem to be skirting the issues here. My debate with you hinged on your answer to the question posed in my modified thought experiment. In your response to that post that part of it was just conveniently omitted. So I'll ask it again: My original thought experiment: << You've just completed 10 hours of mundane work at the office today in your normal biological state. You then documented your work more thoroughlly than ever before. A doctor shows up and completely convinces you beyond any shadow of a doubt that he has invented a flash light type device that can induce a perfect 10 hour amnesia with no side effects [or pain]whatsoever. You are 100% convinced this is safe and effective. He offers you $10,000 to let him use the device on you. >> John Clarks response was: > As you say it's a judgment call but I'd tell him no way. > I can accept this answer as honest, especially if you are a multi-millionaire. So I modified the thought experiment to more accurately reflect the situation in the original spawning of 1000 copies. 
I wrote: My modified thought experiment: << Okay, then how about if he said he found out that your wealth had been divided by 1000 because of some unfortunate events in the U.S. economy today. And that your family and friends will welcome your interactions with them 1000 times less. This includes your friends on the Extropian and Transhuman lists. And then he continued to list many more unfortunate things that happened to you today. Then he says, "but I can fix it all if you just let me use the device on you". Would you do it if all this was true? >> This question was conveniently omitted from your reply. Your answer is key to this debate. John Clark wrote: > Another problem, I'll bet any definition of "ego" will have the > concept of identity in there someplace, it's the sort of thing > that gives circular definitions a bad name. I'm sure the ego > exists, consciousness too, but the chances that anybody > will ever come up with a definition of either that's worth a > damn > is almost zero. This doesn't make me despair however because > definitions are vastly overrated, most of our knowledge and all > of the really important stuff is in the form of examples not > definitions. > I agree with your comments above. I was pressured into inventing a definition by John Novak so I did the best I could with a first cut at it. This definition however is not key to my debate with you. Its for the reasons you mention above that I have been sticking with examples and thought experiments whenever possible. If I were so inclined, I'm sure my best definition of identity would be no less the size of a book - complete with a multitude of examples and thought experiments. > [Me] > >In order to self preserve and keep my ego intact 999 of my > >point plots would not hesitate to commit voluntary amnesia. > >With the above definition it is clear to me that we are in > >fact > >talking about 1000 minds with one ego because the ego defines > >identity as the fuzzy area where all 1000 point plots reside. [John Clark] > Then why not look on me as an imperfect copy of you, after all > from a Martian's point of view all humans are pretty much alike. > If I could prove that I'm happier and more successful than you > would it then be logical to kill yourself? The point of view that matters here is not that of a Martians but that of your ego. So obviously, from my egos point of view, you are not an imperfect copy of me. Joe Jenkins
http://www.extropy.org/exi-lists/extropians.1Q97/2107.html
"the mind is capable of at least seven dimensional thought" http://users.owt.com/dibrager/thinker.html not exactly hard science, but he's getting there.
[FIXME: gather quine-related stuff I have scattered elsewhere]
/*
** Challenge: Write the smallest self-duplicating program, not
** reading the source file, which successfully exits and is strictly
** conforming Standard C.
**
** Public domain response by Thad Smith
*/
#include<stdio.h>
main(){char*c="\\\"#include<stdio.h>%cmain(){char*c=%c%c%c%.102s%cn%c;printf(c+2,c[102],c[1],*c,*c,c,*c,c[1]);exit(0);}\n";printf(c+2,c[102],c[1],*c,*c,c,*c,c[1]);exit(0);}

DAV: has telomere at both ends; does good by avoiding any global variables; works even on non-ASCII machines; ...
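A quine like this is easy to check mechanically: compile it, run it, and compare the program's output byte-for-byte with its own source file (for example with diff); any difference at all means it is not self-duplicating.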
Chuck: I think the answer to the Y2K problem is not the patches that they are doing now. I gather that what they are doing is windowing it so you will have another problem in the future. The answer is to make the specialty embedded microprocessor self-documenting. So that you can walk up to one and ask it what is your take on this particular problem. And the way you do that is with source code instead of binary. And the way you do that is that you code it in Forth and compile it at runtime. And then you know that the program that you are running is the program that the source described and that it works that the way you expect. Especially if you are going to look at that program fifty years from now for the first time and try to figure out what it does.-- http://www.ultratechnology.com/cm52299.htm
access
DAV: Many people have a simple understanding of security: either someone has access, or that person does not have access.
This leads to unnecessary fear, confusion, panic, and anxiety when presented with "transparent society", "open source", and some other useful ideas.
It's obvious that we don't want certain people having full access to certain things. Too many people jump to the conclusion that it's wrong to allow those people to have *any* access to those things, because that's the only alternative they know about.
Computer system administrators have long used a more detailed security model: "read access" and "write access".
There are other ways of giving people "some" access without giving them "full" access.
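A minimal sketch of that finer-grained model: instead of one "has access" bit, each (person, thing) pair carries separate read and write bits, so someone can be allowed to see a fact without being allowed to change it, or vice versa. The people and resources named here are made up for illustration.

# Illustrative only: a two-bit access model instead of all-or-nothing.
NONE, READ, WRITE, READ_WRITE = 0, 1, 2, 3

acl = {
    # (person, resource): access bits
    ("auditor", "payroll-records"): READ,               # can look, cannot alter
    ("payroll-clerk", "payroll-records"): READ_WRITE,
    ("suggestion-box-user", "suggestion-box"): WRITE,   # can add, cannot read others'
}

def can_read(person, resource):
    return bool(acl.get((person, resource), NONE) & READ)

def can_write(person, resource):
    return bool(acl.get((person, resource), NONE) & WRITE)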
For example, when you want someone to know about a particular fact, you could
"Secrecy vs. Integrity: What are you trying to protect?" http://www.interhack.net/people/cmcurtin/snake-oil-faq.html#SECTION00042000000000000000
[non-intuitive]
The SSN FAQ by Chris Hibbert http://www.cpsr.org/cpsr/privacy/ssn/ssn.faq.html draws the distinction between a "representation of identity ... identification" vs. "a secure password ... authentication", and notes that far too many people mistakenly believe that SSNs are a secure password. "Even assuming you want someone to be able to find out some things about you, there's no reason to believe that you want to make all records concerning yourself available."
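The distinction is easy to state in code: an identifier is a lookup key that may be widely known, while an authenticator is a secret that proves the claim; treating an SSN as both is the mistake. The sketch below is hypothetical (the stored record, the salt, and the secret are invented), and only illustrates keeping the two roles separate.

import hmac, hashlib

accounts = {
    # identifier (may be widely known) -> salted hash of a secret (never public)
    "123-45-6789": hashlib.sha256(b"salt" + b"correct horse battery").hexdigest(),
}

def identify(ssn):
    """Identification: find the record for this identifier."""
    return ssn in accounts

def authenticate(ssn, secret):
    """Authentication: prove the claim with a separate secret (bytes)."""
    expected = accounts.get(ssn)
    supplied = hashlib.sha256(b"salt" + secret).hexdigest()
    return expected is not None and hmac.compare_digest(expected, supplied)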
Yes, I'm a Brinist.
related links:
...
David Seex, chairman of LAAS International said: "The scheme recognizes that, far from being classed as possible security risks who need to be moved on from car parks and viewing spots in and around airports, genuine aviation enthusiasts can actually play a valuable part in the battle against crime and terrorism."
When people speak about privacy, they may actually be talking about very different forms of privacy: defensive privacy, human-rights privacy, personal privacy, and contextual privacy. I think "defensive privacy" and "human-rights privacy" are practically the same thing -- people getting read-only access when they should have no access. I see how this could *lead* to evil things happening, but I would like to point out that it doesn't make this evil per se. I think "personal privacy" is people getting write-only access when they should have no access. The "contextual privacy" category points out some important ideas that don't really fit into my little "read/write access" theory.
Ch. 30 p. 227 _A fire upon the deep: a novel from the zones of thought_, a science fiction book by Vernor Vinge, 1992, ISBN 0-312-85182-0:
Jefri ... repeated the words more slowly, speaking directly to Steel. ``No. It is [something something] dangerous. Amdi is [something] small. And also, time [something] narrow.''
The Flenser strained for the meaning. Damn. Sooner or later their ignorance of the Two-Legs' language was going to cost them.
-- Reuben Hersh 1998 http://www.maa.org/features/hersh.html
math lingo isn't a complete language. ... a narrow view of language found among some logicians and theoretical-computer people. "I have a headache" is used in actual communication between actual live human beings. ...
What about Fortran, Lisp, Pascal, C++, et cetera? Aren't they "languages"? Sure, but in a different sense than English is a language. Hint: English is a human language.
DAV: this is sort of a sticky nexus of interests of mine ... we have programming, we have typography, and we have a little bit of language translation ... How to organize my thoughts and pointers to places like this, when I'm not sure there really is a clean separation ?
``This Web page provided as a public service for computer programmers seeking information about
- localization
- internationalization
- character encoding and representation.
- human language issues
''
... English-Only legislation that closed down the Hawaiian medium public schools of Hawai'i. The legislation not only nearly exterminated the Hawaiian language and culture but also had disastrous effects on literacy, academic achievement, and even the use of Standard English among Native Hawaiians. ...
...
The Hawai'i public school system, including the first high school west of the Rocky Mountains, was once taught and operated entirely through the Hawaiian language. The Hawaiian language was banned in all private and public schools in 1896 and this ban continued until 1986 ...
... [in] Hawaiian medium schools ... children are educated entirely in Hawaiian until fifth grade where English language arts is introduced as a subject ... A long range study of the program has shown academic achievement equal to, or above that, of Native Hawaiian children enrolled in the state's typical English medium programs, even in the area of English language arts.
The Hawaiian language, like several other Polynesian languages, uses two diacritical marks that affect the pronunciation and meaning of words.
The kahako is a macron, a short, horizontal line that ... indicates an increased duration for the vowel that it appears over.
The 'okina is the single, open quote that appears frequently before vowels. It indicates a glottal stop, a clean break between vowels.
The presence of the kahako and/or 'okina affects both the pronunciation and meaning of words that contain them. The lack of ability to display these characters on the computer severely hampered its usefulness to speakers of 'olelo Hawai'i for many years.
describes the Hawaiian names for the letters: vowels: " `â " " `ê " " `î " " `ô " " `û " long vowels: " `â kô " " `ê kô " " `î kô " " `ô kô " " `û kô " consonants: " hê " " kê " " lâ " " mû " " nû " " pî " " wê / vê " " `okina "There are 10 vowels (five vowels without kahakô; five, with) and eight consonants, h k l m n p w ` (`okina). To repeat: `okina is a consonant (a letter that is not a vowel); " `oki " means "to cut" and " na " is an ending that makes a verb into a noun. `. In English, it is known as a "glottal stop", which means "stop the flow of air at that point".
...
If a vowel has a kahakô over it, it is indicated by saying "kô" after the vowel. For example, â is spelled " `â kô ".''
[FIXME: Is there a better way to represent these letters ? The macron is a horizontal bar, the `okina is more like an apostrophe ... ] [FIXME: Does Hawaiian have uppercase / lowercase ? ]There is a standardized alphabet for Hawaiian divided into two parts with the following order and names: nä huapalapala ÿöiwi (the indigenous letters) - A (ÿä), E (ÿë), I (ÿï), O (ÿö), U (ÿü), H (hë), K (kë), L (lä), m (mü), N (nü), P (pï), W (wë), ÿ (ÿokina) nä huapalapala paipala (the introduced letters) - B (bë), C (së), D (dë), F (fä), G (gä), J (iota), Q (kopa), R (rö), S (sä), T (tï), V (wï), X (kesa), Y (ieta), Z (zeta.) Vowels can be marked with a macron called a kahakö in Hawaiian. Thus there are two versions of each vowel, e.g., ä (ÿä kö) and a (ÿä kö ÿole.) The introduced letters are used primarily for words and names from foreign languages. Prominent among such foreign derived words are words in the bible. ...
...
... Local people of all ethnicities ... voted in 1978 to reestablish Hawaiian as an official language of the State of Hawaiÿi
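One answer to the FIXME above: Unicode has dedicated code points for these letters, so neither the circumflex nor the umlaut font hacks in the quoted material are needed. The kahako vowels are the Latin vowels with macron, and the 'okina is usually represented by U+02BB MODIFIER LETTER TURNED COMMA. A small demonstration:

# Unicode code points for the Hawaiian letters described above.
kahako_vowels = {
    "a": "\u0101",  # LATIN SMALL LETTER A WITH MACRON
    "e": "\u0113",
    "i": "\u012B",
    "o": "\u014D",
    "u": "\u016B",
}
okina = "\u02BB"    # MODIFIER LETTER TURNED COMMA

print(okina + "\u014Dlelo Hawai" + okina + "i")   # prints the language name with proper marks

The macron vowels also have uppercase forms (U+0100 and so on), which partly answers the second FIXME; the 'okina itself has no upper/lower case distinction.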
more exceptions:
efficient Einstein (OK, so this is a German word, not an English word)
``Essay: Communities, Audiences, and Scale: Limits on community size in online environments'' by Clay Shirky http://shirky.com/writings/community_scale.html discusses ``one basic but important effect: Audiences scale, communities don't.''
4. In _Grooming, Gossip, and the Evolution of Language_ (ISBN 0674363361), the primatologist Robin Dunbar argues that humans are adapted for social group sizes of around 150 or less, a size that shows up in a number of traditional societies, as well as in present-day groups such as the Hutterite religious communities. Dunbar argues that the human brain is optimized for keeping track of social relationships in groups smaller than 150, but not larger.
http://shirky.com/writings/half_the_world.html How many people have never made a phone call ? Well, it used to be half the people on earth, but it's plummeting rapidly.
-- Jeffrey Henning 1995 http://www.langmaker.com/ml0101.htm
too often, our desire to learn to express ourselves with language, to create new words, has been suppressed in favor of rigid conformance to the norm.
...
a model language that is actually meant to be used to communicate. Such a language requires a vocabulary of at least 1,000 to 2,000 words and a detailed grammar.
LangMaker lists many languages, including
even has a wiki http://www.nomic.net/~nomicwiki/ DAV: the idea of Nomic is pretty mind-bending ... it has connections to the idea of Wiki, non-static quine, ...
Nomic is a game in which changing the rules is a move. In that respect it differs from almost every other game. The primary activity of Nomic is proposing changes in the rules, debating the wisdom of changing them in that way, voting on the changes, deciding what can and cannot be done afterwards, and doing it. Even this core of the game, of course, can be changed. (Peter Suber, The Paradox of Self-Amendment, Appendix 3, p. 362)
The Swiss Guy
A Swiss guy visited Sydney, Australia, and pulled up at a bus stop where two
locals were waiting.
"Entschuldigung, koennen Sie Deutsch sprechen?" he
asked. The two Aussies just stared at him.
"Excusez-moi, parlez vous Francais?" he tried. The two continued to stare.
"Parlare Italiano?"
Other than a glance at each other, there was still no response.
"Hablan ustedes Espanol?" Still nothing.
The Swiss guy gave up and drove off, extremely disgusted.
When he was gone, the first Aussie turned to the second and said,
"Y'know, maybe we should learn a foreign language."
"Why?" the other replied. "That guy knew four languages, and it didn't do
him any good."
Search for Sentences Used by VOA's Special English Programs: Study Special English Words by Reading Sentences Actually Used on VOA. http://www.manythings.org/voa/sentences.htm lets students see the context of words they are learning. Also points to all the words listed in order of frequency, http://www.manythings.org/voa/frequency.htm , which begins
34349 the 13363 of 10465 to 10375 in 8359 and 7782 a 5457 is 4291 that 4197 for 3871 they 3641 it 3176 are 2828 people 2457 from 2368 was 2302 he 2092 have 2071 on 2037 s (Probably means uses of 's.) 1974 say 1956 about 1947 with 1939 this 1873 be 1835 more 1753 also 1740 not 1717 as 1646 at 1642 says 1633 their 1616 will 1594 by 1573 has 1494 or 1486 years 1479 than 1437 can 1390 other 1378 new 1372 many 1278 states 1274 united 1258 an 1239 were 1237 one 1220 disease 1177 scientists 1162 american 1102 who 1093 some 1088 world 988 but 976 his 967 called 957 these 944 most 910 had 869 first 856 year 832 may 829 mr 791 said 785 two ...
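Producing a list like this from any text corpus is only a few lines of work; here is a small sketch that builds the same kind of ranked frequency list (the example input and output are invented).

import re
from collections import Counter

def frequency_list(text):
    """Return (count, word) pairs, most frequent first, in the style of
    the VOA Special English list quoted above."""
    words = re.findall(r"[a-z']+", text.lower())
    return [(count, word) for word, count in Counter(words).most_common()]

# Example:
# frequency_list("the cat sat on the mat")
#   -> [(2, 'the'), (1, 'cat'), (1, 'sat'), (1, 'on'), (1, 'mat')]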
For far more details, see ``A tutorial on character code issues'' by Jukka Korpela http://www.cs.tut.fi/~jkorpela/chars.html , a (long) discussion of the basic concepts of character repertoire, character code, and character encoding. And glyphs. Also talks about ASCII, ISO 8859-1, other ISO 8859-n codes, the Windows character set, ISO 10646, UCS, and Unicode.
An "A" (or any other character) is something like a Platonic entity: it is the idea of an "A" and not the "A" itself. -- Michael E. Cohen
... You should never use a character just because it "looks right" or "almost right". Characters with quite different purposes and meanings may well look similar, or almost similar, in some fonts at least. Using a character as a surrogate for another for the sake of apparent similarity may lead to great confusion. Consider, for example, the so-called sharp s (es-zed), which is used in the German language. Some people who have noticed such a character in the ISO Latin 1 repertoire have thought "vow, here we have the beta character!". In many fonts, the sharp s (ß) really looks more or less like the Greek lowercase beta character (β). ... the use of sharp s in place of beta would confuse text searches, spelling checkers, indexers, etc.; ... and some font might present sharp s in a manner which is very different from beta.
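The point is easy to demonstrate: the two characters merely look alike in some fonts but are entirely different code points, so a search for one will never match the other. A quick check:

import unicodedata

for ch in "ß", "β":
    print(ch, hex(ord(ch)), unicodedata.name(ch))
# ß 0xdf  LATIN SMALL LETTER SHARP S
# β 0x3b2 GREEK SMALL LETTER BETA

print("ß" == "β")            # False: they are distinct characters
print("β" in "Straße")       # False: substitution breaks searching and spell-checking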
a list of concepts, described in English, along with words to express those concepts in several "natural" and "artificial" (constructed) languages. ... if you are creating an _a_posteriori_ planned language, these wordlists can be extremely helpful. ... I've hacked together some crude programs that assemble these files ... The program entitled COLLATOR.BAS is written in ... Microsoft QuickBasic ... The program entitled COMBINER.C is written in "generic" C.
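The original COLLATOR.BAS and COMBINER.C are not reproduced here, but the basic operation described -- merging per-language wordlists keyed by a shared concept -- might look roughly like the sketch below. The file format, field layout, and file names are guesses, not the real programs' formats.

import csv

def combine_wordlists(paths):
    """Merge several concept->word lists into one table keyed by concept.
    Assumes each file is tab-separated with two columns: concept, word."""
    table = {}                                   # concept -> {language: word}
    for language, path in paths.items():
        with open(path, newline="", encoding="utf-8") as f:
            for concept, word in csv.reader(f, delimiter="\t"):
                table.setdefault(concept, {})[language] = word
    return table

# Example (hypothetical files):
# combine_wordlists({"esperanto": "eo.tsv", "interlingua": "ia.tsv"})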
DAV is uncertain whether machine languages such as the C programming language fall under 2.2.1 or if a completely different category should be made for them.
This essay advocates the creation of an international auxiliary language (IAL) for the world according to relatively objective criteria, and discusses how such criteria can be specified.
... I propose that the following three goals be considered the prime, paramount considerations in evaluating IAL design possibilities:
- An optimal IAL will be relatively easy for most children and adults to learn as a second language.
- An optimal IAL will have the ability to handle both mundane conversation and highly technical information.
- An optimal IAL will be culturally neutral; it will not provide advantages of word recognition or other special favors to one or two ethnic groups at the expense of all others.
These three goals are listed in order of declining importance. If there seems to be a conflict between item 1 and item 3, item 1 takes precedence. Other considerations in language design, such as the desirability of brevity versus the desirability of redundancy to prevent errors, questions of aesthetic appeal, and so forth are interesting and worthy of consideration. ...
... There definitely are languages that are simpler than others, and by a long shot, too. If you don't believe me, just try learning Navaho, or French, for that matter. If learnability is one thing we are looking for, we ought to examine those simple, that is, easily learnt, languages, and draw lessons from them. How do you tell them? Easy: round up all the Pidgins ... Let's round them up and ask ourselves: what have they got in common? And what is it that they don't have? ...
In summary, there is much testimony available ... indicating that some languages are more difficult for most humans to learn than other languages, and that many of the characteristics which cause such difficulty can be specified.
... Certain choices in the elements of orthography, syntax, and grammar will inevitably be familiar to some groups of people and unfamiliar to others. In view of our design goals, it seems most reasonable to make such choices in a way that will provide maximum ease of learning (not necessarily the same thing as familiarity) to the largest number of people.
... an optimal interlanguage should not use any sounds that would cause serious difficulties to large groups of speakers. ...
simplicity
As the great linguist Otto Jespersen observed, "That language ranks highest, which goes farthest in the art of accomplishing much with little means, or in other words, which is able to express the greatest amount of meaning with the simplest apparatus."
Constructed Languages: Language Creation http://www.kineret.com/dir/$/Science/SocialSciences/LanguageandLinguistics/ConstructedLanguages/LanguageCreation/ lots of links to information on language design
both point to pretty much the same set of sites.
... A hundred or so alphabets exist today. The most widely used are roman, arabic and cyrillic. ...
The great business advantage of the first alphabets was that plain merchants and tradesmen could write and read messages and reports that said what they wanted to say, without needing first the intensive training of scholars in committing an immense vocabulary to memory.
The reason for the great advantage of the alphabet is that in most languages the number of phonemes (speech sounds) is only around forty, with a range of between twelve to sixty, a limit probably due to the restricted range of sounds that humans can distinguish in listening or articulate in speaking. It defines the maximum number of letters needed to represent them, that need to be learnt. Since the necessary letters are so few in number, they can be simple and distinctive, and easy to write and to copy.
...
A new language or one previously unwritten can be given a written form relatively easily, whereas creating a new complete logographic symbol system would be a major work. [DAV: ... proper layering ... allowing incremental improvement ...]
...
Direct representation of any spoken language is not problem-free. We do not naturally consciously hear as separate elements in our speech the speech sounds that are distinguished in a language to make up its words, and which are the building blocks of alphabetic spelling. We hear them as part of the continuous sound wave. To map spelling to sound requires an explicit and abstract analysis of what we hear.
Anyone who has been at the working face of computer speech synthesisers, or who sees how naive children start to write, is struck by how different are the speech sounds we have learnt to hear as literate adults, from the sounds that computers and children hear 'naturally'.
Vowels are particularly difficult to analyse, and the first alphabets had none.
# Spoken languages are always changing, however slowly. This sets a dilemma of whether to reform the spelling to keep up with these changes for the sake of learners, including new learners of the language, or to conserve the existing system for the sake of those who are already literate. The older alphabetic orthographies tend to become more 'morpho-phonemic', as they retain the old representation of words and fail to represent their changing pronunciation.
* BUT, updating spelling is now more feasible, because computer techniques have done away with the bogey of reprinting in a modified spelling. Complete changes of house-styles and spellings can be made at the touch of a button. Since it is estimated that 90% of material in print has been printed or reprinted within the past decade, transitions can soon be practically complete, and work backwards to link with the past as well as forwards to update.
...
... readers do not have to be consciously aware of a literal auditory representation for every word they read or write. Readers with a rate of 600-2000 words per minute clearly cannot be ... [DAV: why not ?]
See also http://home.vicnet.net.au/~ozideas/16sp.htm for more ideas on changing the spelling of English words so that it keeps up with changes in pronunciation.
[modified English] Seven spelling reform principles to research, to improve standard spelling http://home.vicnet.net.au/~ozideas/spprinc.htm
Pinker's is the best, simplest, most concise, most fluent technical writing about language I've ever read, ... I am seriously awed by this book. ... he explains language on every level of detail: phonemes and morphemes up to words, grammar, and whole sentences; and from many different angles, commenting on philosophical concepts, politics, and research history, as well as suggesting answers to open questions. It's an excellent introduction and overview-- recc. jutta (Jutta Degener) http://kbs.cs.tu-berlin.de/~jutta/me/books.html [FIXME: toread book.html]
constructed/artificial scripts, including scripts for constructed/artificial languages.lots of interesting-looking shapes.
Origin - The Language Agency http://www.origin.to/ translation service
-- Dr. Philip Emeagwali http://emeagwali.com/interviews/job-postings/first-break-all-the-rules.html
people with limited education often believe that there is a limit to the knowability of the world. My mother thought there was a limit to knowledge and that I could learn all there was to be learned by the time I had completed high school.
Dwarvish   Elvish    Common
gaz        sylva     forest
kag        gaeus     Hill
kan        sylvus    Elf
kaz        grym      Dwarf
kez        tynker    Gnome
torog      tronkus   Troll
zin        argus     Silver, mithril
zog        traeda    Oak
zon        aurus     Gold
...the greatest "poetry" ever composed about syphilis lies not in Fracastoro's hexameter of 1530 but in the intricate and healing details of a schematic map of 1,041 genes made of 1,138,006 base pairs, forming the genome of Treponema pallidum and published with the 1998 article -- the adamantine beauty of genuine and gloriously complex factuality, full of lifesaving potential.
-- Stephen Jay Gould http://www.findarticles.com/cf_0/m1134/8_109/65913170/print.jhtml ... http://www.geocities.com/BourbonStreet/Quarter/2926/Soap_History.html ??? ... http://us.imdb.com/title/tt0290334/board/thread/2600959?d=2614767#2614767
Newspeak Translator: ... Newspeak automatically edits your writing to make it: either (a) Old Fashioned & Beautiful. or (b) fashionable and politically correct. http://www.theabsolute.net/sware/#newspeak ??? ... "Java NewSpeak Translator", maintained by: Bertha Van Nation http://www.ecst.csuchico.edu/~dcfuhs/java/newspeak.html
Started 1996 Oct 17
Original Author: David Cary.
David Cary feedback.html
Use CritSuite to comment on this page and to see others' comments: http://crit.org/http://rdrop.com/~cary/html/html.html .
Return to index // end http://rdrop.com/~cary/html/idea_space.html