From: Richard Loosemore (rpwl@lightlink.com)
Date: Thu Jan 12 2006 - 13:39:44 MST
You must be kidding.
As far as Blue Brain is concerned, don't hold your breath.
This is more like a Big Fat White Elephant Designed To Suck Federal
Dollars Into IBM.
There is no point in simulating something whose functional structure you
don't have a clue about.
Mark my words: the net result of the Blue Brain project will be just as
world-shaking as Japan's Fifth Generation Project. Remember that? A
ten-year superproject to build a complete human-level artificial
intelligence? Net result: nowt.
Richard Loosemore
H C wrote:
> Not to get into any actual math (too often grossly flawed by factors not
> taken into consideration), projects like Blue Brain
> (http://bluebrainproject.epfl.ch/) are probably the most important to
> consider when discussing neural-network AI implementations.
>
> "Scientists have been accummulating knowledge on the structure and
> function of the brain for the past 100 years. It is now time to start
> gathering this data together in a unified model and putting it to the
> test in simulations. We still need to learn a lot about the brain before
> we understand it's inner workings, but building this model should help
> organize and accelerate** this quest." Henry Markram
>
> This institute has BIG funding, and really 'effing big computers (which
> are only going to get bigger). I'm not an expert, but in terms of the
> neural modeling approach to AI, it appears they are at the top of the
> game, and they are certainly raising the stakes immensely.
>
>
> -hegem0n
> http://smarterhippie.blogspot.com
>
>
>> From: CyTG <cytg.net@gmail.com>
>> Reply-To: sl4@sl4.org
>> To: sl4@sl4.org
>> Subject: neural nets
>> Date: Thu, 12 Jan 2006 15:01:26 +0100
>>
>> SL4
>>
>> Hello.
>> I'm trying to wrap my head around this AI thing, and in particular how
>> far along we are in computational power compared to what's going on in
>> the human body.
>> I know many believe there are shortcuts to be made, even improvements
>> on the model nature has provided us with, the biological neural network.
>> Still, humor me.
>> Here are my approximate assumptions, based on practical experience with
>> ANNs and some wiki reading.
>> Computational power of the human mind:
>> 100*10^9 neurons with about 1000 connections each gives about 100*10^12
>> connections operating _at the same time_. Now, on average a neuron fires
>> about 80 times each second, which gives us a whopping ~10^16
>> operations/computations each second.
>> On my machine, a 3 GHz workstation, I'm able to run a feedforward
>> network at about 150,000 operations/second WITH training (backprop).
>> Take training out of the equation and we may, let's shoot high, land on
>> 1 million 'touched' neurons/second. Now, from 10^6 to 10^16 ... that's
>> one hell of a big number!! (A quick check of this arithmetic is sketched
>> below the quoted message.)
>>
>> Also, thinking about training over several training sets (as is usually
>> the case), would I be correct in making an analogy to linear algebra?
>> Think of each training set as a vector, each set having its own
>> direction. In essence, two identical training sets would be linearly
>> dependent on each other and therefore candidates for elimination.
>> (I'm thinking there could be a mathematically sound approach here to
>> eliminating semi-redundant training data! A sketch of the idea follows
>> the quoted message.)
>>
>> Hope it's not too far off topic!
>>
>> Best regards
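
A quick sanity check of the arithmetic quoted above, as a minimal Python
sketch. The figures (10^11 neurons, ~1000 connections per neuron, ~80 Hz
average firing rate, ~10^6 neuron updates/second on the workstation) are
the rough assumptions from the message, not measurements:

    # Back-of-the-envelope comparison using the rough figures from the message.
    neurons = 100e9                 # ~10^11 neurons in the human brain
    connections_per_neuron = 1e3    # ~1000 connections each
    firing_rate_hz = 80             # assumed average firing rate

    brain_events_per_sec = neurons * connections_per_neuron * firing_rate_hz
    workstation_ops_per_sec = 1e6   # optimistic figure for the 3 GHz machine

    print(f"brain:       ~{brain_events_per_sec:.0e} synaptic events/sec")    # ~8e+15
    print(f"workstation: ~{workstation_ops_per_sec:.0e} neuron updates/sec")  # ~1e+06
    print(f"gap:         ~{brain_events_per_sec / workstation_ops_per_sec:.0e}x")

The total comes out at roughly 8*10^15, i.e. nearer 10^16 than the 10^14
in the original message, so the gap to the workstation figure is about
ten orders of magnitude.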
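
The linear-algebra intuition about redundant training sets can also be
made concrete. Below is a minimal NumPy sketch of my own; the function
name drop_redundant and the toy data are illustrative, not from any
project mentioned above. It stacks training examples as rows of a matrix
and keeps a row only if it raises the matrix rank, i.e. only if it is
not a linear combination of the rows already kept:

    import numpy as np

    def drop_redundant(X, tol=1e-8):
        """Keep only rows of X that are not (numerically) linear
        combinations of the rows kept so far."""
        kept = []
        for row in X:
            candidate = np.vstack(kept + [row])
            # A row that does not raise the rank is linearly dependent
            # on the rows already kept, so it is skipped.
            if np.linalg.matrix_rank(candidate, tol=tol) > len(kept):
                kept.append(row)
        return np.array(kept)

    X = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0],   # a multiple of row 0 -> dropped
                  [0.0, 1.0, 1.0]])
    print(drop_redundant(X))         # keeps rows 0 and 2

Exact linear dependence is rare with real-valued data, so in practice one
would more likely threshold something like pairwise cosine similarity to
catch semi-redundant examples; the rank test above only demonstrates the
exact case the message describes.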