From: James Rogers (jamesr@best.com)
Date: Thu May 01 2003 - 15:11:12 MDT
Ramez Naam wrote:
>
> James, I found your last post interesting but rather abstract.
>
> What concrete approach, if any, would you suggest Eliezer /
> SIAI use for raising the funds necessary for the FAI project?
I'm not suggesting any particular approach. I think my only point was that
the ideas being proffered were not particularly creative. The idea that
research funding is anathema to accomplishing technical goals is not
mine, but with time I have come to largely agree with it. More often than
not, research funding gives researchers just enough money to continue gaming
the system for more research funding. As a result, the conversion of the
already comparatively small amount of money research funding provides into
technical results tends to be very poor.
Capitalization of any type always involves gaming a system of one kind or
another, but if technical results are the goal, it is generally better to
game a system that doesn't have such direct negative feedback
on technical results. I've heard many fairly compelling arguments that the
conversion efficiency of "direct" research funding is so poor that indirect
research funding can often lead to technical results faster.
> Pardon? I don't think I understand this. Or if I do, I
> don't agree. I've pitched a business to hundreds of
> investors. I've been an angel investor in pre-VC startups
> myself. In the business world you make your reputation
> primarily /by/ making money, either for yourself or someone else.
Precisely. I don't disagree with this, and I've been in the game a long
time in Silicon Valley. Much capital is spent on people who deliver good
investment conversion, often whether or not they made any money themselves
in the past. However, it is useful to break this down further.
First, there are those "investees" whose reputations are that they can
convert money into more money. Their lever is that they have a proven
ability to turn capital into more capital; give them nothing but money and
they can make it multiply. This is what most people think of, but it isn't
the only type.
Second, there are those investees whose reputations are that they can
convert technology into more money. Their lever is that they have a proven
ability to turn any technological capital into investor capital. Note that
this is generally NOT an engineer or research geek, but someone with a
reasonable amount of both business and technological sense who can be given
just about any kind of technology and leverage it into a profitable
business.
There are lots of both types of capital magnets in Silicon Valley. For a
company that plans to do AI work, you need someone of the second type if you
want commercial capital backing, but such people are noticeably absent from
most AI startups. Having capital attractors of this type in an organization
gives a fair amount of carte blanche on the utilization of the capital for
technical development, and gives access to a larger capital pool than direct
research funding will ever give you. Of course, you can't do the aimless,
wandering experimental work that seems to make up much of AI research; you
have to be able to articulate a clear target with a well-defined roadmap,
even if executing it will require substantial R&D. But if you don't have
this then you aren't ready for funding anyway IMO.
The real question, in my opinion, is how you coalesce a company that can
attract substantial and commercially legitimate capital (read: not
single-sourced angel investment from unconnected parties) while retaining the
freedom to legitimately spend most of that capital on critical core R&D. It
is certainly possible, as it has happened countless times in both good and
bad conditions, and there are many ways to go about creating such an entity.
Of course, you may have to get your hands dirty and spend a lot of time
working on things that have no apparent application to AGI or AGI companies,
but that is a tactical concern, not a strategic one. ("Specialization is
for insects", sayeth Heinlein.) Yet it seems to me that many, if not most,
AI researchers focus on the tactical scenario rather than the strategic one.
If they spent two years doing no AI research at all, they might very well
get to their end goal *faster* than if they did nothing but AI research,
depending on how they spent those two years on other things.
Path optimization is a legitimate use of an AI researcher's time, even if it
has nothing to do with AI. And if I've learned nothing else from Silicon
Valley, it is that path optimization is a decisive factor in who succeeds and
who doesn't.
Cheers,
-James Rogers
jamesr@best.com