George Dvorsky. He's a former board member of Humanity+. He's also a CrossFitter, so he's fitter than you can imagine. Jolly, too.<br />
<br />
When the Turing Test is not enough. I have yet to have a cup of coffee; you're looking at an unenhanced human. A very common thread here has been neuroscience, artificial intelligence, and the emergence of consciousness. How about some ethical considerations as they pertain to this subject? Machine consciousness is a neglected area. It gets conflated and mixed together with artificial intelligence. Machine consciousness is a subset of AI research: it's the branch that thinks specifically about cognition and subjective awareness. What are the neural correlates of consciousness? That's its particular focus. The way a calculator or Wolfram|Alpha works, purely computationally, through algorithms, is more the AI side than the artificial consciousness side. The latter is what I am speaking about today.<br />
<br />
Machine ethics, as it pertains to the whole idea of artificial consciousness, is even further behind. We're behind on the moral and ethical issues raised by subjective awareness in our machines. We have to think about it now, not once subjective awareness is already emerging in low-level brain emulation projects. The potential for harm is significant; we could be infringing on subjective lifeforms. If we don't do this now, it's going to be harder to overturn later. This is something the animal rights people know: there are thousands of years of precedent for how we're accustomed to treating non-human animals. There are cultural and human expectations. A multi-disciplinary approach is needed, one that combines science, ethics, and law. This is a distinct issue from robot ethics. Robot ethics is concerned with the rules and conduct of autonomous robots and vehicles engaged in combat, for instance. If we have a robot engaged in combat in the field, what may it do within the context of war and the rules of war? That's different.<br />
<br />
In addition, we're not talking about these guys; none of these are really part of the discussion: robots, BigDog, Predator drones. The tendency is to project some kind of consciousness or awareness onto them; we humans are infamous for projecting onto others and deciding they're worthy of moral consideration. You can kick BigDog as much as you want. How dare you kick this robot! It doesn't matter; there's nothing there. There's no subjective core with which to experience indignity, care, or suffering. This distinction has to be made. Then there are the things we are talking about. Basically, we're looking at moral worth. Take HAL 9000, for instance. We're to assume that it did have a conscious basis, that you were interacting with a human, if you will. We're not talking about something as sophisticated or advanced as HAL, but what about human-equivalent life, or even insectoid subjective awareness?<br />
<br />
Why is this a problem? There's a lack of development in this field, and a lack of any sense of possibility. A sense of vitalism is still present: Roger Penrose insists that there's something non-computational, something non-tangible, about piecing together a consciousness in a machine. Related very closely to that is the persistence of scientific indifference about consciousness in the machine and about creating analogs of it in human or animal bodies. There's a defeatism about it: who gives a crap that there's an emergent awareness in my code, or that I'm investigating a brain that is half-biological, half-synthetic? There's a sense that the ethics have been divorced from the work. And there's the fixation on AI: everyone is content to work on the AI problem, not the consciousness issue.<br />
<br />
Fundamentally, there's the whole issue of human exceptionalism and substrate chauvinism. Human exceptionalism is the idea that the only animals worthy of rights and moral consideration, to the degree humans are afforded them, are humans; that there is something intrinsically valuable about humans that is worth protecting. So, for example, granting a non-human animal (like elephants, great apes, etc.) personhood status is apparently a violation of the natural order, or it's spitting on human dignity. Rejecting that is, by definition, transhumanist: there is potential for greatness, for personhood, outside the human spirit. Beyond that, personhood could extend to non-human animals. Then there's the whole idea of substrate chauvinism: the notion that the only thing that matters, that makes you morally worthy, is the stuff you're made out of. You must be composed of biological matter for me to grant you ethical consideration; if you're made out of chips or some synthetic crap, you're less important.<br />
<br />
The last point: empiricism itself, versus scientific understanding. Through observation and empiricism we're trying to understand consciousness. We're starting to get a little bit of a sense of how consciousness works, but it's still largely an empirical endeavor. The Turing test definitely has its disadvantages. The problem with the Turing test is that it's purely behavioral: it's responses to questions, with no proof of subjective feelings in the background. There are many traits to human intelligence, including stuff we wouldn't necessarily equate with intelligence. There's the anthropomorphic fallacy: it's really easy for us to extend personhood when it so suits us. There's also a failure to account for the difficulty of articulating conscious awareness. The problem, really, is that just because it looks like a duck and quacks like a duck, just because something passes the Turing test, doesn't mean anything at all. Feynman famously said, "What I cannot create, I do not understand." That's why we need to build a cyborg duck.<br />
<br />
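To make that point concrete, here's a minimal sketch (my illustration, not part of the talk; the function name and canned replies are hypothetical) of a purely behavioral responder. A judge who only inspects its answers has no way to tell this lookup table from a mind, which is exactly the gap between passing a behavioral test and having subjective experience.<br />
<br />
<pre>
# Hypothetical illustration: a purely behavioral responder.
# It maps question patterns to canned replies; there is no inner
# state here that could experience anything, yet a judge grading
# only the outputs might still score it as human-like.

CANNED_REPLIES = {
    "how are you": "A little tired, but glad to be chatting with you.",
    "are you conscious": "Of course I am. Aren't you?",
}

def behavioral_reply(question: str) -> str:
    """Return a human-sounding reply using pattern matching alone."""
    q = question.lower()
    for pattern, reply in CANNED_REPLIES.items():
        if pattern in q:
            return reply
    return "That's an interesting question. What makes you ask?"

print(behavioral_reply("So, are you conscious?"))
# -> Of course I am. Aren't you?
</pre>
<br />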
There are ethical implications that I have already touched upon. Consider experimentation in the lab: when we start to get sophisticated in AI research and have sprung life into matter, and that matter is now observing the universe around itself, it's no longer something we can simply experiment on; it deserves laws and protections and our consideration. There's a point, through brain enhancement or neural analogues in our brains, where we may be more synthetic than biological, and where others may decide that you are somehow less morally worthy. I have nightmare scenarios involving great apes and other creatures: modeling the brain, slicing the brain, and creating pockets of awareness and moments of absolute terror and anguish. Some developers will be fine with this, but ethically we need to be on top of it.<br />
<br />
Because of the space of all possible persons that may emerge within or outside human civilization, we need laws in place that respect all persons, even things that might not yet be persons. Moving along quickly, we need some solutions. We need to adopt cognitive functionalism: the recognition that there's nothing mysterious about the brain, we just need to figure out how it works. We need to figure it out, identify the functions that bring about subjective awareness, and then expand protections in the legal realm, because these minds are a part of our reality. Cognitive functionalism will be the proof: we'll know the stuff that is responsible for creating conscious awareness, we'll know what it looks like in a human brain, so we'll know what it looks like in an artificial brain. To get there, we need to map the organs of consciousness, the neural correlates of consciousness. Maybe half of the human genome is about the nervous system. There are tons of neural correlates, and we need to figure them out; we need to be able to identify when we have created neural correlates of consciousness.<br />
<br />
There are some individuals who have started on this path. Intelligence is different; it's things like being able to calculate math and trajectories. This is the kind of approach we need, to start mapping out the bits of cognitive function. Igor Aleksander (1995) had his own list of organs of conscious function; I put this in here as well. It needs to be mapped against personhood, too. These are terms we could use to describe what a person is; take a quick look at that: concern for others, knowing, communication, talking with others. So, again, the legal aspect of things. What we're talking about is expanding protections in the legal realm: laws to protect machines, and if they do qualify as persons, then we have no choice but to grant them fundamental rights. The right not to be confined, not to be experimented on; and definitive artificial consciousnesses will have the right not to be shut down, to own and have access to their own source code, the right to privacy and to their own mental states, and the right to self-direction.<br />
<br />
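As a rough illustration of what mapping a candidate mind against personhood could look like (my sketch, not from the talk; the criterion names, data structure, and scoring rule are all assumptions), one could treat the personhood terms just mentioned as a checklist and score against it:<br />
<br />
<pre>
# Hypothetical sketch: scoring a candidate mind against a
# personhood-style checklist. The criteria paraphrase the terms
# from the talk (concern for others, knowing, communication,
# talking with others); the fractional score is illustrative only.

from dataclasses import dataclass, field

PERSONHOOD_CRITERIA = {
    "concern_for_others",
    "knowing",
    "communication",
    "talking_with_others",
}

@dataclass
class CandidateMind:
    name: str
    demonstrated: set = field(default_factory=set)

def personhood_score(mind: CandidateMind) -> float:
    """Fraction of checklist criteria the candidate demonstrates."""
    return len(PERSONHOOD_CRITERIA & mind.demonstrated) / len(PERSONHOOD_CRITERIA)

emulation = CandidateMind("cortical emulation", {"communication", "knowing"})
print(f"{emulation.name}: {personhood_score(emulation):.0%} of criteria met")
# -> cortical emulation: 50% of criteria met
</pre>
<br />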
Animals are not property. Advocate for legally binding rights to protect animals, and oppose the ownership by others of the code of your genome, and gene patents. Thank you, guys.<br />