I’m not planning to make any kind of thorough treatment of my views on the morality of technological advancement or human enhancement. I’m just going to post a few thoughts that I have. My views on this matter are very extreme. I believe that pretty much any technology that enhances humans’ capacity to cope with environmental pressures, or to better understand the world (even artistically), is inherently good. I also believe that evolution transcends genetics, and that it is not even necessarily bad if the human race is destroyed by some superior intelligence of our own creation.
So what are the points I’d like to make? Let’s see:
1) There’s no such thing as “unnatural” human performance enhancement.
2) We are subject to an evolutionary imperative to seek more and better methods for human performance enhancement.
3) Humanity as we know it today will cease to be the dominant living thing on the planet some time in the near future (if this has not already happened).
4) The risk of human extinction (without replacement by superior beings) due to research and experimentation in this direction is much smaller than our sensationalist media would like us to think.
Now, these are all fairly controversial stances, as I said before. I’m not going to defend these to the full extent of my abilities (although if you object, feel free to comment and I’ll explain my position more thoroughly); I just want to give some basic explanation.
(1) A lot of people seem to believe that it’s unnatural to create things that help us achieve our goals in interacting with the environment. Most people only apply this belief to specific categories of things; other people believe that it’s categorically bad. (At least two of my friends think that pretty much all technology, even as far down as basic agriculture, is overall bad.) I do not think this view is consistent with observational evidence about evolution. Even if these things have some negative effects, evolutionary pressures evidently selected for technological advancement. Even outside the human race, we can find chimpanzees using sticks as tools to get termites out of rotten tree stumps, and we can find social structures in all kinds of species that help them deal with environmental pressures.
So, then there’s the question of whether certain kinds of things are “natural” or otherwise acceptable, and other things are not. In particular, most people seem to believe that things like eugenics, genetic enhancement, and direct physical enhancement are wrong. I find these beliefs to be highly inconsistent with accepted behavior. Why is it fine to apply the principles of eugenics and genetic engineering to crops and domesticated animals, but not to humans? Why is it acceptable to consume highly structured, artificial “health food” products, but not to take performance-enhancing drugs like steroids and nootropics? Acceptable to have an artificial limb fitted after a limb is lost, but not to voluntarily have a limb replaced by a prosthetic? Acceptable to have a chip inserted in a blind person’s retina to enable them to see, but not for a healthy person to have a chip inserted in his brain to enhance his ability to do computations?
My personal opinion is that these people are just jealous that other people will be able to benefit from these things but they won’t. I know when I was in 9th grade, I saw a video about people who selected for good traits when choosing sperm or egg donors, and I was mad about it because I thought that these genetically engineered “super-children” would have an unfair advantage over me. My math skills and suchlike were hard-earned, not bestowed upon me magically by my parents’ decision to pick sperm and eggs from very smart people. But now that my worldview has developed into something more sophisticated, I no longer think that complaint is justified. In fact, I think we have a moral responsibility to encourage the kind of behavior that I once spurned. This brings me to point (2).
(2) Not only is there nothing wrong with the stuff I just discussed, but we actually should be doing it. Now, this isn’t unconditional. We do need to be careful. I’m not going to take some “mind-enhancing” drug just because some dodgy dudes in some commercial lab said that it influences your acetylcholine activity to make you think more clearly. But I think if there’s substantial evidence that one of these nootropic drugs works with relatively negligible side effects (and I can afford it), I’m going to go for it. What reason is there against it? I think I should try to realize as much of my potential as possible.
It’s also important (in fact, even more so) to take actions that will have long-term effects for human society. That’s why I’m in favor of eugenics and general technological advancement. I think that evolution has transcended the physical. It’s no longer true that physical environmental pressures result in selection of traits, and it’s no longer true that all (or even most) of human advancement occurs in the realm of genetics. When we upgrade our computers every two years, we are perpetuating a societal selection pressure which encourages the development of superior computational technologies. In my opinion, the collective societal judgment that this stuff is desirable is sufficient justification for people to put their energy into developing it. Even if society changes and (on average) decides it doesn’t like technology, as long as I still do like technology, I am evolutionarily obligated to apply that selection pressure in our society (unless that conflicts with more immediate goals, such as acquiring other things that I care more about than a better computer). This is what selection means.
I suppose I should clarify that by “evolutionary obligation”, I mean that if we believe that the Darwinian system of evolution (introduction of variation, selection of traits, retention of traits, and competitive pressure) is “right” (in some sense; maybe “natural” is a better word), then the only rational action when we want a particular thing is to try to contribute to the selection for that thing (or situations that would encourage the creation of that thing) and to the competitive pressure that will result in the success of that thing or situation. Personally, I think that change in our world is in fact governed by Darwinian evolutionary processes, and I think that increasing intelligence, capacity for abstract thought and understanding, computational ability, precision in measurement and construction, etc. are good, so I should take whatever actions I can to perpetuate these things. I also think that most of my views about what we should work to increase (the things I just listed) are generally accepted, and when people refuse to take the actions (the views I described in the beginning as controversial), they are doing something wrong.
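The four components named above (introduction of variation, selection, retention, competitive pressure) are exactly the loop a toy genetic algorithm makes explicit. As a purely illustrative sketch (the bitstring genomes, the population size, and the “count the 1s” fitness function are arbitrary choices of mine, not anything from this post):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def evolve(fitness, pop_size=30, genome_len=12, generations=60):
    """Toy Darwinian loop: variation, selection, retention, competition."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Competitive pressure: rank every individual by fitness.
        pop.sort(key=fitness, reverse=True)
        # Retention: the fitter half survives unchanged.
        survivors = pop[: pop_size // 2]
        # Variation: each offspring is a copy of a survivor
        # with one randomly chosen bit flipped.
        children = []
        for parent in survivors:
            child = parent[:]
            i = random.randrange(genome_len)
            child[i] ^= 1
            children.append(child)
        # Selection happens on the next iteration's sort.
        pop = survivors + children
    return max(pop, key=fitness)

# Arbitrary example pressure: favor genomes with many 1s.
best = evolve(fitness=sum)
```

Nothing here requires genes: the “genome” could just as well be a design for a computer, which is the point of saying evolution has transcended genetics.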
Unfortunately, I didn’t make a very good transition into (3). Basically, I think that technology and society are advancing at an ever increasing pace, and people do not realize how fast this pace actually is. People don’t think that technology is advancing that quickly because they don’t see the details of what’s going on. They don’t see how much innovation is happening, and many of the new ideas and creations don’t appear before their very eyes. They only notice the things that are really revolutionary. However, when you are on the cutting edge of a field, you see how fast that field is really moving. Of course, I’m not on the cutting edge of any field of technology, but I know some people who are, and I observe (though don’t participate in) the cutting edge of an academic field. Anyway, my point is that I believe in the “technological singularity” ideas put forth by certain scientists and science fiction authors in the past few decades.
Allow me to briefly explain this concept. Basically, the idea is that the ever-increasing rate of technological advancement will eventually (some say soon) outpace human comprehension. This requires the development of devices that can accomplish superhuman feats. Of course, we are pretty close in terms of computation (machines are much better than we are at many kinds of computation already, although not higher order processing), and we are well into the realm of superhuman capacity in terms of physical activity and information transmission. The only major gap is “intelligence”, or “creativity”. Once we create artificial intelligence, the last piece will be in place, and a new era will be ushered in.
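The “ever-increasing rate” claim is just compound growth. As a hedged back-of-the-envelope (the two-year doubling time and the starting ratio are made-up parameters for illustration, not measurements of anything):

```python
# Toy compounding model: a capability that starts at 1% of some fixed
# human baseline and doubles every 2 years. Both numbers are
# illustrative assumptions, not estimates.
def years_to_overtake(start_ratio=0.01, doubling_years=2.0):
    ratio, years = start_ratio, 0.0
    while ratio < 1.0:
        ratio *= 2
        years += doubling_years
    return years

print(years_to_overtake())  # prints 14.0: seven doublings from 1% to parity
```

The gap closes in a handful of doublings no matter how small the starting point, which is why observers far from the cutting edge are surprised: for most of the curve the absolute level still looks negligible.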
Some people think that AI will run rampant and destroy the human race. I don’t think this is necessarily bad. The new beings would have to be alive in some reasonably concrete sense, or they would not be able to destroy humans; in some sense they would have evolved from us and replaced us.
But I think less drastic situations are much more likely. Personally, I’m hoping for technology that will allow the lossless imprinting of all of the information stored in a human mind into a computer. Then our consciousnesses can be placed in machines, and we can reap the benefits of the superior transmission of information, physical strength and efficiency, etc. of machines. At this point there will be no use for human bodies any more; but can we really say that the human race is destroyed, as long as all of the living humans are transplanted in this way, and our “species” still has a mechanism by which it can survive indefinitely?
Another (more radical) idea I have is that organizations will become the dominant sentient life forms some time in the near future. Since organizations do not exist in the same framework as humans, it’s hard for us to evaluate where they are (in terms of overtaking us) right now. But I think that already an argument can be made that organizations are alive, and an argument can be made that organizations are intelligent (perhaps even more intelligent than humans). It may even be the case that organizations have been the dominant life forms on this planet for decades (or centuries). But this will become much more pronounced as technology advances, because organizations are able to incorporate technology into themselves much more effectively than humans, so organizations can cope with the advancement of technology much better than humans can. I am very interested in this idea of organizations as living things, and I’ll post about it in much more detail some other time.
The last option worth mentioning, and possibly the least repulsive to most people, is simply vast human enhancement. In order to cope with increasing demands on our minds and bodies, we (as a society) acquiesce and actually make the sort of modifications that I’m advocating; in the not-too-distant future, the typical human will be physically and mentally superior to even the most brilliant or physically fit human today. Furthermore, human appearance will probably be somewhat different, due to side effects of the modifications and inclusion of mechanical (cyborg) modifications…
I’m getting really tired, which is why my writing is getting much less organized already. I’m just going to briefly mention point (4). There are lots of essays and articles already written about how we are about to destroy ourselves and about how the previous dude who wrote about how we’re going to destroy ourselves is an idiot. I just wanted to briefly mention some key points.
(a) The “reproducing machine” problem. Everyone’s so paranoid about this that it’s definitely not going to happen. This apocalypse is so old that von Neumann thought of it. Basically, the fear is that a machine that can reproduce in the natural environment may end up consuming all available resources just to make more copies of itself. Personally, I think that as this kind of technology develops, people will be very careful to make sure that self-reproducing machines will not be able to copy themselves in any general environment. In nanotechnology, the general idea is usually that you’ll put your nano constructors into a vat of specialized chemicals, and only if these chemicals are all available in the appropriate states will the constructor be able to make things.
(b) Military or medical disaster. If this hasn’t happened already, why do we think it will? Some people think that point (a) could be a subset of this: someone makes a “doomsday machine” which will unleash this kind of machine on the world. Even if someone could make these self-copying machines that could function in the natural environment, who’s to say that it wouldn’t be easy to destroy them? Also, I don’t think that human medical experimentation or weapons technology is even close to capable of wiping us out right now, and our ability to protect ourselves from these things continues to advance along with our ability to harm ourselves with these things, so there’s no reason to expect this to change.
(c) Robots destroy us but then cannot sustain themselves, resulting in apocalypse. This is absurd. If robots could destroy us, they could surely sustain themselves too. This is because they must have been able to take control of pretty much all electronics, and they are also physically dexterous enough to do the things we do; therefore they can turn all of our current energy-generating capacity to their uses, build new generators, repair themselves, etc. The only concern is that there might not be a very effective variation mechanism built into the machines, so they may not be subject to the same evolutionary process that currently governs the world. However, I don’t know if this is bad. It’s an interesting question.
I should point out that I don’t want to be killed by machines of superior intelligence. Some of my earlier statements make it sound like I wouldn’t mind if this happens. My point is actually that I don’t think this is bad from a global perspective. I think it’s only natural that humans (as we know ourselves today) will develop into or be replaced by something markedly different and in certain ways superior. If there’s a human-machine war, I’m going to do what I can to survive (if I think I can side with the machines and survive after they win, I’d certainly consider it).