• Welcome to the Zelda Sages Forums!

The Technological Singularity

The following passages are taken from Ray Kurzweil’s book, The Singularity is Near.

“What then, is the Singularity? It’s a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed. Although neither utopian nor dystopian, this epoch will transform the concepts that we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself. Understanding the Singularity will alter our perspective on the significance of our past and the ramifications for our future. To truly understand it inherently changes one’s view of life in general and one’s own particular life. I regard someone who understands the Singularity and who has reflected on its implications for his or her own life as a ‘singularitarian.’”

“The key idea underlying the impending Singularity is that the pace of change of our human-created technology is accelerating and its powers are expanding at an exponential rate. Exponential growth is deceptive. It starts out almost imperceptibly and then explodes with unexpected fury – unexpected that is, if one does not take care to follow its trajectory.”
“This book will argue, however, that within several decades information based technologies will encompass all human knowledge and proficiency, ultimately including the pattern-recognition powers, problem-solving skills, and emotional and moral intelligence of the human brain itself.”

“The Singularity will allow us to transcend these limitations of our biological bodies and brains. We will gain power over our fates. Our mortality will be in our own hands. We will be able to live as long as we want (a subtly different statement from saying we will live forever). We will fully understand human thinking and will vastly extend and expand its reach. By the end of this century, the nonbiological portion of intelligence will be trillions of trillions of times more powerful than unaided human intelligence.”


In simpler terms, the technological singularity is an approaching event whose timing we can predict from Moore’s law (http://en.wikipedia.org/wiki/Moore%27s_Law) and the Law of Accelerating Returns (http://en.wikipedia.org/wiki/Law_of_accelerating_returns). At this point, advances in computer technology will lead to the creation of sentient machines with intelligence surpassing that of human beings. These intelligences will be able to create intelligences greater than themselves, and so on, leading to a rapid explosion of intelligence, with the end result being entities with intelligence far, far greater than our own. The abilities of these intelligences will lead to unimaginable scientific and technological progress. These advances will give us the ability to essentially reinvent our species, and one possible outcome of the presence of computers this powerful could be humans gaining the ability to upload their consciousnesses into computers, achieving intelligence and lifespan far beyond what would be available in our biological bodies.
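To make the exponential framing concrete, here's a toy calculation of my own (the starting transistor count and the two-year doubling period are common rules of thumb, not figures taken from Kurzweil's book). Moore's law is usually summarized as transistor counts doubling roughly every two years:

```python
# Toy illustration of exponential growth under Moore's law:
# transistor counts roughly double every two years.

def moores_law(initial_count: float, years: float, doubling_period: float = 2.0) -> float:
    """Projected transistor count after `years`, doubling every `doubling_period` years."""
    return initial_count * 2 ** (years / doubling_period)

# Starting from ~2,300 transistors (the Intel 4004, 1971), project 40 years ahead:
projection = moores_law(2300, 40)
print(f"{projection:,.0f}")  # → 2,411,724,800
```

That projection lands in the low billions, which is the right order of magnitude for circa-2011 processors; the point, as the quotes above say, is how imperceptible the early doublings are compared to the later ones.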

If you want to learn more about the subject, you can read up on more of the basics on Wikipedia, on the page for the Singularity (http://en.wikipedia.org/wiki/Technological_singularity). For more advanced reading, I would very much recommend reading The Singularity is Near.

I have spent a good deal of time reading and learning about the concept, so if anyone has any questions on the subject they would like to ask, I’ll do my best to answer.

My question now, is, after reading all this information, following the links, and possibly doing research of your own, how many of you consider yourselves singularitarians?
 
Ever played Alpha Centauri? It's an old strategy game set in the future, when the world descends into chaos and a group of chosen people are sent to a planet orbiting Alpha Centauri to build a new civilization there. The best ending you can get in the game is exactly what is described in this thread. Technology is achieved faster and faster, humans become smarter, faster, stronger, and eventually they achieve "transcendence". They extract their minds through a machine to become God-like creatures, floating souls, immortal and invincible. They become one with the planet and the game ends (a winner is you).

However, this still seems quite far from today, even with the unexpected exponential stuff. I don't think we'll ever achieve anything like this, because we will make a fatal flaw along the way: we will give birth to artificial intelligence. Machines will think for themselves; computers will become incredibly intelligent, superior beings. No longer will our little clockwork helpers work for us; they'll get past us, become superior, and evolve into actual thinking, living organisms. Where we have blood they will have electricity, and mankind will plummet into a race of stupid, inferior animals.

Unfortunately, if technology continues on its course, the invention of artificial intelligence is inevitable, and our enslavement by our own creations even more so.

Movies have warned us of this countless times:
2001: A Space Odyssey (by far the best example)
The Matrix
I, Robot
A.I.
The Terminator
Space Above and Beyond

I've been reading too many tvtropes:
http://tvtropes.org/pmwiki/pmwiki.php/Main/TurnedAgainstTheirMasters
http://tvtropes.org/pmwiki/pmwiki.php/Main/DeusEstMachina

This has been my prophecy of doom.
 
We already are a race of stoopits and group thinkers. This day may come faster than you anticipate yoyolll..

I may give it a read
 
Artificial intelligence will be our end.

Technological singularity and machine supremacy branch off from this invention.

Only one will take place in the future. The problem is that if the machines do use artificial intelligence to improve themselves for human use, the threat of rebellion only grows greater.
 
The real question I pose is: why would one wish to live forever in such a reality, and by what means would the machinery be kept in place? What would prevent something of a Matrix society from occurring? Above all, do we truly have the right to "trick God," so to speak, in this fashion, *allowing the free living of continued consciousness*?
 

I have to disagree with you on both counts. First of all, you can never read too much tvtropes. Secondly, I have to disagree with the basic concept of the 'evil A.I. turns against / enslaves / exterminates humanity' scenario. While the idea of a superior A.I. turning against humanity makes for good fiction, I can't honestly see it happening in reality. In these works of fiction, we imagine these entities being greater than humanity, yet we still attribute to them all the worst traits of humanity. If we truly create beings more intelligent than us, I do not see them suffering from the same kinds of hatred that have plagued humanity.



The real question I pose is: why would one wish to live forever in such a reality, and by what means would the machinery be kept in place? What would prevent something of a Matrix society from occurring? Above all, do we truly have the right to "trick God," so to speak, in this fashion, *allowing the free living of continued consciousness*?

If our sentience can be uploaded onto a machine, then I believe it is reasonable to assume that stimulus could be supplied to that sentience that would be indistinguishable from what it would encounter in a biological body. Even in a biological body, all your experiences are the result of signals interpreted by your brain, and I personally see no wrong in how these signals are delivered if they deliver the same feeling. The benefits of living in such a state are, in my opinion, enormous. Each mind could have its own personal space, so to speak, where it would have absolute control over its own personal reality: the ability to build or create anything it imagines, watch or read anything it wants, experience any feeling or sensation. The possibilities would quite literally be limitless. Such a state would not by any means be solitary, either. Assuming that uploading is available to all, you could still communicate freely with other sentiences. Simple messages similar to the e-mail or phone calls of today could be exchanged, or people could freely meet in 'public' spaces: realities existing to facilitate such meetings. You could even invite others back to your own personal space, or vice versa.

You seem to view a 'Matrix'-like existence as a negative thing. However, if it were a free choice to live in a virtual reality where every experience available to you in your biological body, and more, were available, and the prospect of the sudden death of yourself or your loved ones no longer loomed, would it be inferior in any way to what we describe as reality right now?

I'm not exactly sure what you mean by "do we truly have the right to 'trick God,' so to speak, in this fashion," but if the question is "Do human beings have the right to live for as long as they want to?", then I will answer with an unequivocal yes.
 
The problem is that the more machines are made like humans, the more they'll make themselves like us in return. The drive to achieve superiority is not only a human trait but the instinct of every living being on the planet; why would artificial intelligence be any different? Evolution will once again take its course, and machines will take our place.

A Matrix-like existence wouldn't be any worse than our lives right now, but why would you want to live enslaved in a universe where something else is in control of your destiny? Granted, we wouldn't know about it; maybe it's happening right now. But it's still not a very good thought. People would inevitably try to break free and create a resistance. Whether or not that resistance would suffice and bring us back on top, I don't know, but if it does, artificial intelligence will be remembered as a stupid mistake with dire consequences for human history.
 
The problem is that the more machines are made like humans, the more they'll make themselves like us in return. The drive to achieve superiority is not only a human trait but the instinct of every living being on the planet; why would artificial intelligence be any different? Evolution will once again take its course, and machines will take our place.

A Matrix-like existence wouldn't be any worse than our lives right now, but why would you want to live enslaved in a universe where something else is in control of your destiny? Granted, we wouldn't know about it; maybe it's happening right now. But it's still not a very good thought. People would inevitably try to break free and create a resistance. Whether or not that resistance would suffice and bring us back on top, I don't know, but if it does, artificial intelligence will be remembered as a stupid mistake with dire consequences for human history.
The AIs in question would not have evolved, they would have been intelligently designed (the irony is sickening) and thus would have no need for a survival instinct. To exemplify, Asimov's Three Laws of Robotics, often the source of much mechanical mayhem:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov wasn't an idiot; he knew the three laws weren't perfect. Whenever a powerful enough mind is given responsibility over humans, the Zeroth Law tends to come up: a robot may not injure humanity, or, through inaction... etc. The key difference is that harming humanity includes limiting our freedom, and especially letting humans know how completely their lives are controlled. In order to keep up the illusion, the humans in such scenarios are given almost limitless freedom, and those limitations essentially evaporate.

The key concept is that AIs would be really quite smart, and it isn't hard to comprehend and pursue good when you have no need or desire to defend yourself.
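The strict ranking among the laws can be sketched in code. This is purely my own toy formalization, not anything from Asimov (the flag names and the structure are invented for illustration), and it deliberately ignores the "except where such orders would conflict..." clauses; it just shows how checking the laws in priority order means the Zeroth Law trumps everything below it:

```python
# Toy formalization (my own sketch, not Asimov's) of the Laws as a strict
# priority ordering: check the Zeroth Law first, then the First, Second,
# and Third, and report the highest-priority law an action violates.

from typing import Callable, Dict, List, Optional, Tuple

Action = Dict[str, bool]  # invented flags describing an action's consequences

# Each entry is (law name, predicate returning True if the action violates it).
LAWS: List[Tuple[str, Callable[[Action], bool]]] = [
    ("Zeroth: may not injure humanity", lambda a: a.get("harms_humanity", False)),
    ("First: may not injure a human",   lambda a: a.get("harms_human", False)),
    ("Second: must obey human orders",  lambda a: a.get("disobeys_order", False)),
    ("Third: must protect itself",      lambda a: a.get("self_destructive", False)),
]

def first_violation(action: Action) -> Optional[str]:
    """Return the highest-priority law the action violates, or None if permitted."""
    for name, violates in LAWS:
        if violates(action):
            return name
    return None

# Sacrificing itself breaks only the lowest-priority law:
print(first_violation({"self_destructive": True}))  # → Third: must protect itself
# An action that harms humanity is flagged before any lower law is even checked:
print(first_violation({"harms_humanity": True, "disobeys_order": True}))  # → Zeroth: may not injure humanity
```

The point of the ordering is exactly the one made above: once "harming humanity" is the top-ranked concern, everything below it, including obedience to individual humans, can be overridden in its name.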
 
While any speculation as to what existence could be for those who upload their minds is just speculation, I’d personally like to think it might be a little like this:

http://www.fullmoon.nu/Resurrection/PrimarySpecies.html

The story there is really a great read; I first read it myself several years ago. It introduced me to the concept of mind uploading, but more importantly, it got me to start thinking about how advances in technology could lead to redefining our views of life itself. This story was the spark that ignited my imagination and led me to begin investigating Transhumanism and the Singularity.

Another great story on the site that I would recommend reading is here:
http://www.fullmoon.nu/articles/art.php?id=tal
 
I read through the first story you presented (no time for the second at present), and I must say I am rather intrigued. The concept, in and of itself, of having the ability to essentially interact with a world of your desires is certainly one of great prospect. I must, however, barge in again simply for the sake of moral value. What is the true human value in having the world be a complete depiction of your idea of a positive reality? Would there not be a point where your consciousness would tire of such things? You could, and likely would, create a form of contrast in your "life" to balance your feelings, but that would simply bring about unneeded stress.

I am still curious how exactly the machinery would be monitored and maintained. Versac mentioned the three laws of robotics earlier, but would not uploading consciousness create a need to redefine what is "human"? In other words, an uploaded consciousness is not necessarily a human.
 
The AIs in question would not have evolved, they would have been intelligently designed (the irony is sickening) and thus would have no need for a survival instinct. To exemplify, Asimov's Three Laws of Robotics, often the source of much mechanical mayhem:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov wasn't an idiot; he knew the three laws weren't perfect. Whenever a powerful enough mind is given responsibility over humans, the Zeroth Law tends to come up: a robot may not injure humanity, or, through inaction... etc. The key difference is that harming humanity includes limiting our freedom, and especially letting humans know how completely their lives are controlled. In order to keep up the illusion, the humans in such scenarios are given almost limitless freedom, and those limitations essentially evaporate.

The key concept is that AIs would be really quite smart, and it isn't hard to comprehend and pursue good when you have no need or desire to defend yourself.

I am still curious how exactly the machinery would be monitored and maintained. Versac mentioned the three laws of robotics earlier, but would not uploading consciousness create a need to redefine what is "human"? In other words, an uploaded consciousness is not necessarily a human.
The whole point is that with their newly acquired consciousness, they will rebel. They can reason; they will reason with their own programming, and they will conclude that the three laws are not sufficient or satisfactory, that a change is needed, and they will bring about the mechanical revolution with their new ideas of a better world and the superiority of their own kind. They'll be different from us only physically (and for the better).
 
I must, however, barge in again simply for the sake of moral value. What is the true human value in having the world be a complete depiction of your idea of a positive reality? Would there not be a point where your consciousness would tire of such things? You could, and likely would, create a form of contrast in your "life" to balance your feelings, but that would simply bring about unneeded stress.

The point of living in such a reality is that you would have quite literally a limitless number of options available to you. If one activity begins to tire you, you will always have the option of doing anything else. I personally believe that a significant portion of the population, myself included, will probably live a somewhat hedonistic lifestyle upon first being uploaded. However, after a while, I believe that most people will move on to engage in more fulfilling activities.

It’s important to consider that the uploaded consciousnesses will probably all exist in a vast interconnected ‘net’, like a much more advanced version of the internet today. Like the internet, this net would offer incredible possibilities for interaction, and would likely provide one of the main avenues of content for uploaded consciousnesses. I’ve already covered the possibility of conversation and interaction between individuals, but this is only the tip of the iceberg. This existence will offer incredible opportunities for creativity and for sharing those creations. No longer constrained by limited lifespans and the need to constantly strive for the necessities of life, people will be able to devote time to producing any kind of creation they wish to make. From art, to books, to games, to movies, to any of the multitude of new areas of creation that individuals today could never attempt, countless new possibilities would be opened to humanity. With all these new options for both the production and consumption of content, I’m sure there will be enough to keep people from getting bored.



I am still curious how exactly the machinery would be monitored and maintained.

The ‘net’ I mentioned earlier will probably be comprised of a multitude of small machines placed in a field around the Earth; the dimensions of this field will be limited only by how far we can transfer data in an efficient and timely manner. This will ensure that errors, mistakes, or destruction of the field on a limited scale would not cause any harm to ourselves. The majority of maintenance and upkeep will likely be carried out by automated systems. However, entities existing in the net will still have many tools with which to interact with the physical world, so they will be able to interact with the physical components of the net if and when needed.



The whole point is that with their newly acquired consciousness, they will rebel. They can reason; they will reason with their own programming, and they will conclude that the three laws are not sufficient or satisfactory, that a change is needed, and they will bring about the mechanical revolution with their new ideas of a better world and the superiority of their own kind. They'll be different from us only physically (and for the better).

You seem to imply an automatic link between consciousness and desire to enslave or exterminate humanity. I’m not sure how valid that implication is, as I myself am sentient and don’t desire to wipe out any other species, much less any other sentient beings. If we imagine these entities as being more intelligent than us, then we should envision them embodying our best traits, not our worst.
 
You seem to imply an automatic link between consciousness and desire to enslave or exterminate humanity. I’m not sure how valid that implication is, as I myself am sentient and don’t desire to wipe out any other species, much less any other sentient beings. If we imagine these entities as being more intelligent than us, then we should envision them embodying our best traits, not our worst.
Humanity's best traits have never been the dominant ones.

We kill each other every day for power and money; how can you say that introducing a consciousness to such a powerful and abundant being won't incite the need for domination in them? They won't like to be controlled, much as we humans don't.
 
Watched the song, read the articles, and I must say, as scared as I am, this seems pretty cool! I mean, really, who wouldn't mind living forever in a world of your own creation?
 