
Redefining Humanity

Having a discussion about artificial intelligence at the level we had in class today would have been unimaginable a decade ago.  If a spectator who did not know much about science fiction and the development of technology were to walk into class and observe us having our discussion, they would have thought that we were trying to write the next sci-fi hit that would oust 2001: A Space Odyssey and The Terminator.  Putting this into perspective, we were talking about an issue that seems like deep science fiction as if it were science.  Unfortunately (fortunately for the technophiles), artificial intelligence and the Singularity are science.  They are happening every day.  It might appear crazy to have these conversations, but these are the conversations we must have, because at the rate technology is developing, we will eventually face the issue of superintelligent machines that could replace humanity.  Today, we are going to discuss multiple topics in AI.  It won’t be as nicely structured, but a discussion about AI will help us slowly structure our humanity for the future to come.

There are multiple theories about AI and the Singularity, but one thing that often gets ignored in sci-fi is the fact that right now, AI is in the control of companies like Google or Facebook.  These are the most sophisticated AI systems humanity possesses, and they are in the hands of the few and wealthy.  This should raise huge red flags, because if the predictions about the development of AI turn out to be correct, the AI of the future won’t be some independent god-like entity.  Rather, it will be a tool of a company, and that company and its leaders will be the gods.  Google might not intend to be worshipped, but it is placing itself in the position to be.  Dean Smith stated that these companies want to develop “pervasive, ambient AI” because they want to give their customers a great experience with their product.  We must not be fooled by this, because what we gain in luxury we lose in privacy and control over our lives.  The kind of AI these companies want is AI that is literally your shadow, following you everywhere you go, omnipresent and omniscient in your life.  With the sophistication of data-mining technology, we are ‘freely’ giving out our information.  As Professor Waldo asked in class, how much data should we give to these companies?  Unintentionally, we are placing ourselves in the position of subordinates.  Being aware and thinking about the proper action to take is crucial at this point, before it is too late.

I believe it was Professor Waldo and Amanda who nicely laid out three potential futures for humanity if the Singularity does occur.

  1. The Singularity enables us to create a “Heaven-on-Earth” scenario, where people are allowed to pursue higher levels of work and enjoy the pleasures of life.
  2. A dystopian future results, where we are subjugated and living life as a sentient being is miserable.
  3. We coexist with the AI, but it just ignores us.  It pursues greater things, while we sometimes use it for our humanistic pursuits.

Although I am a techno-optimist and I am looking forward to a future where we have a superintelligent machine, I find myself siding with Option #2.  AI might not necessarily eradicate us, but it would heavily change our answer to the question: what is the purpose of life?  Careers would be meaningless because AI would be able to do them for us.  Sure, we might have more time available to focus on higher-level thought, like creativity and theory, but at the Singularity level, AI can already do those things better and faster than us.  It is like the logic of going to the store rather than raising your own food.  Why should I spend my time farming crops and raising livestock when I know there is cheaply produced food in the store that would take me a lot less time to buy?  One might argue that you produce food for the ‘fun’ of it, but this is different.  AI makes life a lot easier for us, and we as humans are hardwired to accept the easy way.  I don’t need to grow food because I can buy it at a store, I don’t need to calculate all the values of sine because a calculator can do it for me, and I do not need to search for new YouTube content because YouTube can suggest what I should watch based on my viewing history.

Ultimately, we would face an existential crisis, because if there is nothing to do when things are done so efficiently for us, then what is the purpose of living?  It is hard to say, but we live for the struggle of life.  Biology is centered around the theory of evolution, and what drives evolution is the struggle to live.  Whether it is putting food on the table or debugging our code, there is a struggle in life that intrinsically drives us.  The Singularity has the potential to remove this struggle.  It seems that I am touching on point #1, because a Heaven-on-Earth would be created, but what point #1 subtly implies is that we will not be aware that life has a greater purpose.  For instance, in George Orwell’s book 1984, everyone is taught the principles of Ingsoc, where everyone lives to obey and serve Big Brother and the Party.  The protagonist Winston is able to see that this is not what life is because he was born in a time before the Party existed.  It is this knowledge that ultimately caused his demise.  He pushed to attain such a life and was willing to go through the struggle and risk his life because he knew there was a greater purpose than serving Big Brother: being free to live how he wanted.  The Singularity can put us in a situation like that in 1984; however, we do not necessarily need to be serving an all-powerful entity.  We will lose our humanity, our desire to know, to search, and to fight for what we want and believe in, because all of that is unnecessary in the world of the Singularity.  I argue that if we were to live in such a world, we would need to redefine what it means to be human and what life is in order to live peacefully in it.

As Professor Waldo has pointed out, what is interesting is that talks about AI are often more philosophical than technical.  In every discussion we had about AI, we were running on the assumption that AI will eventually develop to a level of superintelligence we may or may not want, or at least imagining what would happen if it did.  It might not seem that we have the technology to reach this level, but I would argue that we already do, especially with the advent of quantum computing.  For one thing, if we define the Singularity as something that has near-limitless computing power, then quantum computers offer that.  With just a few hundred qubits, the state space of a quantum computer holds more amplitudes than there are atoms in the observable universe.  This level of computing power already fulfills one of the requirements of being labelled the Singularity.  (By the way, the other assumptions of being a Singularity are that it builds on itself and that it does not need input from humans.)
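The qubit claim above is easy to sanity-check with plain integer arithmetic: an n-qubit register is described by 2^n complex amplitudes, and a commonly cited rough estimate for the number of atoms in the observable universe is about 10^80 (that figure is an assumption here, not a measured value).  A minimal sketch in Python:

```python
# Back-of-the-envelope check: an n-qubit register has 2**n basis-state
# amplitudes. Compare that to a rough ~10**80 estimate for the number
# of atoms in the observable universe (an assumed ballpark figure).

n_qubits = 300
state_space = 2 ** n_qubits       # number of amplitudes describing the register
atoms_in_universe = 10 ** 80      # rough commonly cited estimate

print(state_space > atoms_in_universe)  # True: 2**300 is roughly 2e90
```

So even 300 qubits, at least in terms of the size of the state being tracked, dwarfs any count of physical objects we could enumerate.  (Whether that translates into usable "computing power" is, of course, a much subtler question.)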

Ok, so what if we don’t define the Singularity like that, but rather as surpassing our human minds?  If we look deep down, we are all made of atoms that behave according to the laws of quantum physics.  Every reaction in our body is the result of quantum systems interacting with one another.  In essence, we are just a large-scale quantum system, and our mind is no exception.  And what is a quantum computer?  A computer that can simulate quantum systems.  In principle, quantum computers could simulate how our minds work, and because they are so efficient in the way they run, they have the potential to exceed what our minds are capable of.

There is a lot more I want to say about AI, but I believe these are the key things we need to think about.  AI has so much potential to move humanity up the ladder of existence, but it also runs the risk of destroying our reason to exist.  Maybe we should program it not only to be smart, but also to be human.  It is a complicated topic with no right answers, but it is something to look forward to thinking about and developing.

 

*PS, I would highly recommend watching the movie Ex Machina and CGP Grey’s video Humans Need Not Apply (link below).  It may not be explicit, but the themes covered here can be seen in both.  AI is my favorite food for thought, and these works are a few of the best choices on the menu to start dining on this topic.

https://www.youtube.com/watch?v=7Pq-S557XQU

1 Comment

  1. profsmith

    October 23, 2017 @ 12:27 am


    I share your concerns about corporations focused on AI, as I explain in my blog. Overall, I found it fascinating how each of the students focused on different things from our seminar this past week. I hope you and the others are reading each other’s blogs!
