
What can be Created, can be Destroyed.

This is a really interesting question about the future: will computers, like the Terminator, one day take over? Will computers become smarter than humans and be able to act independently of, and against, us?

Let’s talk about the possibility of this happening.

Theoretically speaking, this could absolutely happen. As long as Moore’s law holds for the next few decades, computers will have the processing power and functionality needed to model the human brain. The ability to act faster and more intelligently than us could become a reality, but I want to discuss two key ideas before we continue: intelligence and modeling computers after the human brain.
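At its core, Moore’s law is just compounding exponential growth. A minimal Python sketch makes the point; the starting transistor count and the two-year doubling period here are illustrative assumptions, not quoted statistics:

```python
def transistors(years_from_now, start=10e9, doubling_period=2):
    """Project a transistor count, assuming it doubles every
    `doubling_period` years (the classic Moore's law framing).
    `start` is an assumed, illustrative present-day count."""
    return start * 2 ** (years_from_now / doubling_period)

# Doubling compounds quickly: over 30 years the count grows ~32,000x.
for years in (10, 20, 30):
    print(f"In {years} years: ~{transistors(years):.1e} transistors")
```

The exact numbers matter less than the shape of the curve: each decade multiplies the count by another factor of 32.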

Intelligence is an ambiguous term. A good way to examine its definition is to look at Harvard students. If you look at the average admission statistics for Harvard College, you’ll notice that the College reports an average SAT score of 2260 and an average unweighted GPA of 3.97 for admitted students, two common measures of intelligence or “college readiness.” Considering that these students were among the top of their class and, on average, scored better than 98% of test-takers in America, I think many people would confidently conclude that Harvard students are intelligent. I’m not trying to challenge that assertion (many of my peers perform very well academically), but based on those statistics, all we can really say is that Harvard students test well and performed well in school.

In terms of intelligence, I cannot say that my peers are intelligent, because intelligence itself is so unclear. Is the best biologist in my class also an expert in the game of football? Does he share the football intelligence of the most famous coaches or players of all time? Possibly, because it is Harvard, but probably not. That does not mean he is not smart; it means he is intelligent in biology and whatever other subjects he excels in. How about the student who does not do well in biology but is extremely social? Is that student dumb? Definitely not: he is socially intelligent, just not so well-versed in subjects like biology. This is what I mean when I say intelligence is ambiguous: there are many different types, and no definitive way to measure them.

For the singularity to happen, many of us assume that computers have to outsmart humans, but we have no idea what “outsmart” even means. Does it mean to overpower us, to learn better and faster than we do, or some combination of both? This is a difficult question that I do not have a definitive answer to. The best answer I can offer is that, to be considered “smarter” than humans, computers must be able to learn faster and better than we do, and to put that knowledge to better use. If computers can do both, they will be able to teach themselves in every domain and become objectively smarter than humans.

Let’s move on to the idea of modeling computers after the human brain.

If computers are to learn more and better than humans and use that information better than we do, how do you actually build one? Do you model it after the human brain? Well, the brain is a network of chemical pathways, whereas the computer is a network of electrical pathways. Are chemical and electrical pathways equivalent? And we don’t even fully understand how the human brain works (good luck modeling something after a system we don’t understand). Perhaps you don’t model it after the human brain at all, and instead design something better and more efficient, since it is meant to outperform the brain anyway. The brain is also known for imperfect memory and for clashes between different types of intelligence, such as emotional and logical intelligence. Do you allocate enough memory so the machine remembers everything perfectly, or only partially, like us? Do you allow those clashes, or prevent them at all costs? The human brain is an intricate system, and abstraction would play a huge role in modeling anything like it, let alone better than it.

I think the singularity is really interesting, but I am a firm believer in a law of the universe: what can be created can be destroyed. If we were nearing the singularity, we could halt innovation through a ban, provided we can detect that point. And if such a machine has already been built and appears to have potentially detrimental effects, it should be contained and controlled the same way nuclear weapons are.

What can be created, can be destroyed. 

1 Comment

  1. school of applied science

    November 11, 2016 @ 4:35 am


    I think your website discusses complex issues; I love to read it.
    I’m glad you discuss them and provide solutions.
