{"id":16,"date":"2017-10-17T18:53:31","date_gmt":"2017-10-17T18:53:31","guid":{"rendered":"http:\/\/blogs.harvard.edu\/matty\/?p=16"},"modified":"2017-10-17T18:53:31","modified_gmt":"2017-10-17T18:53:31","slug":"ai-splendid-or-skynet","status":"publish","type":"post","link":"https:\/\/archive.blogs.harvard.edu\/matty\/2017\/10\/17\/ai-splendid-or-skynet\/","title":{"rendered":"AI &#8211; Splendid or Skynet?"},"content":{"rendered":"<p>As we looked to the future with AI, our class became more like a philosophy and ethics class than a STEM class. An interesting question to consider: would making an AI assistant with human-level or greater intelligence your servant amount to slavery? Since the sole purpose of our digital devices today is to do our bidding, as we desire and as quickly as we desire, our devices are already our slaves. By that logic, AI assistants would be slaves as well. Following this reasoning, I believe our future would likely mirror Skynet in Terminator, in which machines rule over humans. At worst, the super-intelligent machines of the future would view humans as a threat, a danger to the environment, or a waste of living space. At best, they would ignore us or even help us, but why would they? Humans do not truly help the animals we regard as less intelligent beings, so why would super-intelligent machines help us? From a risk\/reward perspective, I am skeptical of the benefits of creating a fully autonomous AI system more intelligent than humans. As long as we can control the function of an AI system, that system is useful to us. But as soon as we lose control, the system becomes useless at best and dangerous at worst. AI would be extremely beneficial if we could direct it to solve a problem such as cancer, but I find a fully autonomous super-intelligent AI system not only useless but frightening. AI has great potential to solve problems humans cannot tackle and to improve our lives in ways we cannot imagine, but we need to stay in control, or else we risk allowing the Skynet scenario to play out.<\/p>\n<p>In a different vein, our discussion turned to ethics when we covered how, early on, voice recognition and face recognition had trouble with women and African Americans, respectively, due to the composition of the development teams. An issue with machine learning is that the biases present in large data sets will be picked up by the machine. This illustrates a tangible benefit of diversity in the technology industry. From this, we can also see that the creator of a machine learning or AI system holds great power in determining how the system interacts with the world. When dealing with the unknown, we need to be careful in the early stages of developing AI so that we do not face Skynet in the future.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>As we looked to the future with AI, our class became more like a philosophy and ethics class than a STEM class. An interesting question to consider: would making an AI assistant with human-level or greater intelligence your servant amount to slavery? Since the sole purpose of our digital devices 
[&hellip;]<\/p>\n","protected":false},"author":8868,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-16","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"jetpack_featured_media_url":"","_links":{"self":[{"href":"https:\/\/archive.blogs.harvard.edu\/matty\/wp-json\/wp\/v2\/posts\/16","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/archive.blogs.harvard.edu\/matty\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/archive.blogs.harvard.edu\/matty\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/archive.blogs.harvard.edu\/matty\/wp-json\/wp\/v2\/users\/8868"}],"replies":[{"embeddable":true,"href":"https:\/\/archive.blogs.harvard.edu\/matty\/wp-json\/wp\/v2\/comments?post=16"}],"version-history":[{"count":1,"href":"https:\/\/archive.blogs.harvard.edu\/matty\/wp-json\/wp\/v2\/posts\/16\/revisions"}],"predecessor-version":[{"id":17,"href":"https:\/\/archive.blogs.harvard.edu\/matty\/wp-json\/wp\/v2\/posts\/16\/revisions\/17"}],"wp:attachment":[{"href":"https:\/\/archive.blogs.harvard.edu\/matty\/wp-json\/wp\/v2\/media?parent=16"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/archive.blogs.harvard.edu\/matty\/wp-json\/wp\/v2\/categories?post=16"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/archive.blogs.harvard.edu\/matty\/wp-json\/wp\/v2\/tags?post=16"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}