Archive for October, 2017

The Tremendous Importance of Labeling Consumer Products

Tuesday, October 31st, 2017

I really like the analogy between labeling advertisements on social media and labeling food products.  Since advertisements are in a way consumer products, they should be labeled so that consumers are aware of what they are ‘buying’ (believing), just as when they buy food products.  Many years ago, the FDA required all food products to be labeled with allergen information, which got a lot of pushback from companies because it cost them money.  Companies disliked the regulation because some consumers, once informed of the allergens, would no longer buy certain products, and because companies had to do more research and be careful about what they were putting into their products (especially ingredients that were outsourced).  In the same way, labeling advertisements will be resource intensive and will meet pushback.  As someone with an extremely severe peanut allergy, my life depends on the labeling of food products to decide what I can consume.  So I similarly find the labeling of advertisements extremely valuable, because it lets people know what they are consuming.  With food labeling, some companies blindly slap on labels such as “May contain egg, milk, and peanuts” as a lazy way to make sure they are not liable.  Similar issues could arise in labeling advertisements, but since I am extremely grateful for food labeling, which is beneficial overall, I think that labeling advertisements will also be mostly beneficial.

 

Our discussion of which news organizations to trust was quite interesting.  Personally, I do not think trust is the right way to think about news, because all news is biased.  Even ‘facts’ are biased: one could say that the Atlanta Falcons beat the New York Jets 25 to 20, emphasizing the Falcons’ victory, or one could say that the New York Jets lost to the Atlanta Falcons 20 to 25, highlighting the Jets’ defeat.  For information, I tend to look at a broad spectrum of news outlets and usually like to supplement with videos before believing anything.  The idea of looking to the stock market for ‘facts’ is quite intriguing, but the market can swing wildly in reaction to powerful individuals.  For example, when Hillary Clinton reproached Mylan (the company that makes the EpiPen) over Twitter, Mylan’s stock immediately plummeted, but later recovered.  Anything can be twisted or manipulated.

Big Data

Tuesday, October 24th, 2017

David Eaves’ outlook on the world is a lot less optimistic than that of the authors of our previous readings.  His comparison of the internet to the printing press, which first increased individual power and then increased the power of the state, is quite fascinating to consider.  First, the printing press gave millions easy access to more information.  Later, Napoleon was able to mobilize a one-million-man army only after the printing press had standardized language, history, identity, and nationality, which he exploited to his own ends.  Similarly, the internet has empowered the individual, but we have started to feel the threat of centralized governments controlling technology in malicious ways.  Looking to China, we can see that the government’s censorship attempts to suppress any rebellion against the government and to make one billion people think as the government sees fit.  Arguably, China is an example of Eaves’ outlook on the future, as people are in some ways being manipulated by the government.  Centralized, all-encompassing technology in China, such as Weibo, certainly seems appealing and convenient, but it unfortunately gives the government easy access to all of your data.  We need to be careful about how much data we are willing to give to technology that may help us in some ways but can be used maliciously.  At worst, collection of all this data could lead to the scenario depicted in Captain America: The Winter Soldier, in which a computer program determines who poses a threat based on everyone’s data records and then kills them.  Data has been extremely beneficial to us, but it could soon become detrimental.

 

I am not sure that open government solves the issue of data being used maliciously against people.  Open government does not necessarily stop the government from misusing the data it collects.  Open government does make it harder for the government to be malicious, but it does not prevent it altogether.

 

Jeff Bezos’ observation, as relayed by Eaves, that it is extremely difficult for large organizations to fundamentally change their processes and adjust shows how important it is to think carefully about the early stages and foundations of technological systems and databases, and about how they could be used in the future.  I think this also relates to our discussion of AI, and to how important it is that we are careful with AI in the beginning, as it will become increasingly difficult to change the core values of AI systems.

AI – Splendid or Skynet?

Tuesday, October 17th, 2017

As we looked to the future with AI, our class became more like a philosophy and ethics class than a STEM class.  An interesting question to consider: would making an AI assistant with human-level or greater intelligence your servant be slavery?  Since the sole purpose of all our digital devices currently is merely to do our bidding, as we desire and as quickly as we desire, our devices are already slaves to us.  So of course AI assistants would be slaves.  Following this logic, I believe that our future would likely mirror Skynet in Terminator, in which machines rule over humans.  At worst, futuristic super-intelligent machines would view humans as a threat, a danger to the environment, or a waste of living space.  At best, they would ignore us or even help us, but why would they do so?  Humans do not truly help animals that we regard as less intelligent beings, so why would super-intelligent machines help us?  From a risk/reward perspective, I am skeptical of the benefits of creating a fully autonomous AI system that is more intelligent than humans.  If we can control the function of the AI system, then the system is useful to us.  But as soon as we lose control, the AI system becomes useless at best and dangerous at worst.  AI would be extremely beneficial if we could direct it to solve a problem such as cancer, but I find the autonomous super-intelligent AI system not only useless but frightening.  AI has great potential to solve problems humans cannot tackle and to improve our lives in ways we cannot imagine, but we need to stay in control, or else we risk allowing the Skynet scenario to play out.

 

Our discussion also touched on ethics when we considered how, in the beginning, voice recognition and face recognition had trouble with women and African Americans, respectively, due to the composition of the development teams.  An issue with machine learning is that the biases in our large data sets will be picked up by the machine.  This illustrates a tangible benefit of diversity in the technology industry.  From this, we can also see that the creator of a machine learning or AI system holds much power in determining how the system interacts with the world.  When dealing with the unknown, we need to be careful with developing AI in its early stages so that we do not face Skynet in the future.

Risk/Reward

Tuesday, October 3rd, 2017

Plato’s argument that people should not learn to read and write because their memory capacity would decrease is extremely intriguing to consider in the context of technology’s role in human retention and general thinking ability.  Studies have shown that most people’s reading attention span has decreased in an era dominated by 140-character tweets.  Personal conversations and human interactions have suffered since the introduction of mobile devices and Facebook.  This phenomenon is presciently described in Ralph Waldo Emerson’s essay “Self-Reliance,” in which he asserts that society never truly advances, since progress in one dimension results in regression in another.  While human memory has deteriorated, first with reading and writing and now with technology such as Google, knowledge has become widely available to all.  Similarly, the rise of everything becoming connected to the internet brings conveniences, such as your phone automatically telling you, with no prompting, how long it will take to reach your likely destination, but it also poses the problem of an enormous amount of data that could be used against you in harmful ways.  A person with ill intentions could use the same phone data that predicts where you intend to drive.

 

Personally, I am wary of new digital assistants such as the Amazon Echo and Google Home due to potential privacy and security issues.  That these devices are likely recording you at all times and storing that data forces consumers to trust that the company will never use their data maliciously and that the data will never be compromised, neither of which can be guaranteed forever.  In addition, I am not sure whether telling Alexa to turn off the lights is better than flipping the light switch, or whether telling Alexa to order an item from Amazon is better than doing so on your computer.  While asking Alexa to buy products is faster, one could buy things and regret doing so later, which is less likely if one has to spend time and effort buying on the computer.  Technology certainly makes our lives more convenient in some ways, but it adds more risk.