AI is here

I’ve fallen behind in my posts and today I’m going to try to write two. This first one deals with our seminar last week on “AI, the Internet, and the Intelligence Singularity — will the machines need us?” We spent quite a bit of our time together discussing the AI singularity, but I’m going to focus here on the current rapid pace of change in artificial intelligence. It’s fun to imagine what it might be like to interact with a clearly intelligent machine, but as you can see from the students’ blog posts, it is really hard to come to consensus on what each of us would characterize as a clearly intelligent machine. And without consensus, our minds just run wild when we talk about what The Singularity — whether a point in time or a process over time — would look like.

With less imagination, what fascinates me is the practical advance of artificial intelligence in our daily lives. I have lived through several cycles of hype around how artificial intelligence would radically change our daily lives, and for the first time, it feels like it is finally happening. Siri was fun when it first came out, but it didn’t change my life and I never really used it. But about a month ago, my family got an Amazon Echo, and Alexa has changed our lives. While it is not perfect, we use it constantly. As a childhood fan of Star Trek (the original series), I feel like I have what Captain Kirk had when talking to the Starship Enterprise’s computer. Wow!

Outside the home, I’m astounded by the rapid adoption of self-driving technology. As someone who still drives a stick shift, I can’t say that I’ll be an early adopter, but I can’t deny that broad adoption of the technology is coming. And coming soon. In the New York Times today, the most emailed article is titled, “Self-Driving Truck’s First Mission: A 120-Mile Beer Run.” Perhaps it’s the reference to beer, but I would bet that this just shows how interested the general public is in self-driving technology. This particular technology comes out of Otto, a company owned by Uber and founded by researchers from Google’s multi-year effort in autonomous vehicles. Self-driving trucks aren’t just a research idea. They’re a business plan for Uber.

And the government has noticed too. About a month ago, the Times wrote an article titled, “Self-Driving Cars Gain Powerful Ally: The Government.” This is an important first step toward a future where our policies, regulations, and laws begin to catch up with the changes that technology is making on the nation’s highways and roads. It will be interesting to watch the battle between oversight and overregulation. And it’s good to see an early push to consider issues of safety, security, and privacy. Too often these issues have been left as an afterthought.

As we talked about in our seminar, self-driving cars are not just a technological challenge. In designing and coding for these cars, software engineers are making ethical decisions. How do you write the code that decides between an outcome that causes a car to swerve and hit a pedestrian and another that causes the car to swerve and injure the passenger? What software engineering practices do you put in place for situations that the artificial intelligence might encounter that are not as obvious to the designer as the example I just mentioned? At a college focused on a liberal arts and sciences approach to education, we need the humanities to be as strong as — and interacting with — our engineering programs as we enter this world of ubiquitous artificial intelligence.
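To make the point concrete, here is a deliberately toy sketch of what it means for ethics to live in code. Everything in it is hypothetical — the `Maneuver` class, the harm estimates, and especially the weights — and no real vehicle works this way. The point is that when an engineer writes such a function, choosing the weights *is* the ethical decision, and the empty-options fallback is where the “situations the designer didn’t anticipate” problem shows up.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    pedestrian_harm: float  # hypothetical predicted chance of injuring a pedestrian
    passenger_harm: float   # hypothetical predicted chance of injuring the passenger

# The ethical trade-off reduced to two explicit numbers that some
# engineer (or regulator) has to pick. These values are illustrative only.
PEDESTRIAN_WEIGHT = 1.0
PASSENGER_WEIGHT = 1.0

def expected_harm(m: Maneuver) -> float:
    """Weighted harm score for one candidate maneuver."""
    return PEDESTRIAN_WEIGHT * m.pedestrian_harm + PASSENGER_WEIGHT * m.passenger_harm

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    # Fallback for situations the designers did not anticipate:
    # if no candidate maneuvers were generated, default to a
    # fixed minimal-risk action rather than doing nothing.
    if not options:
        return Maneuver("controlled-stop", 0.0, 0.0)
    return min(options, key=expected_harm)

options = [
    Maneuver("swerve-left", pedestrian_harm=0.9, passenger_harm=0.1),
    Maneuver("swerve-right", pedestrian_harm=0.1, passenger_harm=0.6),
]
print(choose_maneuver(options).name)  # → swerve-right
```

Notice that changing `PASSENGER_WEIGHT` to 5.0 flips the decision — which is exactly why these choices can’t be left to engineering alone.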
