
My TWSBI 580 AL Blue arrived on Monday. I didn’t really get to use it until Tuesday evening though – I was preoccupied with other stuff, namely debate preparation.

It’s pretty nice. I’ll probably take pictures of it soon (actually I already have; I just want to make it sound like I need some more time so I can delay posting stuff).

Anyways, the pen is not what I wanted to talk about. Beyond the ever-continuing trend of commenting on my own intermittent posting, I actually have something interesting (at least I personally think so) to think through.

And that’s Google’s self-driving cars.

Actually, it’s not about Google’s cars per se, but all self-driving cars. It’s just that Google is the most popular/well-known, so I went with them.

Today, I had COMM394 “Government and Business”. The name doesn’t give much away, and even 2.5 months into the ~3 month course, I’m still not sure what it really means. It’s too vague and weird. They also cover something like 5 different course topics at the intro level, 3 of which I’ve taken the ‘full’ course (nothing can ever be a complete course) variant of at SFU: environmental economics, business ethics and ethical philosophy, and tax economics. If I have anything to say for this course, it is that it’s an awful course. I’m coming from a position that most students won’t find themselves in, but the way this course introduces and then skims over all the topics really makes it hard for me to separate what COMM394 taught from what I learned in those previous SFU courses. It’s actually super confusing, because I’ll remember, say, the pollution abatement curve from ECON260 at SFU, but then when I have to do COMM394 abatement graphs, it’s completely different. They use different lines and different signs for things, and it really makes me struggle. As an analogy for the select few who took chemistry: it’s like learning about electron shells and their 1s2 2s2 2p6 configurations and the varying numbers of electrons for the transition metals, then going back to beginning science where electrons just fill shells as 2, 8, 8. The correct solution to this issue is just for me to learn the new material very well, but first I gotta complain about it; it’s only fun that way.

Anyways.

In class today, my professor went over some ethical questions. The good old trolley car with the 1 or 5 bound victims appeared – as expected. There was also the kill-baby-Hitler question, which I’ve changed my mind about (from Yes, kill the poor soul, to No). But the most interesting one I wanted to talk about was the one on fully automated self-driving cars, aka: these things. Excuse the whole promotional-commercial feel to it; I just chose the first video I found.

Automated cars are something that Google and other companies (I think Tesla had a fully electric automated car; there’s a video of that somewhere) have been researching and developing for some time now. It’s not that far from being an actualized concept. You could have easily passed by a few of these cars over the last few months and not realized it. What I’m trying to say is that these things are real, they are out there, and no, they won’t kill you.

At least until you’re put into this situation.

There you are, merrily enjoying your car ride in your brand new (or no longer brand new, up to you) automated self-driving car. You’ve ridden in self-driving cars before, and you know these streets are usually empty at this time. You set the car to autopilot to bring you home in a safe and normal manner. You lean back and close your eyes for a bit as the car sets off. You open your eyes a few minutes later to see that the car is on the highway and has almost brought you home. Suddenly you see a pedestrian about 50 meters in front of you, crossing the single-lane road that you’re currently going 90 km/h on. The first thing you think is “WHAT IS HE DOING HERE?!”, only to be followed by “WHAT DO I DO NOW?”. There’s no time for you to stop. In fact, there’s not even enough time for you to press the brake pedal or turn the steering wheel. It’s all up to the automated driving system to make a decision.
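Just to sanity-check the “it’s all up to the car” part, here’s a rough back-of-the-envelope calculation, assuming a ~1.5 second human reaction time and ~7 m/s² of hard braking – generic textbook-ish figures I’m assuming, not anything from the scenario itself:

    # Rough check of why the human in the seat can't do anything at 90 km/h
    # with a pedestrian 50 m ahead. Reaction time and deceleration are assumed.
    speed_ms = 90.0 / 3.6                                   # 90 km/h = 25 m/s
    gap_m = 50.0                                            # distance to the pedestrian
    reaction_time_s = 1.5                                   # assumed human reaction time
    deceleration = 7.0                                      # m/s^2, assumed hard braking

    time_to_pedestrian = gap_m / speed_ms                   # 2.0 s until the car reaches them
    reaction_distance = speed_ms * reaction_time_s          # 37.5 m covered just reacting
    braking_distance = speed_ms ** 2 / (2 * deceleration)   # ~44.6 m to actually stop

    print(time_to_pedestrian)                               # 2.0 seconds
    print(reaction_distance + braking_distance)             # ~82 m for a human to stop

Two seconds, and roughly 82 meters of human stopping distance against a 50 meter gap – by the time you’ve even registered what’s happening, whatever happens next really is up to the software.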

What can it do? Really it only has two options:

  1. Swerve to the left/right to avoid the pedestrian
  2. Continue going forward and hit the pedestrian

Obviously in this case, you want the car system to do option 1. That way the pedestrian lives, and nothing happens to you. Maybe as you drive away you roll down the window and yell at the pedestrian for being so irresponsible and stupid as to walk onto a highway in front of a car.

But what if to the right and left of your single-lane highway there are solid walls? Disregarding the bad design of having walls right beside a highway, swerving to the left or right will cause the car to crash straight into a wall. Add to that the fact that you’re currently going 90 km/h, and you will definitely die should the car hit the wall.

Now what do you want it to do?

The car can:

  1. Swerve to the side and crash into the wall, killing you, the driver.
  2. Continue going forward and kill the pedestrian.

According to the YouTube video I linked, Google’s self-driving cars are meant to emphasize safety: safety for the driver, and safety for the people around them. But in a case like this, which is weighted more?

Before you continue reading, make a decision. 1, swerve, or 2, continue? Yourself or the pedestrian?

It’s hard, isn’t it? To make it easier for you, here’s a third option:

  1. The car should save its driver.
  2. The car should save the pedestrian.
  3. The car should randomly choose between the two options.

Haha, it’s not that much more helpful, but at least there’s a fallback answer for you.

Have you made your decision yet? Yes? Good.

[Image: a diagram of the three scenarios – A, B, and C.]

Take a look at the image above. What I just listed was scenario B.

What about scenario A?

What if there were no walls, but swerving meant the car would definitely hit a pedestrian standing at the side of the road? It’s not the same guy who’s stupid enough to be unsafely crossing the road; in fact, it’s just a random bystander. And what if the 1 person crossing turned into 10 people suddenly crossing at once? What do you do now that the choices are:

  1. Swerve and kill the bystander.
  2. Continue and run over the 10 people crossing.
  3. Randomize between the above two answers.

If you value utilitarian theories, then you would pick #1. You would prefer the car’s computer system to make a value judgement and purposefully kill one person to spare the 10 others. Don’t make assumptions about what the people are like. Even if the innocent bystander has done nothing wrong, you can’t assume them to be a better or worse person than the 10 crossing. Actually, can you?

What if the car were to make a decision after weighing the value of each person? Say it scans body shape, gender, height, and apparent age, and makes a judgement based on which side’s people will benefit society the most. Say the innocent bystander will become the next Albert Einstein and bring forth new inventions beyond our wildest imagination. Do you suddenly weigh that person’s life over the other 10? What if the two values were equal? What then? Does blame fall onto how the computer system values each characteristic it sees in a person? Then we get into the question of whether or not the valuing is fair. Does the valuation reflect current society’s values, or does it try to predict future society’s values? Does the weighting of the values change depending on the society/culture/context that the car is in? Beyond all that, what if the car system can’t make an accurate judgement of the people in such a short time span? This whole idea seems very implausible. If humans can’t agree on how to value others, then how can we build a value-judgement system to be used for important day-to-day things like driving?
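To make the discomfort concrete, here’s a purely hypothetical sketch of what that “weigh each person” logic might look like. Nothing here reflects how any real self-driving system works; the Person attributes, the score_person() function, and the decision rule are all invented just to show where the problem sits:

    # Hypothetical sketch only: the attributes, scoring, and decision rule are
    # invented for illustration, not taken from any real system.
    from dataclasses import dataclass

    @dataclass
    class Person:
        apparent_age: int  # whatever the car could plausibly guess in a split second

    def score_person(p: Person) -> float:
        # Whoever writes this function is deciding whose life counts for more.
        # That is exactly the problem: there is no agreed-upon formula to put here.
        return 1.0  # placeholder: every person counts equally

    def choose_action(swerve_victims, straight_victims) -> str:
        # A utilitarian-style comparison: harm the side with the lower total "value".
        swerve_cost = sum(score_person(p) for p in swerve_victims)
        straight_cost = sum(score_person(p) for p in straight_victims)
        return "swerve" if swerve_cost < straight_cost else "continue"

    # Scenario A: one bystander to the side, ten people crossing ahead.
    print(choose_action([Person(30)], [Person(30) for _ in range(10)]))  # "swerve"

With score_person() returning a flat 1.0, this collapses into plain head-counting and swerves at the lone bystander. The moment you make that function return anything “cleverer”, every question in the paragraph above comes back: who chose the weights, whose values do they encode, and what happens when the scan is wrong?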

Then there is scenario C.

What should the automated car be programmed to do every time? The car can:

  1. Swerve into the wall and kill the driver.
  2. Continue and run over the 10 people crossing.
  3. Randomize between the two answers.

If the car should be programmed to value the driver’s life more than a pedestrian’s, should it value the driver’s life more than 10 people’s? If so, then at what point is the driver’s life not worth as much as the people crossing? 15? 20? 50? 100? If you say an automated car should always save the driver, then what happens when the situation is caused by the driver’s error (e.g. the driver pressed the accelerator while autopilot was on, or turned autopilot on just before crashing)? Even if it’s their fault, should the 10 or more pedestrians be punished for it? Also, what about fairness between lives? Why is it that the driver should always be saved, and the pedestrians, however many there are, always be the ones punished? The public backlash against whatever company ships automated-car software that always protects the driver would be awful.
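Put in code, a “driver-first, up to a point” policy would have to contain some number answering the “15? 20? 50? 100?” question, and that number is the problem. The sketch below is hypothetical; the threshold is made up entirely to show how arbitrary any choice would be:

    # Hypothetical "protect the driver, up to a point" policy for scenario C.
    # DRIVER_PRIORITY_THRESHOLD is an invented number, not from any real system;
    # any value placed here is an arbitrary answer to "how many lives outweigh
    # the driver's?"
    DRIVER_PRIORITY_THRESHOLD = 20

    def scenario_c_action(pedestrians_ahead: int) -> str:
        if pedestrians_ahead < DRIVER_PRIORITY_THRESHOLD:
            return "continue"  # protect the driver, run over the crowd
        return "swerve"        # sacrifice the driver into the wall

    print(scenario_c_action(10))   # "continue" -- the 10 people crossing lose
    print(scenario_c_action(100))  # "swerve"   -- the driver loses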

Then should the car always save the pedestrians?

That can’t work either.

What if it’s the pedestrian’s fault that we’re in this situation? Does the driver lose their life because the pedestrian is stupid? Even if that is so, what stops people from purposefully killing automated-car drivers by walking in front of their cars? That’s incredibly unsafe and nerve-racking to think about. Say this does get implemented, and a car company starts to sell cars with this system. If the public ever finds out, would anyone feel safe in their car anymore? Why would anyone buy a car that is programmed to kill them if it thinks that’s right? There’s an issue of loss of control here. Why should a program have control over my life? It won’t, because no one will buy it.

Then ‘random’ must be the best answer! If we can’t have the first or the second option, then the only option left is the third – random. Since I’ve given the option of ‘random’ a few times, you might have considered picking it. But if you give it any more thought, it’s never the right answer.

If it’s random, it’s a 50-50 chance for either option. That way, the pressure of making the decision is not there. However, you still have the problem of loss of control. What if in the situation there was a clear ‘better’ answer (not necessarily correct, just the more acceptable outcome)? For example, let’s say the following knowledge is known to everyone in the world. Our driver is an upstanding, just man who seeks to make sure everyone lives happily and safely. Then let’s say the pedestrian crossing the street had actually just murdered 163 people. Or that the pedestrian is holding 60 hostages, and is walking across the street in order to execute them one by one. In this case, the majority would most likely choose to run the pedestrian over. Killing the pedestrian is still bad, but killing the driver and allowing the pedestrian to live is worse. In a situation like this, the idea that the car would roll the dice to choose what to do is disgusting. Loss of control to an even higher degree.

What if it actually happened? Would anyone buy a car that randomizes between the two? While I think more people would buy this car than the previous two, it still wouldn’t be a good product to sell. Would consumers buy a car that may or may not kill them? Would pedestrians like having cars around that may or may not turn towards them and kill them? Would society benefit if the cars were programmed to randomly choose an option? Just imagine all the cars in today’s society being programmed to randomly choose between swerving and continuing. Not only is there no control over the car’s system, there’s also no longer any clear expectation of what will happen. If there is a clear expectation, people can work around it and lower the risk. If there isn’t, there is little hope of working around it – the risk stays constant no matter what anyone does.

In a way, this problem will indefinitely hold back the production of fully automated cars. While that means we might never get the joy of not having to drive ourselves home, keeping those cars out of reality is probably a net benefit for society. That is, until a good solution to the problem is discovered.
