Taking NFB’s Cardboard Crash VR App for a Spin

By Shawn Trommeshauser
(Dreaming in Digital)

Would you trust your safety to a computer algorithm? What about to the people who programmed it?

Cardboard Crash for iOS and Android is a deceptively straightforward Virtual Reality (VR) experience by Vincent McCurley and the National Film Board of Canada. In the middle of last month, it won the Digi Award for Mobile Entertainment, the 11th Digi Award to go to an NFB production. The app was first previewed in the DocLab program of the International Documentary Film Festival Amsterdam (IDFA) in 2015.

This title has a cute, stylised world filled with cardboard people, buildings, and cars. The cardboard textures add plenty of detail while keeping the world simple and angular. The music is pleasant and suits the game’s content. Nothing overstays its welcome, as the scenario is only a couple of minutes long.

The game has a very simple interface, and it doesn’t require any controllers or hardware beyond a VR headset such as the Samsung Gear VR or Google Cardboard. To activate a button, all you need to do is look at it for a few seconds. A voice clip plays when you highlight a button, so if you only want to hear its description without committing to anything, look away just before the selection is finalized.
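This "gaze dwell" selection mechanic is common in controller-free VR, and the logic behind it is simple. Here is a minimal, hypothetical Python sketch of how such a button could work (the real app is built in Unity; the class, timings, and names here are illustrative, not taken from the NFB's code):

```python
class GazeButton:
    """A button activated by holding one's gaze on it (dwell selection)."""

    def __init__(self, label, dwell_seconds=3.0):
        self.label = label
        self.dwell_seconds = dwell_seconds  # how long the gaze must be held
        self.gaze_time = 0.0                # accumulated gaze so far
        self.activated = False

    def update(self, is_gazed_at, dt):
        """Call once per frame; dt is the frame time in seconds."""
        if self.activated:
            return
        if is_gazed_at:
            self.gaze_time += dt
            if self.gaze_time >= self.dwell_seconds:
                self.activated = True   # selection is finalized
        else:
            self.gaze_time = 0.0        # looking away cancels the pending pick


# Glancing away just before the dwell completes cancels the selection:
button = GazeButton("Veer left")
for _ in range(20):                     # 2.0 s of gaze at 10 updates/s
    button.update(True, 0.1)
button.update(False, 0.1)               # look away -> timer resets
assert not button.activated
```

This is why looking away "just before the selection is finalized" works: the dwell timer resets the moment the gaze leaves the button.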

I had no problem with the motion tracking or response time using an iPhone 5s. The game was a little choppy at times, but I believe that’s simply due to the age of my phone. However, I experienced a huge drain on the battery: approximately 20% in less than 5 minutes of play time. I’m not sure whether it’s this particular game, the Unity engine it runs on, or simply too much for my phone to handle. So if you give this title a try, please make sure that your phone isn’t overheating as you play.

Spoiler Alert! I’m going to go into detail about what happens during gameplay. If you’re interested and have a VR-capable iOS or Android device, I suggest taking a few minutes to play through Cardboard Crash before reading any further. It is only about two or three minutes long.

You are about to crash your cardboard car and its adorable cardboard passenger into an overturned fuel tanker! The impact is inevitable so the game offers three choices: continue into the tanker, veer right and fall off a tall cliff, or veer left and run over a family before hitting a tree.


Once you make your choice, you are given much more information about the consequences of each possibility, and none of them are good. Are you sticking with your first choice, or is it looking less desirable now? You have time to decide, so look over your options closely. Unfortunately, there is no right answer, but a choice still has to be made.

Cardboard Crash asks what a self-driving car’s guidelines and priorities should be, and who gets to choose them. Self-driving cars in the real world have a stellar track record: Google’s own accident reports state that their test cars have been involved in 14 collisions, and in 13 of them the other, human-driven car was at fault. But when something tragic does happen and an AI has to choose who lives and dies, there is much more to consider. Who should control what the AI prioritizes? Several possibilities are suggested at the end of Cardboard Crash, but none offers all the answers in a satisfying way.

One possibility is for the insurance company to control the morality of the AI. This feels like it could lead to a scenario where the AI makes its choices based on the insurance provider’s bottom line. Would people trust their families’ safety to accountants and lawyers focused on profit? Yet people do just that every day already: the cars we drive, the tools and equipment we use, even the food we eat are made by profit-driven companies whose choices balance user safety against profit.

Lawmakers could regulate how these situations are to be handled. This is how the rules of the road work in the first place, after all. But laws change in an effort to improve over time. Will it take several incidents with ‘bad’ AI decisions to change the laws for the better? Does this take the entire concept of a decision out of the AI’s hands? How flexible is the AI allowed to be when lives are at stake?

The Car Owner could have control over the AI’s decision making. This would keep most of the responsibility for any incident squarely on the car owner. However, the system would have to be very simple to use, as most people don’t understand programming, and that simplicity could limit how much actual control the owner would have.


The final option offered is Asimov’s First Law of Robotics: an AI must not harm a human, or allow harm to come to a human through inaction. This could be a problem, as none of the available options in this scenario can guarantee it; someone is going to be hurt no matter which choice is made. Would morality and ethics be set aside and the choice made purely on the probability of the least potential harm? SHOULD a situation like this be decided that way?
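A purely probabilistic "least potential harm" rule like the one just described is easy to state mechanically, which is part of what makes it so uncomfortable. Here is a hypothetical Python sketch with entirely made-up probabilities and harm scores (nothing here comes from the app; it only illustrates the expected-harm arithmetic):

```python
def least_harm_choice(options):
    """Pick the action whose expected harm is lowest.

    `options` maps an action name to a list of (probability, harm_score)
    pairs describing that action's possible outcomes.
    """
    def expected_harm(outcomes):
        # Expected harm = sum of probability x severity over all outcomes.
        return sum(p * harm for p, harm in outcomes)

    return min(options, key=lambda action: expected_harm(options[action]))


# Illustrative numbers only: every option carries harm, as in the game.
scenario = {
    "hit tanker": [(0.9, 10)],            # near-certain severe harm: 9.0
    "veer right": [(0.5, 10), (0.5, 2)],  # cliff, coin-flip outcome: 6.0
    "veer left":  [(1.0, 5)],             # certain moderate harm:    5.0
}
print(least_harm_choice(scenario))  # -> veer left
```

The math is trivial; the hard part, as the game points out, is that someone still has to decide what the harm scores are and whose harm counts.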

The problem is that all of these choices add human bias to the decision-making process. Unfortunately, that distances us from what the game is trying to explore: whether an AI would or could be moral in a situation where there are no good choices. Turning a system as complex as driving a car over to an AI means confronting those questions before they ever arise on the road.

In the end, I find myself wishing for more scenarios to explore these concepts. Unfortunately, there is only one collision to work with, and while it is very well crafted, it just isn’t enough to help the player answer these questions.
