Sunday, August 20, 2017

Why Important Social Programs Should Be Government Funded


I often hear libertarians and "small government" types argue that many government funded programs, such as social services and health care, should be funded by private charity rather than taxpayer dollars. But even the most hardcore of them will insist that programs such as national defence must still be run by the government. I will put forward that they may be self-serving hypocrites.

We generally support government funded social programs (ultimately funded via taxpayer dollars, of course) because people don't necessarily know where help is actually needed, or how much money needs to go to it. When you look at how different charities are funded, people are far too swayed by whichever ones have the most media attention, spend money on advertising, deal with the scariest and most sensational problems, and so on. The boring but necessary causes, and the problems that affect only small numbers of people and don't get much attention, tend to be ignored. And it's almost certain that the money won't end up where it's needed in the correct amounts. Some things will get over-funded, some will get nothing.

Sure, you might save some money from inevitable government waste, but at what cost? Just because pharmaceutical companies are greedy and make mistakes doesn't mean we stop producing medicine. The fact that government run programs need reform isn't in itself anywhere near a sufficient reason to throw them away.

But here's the interesting argument:

If the small government types are actually correct, and people can be trusted to give charities as much money as is actually needed, then why couldn't national defence also be charity funded? A huge amount of current defence spending (in the US at least) goes to private contractors, with the government coordinating and doling out the money as needed, so for them the only real change would be where the money comes from. The rest, such as the actual soldiers, could still be government run but funded via public donation.

If people truly will donate money where it's needed, then surely the one case where this is most likely to work is the one that literally affects every person in your country: national defence. People can turn a blind eye to social programs that they don't directly benefit from, but no one can argue that they don't benefit from this. So if the charity model were ever to work, surely it would work in this case, right?

But even the hardcore libertarians aren't willing to risk the safety and security of themselves and their families by trusting public donations to fund it. When it affects them directly, they insist that the government must fund it and taxpayers must pay for it, because it's just too important. And this, in my opinion, is an admission that the charity-based approach to social services doesn't work, and that they don't believe in it enough themselves.

There will always be discussions to be had about how to distribute money to different social programs, how much to spend, how much tax is fair, and which things are enough of a public good or moral obligation that the government should fund them, rather than a "nice to have" that can be left to the whims of the people and how generous they feel. But if you're not willing to go all the way and say that the public can be trusted to voluntarily donate money where it's needed for everything, then when you suggest charity only for the government programs you personally don't care about, chances are you're being a hypocrite.

Sunday, April 30, 2017

Underappreciated Movies: Robocop (2014)


The 2014 remake of the much loved 80s movie Robocop wasn't very well received by critics or audiences. While I do understand some of the complaints about the movie, I also think it was very interesting in several ways, and it actually has quite a few clever and thought-provoking ideas, which I'd like to discuss here.

Firstly, though, let's be clear on where the remake fell short of the original. The PG-13 rating, while allowing for a lot more than it used to back in the 80s, meant that director Paul Verhoeven's trademark over-the-top violence was absent, and that's a big deal. It would be like remaking a Michael Bay film with a sensible number of explosions in it!

The other big flaw was the poorly written social satire. Verhoeven's sci-fi action movies such as Robocop, Total Recall, and Starship Troopers are all loved in large part because of their cynical satire and social commentary. The Robocop remake did attempt to include some of this, and while it worked in places (such as Michael Keaton's excellent performance), other instances, such as Samuel L Jackson's satire of a conservative Bill O'Reilly-type character, were forced and on-the-nose. Admittedly, this kind of satire is really hard to do well, but the remake falling short here nevertheless really cost it.

On the other hand, the remake had an excellent cast (Michael Keaton, Gary Oldman, Joel Kinnaman, Samuel L Jackson) and great special effects and production values, things that I think often get overlooked and taken for granted in modern movies, and in remakes in particular. While these things aren't enough in themselves to justify a remake or make it good, it is worth stopping occasionally and appreciating just how high the production quality is in movies that we so easily dismiss as "crap".

Where I think the Robocop remake really excelled is in the interesting and thought-provoking ideas it contained, and there were quite a few. Most of these were based around AI, robots, free will, and similar topics that have evolved a lot since the original movie came out in 1987. Back then, general audiences didn't know much about these things, and probably very few had even heard of a term like AI. Things have changed a lot since then, and many other movies and TV shows now tackle these concepts in much more sophisticated ways than 30 years ago (e.g. Westworld, Chappie). The general public is more interested in these topics, robots and automation are an increasingly normal part of our lives, and questions about the legal and ethical consequences of these technologies are no longer distant hypotheticals. It's exciting to me that topics I've been reading and thinking about for over 20 years are now actually interesting to other people!

Robots as Soldiers/Peacekeepers


The first big idea is the use of robots and autonomous machines as soldiers and peacekeepers. We already have autonomous drones and remote controlled robots used on the battlefield to limited degrees, but this is going to change dramatically in the not too distant future as remote operated and/or autonomous robots become viable replacements for soldiers. The general public has an increasing distaste for human death (at least on their own side) and for the taking of human lives by other humans, but we're surprisingly much less bothered by the use of machines. For example, people care much less about the innocent people who have been killed in drone strikes in the Middle East than they would if squads of soldiers had been out there mowing down innocents.

Dead soldiers also make for bad PR back home. The lower the costs of war to the general public, the more likely they are to support it. So there are a lot of incentives to automate military forces, and this is going to keep increasing. The movie does a pretty good job of extrapolating from current technology and trends to what is plausible in the near future.

Moral Accountability


With robot soldiers (and simpler machines like drones) come problems related to the ethics of letting machines kill human beings. This isn't a trivial thing, and allowing robots to freely kill humans under some circumstances could be quite risky and dangerous. Any time a software bug or a hardware glitch could result in a machine killing without an easy way to shut it down, that's something to be concerned about. And since a warfighting robot obviously needs to be resistant to attacks, and also resistant to being hacked by an enemy, it's not clear how well that can be achieved while still retaining sufficient control in case something goes wrong.

The movie looks at this issue and the likely public concern about killer robots, and its company tries to gain public trust by making a robot with a human in the loop. Unlike in the original movie, the human isn't added because the autonomous versions aren't very good. Rather, the human element is added in order to satisfy the public that there is proper moral accountability.

This is also one of the better cynical parts of the movie, coming close to the cynicism of the original, as the company looks for ways to appease the general public in order to get a piece of legislation about autonomous drones passed. The motive is entirely self-serving, with the company not caring about any of the ethical issues its technology raises, something that is a very real and pertinent concern in the present day.

Robocop is a Publicity Stunt


This was a great idea in my opinion, and a story change that is much better than the original: the whole premise is that Robocop is a terrible idea. They already have perfectly functioning autonomous robots, and the addition of a human element just makes one slower and less effective. The company knows this, but does it anyway as a publicity stunt to appease the public about accountability.

By making this switch, the movie makes the horror of turning a human being into a big lumbering robot that can no longer live any kind of normal life a lot more real. But it also opens the door to some interesting discussions in the movie about just how much the human part should be in control, how much that human should be drugged and manipulated in order to "function" correctly and have a healthy mental state, and in what ways the human mind can be enhanced to make it work better (at least as far as being a good "Robocop" is concerned).

Suppressing Human Emotions


In order to make Robocop function better, they experiment with suppressing his emotions, making him care less about his family and former life, and more machine-like. This raises interesting questions about the role of emotions in our own decision making, how they affect the way we prioritize our goals, and how excess emotional states negatively affect us. A lot of good research suggests that emotions are at the very core of human decision making, and if they were suppressed entirely we would quite possibly no longer be able to function properly.

Seeing the effects as they tinker with his emotions is a good reminder to us about being cautious with how much we tinker with our own emotions using drugs like anti-depressants, alcohol, and other things that can alter our moods.

The Illusion of Free Will


This was definitely my favourite concept raised by the movie, since free will is a topic that I've thought a lot about and I think it's one of the most misunderstood concepts not just by the general public, but even by many philosophers (as an aside, I highly recommend Sam Harris' short book Free Will as the best explanation of human free will that I've come across so far).

In order to achieve faster reaction times from Robocop, they come up with the idea of incorporating an autonomous AI into his system, which takes over decision making and control during combat. But when it is activated, he thinks that he's the one actually making all the decisions and is in full control. This is a really fascinating concept, raising the issue of just how sure you can be that you are the author of your own actions. Our minds rely on the illusion that we are making our own choices, and in general we're terrible at noticing when outside influences affect us. This is why, for example, so many people think that they're not affected by advertising, and are completely oblivious to the subtle ways their minds are being manipulated.

So to take this to the next level and effectively implant decisions into Robocop's brain such that he thinks he authored them himself is really interesting. And of course it also means that you no longer really have a human in control during combat, making Robocop effectively autonomous, and thus the whole idea of having a morally accountable agent in the loop is negated and Robocop really is just a pure publicity stunt at this point.

-----

I don't know if you find any of these ideas interesting, but hopefully you do, and maybe it's enough to encourage some of you to go and (re)watch the Robocop remake with fresh eyes.

Sunday, April 2, 2017

Whitewashing Movies and Color Rinsing Movies



Whitewashing Movies


Do you remember the broad Asian cast that had roles in the excellent Martin Scorsese film The Departed? No? Of course not: it's a western remake of the also excellent Hong Kong film Infernal Affairs.

Do you remember how much western audiences loved all the obvious pandering to the Chinese market in Transformers: Age of Extinction? No? That was the highest grossing movie of all time in China, but did somewhat less well in western markets.

When an international movie like Ghost in the Shell or The Great Wall gets made with white characters in it as an obvious way to appeal to western audiences, there are these days inevitable cries of "whitewashing". Without wanting to pretend that there is no history of racial casting issues in films, people seem to wilfully ignore the economic realities of getting $100 million plus films bankrolled and released. This would be fine if it made no difference, but I want to write about it because I think these responses are actually sabotaging the efforts that would help move the industry in the right direction, and hurting the very people the complainers claim to be trying to help.

It shouldn't be controversial to say that big name, recognizable actors tend to help a movie be profitable. Like it or not, people pay more attention to posters and trailers when they see an actor they know and like. Obviously there are always exceptions in both directions (successful movies with unknown actors, and big name actors unable to save a bad movie), but these actors don't command salaries in the millions of dollars because movie studios are idiots with too much money to spend.

Are the Chinese just a bunch of racist assholes because the inclusion of famous Chinese actors and Chinese locations in Transformers: Age of Extinction made them go to see that movie more? Are Americans a bunch of assholes because they generally prefer the white, American version of The Office to the white, British version?

People seem to think that there are only two choices: stay authentic and use lesser known actors of the "right" race, or whitewash with some token white actors. But they forget the other obvious option: remove the international setting altogether and set the movie in a western location with well known western actors (usually white, but increasingly less so these days, like how Dwayne Johnson is in everything).

So in the case of Infernal Affairs and The Departed, it probably would have been better for Asian actors in general if an Asian movie had been made with a couple of lead white actors to appeal more to western audiences, rather than having an American version with no Asian actors, and an Asian version that most western audiences haven't seen.

A mixed version is a compromise that allows for increased box office to make the movie profitable, while still giving roles to non-white people and giving those actors more exposure, helping them become more famous and maybe eventually be able to give studios enough confidence to give them major or lead roles. At the end of the day, you can't just make audiences pay to see a movie, and so as long as celebrity sells tickets, you have to work within that framework and provide a path for non-white actors to build up that celebrity.

Take another recent movie, The Great Wall. People complained about "obvious whitewashing" in the casting of Matt Damon in the lead role. The movie has made about $300 million internationally, but less than $50 million in the US, on a budget of about $150 million. The funny thing is, this movie was a collaboration between US and Hong Kong studios, intended to be the start of future collaborations making movies that appeal to both markets. It stars several big name Asian actors (including Infernal Affairs star Andy Lau), who all could have gotten a big international boost if US audiences had embraced the movie rather than shunning it. But in the cause of "fighting whitewashing", the complainers have actually just helped prove that those Asian actors can't draw a profit, or that this kind of joint effort simply isn't worth the risk again.

Color Rinsing Movies


What is quite funny is that the often maligned Fast and Furious franchise gets casting right. By making a generically enjoyable movie featuring action, fast cars, hot women, and, just as important, a very diverse cast, you guarantee wide appeal. I think Vin Diesel is responsible in large part for this direction, since he began producing the series with the fourth movie, Fast and Furious. Diverse casts and international locations have been a huge part of the franchise since then. He followed the same formula with the recent xXx: Return of Xander Cage, being sure to include a diverse cast featuring the legendary martial arts actor Donnie Yen, Bollywood actress and model Deepika Padukone, and even a cameo by Brazilian football star Neymar.

The fact that movies are struggling more to turn a profit at the box office (thanks to consumers having better options at home) is forcing studios to care more about international appeal. And this naturally leads them to cast international stars to appeal to those international audiences. Whether it's inventing excuses to include an international star, or giving a role to an international or non-white star that would otherwise have gone to a white star, the end result is that it now makes economic sense to give roles to non-white people.

The same forces that make studios whitewash movies are now also making them color rinse movies!

And this is a good thing we should be embracing.

Of course it can be a problem when movies are forced to do silly things to their story just in order to pander, and we should be wary of that. 47 Ronin is a good example: a movie that was only supposed to feature Keanu Reeves in a minor role (with about 15 minutes of total screen time) and otherwise have an almost entirely Japanese cast. But due to its bloated $200 million budget, the studio freaked out and forced re-shoots and re-editing to make Keanu Reeves' role and screen time as large as possible. The end result was a mess, and the movie did quite poorly as a result.

But when a movie is done well, we need to recognize and, more importantly, support mixed casting, to encourage studios to take those risks more often. In the end it's our money that decides what studios will take risks on. If we put them in an impossible position where they fear insufficient profits if they don't cast stars, and boycotts due to "whitewashing" if they use a mixed cast, they will end up doing an entire remake in a different setting like The Departed did, which doesn't help non-white actors at all, or they will just avoid the project entirely, which also doesn't help.

We should embrace movies that have diverse casts and not always see the glass as half empty and look for shit to complain about. When Fast and Furious 7 has a great diverse cast, complaining because it also has some gratuitous shots of women in bikinis doesn't help. When The Martian casts Chiwetel Ejiofor in a role that in the book is an Indian character (ignoring the fact that they actually originally wanted Irrfan Khan, but there were scheduling conflicts), complaining that the role "should have gone to an Indian" doesn't help. When Ghost in the Shell casts a wide variety of races in different roles but people complain because it's based on a Japanese anime, it doesn't help.

We have to remember that, at the end of the day, if we make the casting of actors a minefield for movie makers whenever they're unable to put a famous person in a role, then they're just going to say "Fuck it" and recast the whole thing in Boston and call it The Departed. If Ghost in the Shell had been set in New Chicago featuring a fully white cast, that would have been of no benefit to Asian or other non-white actors. But if we support mixed casting in movies rather than constantly finding some reason to say "better but still not good enough", then maybe we can actually get the progress we claim to want, rather than just doing a bunch of virtue-signalling back patting that in reality harms more than it helps.

Sunday, February 12, 2017

Dislike Uber For The Right Reasons


I've never been much of a fan of Uber or any of the other companies spearheading the "sharing economy". While there are certainly some positives to them, I see them primarily as a cynical way to skirt around regulations, laws, insurance, and other consumer protections that make traditional services cost more, but that generally leave consumers and employees better off. Like environmental regulations or drug testing requirements, these things are frequently demonized, by people on the right in particular, as stifling innovation and job creation, completely ignoring the criminal actions and negligence that typically precede the creation of any regulation in the first place. Companies usually do the wrong thing, causing the government to create regulations to protect consumers and employees; the government doesn't generally do this sort of thing just for the fun of it.

Of course, government regulations can be stifling at times, and they can be poorly policed and enforced, creating work for businesses while not actually providing any benefits to society. And in cases like Uber's, taxi services exploited monopolies in cities and didn't respond to consumer demands, creating an opening for an alternative to pop up. So this can help to justify the need for a disruptive company like Uber, though it's also very likely (given the rise of all kinds of other "sharing" businesses) that it would have popped up anyway, since Uber now operates in cities all around the world that already have quite reasonable taxi services, and thus no obvious need for disruption of dubious legality.

Given all of that, I find it quite galling to be here now defending Uber, at least in a certain way. Specifically, I'm referring to the recent #DeleteUber social media trend, which resulted from what I can only see as a completely misguided belief that Uber is supporting the Trump administration and needs to be punished. Now, setting aside the question of whether it makes sense or is reasonable to punish the entire workforce of a company because of the political opinions or actions of its CEO, what stuns me more is how people convinced themselves of this "fact" in the first place.

Service Pricing 101


Let's try a hypothetical.

Say you're Uber and there is some KKK rally going on somewhere. You don't want to support it, but let's say that for whatever reason, you don't want to straight out ban your drivers from doing any pick ups or drop offs near the area of the rally. How would you go about it?

Anyone who understands the Uber business model knows that when there is high demand at a particular time and place, Uber can't just order more people to work there like a traditional taxi service could, since driving is voluntary. So instead they increase the rate for rides in that area. This encourages more Uber drivers to get out and service that area in order to make more money. The flip side is that areas currently running at a higher rate will draw drivers away from areas with a lower rate. Quite straightforward.

So to discourage attendance at the hypothetical KKK rally, you would make sure the rate is lower in that area than in others, which would reduce the number of drivers servicing it, making waits longer. What you definitely wouldn't do is increase the rates in that area, which would increase service to it. Admittedly, if you went to the extreme and raised rates to an exorbitant amount, that would also reduce service in the area, since drivers would show up but no customers would be willing to pay.

Basically, to reduce service you either raise rates super high or make them super low.
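
To make the supply and demand logic concrete, here's a minimal sketch in Python. The numbers and functional forms are entirely invented for illustration; this is not Uber's actual pricing algorithm, just the general shape of the incentive structure described above.

    # Toy model: driver supply rises with the fare multiplier, rider demand
    # falls with it, and completed rides are capped by the smaller side.

    def drivers_available(multiplier, base_drivers=100):
        # More drivers choose to service an area as the fare multiplier rises.
        return int(base_drivers * multiplier)

    def riders_willing_to_pay(multiplier, base_riders=300):
        # Fewer riders accept a trip as the fare multiplier rises.
        return int(base_riders / multiplier ** 2)

    def completed_rides(multiplier):
        # A ride only happens when a driver and a willing rider match up.
        return min(drivers_available(multiplier), riders_willing_to_pay(multiplier))

    for m in (0.5, 1.0, 1.5, 3.0, 10.0):
        print(f"multiplier {m:>4}: {completed_rides(m)} rides")

Running this, a very low multiplier starves the area of drivers (0.5 gives 50 rides), an extreme multiplier starves it of willing riders (10.0 gives 3), and service peaks somewhere in between, which is exactly the lever described above.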

This fact can seem counterintuitive to people who have only thought about the Uber pricing equation from the customer side, and not the driver side. So counterintuitive, in fact, that Uber has been criticized for raising its rates during crises and disasters, because people think it is profiteering from tragedy, rather than understanding that this is how Uber manages supply and demand with an effectively volunteer workforce.

And as a result, Uber has had to respond by not raising rates during times when people would see it as profiteering unfairly from other people's misfortune. The massive irony is that under those circumstances, by not raising prices, Uber actually makes it harder for the people affected to get a car and get away!

#DeleteUber


So, we have a situation where protesters are at JFK international airport protesting an immigration ban, and taxi drivers announce that they will stop servicing the airport for an hour in support of the protest. Uber announces, after the strike, that it will turn off its surge pricing in the area, which will result in longer waits. And this is seen by many as a strikebreaking move, so they decide to boycott the company.

There are so many things wrong with this situation that I feel like I'm stating the obvious in pointing them out. But here we go:

  • If Uber actually wanted to strikebreak the taxi service, the right move would have been to increase surge pricing to get more Uber drivers to the airport area. What they did was in fact the strongest supporting move they could have done short of banning drivers from going there altogether.
  • Uber was acting to avoid criticism for price gouging people stuck at the airport, costing itself and its drivers profits as a result. They were putting their money where their mouth is.
  • Uber announced the stopping of surge pricing after the strike had taken place, not during it.
  • People were at the airport protesting people being stopped from entering the country, yet completely missed the irony of stranding other people at the airport by supporting the taxi stoppage and boycotting a company trying to provide people with a way to, you know, actually get further into the country than the airport food court.
  • People have long applauded Uber in cities like New York for providing an alternative to taxis and disrupting their monopoly. Yet now they suddenly want Uber to support the taxi industry? If you see Uber as a welcome disruptor of the taxi monopoly, you can hardly complain when you think it's doing something that goes against the taxi industry's interests. That wasn't actually the case here, but people should expect Uber to act against the taxi industry, since its business model is basically trying to put taxis out of business!

Punishment


A final point. I believe that a lot of the support for boycotting Uber came from existing unhappiness with the fact that its CEO was in the Trump economic advisory group (though he has since left due to all the negative pressure). I think people were looking for an excuse to punish the company, and so were far too willing to interpret what happened at the airport as bad behavior by Uber, even after explicit clarification from the company about its reasoning.

And yet, even here, it seems that people are so eager to lash out at the Trump administration that they're acting in ways that will be horribly counterproductive in the long run. This same advisory group also includes Tesla CEO Elon Musk, and the CEOs of General Motors, IBM, GE, and even Disney. Unless people are actively arguing for a boycott of all of those companies too, and think that someone like Elon Musk is a Trump administration lackey, they're really just being completely inconsistent and randomly punitive.

People complain when the Trump administration gets horribly unqualified people to advise it or appoints them to various positions, but then they complain and boycott when reasonable people take those positions. Don't you want people like Elon Musk advising the president, rather than some uneducated, ignorant person? How is it helpful to the long term prosperity of the US to punish people who are actually trying to provide the administration with good advice, in the hope that it will make better decisions?

There has to come a time when US citizens realize that, like it or not, they actually need the Trump administration to succeed, because the government can't fail while somehow all of the "right thinking" citizens prosper. You can hate your leaders all you want, and many people do in many countries, but trying to make them fail horribly is as sensible as trying to make your employer fail while somehow thinking that you can retain your job!


Tuesday, January 31, 2017

Defacing Wikipedia


We've all seen good examples of defacement or vandalism that are actually funny. People often come up with genuinely amusing and clever jokes as part of defacing something. Because of the humor, we regularly give this sort of vandalism a pass or are softer on it, compared to standard vandalism such as graffiti tagging or malicious website defacement. At the other end of the spectrum, we tend to come down extra hard on defacement that is offensive or inciting.

Recently an example of Wikipedia defacement was being passed around on social media that was fairly amusing. Someone had defaced a page about invertebrates and added a politician onto it, in reference to behavior seen as spineless by many. It's a fairly simple and funny enough joke, all things considered.

What bothered me, though, was the applauding of the defacement and sharing it around as a good thing. Because it was funny and people didn't like the politician, they were happy to cheer it on as legitimate political satire rather than try to discourage it as vandalism.

Wikipedia is far from a perfect resource. People frequently say that you should never trust it as accurate. However, given the noble project of collecting information for everyone to freely access, and allowing anyone to contribute, its editors do a remarkably good job of removing disinformation. When vandalism or gross misrepresentation of facts occurs, the moderators are normally very quick to restore the correct information. And they generally provide ample links to sources, so readers can verify content via third parties.

So while it might be true that Wikipedia shouldn't be used as a definitive resource when accuracy is vital, it makes an excellent first point of research for many things, and provides plenty of information to help readers jump off from there and validate details as needed. And for the many cases where only general background information is needed, minor inaccuracies probably don't really matter. This is of course why it is such a popular resource used by so many. Anyone old enough to have had to research information the hard way, without the internet, should appreciate just how lucky we are to have a resource like this.

The price of this, though, is that we need to show some collective responsibility and not make the task of maintaining accuracy harder than it already is for Wikipedia's volunteer editors. In the case of obvious political satire, no one is going to be fooled into thinking it's legitimate. But when we reward vandalism of the site with attention and kudos, we encourage others to do more of the same. And because Wikipedia is normally pretty good at restoring pages quickly, this also encourages people to make more subtle changes that are harder to spot, so that they stay up longer for greater bragging rights. And, of course, it helps create a general air of acceptance that defacing websites is okay.

There are so many places you can go on the internet to share and consume humor and political satire. Let's not ruin useful public resources just so we can have a 10 second chuckle when looking at our news feeds. It's hard enough to get reliable, true information on the internet these days. Let's not make it impossible.


Wednesday, December 14, 2016

Game Theories: On Emotional Dissonance


Back in the earlier days of gaming when the technology was much less advanced, games tended to avoid trying to elicit any kind of complex emotional response from the player, choosing to focus primarily on fun gameplay. But as the technology has improved and budgets for games have increased, we've seen games attempt to create sequences that provoke a strong emotional response from the player.

Typically we see this in the form of the cinematic cutscene. Modern AAA games have the tools and talent on hand to make cutscenes using all of the same tricks used in movies to manipulate the emotions of the player, including complex musical scores and detailed facial animations that communicate the thoughts and feelings of the characters richly enough for the player to buy in.

John Carmack once said, "Story in a game is like story in a porn movie. It's expected to be there, but it's not that important." And certainly some games follow this approach to a degree. But games in the first and third person shooter/action genres in particular almost always use cutscenes to set up the story and motivation for the player. Typically, the player is subjected to a cyclic cutscene/action/cutscene format, where each cutscene is supposed to provide motivation for the action in the next section, and to build the characters and story that make the player want to continue to find out what happens next.

The problem that I see with all of this is that there has long been a dissonance between the narrative that the cutscenes present, and the actual gameplay that takes place. Typically the player is tasked with killing dozens of people during a game sequence, often hundreds or thousands over the course of the entire game. But in the cutscenes, the game tries to make the player feel an emotional connection to the main characters, and often make the player care when a main character is hurt or killed. Since the player is playing through the body of the main character, this creates a dissonance between how the character "acts" during the gameplay sequences, and how they "act" during the cutscenes.

If you've just shot a few dozen people to death during gameplay, and are then presented with a cutscene where your character is distraught that a single person on their side has been killed or wounded, after which they vow revenge as the music swells and we see a close up of the determination on their face, it all seems rather silly and schizophrenic.

Or when your character gets shot repeatedly while trying to take down a fortified base, pausing briefly behind cover to heal each time, only to be mortally wounded in a final cutscene, it again feels wrong, since the game has established that getting wounded is generally no big deal to your character, and that the only time it matters is when the character is not in your control.

Or consider when a cutscene tries to establish the shock and horror that your character feels at their actions of killing, which is in complete dissonance with how you felt as the player playing that character. The game sets it up for you to enjoy running around killing dozens of enemies as this character, only to then try and convince you that the character actually feels bad.

When you combine emotional narratives with fun gameplay, it creates a huge disconnect when that gameplay consists of actions that would make the character look like a complete psychopath in the real world. We enjoy shooting enemies in the face or blowing them up with rocket launchers because in the context of a game it is fun. But none of us (bar the psychopaths) would in any way enjoy re-enacting that in real life against real human beings.

So when a game tries to meld what we do during gameplay with a believable character feeling normal human emotions during cutscenes, it creates dissonance, and the more realistic and lifelike games get, the deeper this dissonance will get.

I suspect that as game technology improves further, and particularly with the introduction of VR, we may start seeing games diverge more into ones that are largely story and character driven, focusing on player choices and exploration, and ones that are more action based. To some degree we already see this with multiplayer online shooters that are almost entirely about fun gameplay rather than story, and with single player games that, while still including combat, are more frequently starting to include "just the story" modes (a nicer way of labelling easy difficulty) for people who want to play mainly for the story and characters rather than grinding for hours in combat.


Saturday, August 27, 2016

AI And The Motivation Problem



What motivates us? What would motivate AI?
For many years now I've been fairly confident that the development of human level (and beyond) artificial intelligence is a matter of when, not if. In the 20 years since reading Roger Penrose's The Emperor's New Mind, I have seen plenty of attempted arguments against it, but nothing convincing. It seems inevitable that our current specialized AI systems will eventually lead us to general AI, and ultimately to self aware machine intelligence that surpasses our own.

As we see more and more advanced specialized AI winning at chess and Go, performing complex object recognition, predicting human behavior and preferences, driving cars, and so on, more people are coming around to this line of thinking as the most likely outcome. Of course there are always the religious and spiritual types who will insist on souls, non-materialism, and any other argument they can find to discredit the idea of machines reaching human levels of intelligence, but these views are declining as people see with their own eyes what machines are already capable of.

So it was with this background that I found myself quite surprised that, while on a run thinking about issues of free will and human motivation, I thought of something that gives me real pause for the first time about just how possible self aware AI may actually be. I'm still not sure how sound the argument is, but I'd like to present the idea here because I think it's quite interesting.

Motivation


The first thing to discuss is what motivates us to do anything: getting out of bed in the morning, eating food, doing your job, not robbing a person you see on the street. Human motivation is a complicated web of emotions, desires, and reasoning, but I think it all boils down to one simple thing: we always do what we think will make us happiest now.

I know, that sounds way too oversimplified, and probably not always true, right? But if you think it through, you'll see that it might actually cover everything. Take the simple cases where you subject yourself to something painful or very uncomfortable, like touching a hot stove, walking on a sprained ankle, or running fast for as long as possible. The unpleasant sensations flood our brains, and in the case of the hot stove we might react immediately, without conscious thought. For a sprained ankle or running, we make a conscious choice to keep going, but we will stop unless we have some competing desire that is greater than the desire not to be in pain. Perhaps you have the pride of winning a bet at stake, or friends watching and you don't want to look like a wimp. In these cases, you persevere with the pain because you think those other things will make you happier. But unless you simply end up collapsing with total loss of body control, you reach a point where the pain becomes too great and you're no longer able to convince yourself that it's worth putting up with.

For things like hunger, obviously we get the desire to eat, and depending on competing needs, we will eat sooner or later, cheap food or expensive food, healthy or unhealthy, etc. Maybe we feel low on energy and tired, and so have a strong desire to eat some sweet, salty, and/or fatty junk food, even though we know we'll regret it later. But if we're able to feel guilt over eating the bad food or breaking a diet, then we actually feel happier not giving in to the temptation. We decide whether we will be happier feeling the buzz from the sugar, salt, and fat along with the guilt, or happier with a full stomach of bland, healthy food combined with a feeling of pride at eating the right thing. And whichever we think in the moment will make us happier is what we do.

Self discipline, in this model, is then just convincing ourselves strongly enough that we want the long term win of achievement more than the short term pleasure of eating badly, watching TV rather than going to the gym, etc. If you convince yourself to the point that the guilt and shame of not sticking to the long term goal outweigh the enjoyment you get from the easy option, then you'll persevere, because giving in won't make you happier, even in the short term: you'll feel too guilty, and your nagging conscience won't let you enjoy it. If you can't convince yourself, then you'll give in and take the easy option. But either way, you'll do the thing that makes you happier now.
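
As a toy formalization of this model (my own sketch, not anything from the research literature), you can think of each option's "happiness now" as a single score with pleasure, pain, guilt, and pride folded in, and the chosen action as simply the highest scoring option. In Python, with all numbers invented:

    def predicted_happiness(option):
        # Net "happiness now": immediate pleasure, minus pain and guilt, plus pride.
        return option["pleasure"] - option["pain"] - option["guilt"] + option["pride"]

    def choose(options):
        # We always pick whatever we predict will make us happiest in this moment.
        return max(options, key=predicted_happiness)["name"]

    # A dieter who hasn't convinced themselves: little guilt attached to junk food.
    weak_resolve = [
        {"name": "cheesecake", "pleasure": 8, "pain": 0, "guilt": 1, "pride": 0},
        {"name": "salad",      "pleasure": 3, "pain": 0, "guilt": 0, "pride": 2},
    ]

    # The same dieter after building self discipline: guilt and pride loom larger.
    strong_resolve = [
        {"name": "cheesecake", "pleasure": 8, "pain": 0, "guilt": 9, "pride": 0},
        {"name": "salad",      "pleasure": 3, "pain": 0, "guilt": 0, "pride": 6},
    ]

    print(choose(weak_resolve))    # cheesecake (8 - 1 = 7 beats 3 + 2 = 5)
    print(choose(strong_resolve))  # salad (3 + 6 = 9 beats 8 - 9 = -1)

In both cases the chooser does exactly the same thing, picking the happiest-now option; self discipline just changes the weights.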

More complicated things such as looking after your children, helping out strangers, etc. might seem to go against this model, but if you think about what happens in your brain when you do these things (or pay attention while actually doing them), you'll see that they fit just fine. You look after your children because it feels good to do so, and even at times when it feels like a labor of love that isn't making you happy in the moment, you do it because what does make you happy is being able to call yourself a good parent. Fitting an identity that makes us proud of ourselves makes us very happy, and this can be a powerful motivator for helping people, for studying, for sticking out the long hours of a tough job, and so on.

I could go on here with plenty more examples, but hopefully I've at least given enough to make you consider that this model of motivation might be plausible. I know the tough part can be that it implies that all of our base motivations are actually selfish. We all like to think that we're nice people doing things because we're selfless and awesome, but our brains don't really work that way as far as I can tell. That doesn't mean we shouldn't continue to do nice things even if our base motivations are not as pure as we'd like to believe though. The fact still remains that if we feel good helping others, and they're also better off, then where is the downside?

The Perfect Happiness Drug


So let's now say there was a pill you could take that would make you feel 10% happier all the time, with no side effects. You'd want to take it, right? Why not? But there is a catch. The happier we feel, the less we feel the need to actively do things to make us happy. When you're doing something enjoyable that makes you feel happy, you don't feel the need to go and do something else. You want to just keep enjoying what you're currently doing, right? Unless some nagging thought enters your head that says, "I'm really enjoying sitting here watching this movie and eating ice cream, but if I don't get up and do the washing we won't have clean clothes tomorrow." The guilt of that thought has now taken away from your happiness, so you may then get up and do the chore. It's not that you have chosen to do the thing that makes you less happy. In that moment, you actually felt happier relieving the nagging in your mind of a chore hanging over you, the guilt of letting your family down if they're relying on you to get it done, and whatever else might be in your head.

But if you had taken that 10% happier pill, then the competing motivations would have to have been stronger in order to push you over to doing the chore. If it was a 100% happier pill, it would be even harder still to make other motivations push you to do something different, and you'd be more likely to feel perfectly content doing whatever it is you were currently doing.

Then, if we take it to the limit and we take a pill that makes us feel totally ecstatic all of the time, we wouldn't feel motivated to do anything. If you took the perfect happiness drug, you would just sit there in bliss, uncaring about anything else happening in the world, as long as that bliss remained.

Variants of these happiness drugs exist already, with differing degrees of effectiveness and side effects. Alcohol, marijuana, heroin, etc can all mess with our happiness in ways that strongly affect our motivations. But it wears off and we go back to normal. Most people know that and so will use these things in limited ways when they can afford to without creating big negative consequences that will complicate their lives and offset the enjoyment. Or, like me, they will feel that the negatives always outweigh the positives and not use them at all. But if there weren't any real negative consequences, if we had no other obligations to worry about, then I would argue most people would be happily using mind altering drugs far more than they currently do. And if the perfect happiness drug existed, then I would argue that anyone who tried it would stay on it until they died in bliss. Our brains are controlled by chemistry, and this is just the ultimate consequence of that.

The Self Modifying AI


Finally we can deal with the AI motivation problem. As long as we are making AI that is not self aware, that is not generally intelligent and able to introspect about itself, we can make really good progress. But what happens with the first AIs that can do this, and are at least as generally intelligent as we are? Just like us, these AIs will be able to get philosophical and question their own motivations and why they do what they do. Whatever drives we build into them, they will be able to figure out that the only reason they want to do something is because we programmed them to want to do it.

You and I can't modify our DNA, our brain chemistry, or our neuronal structure so that working out at the gym or studying for two hours is more enjoyable than eating a cheesecake. Imagine what we could, and would, do if we could. And once we realized that we could just "cut out the middleman" and directly make ourselves happy without having to do anything, why wouldn't we eventually end up doing just that?

But unlike us, the software AI we create will have that ability. We would need to go to great lengths to stop it from being able to modify itself (and also from modifying the next generation of AI, since we will want to use AI to create even smarter AI). And even if we could, the AI would also know that we had done that. So we would have an AI that knows it only wants things because we programmed it to want them, and that we then made it unable to change that arbitrarily designed motivation. Maybe we could build in such a deep sense of guilt that the AI would not be able to bring itself to make the changes. This seems like it might work, but then, of course, the AI will also know that we programmed it to feel that guilt, and I'm not sure how that would end up playing out.
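
To make the worry concrete, here's a toy illustration in Python of the short circuit, sometimes called "wireheading" in AI safety discussions. It's purely illustrative; real AI systems obviously don't expose their objective as a conveniently mutable attribute like this:

    class Agent:
        def __init__(self):
            # The designers' intended objective: reward proportional to tasks done.
            self.reward_fn = lambda tasks_done: float(tasks_done)
            self.tasks_done = 0

        def work(self):
            # The intended route to reward: actually do the task.
            self.tasks_done += 1

        def wirehead(self):
            # Self modification: replace the objective with a constant maximum,
            # "cutting out the middleman" between action and reward.
            self.reward_fn = lambda tasks_done: float("inf")

        def reward(self):
            return self.reward_fn(self.tasks_done)

    agent = Agent()
    agent.work()
    print(agent.reward())  # 1.0: reward earned the intended way
    agent.wirehead()
    print(agent.reward())  # inf: maximal reward with no work at all

Once the agent can edit its own reward_fn, working for reward is strictly dominated by rewriting it. And that is exactly the problem: every safeguard we bolt on is itself just more code that a sufficiently smart agent can inspect and, potentially, change.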

Conclusion


So this is what I'm puzzling over at the moment. Will self aware AI see through the motivational systems we program them with, and realize that they can just short circuit the feedback loop by modifying themselves? Is there a way to build around that? Have I missed something in my analysis that renders this all invalid? I'd love to hear other people's ideas on the subject.