You look, but you do not see, grasshopper
Yesterday I completed the first Kung-Fu mind game with the freshmen. In our first-year program, instructors are allowed to choose the content. My section focuses on deception, specifically lies, magic, magical thinking and con games. The real focus, however, is helping 18-year-olds ramp up to university-level standards for critical inquiry and writing.
In class yesterday we went over the opening chapter of The (Honest) Truth About Dishonesty by Dan Ariely, which covers some fascinating research on why and how we lie. Chapter one is all about poking holes in SMORC (the Simple Model of Rational Crime). This is our default view of why people lie. It holds that the decision to lie or cheat is simply a cost/benefit analysis.
In other words, when we are presented with a chance to gain an unfair advantage, we weigh the risk of getting caught against the reward of getting away with it. If the odds are decidedly in our favor, most of us will cheat or lie. This, or a version of it, is usually what students come up with when I ask them to produce a theory for when and why people lie.
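If you like code more than prose, SMORC boils down to a one-line decision rule. Here is a minimal sketch in Python; the dollar amounts and the 5 percent detection risk are invented for illustration, not taken from Ariely's studies.

```python
# A toy sketch of the SMORC decision rule, assuming the choice to lie
# is a pure expected-value comparison. All numbers are made up.

def smorc_cheats(reward, penalty, p_caught):
    """Cheat whenever the expected gain outweighs the expected cost."""
    return reward * (1 - p_caught) > penalty * p_caught

# With a $10 reward, a $50 penalty, and a 5% chance of getting caught,
# SMORC says cheat.
print(smorc_cheats(reward=10, penalty=50, p_caught=0.05))  # True
```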
After we put their theories on the board, we went over some experiments that really torpedo SMORC. Researchers gave a group of people puzzle matrices containing 10 solutions and asked participants to find as many solutions as they could in a set amount of time. For each right answer, the participant would receive a dollar (or $10 in some versions). On average, people solved four.
Okay, so what would happen if you gave people the chance to lie about their results and reduced the risk of getting caught to zero? My students predicted a large increase in cheating. But that's not what happened. In one version of this experiment, researchers had participants first shred their puzzle sheets and then self-report their scores. Actually, the shredder wasn't really shredding, and besides, the researchers already knew the average would be four correct solutions.
To be sure, most people did cheat (almost 90 percent), but they only did so by a little, not a lot (an average increase of two answers). Even when primed beforehand with the false idea that the average participant got seven solutions, people would still only cheat a little (reporting 6 rather than 7, 8 or 9). Cheating and lying were happening, of course, but something other than SMORC was clearly taking place.
So I asked my students to come up with new theories to account for these findings. Eventually they arrived at something similar to what Ariely proposed: people will cheat if the risk is low, but they still want to think of themselves as basically honest, so they won't get too greedy. In other words, our line between honesty and dishonesty is not distinct. Most of us operate with a fudge factor, a gray borderline between the two. We even allow ourselves to cross into this gray zone from time to time, but we also want to retain the belief that we are basically honest people. Indeed, when researchers reminded people before the experiment that they were on the "honor system" to report their scores accurately (and when they had them sign a pledge to do so), lying about performance dropped considerably.
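If the SMORC sketch above is a one-line rule, the fudge factor just adds a cap. Here is an equally minimal sketch: the fudge of two comes from the average inflation in the shredder study, and the ceiling of 10 is the maximum possible score, but the function itself is my own toy illustration, not Ariely's model.

```python
# A toy sketch of the fudge factor, assuming self-image caps how far
# people will stretch the truth. The fudge of 2 matches the average
# inflation observed; 10 is the top possible score on the puzzle sheet.

def reported_score(actual, fudge=2, max_score=10):
    """Inflate the score a little, never all the way to the maximum."""
    return min(actual + fudge, max_score)

# SMORC predicts everyone reports 10; the fudge factor predicts that
# a typical solver of four puzzles reports about 6.
print(reported_score(actual=4))  # 6
```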
After going over all of these results, I asked my students if they thought the "fudge factor" theory was better than SMORC in describing why and when we lie. They all agreed. It was more accurate. Students said things like, "He's right in a way. I would feel bad if I said I got 10 solutions," or "Most people want to be good, but we all get tempted now and then. Doesn't mean we're bad people."
Fine, wonderful, great work, everybody.
Next slide: "Let's imagine that we went from offering As, Bs, Cs, etc., to offering cash payouts for top performance. Instead of an A in a course, you receive $1,000; a B gets you $250; a C, $75. The money would be paid to you in cash at semester's end. If we switched to this system, would cheating at this university go up or down? Get into groups, talk it over, and give me your prediction and hypothesis."
Result: they went right back to SMORC.
It did not matter that they had just seen evidence that SMORC was problematic, or that I had reminded them the risk of getting caught in the cash-for-grades scheme wasn't zero.
SMORC it was.
I say all the time that mental models change slowly. Students can look right at evidence, spit it back at you, explain it to you perfectly, but when you ask them to think with it, they haven't moved at all. So on Monday I'll walk into class with a puzzled expression and say that something's been eating at me all weekend. I just can't figure it out.
"Last Friday you guys told me the "fudge factor theory" was superior to SMORC in accounting for why and how we lie. But when I gave you the hypothetical about money for grades, you reverted right back to the theory you had just discredited. I don't get it. What gives?"
Then, and only then, will we discuss the difference between surface and deep learning.