Through sheer storytelling laziness, it tells us that nothing that came before mattered.
The Last Jedi is the most disappointing Star Wars movie since Attack of the Clones. I don’t believe I’m overstating that. It’s a movie that, through its plot developments and characterization, makes the whole of the Star Wars saga less interesting and less compelling.
Its plotting undermined the characters. What was accomplished by Luke, Leia, and Han in the Original Trilogy? In light of The Last Jedi, they basically failed. The ending of Return of the Jedi is moot. We don’t know why it’s moot. We don’t know why the Rebellion’s victory turned out to be, well, nothing at all. Episode VIII doesn’t bother to tell us, because it just doesn’t care.
Instead, The Last Jedi says, “Return of the Jedi never happened. Our characters failed. The Empire still lives, somehow, though we’ve changed the names. Everything’s as dire as it was after Empire Strikes Back, without explanation, and without earning it. We simply couldn’t come up with a new story, so we inexplicably reset the universe to repeat the same story we’ve already done, with a handful of new characters.”
Why did the Rebellion fail?
Why did any of the Original Trilogy matter?
That the Rebellion failed could be an interesting story. That our characters ultimately failed could be an interesting story. But Episodes VII and VIII don’t concern themselves with that. They just want to have another Empire and another Rebellion, because that’s as ambitious as they want to be. And in doing so, without telling the story of how we got there, they’ve sapped the Original Trilogy of its meaning, and made the fight our heroes fought through three movies pointless.
Luke’s a loser. Han’s a loser. Leia’s a loser. There’s your characterization. But, hey, we got some porgs, Luke can project a hologram across planets, and Snoke’s a generic bad guy in a bathrobe.
Let’s talk about Snoke. Here’s a guy who somehow built a war machine that toppled the New Republic, and built it out of, at best, a fragment of the ships left over from the Empire. That could be a great story and a great villain. But The Last Jedi doesn’t care about that. Snoke’s just, well, a plot point.
One might object that Episodes VII and VIII needn’t answer all such questions. Except that these questions are central to what these movies are about. They’re the why that gives purpose to what we’re seeing, and that gives purpose to the sacrifices the characters make.
For instance, they play up the conflict in Kylo Ren. Great. That’s important, and very Star Wars. But then they reduce his fall to Luke saying, “Snoke got to him,” and then a single scene of Luke trying to kill him. But not knowing anything about Snoke, we have no appreciation for what it means that Snoke got to him, or why Ren turned from Han and Leia’s son to someone who would murder both. He’s just bad because — handwave — some random dude made him bad?
In other words, our characters lack motivations for their actions, and so the actions are without much emotional weight.
The Force Awakens skirted this, because it was setup. We assumed we’d then learn why they were doing the things they were doing. The Last Jedi said, “Nope, we’re not going to bother with that.”
That’s why this movie was nothing like Empire, even though so many of the inexplicably glowing reviews want you to think it is. Empire was about building characters. It was about the “I am your father” realization that gave Vader so much more weight, and made everything matter in a much deeper way. That drove the characterization, making it all richer.
The Last Jedi just had characters fight. It had Luke be sad because he screwed up, somehow, but we don’t know how, because we don’t know who or what Snoke was and why he was so powerful. It had Leia lead a dwindling Resistance, but we don’t know why she’s doing that, because we don’t know why it matters what the First Order’s up to, because the universe has suddenly been reset to a pre-Return of the Jedi state just so there’s something to do. We’re just told the Resistance is the last line of defense, and so it matters, but we’re not shown that. The movie is all tell, not show.
The Last Jedi fakes its “emotional” weight because it has characters we love pop up, and it has characters we love in danger or dying. But why they’re doing any of that is just ignored.
It’s a remarkably lazy movie, and arguably the worst of the whole saga. The Prequels were bad, yes, but they left the Original Trilogy intact. The Last Jedi betrays the legacy of A New Hope, The Empire Strikes Back, and Return of the Jedi. It cheapens the most inspiring rebellion in film history, and turns its heroes into failures. For shame.
Update: Responding to Criticism
This essay has received a lot of attention. Which is great, especially when I hear from people who say it articulates their own reaction to The Last Jedi. At the same time, it’s received a lot of criticism, much of it good and thoughtful. In light of that, I’ve written a follow-up essay responding to some of the most common rebuttals.
Decentralization and encryption/privacy are good principles for digital technology.
They’re also pretty good principles for effective, fair, and just government.
Those two principles are becoming more widespread within digital technology, and the trend will only accelerate as more of our lives, interactions, transactions, and work move into the digital realm.
This will have inevitable, positive effects on political liberty and human flourishing.
The positive effects result from the fact that digital decentralization and encryption make it harder for the government to employ the tools it has to enable further centralization and to breach our privacy.
Centralized, large, and intrusive states require our lives — our communications, interactions, and economic transactions — to be legible. They have to know what we’re doing, when we’re doing it, and what resources we’re acquiring and using to facilitate it.
“Require” here should be read in two ways.
First, states “require” legibility because it’s necessary to their functioning. Without making its citizenry legible, the modern, officious nation state simply cannot operate in the way it has. It cannot dictate anywhere near as much of our lives as it currently does, because to dictate our lives, it must know our lives.
Second, states “require” legibility in the sense that they demand it of us. Governments believe they have the right to make us legible by watching what we do, looking into our records, making us transact and interact via systems the state can surveil, and otherwise prohibiting our own efforts to make ourselves illegible. States believe we are required to — have an obligation to — make ourselves legible to our rulers.
As technology makes us illegible to the state, the state will lose its power over us. Government is well aware that it requires legibility in both senses of the term. Moving to a decentralized, encrypted, peer-to-peer communication infrastructure and economy will mean the state will find it impossible to continue to regulate us, tax us, monitor us, and punish us to anywhere near the degree it’s become accustomed to.
The state will fight back, of course. It will seek to ban technology. It will try to scare us with stories of how our technology enables terrorism and crime. It will threaten innovators and entrepreneurs and pass laws with the aim to slow the development and adoption of strong encryption, cryptocurrencies, and surveillance-proof networks.
It will try all these things, and it may even succeed, occasionally and for a time. But progress — and math — are on our side. We know that a good government is a less intrusive government, that decentralization — through federalism or just smaller states — is the key to peace and prosperity, that every person has a right to be as legible or illegible as she chooses.
And we know that this genie is very much out of the bottle. There’s no going back, no stopping it, no reversing technological progress so the state can win the war for mass surveillance and so maintain our legibility.
The emancipated individual — and the thriving communities she chooses to cultivate and participate in — win in the end.
In 1999, with a couple of friends, I founded the Gaming Outpost. For a time in the early 2000s, it was the internet’s largest tabletop gaming website, until brought low by a combination of a disgruntled employee, a late-night hack, tapering revenue, and founders who decided to get real jobs. But the Gaming Outpost’s influence lived on. Mike Mearls, designer of the wildly successful Dungeons & Dragons 5th Edition, got his first paid writing gig as a columnist on the site. Shannon Appelcline’s magisterial, four-volume history of the RPG industry, Designers & Dragons, looks back on the Gaming Outpost as the incubator and stomping ground for the ideas and designers who eventually gave us the modern indie RPG movement. While it now exists only in the Internet Archive’s Wayback Machine, GO was a pretty cool place.
It was also a favorite hangout of Wendy’s founder Dave Thomas.
The Gaming Outpost featured news, articles, and reviews about all things tabletop gaming, but its main attraction was its discussion forum. That’s where designers like Ron Edwards, Clinton R. Nixon, Jared A. Sorensen, Mike Mearls, and John Wick hashed out ideas. It’s where Mike Daisey, later infamous for his controversial The Agony and the Ecstasy of Steve Jobs theatrical monologue, led conversations in his “Critical Hit” board. And it was where Dave Thomas talked about his love of role-playing games — or where an unnecessarily elaborate hoax tried to convince us he did.
This was all close to twenty years ago, so my memory’s a little fuzzy on dates and specifics. But here’s how I remember it: For some time, a user calling himself Dave Thomas was a semi-regular participant in the Gaming Outpost discussion boards. There wasn’t much remarkable about his posts, but it was clear he was an active tabletop gamer, like everyone at GO.
Then another user asked if he was the Dave Thomas, the guy we’d all grown up with on TV, somewhat awkwardly pitching us on hamburgers, Frosties, and baked potatoes. Yes, Dave answered, I am. This was, of course, a remarkable claim. Proof was needed.
Here’s the thing: Dave delivered. He asked for the Gaming Outpost user’s mailing address. A couple of weeks later, this incredulous gamer received a care package containing signed Dave Thomas of Wendy’s photos. When Dave Thomas died, activity on the account stopped.
This could’ve been a hoax. Someone could’ve used a “Dave Thomas” account on the Gaming Outpost with the plan to one day play a prank when asked about his identity. He could’ve bought the signed photographs. That’s all possible.
But I don’t want it to be. I’d like to think that, in addition to everything my little website gave to the flourishing tabletop RPG scene of today, it also provided Wendy’s Dave Thomas, in the last years of his life, with a break from hamburgers, and a place to talk about the games he loved.
Decentralization will bring about a radically freer and more dynamic world, and without waiting for the blessing of government.
Decentralized, DIY Beginnings
I got my start when I was 14, dialing into local BBSes to play text games, post to FidoNet, and download warez. This would’ve been 1993. For those of you born about that time, these were just someone’s personal computer, running software like TAG or Renegade, and plugged into a phone line via a modem. They’d sit waiting for guys like me to dial in when our parents were out of the house or asleep, because a parent picking up the phone would sever the connection.
More centralized, “professional” online services existed, which is why everything anyone ever bought at that time included an AOL CD. But, to be honest, they offered little of interest over the BBS scene, with its uncensored message boards, pirated software downloads, and low-res pornographic images.
I grew up, then, with a decentralized network. Even as the early web became more widespread, this decentralization persisted. Websites were personal. If you wanted one, you either bought space on a server and uploaded HTML and Perl scripts, or you went to GeoCities, and that place was basically the Wild West.
Centralization vs. Political Liberty
Centralization displaced this delightful chaos in stages. Even as AOL was dying, ICQ came along, and we moved our communication from distributed email servers to a single service. Blogs got eaten up by Blogger. Then came the social networks, and before we knew it, only businesses and the hardcore ran their own websites or hosted their own communications tech. Everyone else — which amounted to very nearly everyone — moved to AIM, MySpace, Facebook, Twitter, Instagram, or whatever else the kids are into these days.
Of course, centralization brings benefits. The services do more, are more reliable, and the barrier to entry is way lower. But they also hold us at their mercy. Innovation slows because you have to wait for them to decide something’s a good idea, and a profitable one, too. Your data belongs to them, which means they can do what they want with it, but also that they can give it, or be compelled to give it, to people we’d rather not have it, like agents of government who’d like to be sure we’re not up to subversive activities.
From the perspective of an advocate for radical political liberty, this is troubling, to say the least. For the same reasons it’s bad to turn over increasing power to the state, and to shift more and more of our economy from free market dynamism to nationalized services, it’s bad to do the same with Facebook et al., though in less acute ways. The digital world increasingly simply is the world. We exist within it, communicate through it, engage each other in exchange for goods and services via it, define ourselves and create and grow through use of its tools. If Hayek was right about the problems of centralization in government, we ought to be at least somewhat concerned about problems of centralization in tech, and for the same reasons.
This is not to ignore a difference between Facebook and the state. The state, as Max Weber noted, gets to use coercive physical force, and claims a monopoly on legitimately doing so. Facebook can make it hard for you to delete your account, but it can’t hold a gun to your head and pull the trigger if you persist. That’s a big deal. Those on the political left too easily believe corporations are as powerful as governments, and so treat them as just as much of a threat — or as threats that can only be reined in by giving government (i.e., the guys with the actual guns) even more power. At the same time, however, if the state gets its way and these centralized services become ever more heavily regulated, ever more burdened with requirements of cooperation with law enforcement and intelligence agencies, or even nationalized outright, the lines will blur or disappear entirely. The digital world enables many amazing things but, particularly when as centralized as it is today, it also enables many awful things, because it makes so much of what we do scrutable and legible to those who want more and more control over our lives.
Reclaiming Our Freedom
That’s why I’m so excited about all these emerging techs that point the way to a return to a decentralized internet. We’re fast approaching a point where the benefits of the centralized services aren’t as unique to their particular architecture as they once were, and where decentralization can bring us more security and more innovation, with fewer trade-offs.
Senator Al Franken recently gave a speech calling for more direct government regulation of social media. These companies are too big to be left to their own devices, he said.
“Everyone is rightfully focused on Russian manipulation of social media, but as lawmakers it is incumbent on us to ask the broader questions: How did big tech come to control so many aspects of our lives?” Franken asked in a speech to a Washington think tank. A handful of companies decide what Americans “see, read, and buy,” dominating access to information and facilitating the spread of disinformation, he added.
That’s why decentralization, blockchains, and strong encryption are so exciting. Yes, they will enable new avenues of economic growth and new ways for people to earn a living. Yes, they will enable us to experiment more and innovate faster. But this emerging tech will also allow us to more easily and safely ignore people like Al Franken, and get on with the business of communicating, exploring, learning, buying, selling, organizing, and self-defining, free from the possibility of officious or authoritarian interference.
Bitcoin gives us money without the state, and sidechains and level 2 tech will help us make that money more efficient and more private. Filecoin and IPFS will enable us to keep our data private, secure, and inaccessible to regimes who want to see what we’re up to and want to punish us if we don’t toe their line. The Orchid Protocol promises to hide all of this activity behind a distributed VPN, making it not only invisible to snooping eyes, but also unblockable unless a state takes the drastic step of turning off the Internet entirely. We’ll soon have distributed organizations that can self-govern and pay contributors, without the need to let the state in on any of it. We’ll be able to ditch centrally run social media networks, replace them with encrypted peer-to-peer services, and not have to worry about whether the feds can force Facebook and Twitter to turn over our data.
The result will be a freer, more dynamic, wealthier, and safer world.
Technology and Our Libertarian Future
It will also be a world truer to the principles I’ve built my Cato Institute career championing, and which provide the mission for Libertarianism.org. Our statement of principles on the site reads,
Liberty. It’s a simple idea and the linchpin of a complex system of values and practices: justice, prosperity, responsibility, toleration, cooperation, and peace. Many people believe that liberty is the core political value of modern civilization itself, the one that gives substance and form to all the other values of social life. They’re called libertarians.
Permissionless innovation matters, not just because it’s what gave us Uber, but because it’s what will give us our freedom from unnecessarily large and unjustifiably intrusive governments. Unbreachable privacy matters, not just because it means we can talk to each other without fear of embarrassment, but because it will let us think thoughts and exchange ideas that will become the foundation of a radically better world, without the crippling worry that governments opposed to that world will hunt us down and punish us to silence our voices.
This is not to say technology is always good, always a force for freedom. It’s clearly not, and we can go wrong with it in countless ways. But the technologies of encryption and decentralization and private exchange of ideas and resources put a heavy thumb on the right side of the scale. We need to work to ensure that the people developing and deploying those technologies do so consciously, with virtue, and a healthy respect for human dignity and rights. That’s why I’ll keep doing the moral and political philosophy work I do at Libertarianism.org. But I have faith in the technology community, and I’m more hopeful about humanity’s future than I’ve been in a long, long time.
Do libertarians want to destroy social bonds so we can live in a world without cooperation?
If you believe Nick Hanauer and Eric Liu, libertarians are nuts. In a recent commentary, they gave a litany of reasons for “Why libertarian society is doomed to fail.” The trouble is, they’ve managed not only to misunderstand libertarianism, but also to ignore the very problems libertarians see in the authors’ own preferred big government solutions.
Hanauer and Liu attack “radical libertarianism,” which they define as “the ideology that holds that individual liberty trumps all other values.” Yet this isn’t quite right, whether we’re talking about moderate or radical libertarianism. Liberty isn’t the ultimate value. But it is the ultimate political value. It holds this status not because we shouldn’t care about other values, but because a state that aims at liberty will enable us to realize much more of what we value than one that aims at something else. Whether the goal is wealth, happiness, health, culture or any other value we hold dear, political liberty will bring us more of it than officious government.
The authors then call out libertarians for our “defective” theory of human nature. They tell us libertarians believe “humans are wired only to be selfish, when in fact cooperation is the height of human evolution.” But libertarians embrace free markets and voluntary association, which both require and encourage cooperation. What libertarians are skeptical of is not cooperation, but the use of, and threat of, force to coerce people into taking part in schemes they don’t approve of, or that harm them, or that aren’t as efficient or effective as other means. Is it “cooperation” when the state forces poor, minority children into failing schools? Is it “cooperation” when politically connected businesses get regulators and legislators to craft rules in their favor? Is it “cooperation” when politicians send young men and women to die in unnecessary wars? Cooperation, far from being anathema to libertarianism, is in fact a core libertarian value.
Hanauer and Liu tell us that libertarians believe “societies are efficient mechanisms requiring no rules or enforcers.” Yet no libertarian thinks society can function without codes of conduct and methods for enforcing them. Libertarians believe strongly in the rule of law — much more so, in fact, than many on the left and right who would carve out exceptions in statutes and regulations to benefit political friends and powerful interest groups.
The authors also make a mistake when they claim that libertarians believe rolling back the state is the solution to every problem. It’s not. Rather, it is often the way we can enable solutions, in whatever form they may take. Private individuals are capable of amazing things if given the opportunity to exercise their ingenuity. Too often, the state stands in the way, protecting established industries and special interests by preventing the growth of new and better ones.
This isn’t a path to progress Hanauer and Liu are willing to entertain, however. Instead, they see the very act of shifting power from government to private citizens as destructive and necessarily at odds with the very idea of creation. Yet we need only look at the inventions and discoveries that have radically improved our lives to see how much creation occurs outside of the direct control of the state. Libertarians demand policies to accelerate that, not to undermine it.
Defenders of the status quo are always quick to label as unreasonable those who advocate for a different and better world. There was a time when activists for democracy were called unreasonable, and told that turning over power to the people was a laughable idea. “Reasonable people” argued for solutions within the systems of monarchy and theocracy. Hanauer and Liu are just modern versions of these “reasonable people.”
Libertarians believe the status quo isn’t good enough. Not because we’re selfish or destructive or anti-community, but because we want to make the world better for everyone — and believe freedom is the best catalyst for progress.
The virtue of humility is found in recognizing our limits — and that humility ought to make fans of limited government.
I could be wrong about pretty much anything. What I don’t know so outweighs what I do that my actual knowledge appears as little more than a small raft on an ocean of ignorance.
I suffer no shame admitting this unflattering fact, not only because there’s never any shame in acknowledging the truth, but also because everyone else is in the same boat. Our ignorance — what we don’t know — always and enormously outweighs our knowledge. It’s true of even the smartest and most educated.
Recognizing that fact ought to humble us. And that humility, informed by a realistic picture of how government operates, ought to make us libertarians. Libertarianism is a philosophy of humility. It’s one that takes us as we are and grants us the freedom to make as much of ourselves as we can. And it’s a philosophy that understands just how damaging human failings can be when coupled with the coercive force of government. Libertarianism limits rulers because it recognizes that rulers are just ordinary people who exercise extraordinary power — and that the harm that power can inflict more often than not outweighs any good it might achieve. Libertarianism rests on humility and refuses to tolerate the hubris of those who would consider themselves higher and mightier than others.
Let’s start by looking at what it means to have humility in our claims to knowledge. Each of us certainly seems to know quite a lot, from what we ate this morning to the number of moons circling Mars. We know that George Washington was the first president of the United States of America, that Boris Yeltsin was the first president of the Russian Federation, and that driving while drunk is a bad idea.
But if we look to the whole of intellectual history, we see one overturned conviction after another. What was scientific truth three hundred years ago is balderdash today. Our brightest once believed that you could understand a person’s mind and character by studying the bumps on his or her head. (It was given the scientific-sounding name of “phrenology.”) The wise and the great were once certain that the Earth sat at the center of the universe.
It’s not just science that can’t seem to finally and forever get it right. Very smart people have argued about deep philosophical problems for as long as there have been very smart people. Two and a half millennia ago, Plato thought he’d figured out what justice is. Most philosophers since have disagreed — but none have offered an alternative that wasn’t itself open to strong counter-argument.
We ought to always be skeptical of claims to absolute knowledge. If you believe a philosophical point is settled, you’re almost certainly wrong. If you believe science today understands a topic fully, you’re likely to find in just a few years that it didn’t. Furthermore, if we’re properly skeptical about humanity’s knowledge in general, we ought to be even more skeptical about proclamations of certainty from individual members of our species.
But all of that doesn’t stop many of us from often feeling like there’s just no way we could be wrong.
It was in college that I first began to understand how common such intellectual hubris is. I was baffled by how broadly many of my professors saw their own expertise. A PhD in early twentieth-century American comedic film felt qualified to critique the cutting edge of physics research and to lecture his students on which types of cancer ought to get the most funding. It happens outside the university, too, especially in politics. How many Americans look at the fantastic complexity of our health care delivery system and say, “Oh, I know how to fix that”? How many voters without even basic knowledge of economics think it’s clear which candidate’s proposals will promote prosperity? It takes some effort to admit that we could be wrong about the things we think we have good reason to believe. But at the very least, it ought to be easier to recognize when we clearly know nothing about a topic.
Furthermore, many of us aren’t adequately skeptical about the move from knowledge of facts to knowledge of values. Take nutritionists, for example. They believe they know which foods are most healthy, that is, which give us the most nutrients with the least harmful other stuff. If we consume substance X, we can expect result Y. (Of course, even that knowledge has changed dramatically in recent years.) But notice this “is” doesn’t get us to an “ought.” What’s healthy is a different question entirely from what I ought to eat.
I can recognize that fried potatoes aren’t as healthy as steamed broccoli while still being right that I ought to eat French fries for dinner tonight. That’s because what I ought to eat doesn’t necessarily mean the same thing as what’s healthiest for me. “Ought” can include other values, too, such as the pleasure I’ll get, the varying prices of the alternatives, and so on. Nutrition speaks to the one value (what’s healthy), but it has nothing to say about the rest.
Proper skepticism applies to both others and to us. I should be skeptical about your claims of absolute certainty, and I should likewise be skeptical about the veracity of my own. Such skepticism shouldn’t make us abandon all claims to knowledge, of course. But it should lead us to adopt an attitude of humility. Knowing others face the same difficulties in ascertaining truth, we should expect humility from them, as well.
This is where humility urges us in the direction of libertarianism. If we embrace legitimate skepticism about our knowledge of both truth and values, then we should hesitate before compelling people who may disagree with us to live by our convictions. We should hesitate, in other words, before reaching for a club or calling on the police to use their nightsticks.
Why? Any policy may turn out to be bad or ineffective, but can’t we always go back and fix it? And what of the gains to be had in trying to make the world better by coercing others, either by our own force, or via state action, even if it means occasionally making things worse for some people? If we’re pretty sure our values are correct and our facts support them, then what’s the harm in using politics to make everyone else comply?
To show what’s wrong with that line of thinking, it may help to think about the purpose of life. The ancient Greek philosopher Aristotle believed the only thing desired for its own sake is the achievement of eudaimonia — usually translated as “happiness” or “flourishing.”
Aristotle believed that eudaimonia isn’t something found in discrete moments of pleasure or pain (what we often mean when we say, “I’m happy”) but instead is found only in an assessment of a life taken as a whole. At the end of a life, we look back and ask, “Was it good?” Everything we are, every reason we have for being, is bound up in being able to answer “yes” when our time comes.
Aristotle had his own idea of the best life, the life that exhibited eudaimonia to the highest degree. He thought it meant living in accord with that which is uniquely human: our capacity to reason — and from this he concluded that the highest and best life was one spent in contemplation. Perhaps it is not surprising that one of the world’s greatest philosophers thought happiness flowed from a life of philosophy.
For Aristotle, of course, it did. But just as we need to recognize the limits of our knowledge about the external world, we must also be humble in our prescriptions of the recipe for the good life. Happiness for me may not be the same thing as happiness for you. There is no generic “human being” who is happy, but billions of very diverse human beings. Happiness may be found in contemplation, but it can also come through raising children, experiencing great art, building a successful business, becoming an athlete, or helping those less fortunate. And if the good life for each individual is bound up in the specific features of their lives, so too are the paths to achieving it. How I go about making my life good can vary from the way you do — not just in the goals we each aim at but also in the ways we assure our aim is true.
While Aristotle may have gotten some of the details wrong, I think he was right about the broad picture. Most people want to live good, satisfying lives — and a good life is, we might say, a life lived in pursuit of the good life. As the American founders put it in the Declaration of Independence, it’s “the pursuit of happiness.” Our various pursuits may take different paths, depending on our circumstances, interests, and values. It’s the pursuit that matters.
Respecting each other — recognizing each other’s dignity as self-directing (what the philosophers call “autonomous”) beings — means respecting different forms of that quest. It means not actively inhibiting each other in our pursuits of the good — and recognizing the right each of us has to choose his or her own path.
I’ve come to the conclusion that this respect necessarily entails a state that is radically limited, certainly compared to the actual states we see around the world. To understand why, we need a realistic view of how governments operate.
In their private lives, people often act poorly, or pursue their own selfish interests, even when it means harming others. Sometimes they hurt other people just for the thrill of it. Pickpockets steal from strangers; scam artists prey on the elderly. Yet many people, when they think about government, assume that those undesirable traits vanish when someone enters public office — that politicians abandon selfishness and become motivated only by a desire to promote the public good.
That’s silly, of course. People remain themselves, even when given fancy titles and power over the lives of others. Being a politician or a bureaucrat doesn’t automatically make one better informed — or better — than the rest of us. A group of thinkers takes this realistic approach to understanding government: people don’t change their natures when they enter public office; they simply face different institutional constraints, and wield powers that the rest of us lack. Their school of thought is known as “public choice.”
Public choice teaches us that politicians and state officials use the knowledge they have available to make the best decisions they can, with “best” being a product of their own judgment and also of their own interests. Those interests could, of course, include money and fame, but more often mean simply staying in power.
The result is that politics frequently means helping the most vocal — the people most visible to politicians — and doing so at the expense of everyone else. That’s why the state enacts and maintains such truly awful policies — such as agricultural subsidies that raise food prices and lead to wasteful misuse of resources — that fly in the face of evidence and reason. Few politicians actively want bad policies. Instead, they’re motivated by the people who show up: the farmers benefiting from these programs. And, because they can’t see as directly the harmful effects their laws and regulations have on everyone else (higher prices of food, reduced variety, etc.), they continue to support policies most of us would be better off without.
Moreover, even those harmed frequently remain unaware of the harm being done. It would cost too much to become informed — more than we could recoup even if we were able to repeal those bad policies. So we remain, as public choice economists say, “rationally ignorant,” and since we remain ignorant of the burdens those policies place on us, we aren’t able to inform the politicians whom we vote into office. The special interests tend to be “squeakier wheels” than the rest of us.
It’s important to recognize that this isn’t the result of having “the wrong people” in office. It’s not something that can be fixed by electing better leaders. Instead, it’s just the way government works when it grows beyond certain narrow limits.
Another fact about government that ought to trouble the humble is just how far its reach extends. Imagine I have very particular values when it comes to educating children, and that I have certain beliefs about the best way to achieve those values. If I don’t control the state, my reach extends no further than my kids — and any children whose parents voluntarily participate in my program.
But if I can flex the state’s muscle in support of my values and beliefs, I can extend my reach to all the children in my town, or in my region, or even in my entire country. Nobody will have any choice but to bring their children up with the educational values I prefer.
If we’re good skeptics, this should concern us deeply, because those beliefs about the best way to educate children may turn out to be incorrect, in which case it’s not just a handful of kids harmed, but all of them. And what if parents disagree — as they do — on what “best” even means in this case? What if they simply have different values when it comes to education? A state without the proper limits forces us into a one-size-fits-all approach — one that assumes some person or group can definitively know what’s good for everyone. We should all be skeptical of such claims. We should all take a good dose of humility.
So what are those limits to government? What would a state based on a proper level of skepticism look like? It would be one restricted to providing an environment in which its citizens are free to pursue the good life as each understands it.
We can’t meaningfully pursue the good under constant threat of violence, so the state should protect us from others who would do us bodily harm. And we can’t acquire and make full use of the resources we need to lead good lives if we aren’t secure in our holdings, so the state should act to limit theft — and require thieves to compensate us for those thefts that do occur.
When the state does those things — when it protects us from violence, fraud, and theft — then it fulfills the role of freeing each citizen to pursue the good life in ways as personal and unique as his or her own values.
When the state does more, however — when it takes resources from us beyond what it needs to meet those duties and when it flexes its coercive might to force some of us to live by the values of others — it fails to grant us the dignity we deserve as rational, autonomous human beings. It substitutes its judgments for our own and places barriers in our pursuit of the good life.
In the end, if we need a state, we need it because of its usefulness to us in our pursuits of happiness. We need it for that, and no more. Having the proper degree of humility means recognizing that, no matter how certain we may feel that we have things figured out, we cannot use the state to force others into whichever mold we might prefer. To do so is to succumb to hubris and to abandon the lessons of history. What seems obvious today will very likely come off as risible tomorrow.
If we become humble, we will see the world as an often overwhelmingly complex place, filled with people on personal journeys to pursue happiness. We will be skeptical of calls to give the state power to do more than protect our rights to life, liberty, and the pursuit of happiness. As another humble philosopher, John Locke, put it, “Being all equal and independent, no one ought to harm another in his life, health, liberty, or possessions.” Using violence to shape the lives of others in ways we prefer, but they do not, is anything but humble. Refraining from violence and resorting instead to voluntary persuasion is the humble — and libertarian — alternative.
Wisdom consists not only in realizing one’s powers, but in realizing their limits.
This essay originally appeared in Why Liberty?, an essay collection edited by Tom G. Palmer and published by Students for Liberty and the Atlas Network.
Ours has become a culture of hyperbole. Nothing characterizes American social interaction, mediated through politics and social media, more than our need to assure ourselves, and broadcast to others, that whatever is happening now — whatever currently grasps our unexamined attention — is the most, greatest, acutest of whatever has ever been.
Everything — sexism, racism, political differences, economic differences — is a war. A war on women. A war on blacks. A war on the poor or on the elderly or on immigrants or on Christmas. We are all soldiers for equality, religion, ideology. We engage not in debate, but in skirmishes. We face not interlocutors, but enemy combatants.
This war footing turns our interactions toxic and destructive. Twitter shame mobs, counter-protests, and fights over who gets to speak on our campuses accomplish little of value but cause great harm, because we’re fighting the good fight, no matter the costs and no matter the stakes — which are, let’s face it, typically enormously low.
We do this because it’s fun. Because it makes us feel like important players in battles of significance, instead of the playacting trolls we so frequently actually are. None of it matters, except insofar as we’ve opted to destroy livelihoods or lives or just faces when we thrill in punching instead of parleying. But we can’t admit we do it for fun, because that would be admitting we’re not at war, not really, but instead seek only the rush of pretending to be foot soldiers in whatever Battle for the Fate of Civilization strikes our fancy at the soon-to-be-forgotten moment.
Without the belief in culture war, we’d burn out. Adrenaline takes its toll. With the belief in culture war, we keep up this destructive and deranged momentum through an irrational sense of moral urgency. “This matters,” we tell ourselves and signal to our tribes. “We can’t stop now, lest we capitulate to them.”
As one side stumbles drunkenly into this process, the other ratchets up its hyperbole engines in response, and the cycle accelerates, tearing through decency and respect and social bonds. Nobody wants to be the one who calls a halt, who halts themselves, for the war of the moment must be won, and besides, it’s all so gratifying and fun.
Except it’s not. Not at all. It’s degrading and the fun is false, like the rush of skydiving without a parachute. What’s needed is a culture-wide calming down, a letting out of breath. What’s needed is an understanding that things aren’t as dire or urgent or aggressively bad or dangerous as we’ve worked ourselves up to believe.
Can we do that? I don’t know. But you can. You can step away from your hyperbolic guns and find something better to do.
Markets are overwhelmingly good, but the results of market processes aren’t always good for everyone, in every instance. Pretending otherwise isn’t persuasive.
There’s an unfortunate tendency among some free market advocates to blame the victim: If you can’t find work, it’s because you’re lazy or you somehow screwed up. Hard work’s all that’s necessary to succeed. But of course that’s not true. It’s quite easy to think of counterexamples. We know creative destruction is a necessary part of a well-functioning economy. Market churn means people lose their jobs through no fault of their own, and shifts in technology and consumer preferences mean that skills once lucrative can suddenly become relatively worthless. Markets are overwhelmingly good, yes, and are responsible for the astonishing amelioration of poverty we’ve seen since the Industrial Revolution, but they have their victims.
A changing global economy has meant a changing American economy and a changing American economy has meant that some people who did well in the old pattern are having a harder time in the new. This harder time is felt by, among others, a segment of America’s lower-middle class who used to be able to find decent-paying jobs that demanded physical labor and the kinds of skills you don’t learn in school. That segment increasingly faces a fact about the modern economy: Unless you’re a knowledge worker, it’s become a whole lot harder to find a well-paying, stable, long-term job because the skills you bring to an employer aren’t as in demand as they used to be.
And that’s awful for the people going through it. We can say that free markets change over time and that those changes lead to more prosperity in the long term, and that’s true. But it doesn’t make life better for the machinist or construction worker without a college degree and without much retirement savings. Empathy seems an appropriate response by those of us not facing such hardship.
That even well-functioning markets hurt some people some of the time makes selling market solutions to policy problems often a difficult task. We know that the solution to unemployment or underemployment is more economic freedom. Get rid of the barriers to entry and the protectionist policies keeping afloat what would otherwise be failing firms. Enable private schools to create a robust and successful educational system so more people have the skills needed to succeed in a modern economy. Open trade with the rest of the world, so we can grow our economy, buy goods at lower prices, and sell into more markets.
But here’s the thing. Every one of those solutions ends up sounding, to the person economically hurting now, like saying, “Leave it alone and things will work themselves out. Don’t know quite how or when, but they will.” Market solutions are emergent solutions, and emergence takes time and can’t be planned or predicted. In fact, it’s the attempt to plan and predict that leads so many non-market-based policies to fail. Economists understand this and so largely trust markets. But most Americans aren’t economists.
I think this explains, in part, the appeal of people like Donald Trump or Bernie Sanders. We see them as misdiagnosing the problems and offering counterproductive, and sometimes abhorrent, “solutions.” Immigrants are taking your jobs. (They aren’t.) So let’s fix it right now by closing the borders. Trade with China is making us poor. (It isn’t.) So let’s fix it now by establishing quotas and tariffs. But to people hurting right now, people like Trump or Sanders offer something free markets can’t: certainty, even if illusory. These people right here are the cause of your problems. Punish or stop them and your problems will go away. America will go back to being great, with “great” meaning the way it was when low-information, low-skill Americans could spend their lives comfortably in the middle class. In other words, before America’s economy became modern. We don’t want that, of course. The economic visions of Trump and Sanders aren’t just backwards; they’re dangerously retrograde policies that will hurt everyone without doing much to improve the lives of those who support them.
Liberty struggles when confronted with this combination of widespread economic ignorance and the political incentive for politicians to pander and promise solutions that are anything but. And I don’t know how to solve that. Nor do I believe there’s an easy solution. The incentives in politics run against us, and so we somehow need to get better at articulating the story of markets, of the voluntary and the emergent, and do it in a way that’s as compelling and hopeful in its rhetoric as the false hopes sold by those pitching meretricious intervention. Part of that means consciously avoiding a Panglossian picture of markets, and recognizing that sometimes people get hurt by them, and that often that hurt is blameless.
We’re on our way to four(!) Avatar sequels, which is probably about the same as the number of people excited about Avatar sequels.
It’s pretty striking, really, how quickly Avatar vanished from the public consciousness. The movie came out at the end of 2009, and in the years since, we’ve seen really no lasting attempts to keep the universe alive. There aren’t any Avatar toys, novels, or comics being sold. No video game franchise. People don’t wear Avatar t-shirts, or reference it except in occasional satire. Nobody’s wondering what the Avatar universe holds, or about the backstories of its characters. It was a pretty 3D movie, but otherwise entirely forgettable. And “forgotten” is exactly what happened to it, except in the mind of James Cameron and as trivia about top box office receipts.
Avatar’s disappearance happened so fast, with so little cultural impact, that I got to wondering whether any other movie comes close.
The answer is “No.” Avatar looks to be unique in this regard. To figure it out, I went to Box Office Mojo’s list of all-time top “Sci-Fi — Adventure” movies and sorted it by estimated tickets sold. Avatar sits at #5. People bought 97,000,000 tickets to see it. Here’s what its company in the Top 20 looks like, skipping movies that are sequels to films already on the list, since those piggyback on their parents’ cultural impact.
Star Wars
Back to the Future
Close Encounters of the Third Kind
2001: A Space Odyssey
Guardians of the Galaxy
Star Trek: The Motion Picture
Star Wars, of course, has more cultural influence than any movie ever made. The others either continue to live in public consciousness, are considered eminently rewatchable classics, or have inspired entire genres. The only ones that might not fit this description are the last two. Guardians of the Galaxy is part of the larger Marvel Cinematic Universe, and so it’s impossible to judge what its impact would’ve been without membership in the MCU. (My bet, however, is that without the MCU tie-in, it wouldn’t have cracked the Top 20 in the first place.) Star Trek: The Motion Picture is itself something of a forgotten film, but it kicked off the Star Trek movie franchise, and there’s no doubting the importance of that. Avatar, which falls between E.T. and Jurassic Park in box office receipts, stands alone in leaving not a ripple.
And it’s not like Cameron has no experience making culturally influential films. He gave us Aliens, the Terminator movies, and Titanic. That’s nothing to sneeze at.
The easy answer is that Avatar was just a spectacle. People didn’t see it for its characters, story, or worldbuilding. They saw it because it was the first major 3D movie to make full use of that medium. But still, really popular scifi stuff tends to take on a life of its own. That’s the nature of scifi fandom. The fans want to live in the world, explore it more, expand upon it. Or, at the very least, reference it incessantly. And yet, nothing.
Now 3D’s been done. We’ve all seen Avatar. Four more Avatars will be nothing more than four more Avatars, without the breakthrough to drive ticket sales. Still, the movie’s absence from pop culture remains interesting. It’s not even parodied. To make something so big and yet so forgettable is, itself, a rather remarkable achievement.
If you watch enough people talk about politics, you’ll quickly conclude most people just aren’t very good at it. There’s often a kind of emotional intensity that clouds communication. But there’s also a general lack of skill at articulating the complex ideas and frequently-unexamined principles that motivate so much political disagreement.
That’s why it’s important to cultivate our ability to communicate our ideas and to better understand both our own ideas and those of others. You can be the best speaker in the world, but if your opponent feels you’re misrepresenting his views or haven’t taken the time to study his side of things, he’s unlikely to be swayed by what you have to say.
I can’t stress enough that when approaching any topic — whether in debate or not — it’s crucial we think clearly. We — progressives, conservatives, libertarians, whatever — tend to approach any question with a fog of beliefs, biases, and vague impressions. We seek out evidence that supports what we already think true, and look for ways to reject evidence that doesn’t. We’re more forgiving of the mistakes in reasoning made by those on our side, and pounce voraciously on the most minor mistakes made by ideological foes.
All this leads to spirited debate, but it doesn’t lead to good debate. It doesn’t lead to the kind of debate or discussion that creates a feeling of sympathy in our interlocutors or makes much progress in encouraging them to accept — or at least not so thoroughly reject — our views.
Perhaps the most important first step in ensuring a fruitful debate is also one of the easiest to skip over: we need to define our terms. Almost nothing derails an argument faster than both sides using the same words to mean different things. If I say that human beings have rights and you say they don’t, it’s important that we each know what the other means by “rights.”
This happens all the time in political debate. Take equality. Am I for it? Well, yes and no. It depends what you mean by equality. Equality of resources, including forced redistribution? Because if it’s that, then I’m against it. But does equality instead mean equal treatment by the state, equality before the law, and equality of basic rights? Well, in that case, sign me up!
So before plunging too far into a discussion, take a moment to think about whether everyone is talking about the same thing. It’s as easy as asking, “What do you mean by that?”
We should try to recognize when we’re making bad arguments. We never set out to argue poorly. But we often stumble into it, most frequently by not stopping to consider whether the arguments we’re making are at all plausible. The philosopher John Stuart Mill, in his great essay, On Liberty, pointed out that, “while every one well knows himself to be fallible, few think it necessary to take any precautions against their own fallibility, or admit the supposition that any opinion, of which they feel very certain, may be one of the examples of the error to which they acknowledge themselves to be liable.” It does us no good in communicating our ideas to have those ideas based on shoddy foundations. But even beyond harming our capacity to communicate well, it also means doing ourselves a disservice. Who wants to believe things for bad reasons?
One of the best ways to avoid making bad arguments is to spend time studying counter-arguments. And the best way to do this is to read — and understand — our critics.
Any argument made often enough will give rise to counter-arguments. Sometimes the initial argument can withstand them, and sometimes it can’t. Likewise, sometimes those counter-arguments will be strong and sometimes they won’t.
But regardless of whether we believe our own positions are inviolable, it behooves us to know and understand the arguments of those who disagree. We should do this for two reasons. First, our inviolable position may be anything but. What we assume is true could be false. The only way we’ll discover this is to face up to evidence and arguments against our position. Because, as much as we may not enjoy it, discovering we’ve believed a falsehood means we’re now closer to believing the truth than we were before. And that’s something we should only ever feel gratitude for.
Second, even if we’re not wrong, understanding and wrestling with counter-arguments improves our grasp of our own views and makes us better able to articulate and defend them.
Allow me the indulgence of quoting again from Mill, this time at length.
In the case of any person whose judgment is really deserving of confidence, how has it become so? Because he has kept his mind open to criticism of his opinions and conduct. Because it has been his practice to listen to all that could be said against him; to profit by as much of it as was just, and expound to himself, and upon occasion to others, the fallacy of what was fallacious. Because he has felt, that the only way in which a human being can make some approach to knowing the whole of a subject, is by hearing what can be said about it by persons of every variety of opinion, and studying all modes in which it can be looked at by every character of mind. No wise man ever acquired his wisdom in any mode but this; nor is it in the nature of human intellect to become wise in any other manner. The steady habit of correcting and completing his own opinion by collating it with those of others, so far from causing doubt and hesitation in carrying it into practice, is the only stable foundation for a just reliance on it: for, being cognisant of all that can, at least obviously, be said against him, and having taken up his position against all gainsayers — knowing that he has sought for objections and difficulties, instead of avoiding them, and has shut out no light which can be thrown upon the subject from any quarter — he has a right to think his judgment better than that of any person, or any multitude, who have not gone through a similar process.
Mill is absolutely right. Following his prescription is demanding, of course, but it’s worth it if we want to be better able to convince others of our deserved confidence in our positions.