s08e19: Why design experiences, when you can design states of mind?

0.0 Context Setting

Look, just remember to stretch, OK? It’s no fun turning forty and then suddenly not being able to walk for half a day due to excruciating pain in your foot that turns out to be, most likely, plantar fasciitis, which in the great scheme of things is much better than worrying you’ve suddenly developed a blood clot, given *waves hands* what’s happening right now.

I’m writing this (actually, finishing it — I started it a good month ago) on Thursday, August 6, 2020.

1.0 Some Things That Caught My Attention

1.1 Digital Psychology

Boy, do I have a story to tell you today. As ever, this bit is an adaptation of a series of tweets from this morning.


Last (Friday) morning, before I’d even had breakfast (or, if we’re being brutally honest, before I’d really gotten out of bed), I made the mistake of taking a brief look at a notorious “hacker” “news” site to see if there was anything that would catch my attention or, more likely, annoy me. I was not let down!

Today’s thing that caught my attention is digitalpsychology.io, a “free library of psychological principles and examples for inspiration to enhance the customer experience and connect with your users” (my emphasis).

Reader, I looked at the site and did I have Comments. It was full of examples from cognitive psychology and behavioral economics (many of which have failed to replicate). Here’s the first one that struck me, your basic regular anchoring example:

Like I say: does making a plan seem like a bargain enhance a customer’s experience?

There was more, of course. Another example was to use quantity limits, where adding a limit may increase the average number of items in a purchase:

Again, my incredulous face: how is doing this enhancing a customer experience? It felt awfully like doublespeak to me, where my customer experience would be very much enhanced by having just one more wafer thin mint that I really, really, really didn’t want to have.

Another example, then, this one about how you can use loss aversion to make people feel bad about not completing a purchase:

There are more, but my favorite one to include in this part is about how you can increase the response rate for cold emails. I like to call cold emails spam.

So far, so standard, right? Go see an annoying thing on the internet first thing in the morning before eating anything, take a skim through it and get Super Outraged, tweet about it, and after a few examples the structure of the bit requires that you point out these are all dark patterns, i.e. “tricks used in websites and apps that make you do things that you didn’t mean to, like buying or signing up for something.”

So off I go, Internet Crusader, Righteous Righter Of Moral Wrongs In The HTTPSpace and pull Daniel Stefanovic, the creator of the website, into my thread, asking him if he knows about dark patterns, pointing out that the top comment on Hacker News calls out the site for “using pop psychology to manipulate your customers into spending money and giving you personal information”, adding a link to an Association for Computing Machinery ethics case study on dark UX patterns, and just for good measure, a link to Evil By Design, a talk from IXD 2019.

Look at me, doing my internet call-out take-down of Someone Not Being Ethical On The Internet.

Now, I’m going to out myself here. I did the looking-up-a-person thing after I started the thread. Stefanovic puts his name and Twitter account on the site, but it didn’t look like he’d tweeted for over a year. He didn’t appear to be active anywhere else. And then I got a bit worried and sheepish: maybe the site, which I then found out had launched in 2018, hadn’t been updated because something had happened to Stefanovic?

No matter, I’d already decided to tag him in on the thread.

And then I go have breakfast, go have a call with the wonderful Sarah Szalavitz (of which more later, in the Next Part), and I open up Twitter on my phone and see this:

and this:

Which, honestly, blows my mind with all of the heart emojis.

Because, really, in 2020 I wasn’t expecting someone to listen to criticism (especially criticism that, to my mind, was tinged at least the tiniest bit with snark and a hint of meanness) in such a way and respond openly and thoughtfully.

So there you have it. A nice, heartwarming internet story. Lesson for me: less of the snark. Or, at the very least, not snark all the time.

1.2 It’s Simple And Complicated, But Ultimately Simple

I had an overdue conversation with Sarah Szalavitz on Friday about a bunch of things that turned out to be mostly related, but which I can probably parcel out into discrete chunks.

Sarah and I started talking because I asked her for feedback on my thoughts on MIT’s search for a new Media Lab director; Sarah is a friend who was a fellow at the Media Lab in 2013, teaching a course on Social Design.

Sarah wasn’t able to talk at, er, what I guess people call press time, but we ended up talking after she’d had time to gather her thoughts. My apologies to Sarah if I get anything wrong in writing about what we talked about.

One big piece of feedback that I got from many people about my criticism of the job posting and description was that the “would be considered for tenure at MIT” requirement was not-so-subtle coding for we don’t want a Joi Ito again.

Because Joi was one of those outsiders, he didn’t have that academic record (and yet, was able to earn honorary recognition during and after his time at the Lab, precisely because he was now at the lab).

And yet Joi by all accounts was tremendously successful at attracting funding for the Lab and MIT. He was—is—a consummate pitch man, selling a vision of the future in just the same way that Nicholas Negroponte did at the outset of the lab. But I think with maturity and hindsight now (at least for some of us; I fully believe that there were many people skeptical at the time who were not listened to), many of the promises of the Lab are convincing dreams that are probably never going to see the light of day. If I’m being generous (and I’m sympathetic to this), they are things to strive for that we may never reach, but nonetheless may push us further in directions that we might not otherwise have chosen to explore. If I’m not being generous, those visions are in the area of lies or distractions. I hate to equivocate about this, but I do think that where we are and need to be is, sigh, somewhere in between.

I very much regret not seeing the coding for not Joi in the academic requirements. I have to admit that I was quite upset (which is English for absolutely spitting furious) as I read the post and job description and missed that part. It makes absolute sense to me, and I figure it’s the kind of error that results from writing from the hip and just shooting something off. That’s something I’m thinking about as I practice my writing here and figure out what I want to do, and what I want to get better at.


But I digress: Sarah’s point about the academic requirement was that if you kept it, instead of what I was advocating for, then many of the most qualified people, those people with PhDs in the relevant subject areas for what the Media Lab should become, would likely be Black women. And yet removing that requirement would put them at a disadvantage.

So our conversation then went to what felt like the inevitable: does the Lab, and by extension MIT, actually want to change? It is, after all, an institution. The Lab is 35 years old now. MIT is positively ancient for an American educational institution, founded in 1861 and 159 years old now. If MIT wanted to change, then what would that look like? The make-up of faculty would be different. They would hire differently. They would make decisions instead of forming committees, a habit that is clearly not specific to MIT but inherent to any institution old and mature enough to want to protect itself.

And why might the institution not want to change? Why not say the simple thing and not dissemble? Why not just say: we need a fundraiser. We need someone who can get out there and sell a vision and get money so we can do our thing. Because that is a very different job. And then you get into perhaps the complicated part, which is, well, when it comes down to it, which one is more important? The work and the outcome, or the money through which you have the means to deliver the work or the outcome?

(I’ve got a bit below where I argue this isn’t that complicated. It is only complicated when you choose to make it complicated).

There’s another point to this that was raised by Sarah and others who gave me feedback: maybe the Media Lab doesn’t need to exist? Certainly collaboration is good (such an anodyne statement that I can’t imagine anyone seriously arguing against it), but perhaps what’s needed is a more distributed Media Lab across, well, all of MIT?

Or perhaps MIT’s Media Lab doesn’t need to exist any more, and a thousand more need to bloom across the world?


The header for this part was originally “It’s Simple And Complicated”, but in the end I changed it to “It’s Simple And Complicated, But Ultimately Simple”. I updated it because what I feel is important to hang on to is the clarity. I think sometimes things are complicated because we wish not to make a difficult decision and we prefer the path of least resistance. It is actually simple to say that we want an institution that does not have to compromise itself in the way that the Lab and MIT does in terms of funding sources.

It is actually simple to say that we want an equitable institution that seeks to serve all and is serious about accountability and consequences.

It is not complicated. (I imagine there are lots of people getting ready right now to say that no, I’m not being a realist, the question of funding is very complicated). It might be hard and it might be difficult and it might require making some uncomfortable decisions, but those come from what is ultimately a simple decision. And I know it is easy for me to sit on the outside and throw commentary and opinion like this. I know that there is only so much money and only so many places that it may come from. But some of those places — like Epstein — may not be sources we are prepared to use. That, I’d say, is a simple decision, and then the rest following from that is necessarily constrained. So we don’t have Epstein levels of money. Great. Then there are other things we can do, and we may be limited in resource so there are other things we cannot do.

I’ve been reading Emily Nussbaum’s collection of essays on television, I Like To Watch, which I can’t recommend highly enough. (It isn’t just about television at all). Nussbaum writes about poetry by Pearl Cleage and Cleage’s push to find clarity and draw lines instead of blurring them when talking about how Miles Davis abused his wife. Here’s how Nussbaum writes about Cleage’s poetry:

[Cleage writes] “How can they hit us and still be our heroes?…Our leaders? Our husbands? Our lovers? Our geniuses? Our friends?”

She concludes with two sentences. The first is “And the answer is…they can’t.” The second is, “Can they?”

Some of us so frequently look for the can they and skirt past considering the much simpler they can’t. Why don’t we?


2.0 Some Smaller Things That Caught My Attention

I saw this tweet quoting Maciej Ceglowski about the longevity of data:

… and I think the problem is actually worse. The data collected about people has the potential to last for a long time, similar to nuclear waste — or, even worse. The data can outlive the institution that manages it not just because of its physical properties, but because right now, it exists in a market environment that puts value on it. It will continue to be bought and sold and passed on and, unlike nuclear waste, copied. It can proliferate.

But I’d argue that in some ways, long-lived personal data is even worse than nuclear waste. While the data itself may live, the context which makes the data understandable and useful decays much more quickly because it likely [citation needed] has not been collected. Frequently what may make data valuable is the environment and context in which it was collected, and that context and metadata gives the collected fragment of personal data meaning. Otherwise it is potentially just a piece that can be misinterpreted because it is no longer in situ.

Devoid of context, or worse, misinterpreted into an inaccurate context, or one purposefully inaccurate, the longevity of discrete pieces of personal data might mean that its potential for harm actually increases over time as the context in which it was collected decays.


OK, that’s it for this episode. More later!

Best,

Dan

s08e18: QAnon looks like an alternate reality game

Here's How I Know...

0.0 Context setting

It’s Wednesday, 5th August 2020.

This is a Very Special Episode of Things That Have Caught My Attention. Normally I write a combination of one or two longer-form observations about technology and what it’s like to be human, along with a few shorter links. This episode will be devoted to one topic.

The below is an adaptation of a thread of tweets I wrote on July 10, in response to a thread of my brother’s.

The ideas in these threads are not new—we are far, far from the first people to notice how this kind of online behavior meshes with conspiracy theories—but what I think my brother and I both bring is the perspective of having directly designed and run alternate reality games and their communities.

Since starting the below essay, life managed to intrude in various ways (I always wonder why we say this; as if there’s some conscious experience upon which life cannot intrude, which I’ll write about in the next subscriber-only newsletter episode).

My brother has also written an essay on his blog, What ARGs Can Teach Us About QAnon, as well as a follow-up interview with Charlie Warzel in the New York Times, Is QAnon the Most Dangerous Conspiracy Theory of the 21st Century?

Before you read what I and others have written, though, I encourage you to read work such as Mols Sauter’s The Illicit Aura of Information (2017) and The Apophenic Machine (2017). I wasn’t aware of Sauter’s work when I wrote my off-the-cuff Twitter thoughts; I’m grateful for having it brought to my attention. Sauter also graciously reminded me of the importance of citation and building on networks of previous work, not least of which because it helps move us forward and beyond working on thoughts again and again on our own.

I’m reminded that in the nearly 20 years since my involvement in alternate reality games we are fortunate to have had the practices of digital anthropology, digital sociology and science and technology studies develop. But these fields have had to struggle for funding, for legitimacy and for recognition, not least of which because they have traditionally been the subject of sexism in both academia and wider culture. I’m but one person, not academically trained and merely curious with a personal interest, and it is clear to me that as a society, we might be much further along in how we work with culture and society online if only we had treated these subjects with better care and attention.

We still have time.

1.0 QAnon looks like an alternate reality game

I used to be a game designer, making alternate reality games. Fans call them ARGs, and they’re played by hundreds, thousands, tens of thousands of people around the world who get hooked by a story, a challenge, puzzles and characters. But mainly, their players get hooked by the puzzles. 

I got involved in these games nearly 20 years ago, through a marketing campaign for the Steven Spielberg film A.I. This game, The Beast (so named because an early content production audit auspiciously resulted in a to-do list 666 items long), told the story of Evan Chan, who died under most suspicious circumstances. We needed to figure out who’d done it, and why. 

Set in the future of 2142, The Beast was a maze of websites, clues, emails, phone calls, faxes (it was 2001, after all) and live events teeming with actors. It was such a big deal in marketing and interactive entertainment that year that it ended up being one of Time’s ideas of the decade, and would later be called the first truly successful alternate reality game by The Atlantic. 

The trailer and poster for Spielberg’s A.I. included a credit for “Jeanine Salla” as a “Sentient Machine Therapist”. In credits that include directors and producers and actors, a Sentient Machine Therapist sticks out like a sore thumb, and so Googling Jeanine’s name in 2001 led you to the game’s first website. 

A secret you could discover, all by yourself or, more likely, after a tip from another fan.

(You can read a wonderful summary of The Beast in Cloudmaker days: a memoir of the A.I. game, by my friend Jay Bushman, available in the ACM’s digital library, as part of the book Well Played 2.0.)

Around six thousand people all over the world played The Beast game intensively. Obsessively, even. I know, because I and five other friends, and my brother, were community moderators who helped organize players’ efforts, manning the mailing lists and IRC channels.

The out-of-the-ordinary credit in A.I.’s trailer and poster would end up being called a rabbit hole by players and designers: once you fell down it, the point was to keep you falling deeper and deeper.

As game designers, our goal was to keep our players engaged and having fun. Sometimes, these goals end up getting switched around, or worse, perverted: you can definitely tell the difference between being engaged with something and having fun.

In an ARG, being engaged would mean solving the next puzzle or clue and unlocking a new piece of story, Alice in Wonderland-style. (ARGs love their literary references. The Beast had a character called, of course, the Rational Hatter).  

Nothing is new, of course. We’d learn we weren’t the first to tell stories in this way, as epistolary fictions told through written evidence instead of narrative. Even the name of the genre, alternate reality game, would egg us on to tell a story through as many media as possible to flesh that alternate reality out, to make it feel real. We’d use letters (printed out! On paper!), emails, website contact forms, phone calls, and more. And we’d make the puzzles you’d need to solve to make sense of it all hard.


The key bet that the designers of The Beast had made was to design for the curiosity not of an individual, but the collective curiosity of thousands. To design for the fact that the internet made it easier for people to do things together. They bet that with thousands of players exploring their mystery, we’d figure out a way of coordinating our efforts. So they made their puzzles difficult: a sort of liberal arts meets technology extravaganza, a Voltron of trivia, puzzle-solving and pattern-matching ability. Codes hidden in the Declaration of Independence (imagine being the one person who realizes a strange image superimposed on a graphic is exactly the same as part of John Hancock’s signature), codes hidden in medieval lute tablature, locations encoded in hidden latitudes and longitudes and more.

No one person could possibly be expected to have all of this esoteric knowledge to hand: the only way to win, to solve the mystery, to find out who killed Evan Chan and why, would be to work together. 

The Beast went on for a few weeks. It got so complicated, so sprawling, that I started keeping a webpage — a trail — that acted as a written account of what we learned when, what puzzles we still needed to solve, what loose ends needed investigating or closing off. Where my document was an active to-do list, with increasingly esoteric and challenging puzzles crossed off, my brother wrote a guide: a passive, lean-back narrative retelling of the story, a sort of proto-Television Without Pity recap of the entire experience. For many, this experience of following along with the drama would be much more accessible than the down-in-the-weeds frenzy of speculation and puzzle solving.

In other words, to play this game, to get to the end, our community started making those walls of photos and post-its, notes joined by red string. A trope-y detective show crazy wall. We needed to make sense of everything. We didn’t put this kind of stuff in a Wiki because — remember — it was 2001, and people weren’t really using wikis like that back then. 

There’s a lot of pattern recognition going on, when you play games like this. There can be clues everywhere, so you look everywhere. Sometimes you get a hit. A lot of times you get a miss.

We’d make fun of this, playing The Beast: players would post their ideas with the tag SPEC or even WILD SPEC for particularly outlandish episodes of pattern-matching and red-string threading. If you posted an idea someone had already advanced, there was a chance you might get TROUTed, an in-joke for the players who hung out on IRC.

But for The Beast, and every ARG that followed, there was something important to remember: you’re playing a game that’s designed to be solved.

Right from the beginning, as community moderators and players, we’d talk about the experience of apophenia, the drive our brains have to make connections, to find patterns, to see faces in places.

The use of apophenia quickly became a key part of the genre of ARGs: the compulsion for pattern finding ramped up a million-fold, explicitly designed in. And, unfortunately, variably reinforced, like the most addictive casino slot machines and algorithmic social feeds. Because just like in real life, in these games, not every theoretical connection would be true. Not every imagined connection would pay off. Some of them would, and those pay-offs would be designed to be exciting and dramatic. You say designing for addiction, we’d say designing for engaging narrative and playful, social experiences.

But some people wouldn’t remember we were playing a game. They’d use tools to look up information about a website’s owner and then call that person up or, in some cases, go to their house, a sort of 2001-era doxing. 

When this happened in 2001, my fellow moderators and I would put a stop to that behavior whenever it happened. “This isn’t part of the game,” we’d say. Other players would look for security vulnerabilities in the game’s web servers and hack their way in. We’d say the same thing: “We’re playing a game. We’re supposed to be able to solve this. You’re peeking behind the curtain. We don’t need to cheat.”

But the nature of these games reinforced that behavior. In ARGs, some addresses to physical locations, hidden in images, emails and behind codes to break, were real and in-play. At the end of the day, if you really want to, you can turn anything into a location by massaging numbers enough into the requisite GPS coordinates.

Traveling to the right locations would yield sweet progression of the story, narratives and yet more puzzles to solve in exciting ways: USB sticks, maps, audio recordings, more clues. More cards to put up on the wall. More connections to make. 

Even if you lived hundreds or thousands of miles away, the power of the group was that there would inevitably be someone who lived close enough and was motivated enough to drive, bus or travel by any means necessary to track down that clue. There was drama in watching the chase. Every single person playing got to be the guy in the chair, the trope skewered in 2017’s Spider-Man: Homecoming, providing technical support to the superheroes. And next time, you could be the superhero, too.

Crowdsourcing, as a way of describing how groups of people come together to contribute ideas online, wasn’t even coined until 2006, to describe this kind of emerging behavior.

There have been a lot more ARGs in the last 20 years. The ones I have the best memories of were for the TV show Lost; the Cloverfield series of films; Halo 2’s ilovebees; Nine Inch Nails’ Year Zero album release; and the Potato Sack for Valve Software’s Portal 2. Cicada 3301, a set of online puzzles, is thought of as an alternate reality game, and the inspiration for an episode of Jonathan Nolan’s Person of Interest. Unsurprisingly, many of the techniques in puzzle-solving ARGs ended up being used to recruit for state intelligence services.


In the months after The Beast, when I’d be working on these games myself and designing them, I’d learn about Richard Bartle’s taxonomy of player types, a 1996 paper based on research on people who play multiplayer online games.

Bartle suggested that there were four archetypes players would fall into: killers, achievers, socializers and explorers. His research would also be one of the elements feeding into the early 2000s trend of gamification, of explicitly awarding points and scores and leading to concepts like Microsoft’s Xbox Live Achievements. Although Bartle had based his research on multiplayer online role-playing games, I quickly saw that we could classify every single ARG player into one of his four archetypes, too:

The killers were the ones who wanted to break things, or break other players. They were the trolls. 

The achievers were the ones who just wanted to win, and sometimes winning would include cheating.

Socializers were there for the chat and the company, or a more passive, tv-watching type experience. 

Lastly, the explorers were the ones interested in the story, in solving puzzles, actively feeling and finding out what would happen next. 

I’d refer to Bartle’s framework again and again over the next 20 years throughout my career, applying it not just to games but using it to understand and think about online behavior in general.


A few weeks ago, my brother brought up again a theory that’s been around for a while: that QAnon was like the ARGs we’d made and played together. He thought that QAnon was popular partly because doing the time-consuming research to understand and contribute to the “QAnon community”, to be QAnon, is enjoyable because it’s active, not passive like watching TV. This made a lot of sense to me, and it’s part of what’s scaring me about QAnon, too.

Because if you look at QAnon through a lens of game design, it starts to look a lot like behavior I and my fellow game designers have seen amongst ARG players over the last 20 years. Only, clearly, a lot worse. Every single QAnon behavior I’ve seen feels like it’s at least an order of magnitude more intense than ARG player behavior—but uncontrolled, undirected, and unconstrained.

From the outside looking at how people take part in QAnon, there’s a lot of similarities: being a part of QAnon involves doing a lot of independent research. You can imagine the onboarding experience in terms of being exposed to some new phrases, Googling those phrases (which are specifically coded enough to lead to certain websites, and certain information). Finding something out, doing that independent research will give you a dopamine hit. You’ve discovered something, all by yourself. You’ve achieved something. You get to tell your friends about what you’ve discovered because now you know a secret that other people don’t. You’ve done something smart. 

We saw this in the games we designed. Players love to be the first person to do something. They love even more to tell everyone else about it. It’s like CrossFit.

I brought up Bartle’s four player types earlier, and I see them represented amongst QAnon adherents, too:

QAnon’s socializers are meme-makers, and their success creates achievement and community standing. 

QAnon’s achievers are those who find connections that further the conspiracy, the ones who join the red string together on that board. They play for local fame: to be the first to find the connection. Their achievements are ripe to share and provide that socialization kick. Their local fame is quantified and made instantaneous through retweets, favorites and Facebook reshares. 

QAnon’s explorers? They’re the connection finders, too. But in ways different to ARGs, QAnon’s explorers get to create new material, too. In most of the designed ARGs with a defined story or stories, players only have limited ability to contribute to the world or plot. Explorers in the world of QAnon get to create new evidence. 

And lastly, QAnon has killers. The killers here are, well... charged with murder for actually, horrifically, killing people. And, as in the games world, they’re griefing (trolling and making life horrible for) people who aren’t doing QAnon properly and, crucially, trolling the people who aren’t even doing QAnon in the first place. Like the socializer meme-makers, the killers are spreading QAnon. Unlike most ARGs, QAnon has an in-game sense of enemies: people who don’t even have to be playing the game or aware the game exists, to be considered opponents.

When I was writing about this on Twitter, someone pointed out that a big difference between QAnon and ARGs was that, well… aren’t ARGs just a game? And wouldn’t QAnon provide an even bigger rush because it isn’t a game? I don’t think this quite tracks. Well-designed games, just like any other well-constructed media, are able to lull you into suspending your disbelief. People who play ARGs want to believe it’s not a game partly because, when done right, we’ve designed them so compellingly.

The person who sees themselves in QAnon as a secret hero warrior uncovering the truths is, I think, not that different at all from someone who has that same feeling when playing a game. It’s still the same dopamine, at the end of the day. 

And some ARG players want their games to be real. After 9/11 happened, a few months after The Beast concluded, a vocal minority of our community stepped up and very strongly said they were going to solve the terrorist attacks. My fellow moderators stomped on that impulse: this feeling of achievement, we said, of being powerful and able to solve mysteries was designed so we would feel that way. 9/11 wasn’t designed. Yes, we’d brought together something that felt new and strange — people would call it a collective intelligence — but we’d used it for a maze that had been set down for us by someone who wanted us to solve it. 9/11 wasn’t that. 

This would happen again, and again, and again. In 2013, after the Boston Marathon bombings, a Reddit community declared that it would solve the bombings and find the perpetrator using the power of crowdsourcing. The Reddit community wrongly identified two suspects, one of whom would be later found dead. 

But of course, the issue is complicated and not nearly so simple. It would turn out to be true that the US intelligence community did have a need for more people with Arabic language skills. It would turn out to be true that in some way, a collective intelligence approach might have helped with picking up and amplifying worrying signals. So why not? Why not band together and find that information and do something with it? But it would also turn out, in the 9/11 Commission Report, that those signals were detected and ignored. A collective intelligence, multiplayer game would still, in my opinion, most likely have run into the same systemic, institutional failures. 

I see in QAnon that same ARGish reward of making sense of something and sharing it with other likeminded people. In that way, it’s easy for me to use the lens of game design and see QAnon as a massively multiplayer, distributed, bottom-up, undirected effort that’s strikingly gamelike. Only this game has its tendrils in politics and is a genuine threat to public safety. 

Regular ARGs, the top-down, designed ones with stories, end. When they end well, they end with the players figuring out all of the puzzles and finishing the story. Everything gets wrapped with a bow at least as compelling as the final season of LOST, or at least as definitive. Credits run. When they end badly, they end because the players burned through all the content, all the story, and all the puzzles. 

The problem is, I don’t see how QAnon ends. QAnon is a meme-directed game in the Dawkins sense. It can be understood as a game about an idea that doesn’t really have anyone running it. There’s no singular author, showrunner or writer. There’s not even really a writer’s room. There’s no game designer, no dungeon master. It can make predictions about the world and those predictions can turn out to be consistently, verifiably wrong. What it can do is just keep going and going and going, consuming more links and more information into one giant morass. 

And because nobody is clearly running it, because the board that it’s played on is the real world, anything that exists in the real world is fair game. Ian Bogost agreed, saying that “in retrospect, the obvious, mainstream endpoint of ARGs was just: the actual internet”. In The Prophecies of Q, The Atlantic’s Adrienne LaFrance realized this too: “if the internet is one big rabbit hole containing infinitely recursive rabbit holes, QAnon has somehow found its way down all of them.” This is echoed by M.R. Sauter in a Real Life magazine piece, too: “when we impose patterns or relationships on otherwise unrelated things, we call it apophenia. When we create these connections online, we call it the internet.”

The internet is links, so if the thing that would connect your two pieces of red string doesn’t conveniently exist in the real world, you just… make that content exist, and now it’s linkable. QAnon doesn’t run out. It keeps going. A few previous ARGs had tried different strategies to deal with this problem. One, by Jane McGonigal and Ken Eklund, was World Without Oil, a serious game where players would collectively imagine and solve puzzles to get toward mitigating climate change. In that game, as in QAnon, fan fiction and user-generated content would become canon, part of the game itself: World Without Oil as an earnest PBS stab at moving us one percent toward a better world, QAnon as a sort of feeding frenzy of pattern-matching.


The origin of QAnon is also, like many ARGs, opaque. Sometimes, ARGs present themselves without context, as interesting baubles of out-of-place information on the internet designed to catch attention and ensnare people, rather than more explicitly as parts of a marketing campaign, as they usually exist now: just like the strange credit for a Sentient Machine Therapist that drew me in back in 2001.

So then a reasonable question about QAnon is whether, in its own language, it’s an “op” — a weaponized, intentionally designed propaganda operation. And through this lens, could it be that the people who started it had a history of designing or playing ARGs?

I don’t think the answer to that question is necessarily helpful or relevant. Over the last 20 years, the idea of solving complex mysteries has seeped even more into our culture. JJ Abrams, Damon Lindelof and Jeffrey Lieber created 2004’s LOST, which itself had an ARG for viewers to be involved in, both during the show’s seasons and while it was off air. When we launched the Perplex City ARG in 2005, Dan Brown’s massively popular Da Vinci Code had been out for two years. Then, in 2007, Abrams would deliver his Mystery Box TED Talk, and a more participatory genre of television, one that mentally involved viewers in figuring out a narrative, would become more and more popular.

(If you do want to go down this rabbit hole, though: some of us in the ARG and game design community did have conversations with DARPA, the U.S. Defense Advanced Research Projects Agency, also famous for instigating the internet. Representatives from DARPA allegedly attended ARGFest, a festival for designers and creators of ARGs in Seattle, in 2013. And in my community’s defense: what if we hadn’t? If the US military wanted to understand ARGs did we want them to understand it from the point of view of optimists and people concerned about the ethical implications, or others who might not care so much?) 

The pool of people who’re familiar with the idea of looking for clues and connections has, I think, only grown over time. The behaviors that we found and had a hunch for, then used to design games in the early 2000s turned out to be prevalent as more people came online and now, to a broad degree, can be relied on. 

The idea that an object could exist, inviting collaborative problem solving and discussion had coalesced in popular culture: it had already leaked out. In many cases, it also collided quite happily with online fandom. 

(I would know: at the studio my brother and I ran, we created an ARG for fans of Muse, an international puzzle-solving treasure hunt going deep into the works of Zbigniew Brzezinski, as part of the promotion for the band’s 2009 album The Resistance. Fans ate it up.)

But because QAnon isn’t a game — there are just parts of it that look very, very game-like — it can do things no game can do, or would choose to do.

In the QAnon not-a-game, Bartle’s achiever type can win something very different from what players of videogames like Dota, League of Legends or Fortnite can win.

Games don’t let you have a shot at running for Congress, for one. On July 3rd, Cameron Peters at Vox wrote an explainer about the QAnon supporters winning congressional primaries; Media Matters reckons that so far there are 9 QAnon supporters running for office in November. While eSports are making money for gamers, if QAnon is a game, its winners are getting ready for public office.

And our media ecosystem means that achievement translates into social success, too: there are news networks ready and waiting to heap praise and spread their message, from the traditionally problematic Fox News to horrifically unpatriotic upstart One America News Network.

This media ecosystem recognition of players mirrors the design of early ARGs, too. In the early stages of the genre, and following The Beast, it was easiest for ARGs to get funding as marketing campaigns. It followed, then, that any ARG that managed to include a stunt that would gain media coverage for a player would be, well, more successful as a campaign, with even more earned media impressions and organic PR. So, then, you could see ARGs as optimizing for more opportunities to create newsworthy events.

In 2020, QAnon exists in an America where there is already the idea of a Bush administration official offering an unsourced, apocryphal quote about creating reality (“[and] while you are studying that reality—judiciously, as you will—we’ll act again, creating other new realities, which you can study too”).

The thing is, viewed through the lens of what-if-it-were-a-game, the behavior of QAnon adherents doesn’t look that deranged and crazy. It looks like obsessive fandom for a TV show, or, as others have pointed out, the behaviors of certain religious movements like evangelicalism. I think that looking at QAnon through a lens of games, though, might help us understand the reasons for the behavior, and why that behavior continues and is rewarded. I don’t think this means QAnon behavior is right, in any way. I do think it makes it understandable.


My brother and I are hardly the first to have noticed the connection between ARGs and conspiracy theories, nor ARGs and the rise of QAnon. 

Mols Sauter, an Assistant Professor at the University of Maryland College of Information Studies, wrote two excellent articles about how online behavior intersects with parts of ARGs: The Illicit Aura of Information in LIMN magazine, and The Apophenic Machine in Real Life magazine.

In The Illicit Aura of Information, Sauter analyzed two conspiracy theories and how they treated dumps of data in the form of internal emails: the 2009 Climatic Research Unit hack (“Climategate”), and the 2016 hack of Hillary Clinton’s presidential campaign (“#pizzagate”, a QAnon precursor).

In widely-played ARGs, I see reflections of Sauter’s theories of how people treated information in Climategate and #pizzagate. Sauter writes that acquired caches of emails have an illicit aura for three reasons: they gain authority because they’re raw; the act of dumping out information cuts out the role of experts who can confer legitimacy; and they’re relevant because they’re secret.

In 2004, in Perplex City, one of the ARGs I worked on with my brother, I’d write ostensibly private emails between researchers from another world illicitly using a portal to talk about shoes. Players would discover these emails by printing out the teaser website of our game, which would have entirely different content than what was shown on the screen. These clues led players to find a cache of emails that weren’t meant to be discovered. 

Sauter writes that information like this has an aura because it’s raw: and the emails we wrote for our players certainly ticked all the boxes with “imprecise, casual references, professional jargon and elision, in-jokes, and other snippets of not-readily-interpersonal ephemera”. I mean, it’s textbook. We’d even play on the discovery of a cache of emails by having the system they were accessing present them as cached copies of emails that should’ve been deleted.

Often, in an attempt to level the field amongst players, we’d attempt to dump out information en masse. Games that had been running for a while would establish gatekeepers, moderators in just the role I’d played earlier, to determine what information was in bounds and out of bounds. And of course, we’d always play on secrecy as a way to lure players in.

I used to have a provocative question for my fellow game designers: wouldn’t it be great if there were a rom-com ARG? What would that even be like? Why wasn’t there an Amelie-style ARG? (The closest I feel we ever got was Pemberley Digital’s The Lizzie Bennet Diaries, a sort of transmedia digital show adapted from Pride and Prejudice told through vlogs). 

Now, after reading what others have written and thinking more about QAnon and conspiracy theories on the internet, I’m not sure if ARGs would ever really work outside the conspiracy genre. 


So what are we supposed to do about QAnon? How do we stop it?

QAnon isn’t a game, but when looked at through a lens of alternate reality games, maybe there’s a sort of anti-game we can design to stop it? Or we could use game design techniques to slow the players down, divert them, distract them or render their activity safe.

I worry that pretty much all tactical interventions won’t work. Because deep down, I think the drivers for QAnon are environmental and systemic. Elizabeth Svoboda at Discover wrote recently about why COVID-19 is turning so many people into conspiracy theorists.

Research shows that openness to conspiracy theories comes from causes like low socio-economic status, feeling unsafe due to a lack of agency and control in your environment, and low or negative social connections. 

But what if we looked at QAnon as a piece of media that’s like a game? Then we could see it as an activity that competes for time just like Netflix does, or doomscrolling, or arguing with people on the internet. 

A game design approach to stopping people from “playing” QAnon might be to start making it boring. But again, I worry about the environmental factors. QAnon has to be less boring than the rest of your life. What if the rest of your life isn’t really that great? What if TV and videogames are always going to be more interesting than even a more-boring QAnon?

Put more clearly: you’re in a dead-end job (if you even have one, in our pandemic times). The job prospects in your area aren’t even that great to begin with. You’re socially isolated. Until recently, most things were closed anyway. Government, at all levels, isn’t doing much to help you, and even if it has promised to help you, none of that help has actually arrived. Bills keep coming, because nobody’s helping you out with rent. 

But you could be a winner at this game. 

You could discover that new piece of evidence, that connection no-one else has seen before. 

You could throw it out onto a forum or Twitter or Facebook and get the rush of social approval. 

You get to lose yourself in it because it keeps going, and going, and going. And as you’re doing this and reading and researching, every piece you learn works together to explain the world to you, and explain why the world’s been so shitty to you. There’s actual TV that agrees with it! Everything else? That’s lies. It is as if the story, the hook, the teaser and trailer were evolutionarily selected for disadvantaged and dispossessed people in fear.


I think there are two ways of looking at what to do about QAnon, and they deal with two separate but related issues. The first is that QAnon is merely the latest in a long line of conspiracy theories, and in this way is a symptom of a wider malaise and hurt in our societies. 

The second is the medium of the internet and how it encourages or enables certain innately human behavior. Broadly, I don’t think there’s much the internet can do, directly, about the first issue of the horror of inequitable, indifferent, late-stage capitalism. But there is much that can be done directly about limiting the metastasizing of conspiracy-supporting, harmful group behavior. On July 21, Twitter announced taking “further action” on QAnon, and specifically on “behavior that has the potential to lead to offline harm”. This further action included removing 7,000 accounts and limiting 150,000 that had posted QAnon material.

Twitter could have done this earlier. Reddit could have acted earlier to stop or nip in the bud the community response to the Boston marathon bombings. But those actions come with costs: I feel that in the same way my co-moderators were dedicated to reading every single message that was posted to The Beast’s mailing list (in much the same way that moderators proactively manage communities like Metafilter), healthy community management requires consistent and persistent human involvement. There was nobody in the loop, until it was too late.

But I want to come back to the first point and try not to be too pessimistic about what can be done about QAnon. Because I’ve written before that there are truly good things about the internet: that the same platform of Twitter allows people to truly express what it is like to be them, from discovering and sharing that others share your ability, or inability, to visualize objects in your mind’s eye, through to learning exactly how people perceive color differently. These things are important, at a basic level, because they remind us that we are more similar than we are apart. I’d argue that the internet is still the platform that has the most potential to connect us and build shared understanding.

What we need, ultimately, is to create safety for people. You could try to fight QAnon with a game designed with benevolent ideas, but those ideas need to be as rewarding as, if not more rewarding than, whatever QAnon can provide. (And remember: QAnon’s rewards aren’t designed in a top-down way. They’re emergent.)

Arguably, there are not-a-games that do just this, like citizen journalism efforts or, in a very wide sense, the collaborative editing of Wikipedia. But my gut feel is that those experiences are too righteous in their rewards, and not as visceral. 

I think these rewards are going to be incredibly difficult to find and deliver in late-stage capitalism. 

I think the best way to fight QAnon, at its roots, is with a robust social safety net program. This not-a-game is being played out of fear, out of a lack of safety, and it’s meeting people’s needs in a collectively, societally destructive way.

In the absence of disruptive, positive social change, I fear the best we can do right now is damage limitation, and platforms actively managing their communities.

It’s not impossible. It’s just hard, and we’ve got to want to do it. 


If you made it this far, thanks for reading, and please consider subscribing to support my writing. The customary subscribe button is below.

As ever, I love receiving notes. If there’s enough interest, I may do a follow-up.

Best,

Dan

s08e17: The One About Risk

Argh, now I’m angry

0.0 Context setting

It’s Thursday, 2 July 2020, which also means it’s my wedding anniversary. I also know it’s my wedding anniversary because Wimbledon should be happening around now. I do not know that because I care that much about tennis; I know it because my partner cares very much about tennis and was not able to watch the final while we were getting ready. (I did, because all I had to do was pretty much put on a suit and try not to ruin it. This is probably not very fair).

It’s also close to the fourth of July weekend in the U.S., which has normally been fraught ever since 2016, with the promise of a clash of jingoistic white supremacists meeting, well, nice people, but this year it’s even worse because for some reason, the U.S. has made actual decency and caring about other people by simply putting on a mask a fucking political issue.

My one thing today is that as I had a very brief call with someone about some potential work, I could hear their toddler in the background plaintively asking to play with them and clearly wanting some attention. I empathize with this! (Also, to be clear, children do at some point need to learn that they are not the center of their parents’ worlds and cannot get all the attention all the time).

Anyway: this call is happening and I keep saying to this person, who is repeatedly apologizing for the interruption, hey, I have kids, we can totally have this conversation later and they’re also saying yeah, I wasn’t supposed to work today, to which I say seriously, this conversation is not that important.

My point is this:

If you’re having a call or a meeting with someone and it’s being interrupted because they have to go take care of someone, think about whether you really need to have that meeting right there and then.

It can be the easiest kindness to say that the meeting isn’t that important and that it can be rescheduled. Even just saying that can be a kindness.

If you can, think about whether your meeting can wait. Think about whether, in the grand scheme of things, what might have felt urgent can be less important than them being present with someone who needs them to be present.

They might not take you up on your offer - that’s absolutely okay, because it’s their choice. I bet it’s likely, though, that they’ll appreciate you making it. (Not that you’re doing this for appreciation!)

This isn’t just about parents, either. I nearly wrote parents above, because I am one and it’s my own experience, but this is about anyone who’s a caregiver. There are many people who have to care for other people — and animals — too.

And, sometimes, the person who needs to be cared for might be the person you’re talking to, especially if they have a chronic illness they’re managing.

1.0 A Thing That Caught My Attention

1.1 The One About Risk

This one is an adaptation of a Twitter thread from earlier today.

Bobak Ferdowsi, who was Twitter’s main character in 2012 when NASA’s Curiosity landed on Mars and he was visible in the livestream, having worked on Curiosity in his role at JPL, posted this today about how NASA assesses/thinks about risk:

It’s a table. The likelihood of an event happening is on the x axis, and goes from rare on the far left, through unlikely, possible, likely and certain on the far right. On the y axis, starting at the bottom, is the impact of an event: going from negligible through minor, moderate, critical and catastrophic at the top.
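(If it helps to see that concretely, here’s a minimal, made-up sketch of what a matrix like that looks like as code. The two axes are from Bobak’s image as described above, but the way I’ve bucketed combinations into low/medium/high bands is my own illustrative assumption, not NASA’s actual scoring.)

```python
# A rough sketch of a likelihood x impact risk matrix.
# The low/medium/high cutoffs below are illustrative assumptions, not NASA's.

LIKELIHOOD = ["rare", "unlikely", "possible", "likely", "certain"]
IMPACT = ["negligible", "minor", "moderate", "critical", "catastrophic"]

def risk_level(likelihood: str, impact: str) -> str:
    """Combine the two axes into a rough risk band by summing their ranks."""
    score = LIKELIHOOD.index(likelihood) + IMPACT.index(impact)
    if score <= 2:
        return "low"
    if score <= 5:
        return "medium"
    return "high"

print(risk_level("rare", "catastrophic"))  # medium
print(risk_level("likely", "critical"))    # high
print(risk_level("unlikely", "minor"))     # low
```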

Bobak shared this because if you’re a person alive right now it’s likely (maybe even certain) that you’re thinking about risk in terms of what to do every single day, thanks to the COVID-19 pandemic.

I shared Bobak’s tweet with some context, because my wife and I had cause to use it a few years ago.

We’re lucky to have two wonderful, healthy children. But, as for many other parents, this might not have been the case. In one of the early screenings/scans for our second child, the result came back with something concerning about his nuchal fold measurement.

It was not clear what was going to happen. It felt like there was a high probability that our son would be born with some kind of complication, and in the end, it was one of those situations where you just don’t know until the baby’s been born.

When we got those first results — and for the rest of our pregnancy — we had to talk and think and make decisions about probability, risk and impact.

It started with the meeting with our genetic counselor. That’s someone who talks to you about these results and explains to you what they mean.

What I had a problem with was the way that risk was presented. It was something like this, and I apologize for getting anything wrong. (I don’t have any of the documents to hand, so I may be misremembering, or getting some details across in a wholly inaccurate way. Don’t use this as any sort of medical advice.)

The nuchal fold measurement (I remember now that it was from an ultrasound) meant that our risk of a condition had, say, doubled. We kept talking about doubling or higher and I wanted to ask: doubled from what?

It turned out that the risk had doubled from something small, to something still small. Say, from 2% to 4%.

I got a bit upset. Could we not have been told that the risk was 4%? It felt mean to lead by saying that it had doubled, or to present the risk in terms of its having doubled. What mattered, to me, was that the risk was 4%.

In the meeting, my wife and I talked about what we might have to roll on a set of dice to get a 4% probability of a result.

4%, in the grand scheme of things, is not very likely.

But it’s more complicated than that. You might not care about a 4% risk of something trivial. You might not even care about a 4% risk of something that would require routine surgery. You might not even care about a 4% risk of something that would require non-routine surgery, but with a set of consultants nearby who are regarded as experts in that area and covered by your insurance.

On the other hand, you might really care about that 4% risk if it would mean intensive care, and your personal situation means it would be difficult for you to be a fulltime caregiver for a particular period of time, more than being a parent or caregiver without complications.

You get the idea.

So now we had to talk about the kind of event as well as the likelihood of the event.

There were some things that we were relatively comfortable with. There were others, like the agonizing discussions I imagine many parents have about Down’s syndrome and what they might or might not do in the event of a particular outcome.

But what struck me most was the way probability was talked about and how we were, well, counseled about it. Because probability is really hard to understand, and it’s not something we intuitively get. There are tools and games and physical analogies to make a probability real as opposed to something abstract, as opposed to just a number.

I mean, for god’s sake, in this case, I would’ve killed for one hundred white balls in a box and for the counselor to be able to take four out and have us put four green ones in, shake the goddamn box and get us to pick a ball out. And I feel like this is dumb, because I just thought of it off the top of my head.

Look, I’m just going to go with this and explain my thinking: I think the above balls-in-the-box example is interesting because you know that there are four green balls in there. You put them in there. You know also that there are ninety-six white balls in there. You know that you could pull out one of the green balls. You just had a talk about what the green ball means.

The nature of the example changes if you’re asked to put twenty balls in. Or forty. Or forty and you can see them all there in the box, nestling in amongst the white ones.
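(And because I apparently can’t leave an idea alone, here’s a rough, hypothetical sketch of that box of balls as a simulation. The 4-in-100 figure is the same made-up number from above; this is purely an illustration of what a small probability feels like, not medical advice and not a model of the actual risk.)

```python
# A minimal sketch of the balls-in-a-box idea: 4 green balls among 100,
# drawn once per "go". Illustrative only; the 4% figure is the made-up
# example from the text, not a real medical statistic.
import random

def draw_once() -> bool:
    box = ["green"] * 4 + ["white"] * 96
    return random.choice(box) == "green"

trials = 100_000
greens = sum(draw_once() for _ in range(trials))
print(f"Drew a green ball in {greens / trials:.1%} of {trials} draws")
# Prints something close to 4.0%: the overwhelming majority of draws come up white.
```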

I am not pretending to say that I know anything about statistics (I do not! I have never formally studied statistics!) or that I know anything about probability (I do not! I often get it wrong! The extent of my knowledge about probability is reading the CIA’s report, and follow-on reports, about how people interpret different textual descriptions of probability, like what people think likely means — go read it if you want, but it’ll probably be exactly what you think it will be, and therefore terrifying).

But I remember thinking: hang on, this feels a bit weird. I don’t really like translating what our counselor is saying in our meeting, trying to check if I understood it properly, and then checking with them if I’ve correctly interpreted what they’ve communicated in a way that makes more sense to myself and my partner.

The next part, of course, is that once you’ve had a discussion about the impact and the risk, you do need to talk to a medical professional about what to do if that result happens because, well, it impacts the… impact. (Like I wrote above, I guess.) The genetic counselor, though, is not a medical professional and will quite unhelpfully and yet honestly tell you that they’re really not the person to talk to about what happens next if this happens, because, duh, they’re a genetic counselor.

This particular story is yet another illustration of how fucked-up healthcare in America is, but having grown up in the UK I’m reasonably sure that any sort of specialized healthcare without an explicit carveout for a patient advocate in every meeting making sure everything is coordinated will be a similarly frustrating clusterfuck.

2.0 The Other, Smaller Things That Caught My Attention

You know, I’m sure there are a bunch, but I’m not going to write about them today because it’s time for family dinner.


A short one today, which makes a change. (I mean, I say shorter. If I scroll up it still looks like a bunch of words).

I hope you’re doing as well as can be, and I hope the people you care about are doing as well as can be, too.

As ever — send me notes because it’s always nice to hear from people — and, if you’re not a subscriber, consider subscribing and supporting my writing. (Button’s at the end, etc).

Best,

Dan

s08e16: What's the EGOT for Technology and Arts?

also: Trying To Figure Out What To Do About Digital, Still

0.0 Context setting

It’s Wednesday, 1 July 2020 and I’m listening to Carly Rae Jepsen’s Fever, which just finished, and the Chemical Brothers’ Hey Boy Hey Girl, which just started, which is as good a description as ever of my iTunes library (at my first startup, it was called a piranha tank).

It feels like everything is falling to pieces outside, but remember that fundamentally, most people are actually nice to each other and when people aren’t nice to each other, it’s generally out of fear and hurt.

There’s someone near you today who could probably do with a hug, if you can do it safely. It’ll probably be good for both of you.

1.0 Some Things That Caught My Attention

1.1 Your regular reminder continues

See my previous regular reminder.

This is your regular reminder that facial and body recognition technology can be racist. Streetsblog reported on a study showing AVs [autonomous vehicles] May Not Detect Darker-Skinned Pedestrians As Often As Lighter Ones; here’s the underlying paper, Predictive Inequity in Object Detection by Benjamin Wilson, Judy Hoffman and Jamie Morgenstern from Georgia Tech [arXiv Vanity version].

(arXiv Vanity is a Very Nice Website that “renders academic papers from arXiv as responsive web pages so you don’t have to squint at a PDF”.)

I had a throwaway comment that I have to admit I couched in a gee-aw-shucks-I’m-just-doing-a-bit: “at this point, is arXiv just blogger-but-for-scientists?” [tweet].

See, the bit is funny because arXiv is an open scientific site for pre-prints of papers, which means they haven’t been peer-reviewed yet and published in, I guess, a “proper” journal. Many of the papers on arXiv, at least from what I’ve observed recently, aren’t from academics, and in the field of artificial intelligence and machine learning can commonly come from researchers working at private companies like Facebook, or startups. (Interestingly, Apple had to deal with this issue in hiring machine learning talent by coming up with its own journal, because it can be a significant detriment to your career if you just stop publishing, which used to happen at the notoriously secretive company).


1.2 Set a billion dollars on fire, just for kicks

I had a thought the other week about Quibi, a… video platform? founded by Jeffrey Katzenberg and CEO’d by Meg Whitman.

Katzenberg is the Hollywood guy who you might remember as chairman of Walt Disney Studios from 1984 to 1994, before leaving to be the K in DreamWorks SKG.

Meg Whitman is known in the tech industry as the former CEO of eBay and, as I’m recapping the pertinent points of her Wikipedia page, I have to admit I completely scrubbed from my memory that eBay bought Skype in 2005 for nearly 28 times the 2016 budget of the U.S. National Endowment for the Arts, i.e. ~$4.1 billion. Whitman then went on to become CEO of Hewlett-Packard, and I honestly can’t remember if that means she was in charge of printers-and-ink or computers, which you can take as an indictment of fucked-up corporate culture and naming.

Anyway, their background is important because in Quibi, you have a company positioned at, as they say, the intersection of liberal arts and technology. Quibi’s pitch is that it has a platform exclusively built for short-form video (hence quick bites), but that’s not enough.

The technological innovation that Quibi brings to your mobile device is that its videos seamlessly switch between portrait and landscape orientations as you turn your phone, which is something that we have always wanted. I mean, I totally remember all the times I’ve been watching landscape video and wondered: what is just above the frame that I can’t see right now? To be fair, being able to rotate video is much more compelling when the video is shot in portrait, but again, I can’t say it’s been a burning need for me.

I like to think that one of my superpowers is sideways analogies and comparisons that illuminate something about the subject in a helpful or insightful way, I guess, a bit like the arXiv example above. But the Quibi one coming up took a bit more thought.

Quibi has clearly been stuck in the back of my head since I tried it that one time and then let my trial subscription lapse, because the other day, I was wondering what an equivalent to Quibi might be for video games:

To be clear, the Quibi analogy is the pumping of an absurd amount of money (more than a billion dollars) into a bet on the combination of a new technology and “content”, in this case, video, involving someone eminently experienced in that field of content. The expected payoff is huge, justifying the large amount of money invested at such an early stage, and one of the other attributes of this kind of pattern is using that money not only to develop a new technology, but to commission a lot of expensive material (apparently, up to a billion dollars’ worth) from “content producers”. The investors in Quibi’s case are a bunch of “traditional” media companies, film and television studios, the kind that you might simplistically put in the old media bucket, with divisions frantically Trying To Figure Out What To Do About Digital, Still.

(Yesterday — was it yesterday? who knows anymore — Lululemon bought Mirror, one of those expensive at-home connected device exercise things that has a subscription model, and I briefly quipped that if you love non internet-native corporations buying “native” online companies so much, then name three that continue to be profitable and have not been destroyed, accidentally or through neglect or otherwise.)

Last year, Quibi had names like Anna Kendrick, Guillermo del Toro, Don Cheadle and Liam Hemsworth attached to it, along with Tyra Banks, Chrissy Teigen, Jennifer Lopez, Lena Waithe and Steph Curry [Variety]. The strategy of throwing money at creative people in exchange for content on your platform is not a new one.

The similarity Quibi has to 3DO in my head is best explained in this tweet by Christina Warren:

In the end, the consensus of the thread I started was that Nokia’s N-Gage was probably the closest to Quibi, summed up quite well by Michael French, who was previously editor of Develop, the games industry developer trade magazine in the UK, as well as MCV:

At least, I like Christina’s take, because she’s with me on 3DO being a proto-Quibi: in my head, 3DO was right there in the middle of the Hollywood, Digital and Games space, with jealous parties from each camp wanting the recognition, revenue, influence and trendiness that the others had. And, like Christina says, there’s a solid Trip Hawkins/Jeffrey Katzenberg parallel.

But, all of this is just setup to the actual interesting part!

To me, there are these repeated forays into tech and entertainment trying to get into bed with each other and, so far, most of them failing, in a gamut that ranges from spectacularly (the really big, high-profile bets, like Quibi and 3DO etc) to very, very quietly (um, I don’t know. But I’m sure there are some).

Again, Christina describes it like this: you do need a true merger of tech and entertainment. I mean, you don’t really, but if you want to be successful and you want to realize the potential that you see there, then yes, you need an effective, true merger. Christina says this is what makes Netflix successful. That said, I have Friends In The Industry who are skeptical about Netflix’s chops on the entertainment side of things - many of the successful shows on their platform are re-treads of existing, popular material. Netflix certainly does know how to reduce risk, but they feel a bit like they’re exploring a landscape that’s only as good as the data they have, not, irritatingly, accounting for that ineffable human instinct or taste. They certainly haven’t found a way to magically replicate that success every single time.

If you’ve read this newsletter before, then you may be unsurprised to see me jump out and tout Pixar as the One True Best Successful Example of a merger of technology and entertainment, and it frustrates me that we don’t have as many other examples as I feel we should do.

In Pixar, we’ve got an organization that is industry-defining in both the arts and technology fields. In technology, you’ve got industry-defining achievements and award-winning advancements in algorithms, software and operations. I mean, Ed Catmull got a goddamn Turing Award for fundamental contributions to computer graphics last year, and now I want to come up with the tech/arts equivalent of the EGOT, the Emmy, Grammy, Oscar and Tony Awards. And then on top of that, you’ve got both the critical and popular acclaim that Pixar’s delivered: a stupendous number of awards and industry recognition to go along with box office success.

This isn’t to say that Pixar is perfect - they aren’t - but the partnership of Catmull and Lasseter set out to achieve something very specific: computer-animated movies, which required art and technology driving each other on in the service of stellar storytelling. Pixar could not have done what they did, or intended to do, without an intimate, critical understanding and execution of technology that didn’t even exist (and still doesn’t).

I get frustrated about this, and have written about this before, because I believe that interactive media in general has yet to have an undeniable breakout moment to the same degree of critical success. I mean, yes, there are many games out there that win awards and have affected popular culture, and I love and have played many of them.

But I broadly agree with my brother that “all AAA game writing [is] shit compared to the most mediocre films and TV” and yes, I know he’s being a bit hyperbolic (there is some really shit film and TV out there).

Leigh Alexander (a brilliant game writer) has pretty much the defining insightful comment on that thread, pointing out that in general, AAA games “don’t want great storytelling” because

But I digress.

One of my life’s ambitions is, as I somewhat carelessly say, to “do a Pixar but for interactive media” and man, is it hard. I used to think that one of the reasons it was hard is that you really need to get that creative/enabling-technology partnership right, and for a long time one of the problems was that the “good” writers - the writers that the younger me would want to work with, who weren’t dead and didn’t break out in panic attacks about having to write (sorry, Douglas Adams) - just didn’t exist.

(The usual story I tell here is that when I was making multiplatform/360 extensions to novels and TV properties in the UK, we’d get excited about the storytelling and worldbuilding potential, but most of the time, the authors and writers/directors weren’t interested. Because, and fair play to them: they didn’t want to do that. They had what they wanted to do, which was invariably, what they were actually doing: write a novel or produce a TV show).

My hope now is that there’s a generation of writers who have grown up with interactive media and love it and see the potential and, frankly, are actually good writers, so that the pool of people to work with is so much bigger. But the games industry is a bit weird, what with its fascination with that simplest of verbs, HURT, or SHOOT, or PROJECT FORCE, rather than, well, so many of the others. Yes, there are brilliant people emerging, like Meg Jayanth (who interned with us at Six to Start!), like Leigh Alexander (of whom I am very jealous in terms of writing talent), like Robin Sloan, like Naomi Alderman, but, well, let me just insert this from Naomi:

I mean, come on. Naomi wrote Disobedience (adapted into a film), and she wrote The Power (which you may have heard of, I mean, Barack Obama liked it, and it’s being adapted into a series for Amazon). Naomi’s also the writer behind Six to Start’s Zombies, Run! and by all accounts wants to write games, but… crickets?

So I guess I am optimistic and angry, in that people can get a billion dollars’ worth of funding for something like Quibi, but really, all I’d like is a hundred million dollars over a few years and the time and space to build a team.

I do think a bunch of attempts were too early. I’ve written about this before. But I do think we’re ready, and I do think there are people out there with amazing stories to be told who’re yearning to do something different, new and not just Hollywood-but-you-can-rotate-your-phone, and it’s going to be hard and require making new things and gosh, at forty years old, have I now learned the virtue of patience and of not going for the biggest thing first and the biggest splash.

I suppose I should get back to that pitch doc and figure out how Patreon and Kickstarter work, right?

2.0 Some even smaller bits that caught my attention lately

Look, this’ll blow your mind. Via Deb Chachra, do you remember that time that

the North and South of the United States “were on different track gauges until one rail of every Southern track was bumped over by a few inches and re-secured in a forty-eight hour period.” [Twitter]

That conversation/revelation itself came from one that did the rounds earlier that day, that not only does the U.S. persist in using imperial measurements, but the length of a foot isn’t even consistent across U.S. states.


Ars Technica reported on an unpublished paper that identified “more than 1,000 sequences that incorrectly trigger smart speakers”, like Alexa waking up when it hears words like “unacceptable” and “election”. There’s a brief write-up from the team on GitHub. This, of course, is fine.

It is left as an exercise to the reader to imagine scenarios in which this could lead to someone’s death. (Also, death is a bad thing that can happen, but clearly not the only bad thing that can happen.)


My friend Paul Bennun is the Chief Content Officer at KCRW in Los Angeles and is looking for pitches relating to Afrofuturism. I bet some of you reading might either a) be that person, or b) know a person. Pass it on.


OK, I think that’s it for today.

Thanks, as ever, to the people who’ve written in with notes, and let’s see if I write again tomorrow.

Best,

Dan
