s08e23: The Defensive Patent Pool

November 7, 2020: It's Day One

0.0 Context Setting

I started writing this on Saturday 31 October 2020, which was Halloween (if you’re in a culture that does Halloween), and two days before the U.S. Presidential Election. I say two days but it was actually several million years ago.

I’m writing this paragraph on Wednesday 4 November 2020.

I’m not going to write about how both Halloween and the U.S. Presidential Election are their own horrific existential reflection of our times because, well, gestures at everything.

Stuff I’ve done lately:

  • bitten the bullet and bought a whole set of clamps, articulated arms, double ball joints, camera mounts, pass-through AC adapters so that I can join all the other oooh look at you with your fancy camera video conferencers and hook up my SLR to my computer to use as ridiculously expensive webcam.

Currently listening to: Mint Royale’s 1999 album, On The Ropes. This album came out during my first year of college. It’s fair to say it has memories attached to it.

1.0 Something That Caught My Attention, And Then Something That I’m Thinking About As A Result

There are two parts to this one:

1.1 Shot: Are you there, Alexa? It’s me, Dan.

It started when I saw a tweet from Lilly Irani, a professor at the University of California San Diego’s Department of Communication. Irani had pointed to a Google patent [US9037455B1, 2015, Limiting notification interruptions] that, in her words, was “for a device that delays notifications by listening to whether there is human speech and waiting until the human is done talking”, with the question that “while seemingly polite, isn’t this also your phone always listening to you?”.

(Aside: there’s a certain deliciousness here: Irani points out that Tristan Harris, now of the Center for Humane Technology, is a coauthor).

Anyway, first some specifics: the phone isn’t always listening. The patent’s claim covers a navigation app that’s running, and a set period of time during which it wants to deliver a direction notification.

In a private conversation, a friend and I agreed there’s a lot of detail to tease apart here, which is part of the reason society finds it difficult to engage with ethical technology issues like these. First: is it the quality of the sensor, i.e. what’s being sensed, that’s at issue? There are other continuous sensors, like heart rate sensors that are “constantly” on the lookout for whether you might have a particular heart condition. My point there was that there is a difference in degree: a heart rate sensor in the configuration of a wrist-worn watch can only sense one heart rate at a time, whereas a hypothetical phone that’s “listening” can “hear” everything (in the case people are concerned about, “everyone”) around it.

But so far, when we talk about devices that are always listening, what they’re generally doing is the equivalent of Gary Larson’s infamous What We Say To Dogs / What They Hear cartoon from The Far Side:

My understanding is that most (not all!) wake-word implementations rely on “listening” that happens on a digital signal processor very close to the microphone, and it’s only when that DSP trips on the wake word that any digitized audio gets further into the system. So what should be happening is that, at an autonomic level, what a device hears is blah blah blah blah hey siri, and only then does the device wake and start sending audio to the next part of the chain.
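A minimal sketch of that gating pattern, purely to make the shape of it concrete (everything here is hypothetical and toy-sized; real detectors match acoustic features on a DSP, not text in Python):

```python
from collections import deque

WAKE_WORD = "hey siri"  # stand-in; a real detector matches acoustic features, not strings

def run_pipeline(audio_chunks, window_size=4):
    """Only forward audio once the cheap local detector trips.

    `audio_chunks` is an iterable of short snippets, a toy stand-in
    for the DSP's feature stream.
    """
    window = deque(maxlen=window_size)  # small local buffer; nothing leaves it
    forwarded = []
    awake = False
    for chunk in audio_chunks:
        if not awake:
            window.append(chunk)
            # the "DSP": a trivial match over the local window only
            if WAKE_WORD in " ".join(window):
                awake = True
        else:
            forwarded.append(chunk)  # only now does audio go further into the system
    return forwarded

# Everything before the wake word stays in the small local buffer and is
# discarded; only what follows gets forwarded.
print(run_pipeline(["blah", "blah", "hey siri", "set", "a", "timer"]))
# → ['set', 'a', 'timer']
```

The point of the sketch is the asymmetry: before the trip-wire fires, audio only ever lands in a tiny, constantly-overwritten buffer; after it fires, audio flows onward.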

Lots of computing works this way now, computational photography being one example. The Halide team’s review of last year’s iPhone 11 had a fantastic explanation of how taking a photo on your phone works now:

For a while now, you haven’t been the one taking your photos. That’s not a slight at you, dear reader: When your finger touches the shutter button, to reduce perceived latency, the iPhone grabs a photo it has already taken before you even touched the screen.

This is done by starting a sort of rolling buffer of shots as soon as you open the Camera app. Once you tap the shutter, the iPhone picks the sharpest shot from that buffer. It saves a shot you, the user unaware of this skullduggery, assumes you have taken. Nope. You merely provided a hint, to help the camera pick from the many shots it had taken on its own. [from the very smart people at Halide]
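The mechanism Halide describes can be sketched as a toy model (all names here are made up for illustration; a real pipeline scores actual sensor frames, e.g. by edge contrast, rather than being handed a sharpness number):

```python
from collections import deque

class ToyCamera:
    """Toy model of the rolling-buffer capture Halide describes.

    Frames are (timestamp, sharpness) pairs; a real pipeline would
    score real sensor frames.
    """

    def __init__(self, buffer_frames=8):
        # opening the Camera app starts this rolling buffer
        self.buffer = deque(maxlen=buffer_frames)

    def on_new_frame(self, timestamp, sharpness):
        self.buffer.append((timestamp, sharpness))  # old frames silently roll off

    def on_shutter_tap(self):
        # the tap is just a hint: pick the sharpest recent frame,
        # which was captured *before* the tap landed
        return max(self.buffer, key=lambda frame: frame[1])

cam = ToyCamera()
for t, s in [(0, 0.2), (1, 0.9), (2, 0.4), (3, 0.6)]:
    cam.on_new_frame(t, s)
print(cam.on_shutter_tap())  # → (1, 0.9): a frame from before you "took" the photo
```

Two things fall out of the model: frames older than the buffer are continuously destroyed, and the frame you get back predates your tap.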

I bet that most people, on learning this about how phones take pictures, would say that it’s different from the feeling that an Amazon Echo is always listening to you. At the very least, when you open a camera app you’ve made an intentional choice to at least be open to taking a photograph. And there’s a sort of expectation that this rolling buffer is going to be thrown away. I’d argue that you’d know if it weren’t, if only because keeping that buffer around at a good enough resolution would quickly show up, either in your phone’s available storage or in your phone’s network traffic.

But anyway, this isn’t the interesting part, I don’t think.

(At least not to me, the armchair enthusiast).

1.2 Chaser

Disclaimer: if you’re a lawyer, or you have more than passing knowledge of intellectual property law, then apologies ahead of time for what I’m painfully aware might sound like mansplaining.

What would be interesting is if there were some way to use existing institutions and societal systems as a tool to move the use of technology in a direction more oriented toward society, rather than short-term, damn-the-externalities capitalism.

In other words: what would it look like if an entity were to file a patent application for a non-contact biometric identification system that you could use to pay for goods in a store, and refuse to license it until sufficient societal protections were in place to prevent it from being used to cause significant harm, or until sufficient studies of potential harm or societal failure modes had been completed?

What would it look like if a patent for a recommendation engine were to be filed and granted, and then licensed only under the same conditions?

In other words, what if patents were used defensively for collective societal needs as a bulwark against The Numbers Must Go Up And To The Right?

This type of thing already exists in a different form but, because we live in a market capitalism world, it’s known as a defensive patent pool, or defensive patent aggregation, which, because this is an essay, I shall describe by citing Wikipedia:

Defensive patent aggregation (DPA) is the practice of purchasing patents or patent rights to keep such patents out of the hands of entities that would assert them against operating companies. [wikipedia]

Just look at that. War by any other means.

This idea - that you could use patents in a way that’s a bit sideways to the usual practice of using them defensively or offensively (ugh) - has come up before in a different way, I think. The Electronic Frontier Foundation wrote about this back in 2012 in The Defensive Patent License and Other Ways to Beat the Patent System. Here’s how the EFF describes how the DPL would work:

  • DPL patent holders must offer a nonexclusive, royalty-free license for any patent they own to anyone who requests one, as long as the licensee agrees not to sue the licensor or any other member of the DPL community for patent infringement.

  • The licensee must offer its patents under the DPL with the same conditions to anyone who requests one.

  • The licenses remain in effect throughout the patent's life, even if it is later sold.

  • The licenses can only be revoked if an offensive patent suit is filed.

The goal of the Defensive Patent License is to bring the free/open source community in to the benefits (such as they are) of the patent system, on the (reasonable) belief that the patent system isn’t going away any time soon, and that one option at least would be to improve how it works. So it uses legally binding restrictions in how the patent is licensed to live up to free/open source goals. Anyway, you can read the paper.

Okay, fine. Where am I going with this?

You can do pretty much anything you want with a license. You could write and use, say, a hippocratic license that prohibits the use of software to violate universal standards of human rights, and embodies the principles of Ethical Source Software.

(You could also be an overgrown manchild with more money, influence and power than sense and write into the terms of service (not a license!) of your low earth orbit mesh satellite internet service provider that you accept that no Earth-based government has authority or sovereignty over Martian activities [The Independent] which, TO BE CLEAR, IS A FUCKING PUBLICITY STUNT.)

In the real world, though, words and contracts can have consequences, but only because they are backed by the force of law. The operative word being “force”. So, as Cory Doctorow points out, you’d have to defend the patent, which means money, time and effort. Licenses like this only work so long as you have the ability to enforce them.

Shobita Parthasarathy showed me that there have been attempts in biotech:

But! I’d love to see what other people have thought about this and where the holes are. Clearly one of the holes is having the resources to actually enforce and defend the patent. That said, even if it’s unrealistic, I hereby propose the Ian Malcolm Patent License, which prevents use if you haven’t stopped to think whether you should.

Surprise. It took me a week to write this episode. I’m finishing it now on Saturday, November 7, 2020, just a few minutes after President-elect Biden and Vice President-elect Harris finished their speeches.

It’s day one.

How are you?


s08e21: An amateur with opinions

No, really, I am.

0.0 Context setting

It is Saturday 17 October 2020, some time in the afternoon. Here in Portland, Oregon in the United States, it’s a mild autumn day. I can see patches of blue. There are fluffy clouds. The kids have stopped yelling at each other, temporarily.

The past two weeks have been the stupid-busy kind of weeks. The kind where I’ve had the conversation with myself about what I really need to be focussing on, and then agreed with a whole bunch of other people that it would be a better use of my time to do the thing than to be in meetings related to the thing. So, instead, I have been doing the thing. Lots of typing and writing and then getting rid of it, and a Google Drive folder that is, honestly, a mess. But! The last few months’ worth of thinking and concepts and stuff I’ve learned floating around in my head has paid off, and now the words got squeezed out and they work.

1.0 Some things that caught my attention

I’ll just get this out of the way and point to a few Star Trek things:

The inimitable Lou Huang has been putting out threads about Star Trek UI (specifically, the Library Computer Access and Retrieval System from The Next Generation) and how it’s been applied in the latest series, Lower Decks (which, oh my gosh, is so funny, and no, it’s not Rick And Morty Does Star Trek even if the first couple of episodes seem so; no, it’s Big And Has Heart).

His first thread covers Star Trek essentially inventing the tablet computer on screen and takes it through to its latest incarnation, and how we use the words “tap” and “click” and the rich vocabulary, both spoken and physical, of using computers that widespread familiarity with tablets has brought. The second thread is about device orientation, and the third, which I haven’t even read yet, is about colour schemes. If you like that stuff, then you’ll like Chris Noessel’s Sci-Fi Interfaces website and book for all the obvious reasons.

OK, a brief interlude:

Here’s some of my favorite dumb fiction so far. Click/tap on each tweet to go through to the thread, I guess.

There’s this one, on the pivot to video:

Okay, fine, here’s a very specific thread about the time when Worf and Wesley inadvertently shut down the Enterprise because Worf wanted to cheat at completing his exercise rings:

And lastly, this most recent one, a weird, admittedly very myopic and stupendously optimistic look at an alternate future based on people making some different choices about the internet and software:

Dan Hon @hondanhon
Idly imagining a world where Facebook and Twitter don’t exist, the world’s biggest social platform is
Blogger.com with over three billion users, where people write “blog posts”, owned by independent company “Pyra”, and Google is a small search engine company.

And now some (hopefully) little bullet points:

  • I started sticking post-its of the leftovers we have in the fridge on the front door of our fridge to help us remember what food we have in there before it goes bad, like everything else in life. As someone else pointed out, this is not entirely unlike making your food and fridge a kanban board, which I have complicated feelings about.

  • Here is a set of sweatshirts that say THE FUTURE IS TK [via] and I like jokes based on jargon and insider knowledge which is probably super elitist of me.

  • I learned about Ursula K. Le Guin’s book Steering the Craft, about writing narrative, because I can’t figure out how to do plot, or my approach to plot is “write something and keep going and maybe there’s a plot?” NARRATOR: 99% OF THE TIME, THERE WAS NO PLOT

  • Matt Webb has smart thoughts, continued, this time about video calling interoperability and it prompted a whole bunch of reactions that I have not written down. You could read this and have your own great reactions too! One change: I have started having a Zoom room just… open? all the time for people to drop in to.

  • A collection of post-mortems, mostly technology ones. It is impossible for me to write about post-mortems without pointing you in the direction of John Allspaw’s amazing work on blameless post-mortems.

  • Software Aspects of Strategic Defense Systems (PDF) is an ACM paper that very clearly points out how software is hard and SDI was a very hard problem that wasn’t going to be magically solved by computers. It is left as an exercise for the reader to consider whether there are any modern-day situations to which this might be relevant.

  • A 2016 article from Jonathan Zittrain on subpoenas and searches and computer systems and our legal and legislative framework now that it’s relatively trivial to do population-level searches for incriminating material (if you’re unaware, this is already done in realtime for child pornography).

  • Darius Kazemi made a Person Generator that pairs generated faces with phrases from personality assessments and the choice of using the second was genius and in my opinion is what makes it work in a horrifyingly spooky way.

  • Dropbox… marketing? Content marketing? Did this piece of Content on how to prepare an emergency “go box”, which I feel shows how content marketing frustratingly misses the mark sometimes. Putting together an emergency go box is a totally useful thing! We had to do this work over the summer when it looked like we might need to leave the area, not for fire, but because the air was stupendously dangerous to breathe. But the Content here just kind of tells you about Dropbox features, like password management, and reminds you why it’s a good idea to have backup copies of your documents. And so there’s nothing that actually helps you do the task. Feels like a big miss, and I know it’d be a lot more work, because that would actually be a service, but on the other hand I treated it as something that was more a waste of time. I already use Dropbox and am a customer, so yeah, I guess it’s not even really for me anyway?

  • This is a list of 351 physical visualizations which starts at 5,500 BC all the way up to last year. Admittedly there are only 3 in the list between 5,500 BC and 1 AD.

  • Some of you may already have read Flamethrowers and Fire Extinguishers, a criticism of The Social Dilemma and if you haven’t, then you should.

  • Data breach remediation efforts and their implications for hospital quality [pubmed, free fulltext] is a paper that appears to show that remediation efforts (ie what hospitals do in terms of cybersecurity after a breach) may result in more deaths. Here’s a PBS writeup, too: Ransomware and data breaches linked to uptick in fatal heart attacks.

  • Landlord Tech Watch looks good.

  • Wikipedia is an MMORPG, which should be read in my opinion as an example of where game-like conditions and behaviors exist and anything can be anything if you think hard enough. If it sounds like I’m dissing it, I’m not.

2.0 The Ministry for the Future

I started reading Kim Stanley Robinson’s The Ministry for the Future [eliot peper interview; Tor.com interview] a few days ago. I have a confession to make. I have not read much of Robinson’s work. Most of the time, I have started them and not been able to make it through. Part of me considers this a personal failing.

I’m not even halfway through yet, and my copy is littered with highlights and notes. The Ministry for the Future itself is the novel’s fictional organization, set up with the purpose of advocating for the world’s future citizens through a bureaucratic, negotiated process under the remit of the Paris Climate Agreement. The climate, as it is in real life, is fucked. The book essentially opens with a wet bulb temperature heatwave of 34 degrees celsius in India that kills over 20 million people in the course of two weeks. Those deaths are the precipitating event that kicks the world (of a sort) into action (of a sort). So far, the story has covered all of those sorts of action, and lots of discussion about why despite all the available evidence that we’re (sorry, the fictional people of a fictional earth) completely fucked, we continue to do nothing about it. And the why includes social structures, systems, incentives, economics and moral philosophy and, to my delight, not so much sneering at those abstract concepts, but the repeated proposition that, say, you only can’t afford something if you can’t find the money for it and, well, money isn’t real. What is real, at the end, is land, people, and work. And even the people don’t really matter that much, from a biosphere point of view. The planet and its biosphere will be fine. (There is a fine part about how the term natural catastrophe is somewhat of a misnomer).

Normally, I’m not a fan of big infodump, expository sequences. That said, I tell a lie, sometimes I really love them when they’re what I’m interested in. I was interested in these, and I think part of it is how KSR constructs them. They’re in the form of short, sharp conversations. Not actual dialogue, more like summaries of dialogue? Anyway, all of this is to say that one particular quote stuck out to me.

It happens in chapter forty, which opens by introducing Jevons Paradox [Wikipedia; New Yorker, 2010], about the idea that when the use of a resource gets more efficient, the resource is used more, not less. Jevons wrote it in 1865. KSR starts at this point and barrels straight into what otherwise might be a late night undergraduate conversation about what is efficiency, really, only getting to the point within two pages by saying:

“But the evidence shows that there is good efficiency and bad efficiency, good inefficiency and bad inefficiency. Examples of all four can easily be provided, though here we leave this as an exercise for the reader, with just these sample pointers to stimulate reflection: preventative health care saves enormous amounts in medical costs later, and is a good efficiency. Eating your extra children (this is Swift’s character’s “modest proposal”) would be a bad efficiency.”

Yes, right, I’m totally with you. There are lots of bad efficiencies. All of those negative externalities. And then, at least this week, what stuck in my head because it was (is) relevant to my work at the moment: (my emphasis)

“In light of that principle, many efficiencies are quickly seen to be profoundly destructive, and many inefficiencies can now be understood as unintentionally salvational. Robustness and resilience are in general inefficient; but they are robust, they are resilient. And we need that by design.”

Because my work right now is with the State of California’s Department of Technology, figuring out the state’s information technology strategic plan, to guide the acquisition, management, and use of information technology, of which: wow, that’s a brief for a plan.

But if you’re talking about something like, say, a resource or reservoir of technologically-enabled capability, and if your hypothetical resource has been under-invested, or misunderstood, or its hypothetical acquisition, maintenance and use has been guided by principles such as cost efficiency or by reducing risks, or, as in many (hypothetical, of course) cases of a general aim to have improved performance, then this is a money quote to help refocus in terms of defining specific areas that require spending money that might not look efficient.

If I were to put it another way: robustness and resilience are things that let you be flexible. They give you room. A capacity of robustness and resilience would mean, say, a supply chain that does not feel thin and sort of stretched, like butter scraped over too much bread.

And so the realization, a sort of slow-motion creeping horror: if we’re doing this work on putting together an information technology strategy, and it’s supposed to cover the use of information technology, then… do you know what you’re using it for? Do you know why? Doing that work was fun, and also somewhat easy, because at some level you go to the amusingly domained gov.ca.gov and look at administration priorities. And then the slow-motion creeping horror that it hadn’t even crossed your mind to consider whether climate change should be in an information technology strategy. Because clearly there are dumb ways that information technology can include climate change, like saying you’ll make sure all your clouds are renewable and the like. And it’s not like you want to shoehorn in a gratuitous reference to climate change. But more a sense of personal shame that it hadn’t even crossed my mind to make that decision. This is a serious thing!

And so, I think I figured it out in a way that makes sense, and in a way that makes sense with the entire rest of the strategy. We use technology to achieve our goals. One of the state’s goals is to combat climate change. Go neutral, then negative. You’re not going to be able to do that well if you don’t have a robust, resilient foundation of technology to support your work.

Related: Is Resilience Overrated? [New York Times]

Okay, that’s it!

I am doing better this week than before, but that’s before I start worrying that one of the reasons why I’m doing better is that I’ve done less doomscrolling, and perhaps I wasn’t doomscrolling enough? I did not listen to or watch or to be honest even read that much coverage about the supreme court vacancy nomination hearings this past week, and my reasoning for that is: whatever is going to happen is going to happen. There are other things I can actually be doing that are helpful.

Anyway. How are you?



s08e20: *gestures at everything*

Just... sliding into your inbox, no biggie.

0.0 Context setting

A reminder! This is my newsletter, Things That Have Caught My Attention, which is mainly about tech, policy, government, armchair thoughts about product and services, the occasional personal story to give the whole thing more context and some gravitas. Thank you for subscribing!

It’s been a long time since I last wrote and, well, all I need to do really is to point you to this most recent tweet in a thread of mine and encourage you to scroll up/read backwards.

On with the show, etc.

1.0 Some things that caught my attention

  • I wrote before (ha, in April/May, probably) about how remote teaching at the time offered new possibilities, some good, and some not so good. Obviously things have changed in *gestures at everything* current times, especially in the U.S. with the completely abysmal state of online/remote school infrastructure in public schools, but here’s something really reassuring about what Zoom class chat does differently that’s helpful to students. Namely: the out-of-band text chat is also an opportunity for students to praise and encourage each other in a way that isn’t normally possible in physical classrooms, without software or other infrastructure.

  • I love the film Sneakers and you should too, otherwise we can’t be friends. (It is a magnificent heist movie about technology and hacking). Here is how someone found the button Whistler, who’s blind, uses to figure out where Bishop was kidnapped to, by recreating the sounds Bishop heard from the trunk of the car.

  • I saw this tweet from mo mcbirney about how “when teens have big parties they make a new insta account ~for the party~ and people have to request to follow it, and an approval means you can come to the party??? and you have to show your insta to get into the party (there is security)” which feels like a super interesting *cough* native *cough* way to do event ticketing and identity authentication/management.

  • There is something about exploring digital places (keyword: liminal) that are not intended to be accessed and doggedly pursuing them just for the curiosity, fun and challenge of it, so this Polygon story about spending 13 years getting into a Halo 3 skybox (a section of a map where some narrative happens with a certain kind of environmental texturing) was *chef’s kiss*.

  • So. Blaseball is an online, browser-based game that is a bit about Baseball, but not really, and it’s super interesting because it is as much a live performance game with rules encoded and programmed by its designers as it is one that listens to its players and audience (of which the players and audience are different, and not necessarily the same people!) and whose weekly evolution has a clear creative point of view and is also reactive and honestly, oh my gosh. So you should read Cat Manning’s primer on Blaseball from their newsletter The Garden of Forking Narratives. One thing about Blaseball (I could go on) is that so much of its strength comes out in its writing and the gaps and how the game has been designed explicitly with different player places and degrees of freedom.

  • A couple of data&society reports that caught my eye recently: first, Repairing Innovation by Madeleine Clare Elish and Elizabeth Anne Watkins, which emphasizes the sociotechnical lens of humans as part of a technical system, which hopefully I am not too glibly and inaccurately summarizing as: when innovation happens, things break (sometimes by moving fast and on purpose), and then there is a long process of mopping up and fixing the social structures that were broken in the process of innovation. This is not necessarily a bad thing, but it is a necessary consequence of innovation, i.e. “things changing”. The report in particular talks about how the introduction and integration of an AI system “[created] breakages in social structures that must be repaired in order for the technology to work as intended”. Again, super glibly: this is what you get when, at a very high level, technologists see technology as separate from people.

  • The second bit from data&society is Good Intentions, Bad Inventions: The Four Myths of Healthy Tech by Amanda Lenhart and Kellie Owens, which is in effect a response to the recent documentary (sigh), The Social Dilemma. The report is in a PDF, which is a bit irritating, but I highly recommend reading it. It’s a clear outline of the problems of the idea that we don’t have agency in how we use technology (technological determinism) and of tech solutionism (let’s just tech our way out of this).

  • AND look, those reports were all written and produced by women, who historically have been (and still are) sidelined in technology and the world of software, never mind in how social sciences work collaboratively and productively with technology.

  • Through work, Tom Loosemore’s definition of big-D “Digital” came up again as “Applying the culture, practices, processes & technologies of the Internet-era to respond to people’s raised expectations”, to which I needed to add: you need to have a conversation about, and an understanding of, what “expectations” are, and in what way they are “raised”; otherwise you’re applying culture, practices, processes and technologies without understanding the goal in your particular context.

  • Good blog post from Matt Knight that helped my thinking about the term “Assisted Digital” in service design and government service design, and why we should stop talking about the term and using it. I would expect that some people might have a different opinion in our COVID physical distancing era, but it still rings true to me. I don’t think assuming or designing for digital-first makes sense anymore (although I do see it useful in the early stages of government transformation as a politically expedient catalyst to get things going), in the same way that the point is the service and the outcome, and to use all the methods appropriate or available to help users achieve their goal.

  • There’s this article about worries about “data sprawl” now that people with office jobs are working from home, and it’s the usual concerns about data being in places it shouldn’t be, normally because the existing systems are incredibly irritating, don’t meet user needs, and users frequently have to find workarounds. What’s especially galling about the article, though, is that a “top issue” for companies is “employees using personal devices for work”, and that IT executives were worrying that “their employees are not following the policies for keeping their data secure.” Policies like these are, I think, a last-ditch attempt against successful and effective systems and services. Your policy is you wagging a finger and saying: yeah, you weren’t supposed to do that, and it’s not my fault that I never made the right thing easy for you to do. And to that end, if you’re going to require employees to work remotely or on mobile, give them the equipment to do so. If you want me on Teams or Slack or whatever, wherever, then you pay for a device and the network access for that device. I’m not putting an MDM on my personal device just so you can save money, and because you’re worried about your corporate data mixing with my personal data.

  • VSCode Debug Visualizer is an extension to Visual Studio Code that lets you see data structures right there in your editor which, I’m sorry, why isn’t this standard? I mean, this isn’t some of the crazy Bret Victor shit, this is just “if I have an array, can I… see it?” I’d expect something like this to be in Swift Playgrounds soonish. What I don’t understand is that this seems like a fairly elementary and (somewhat) trivially implementable concept, but that my gut instinct is that it doesn’t exist because REAL PROGRAMMING isn’t like that. The counterpoint, though, is if you look at the screenshot/video on the plugin’s website, it in fact looks like COOL FUTURE PROGRAMMING so why wouldn’t REAL DEVELOPERS want to use something like that? (Sure, fine, it’s easy mode, but, you know… turn it off? It’s not like you’re using a mouse).

  • Just putting a note here, again, that I’m not sure Substack offers enough value to people for the cut it’s taking. They’ve made getting-paid-for-writing easier, but other software stacks are going to make it easier still. Part of what they’re investing their money in is “paying people with large audiences to use Substack to write” and… am I being dumb, or do I not entirely understand that strategy? It’s paying for exclusive content, which sure, that’s just like every other content platform play, and is great for driving the aggregate number of Substack users up. But I don’t care about the aggregate number of Substack users/subscribers. I care (or not, to be honest) about the number of Substack subscribers *I* have. And the number of subscribers I get from “within the Substack ecosystem” (sorry) is… negligible? Like, less than negligible? Substack isn’t doing anything (yet) for discovery or for helping me grow my audience, should I wish to do so, other than some advice that I should give away the really good content because that’s a great advert for people to pay for the slightly less good but more regular content. In other words, I am still about to pull the trigger to move to Ghost.

Well, apparently it’s been about two months since I wrote and in that time the air outside and inside the house was (literally?) apocalyptic and maybe by the time you receive this the cold civil war in the U.S. will have gone hot.

So I’m doing just fine thanks. How are you?



s08e19: Why design experiences, when you can design states of mind?

0.0 Context Setting

Look, just remember to stretch, OK? It’s no fun turning forty and then suddenly not being able to walk for half a day due to excruciating pain in your foot that turns out to be most likely plantar fasciitis, which in the great scheme of things is much better than worrying you’ve suddenly developed a blood clot, given *waves hands* what’s happening right now.

I’m writing this (actually, finishing it — I started it a good month ago) on Thursday, August 6, 2020.

1.0 Some Things That Caught My Attention

1.1 Digital Psychology

Boy, do I have a story to tell you today. As ever, this bit is an adaptation of a series of tweets from this morning.

Last (Friday) morning, before I’d even had breakfast (or, if we’re being brutally honest, before I’d really gotten out of bed), I made the mistake of taking a brief look at a notorious “hacker” “news” site to see if there was anything that would catch my attention or, more likely, annoy me. I was not let down!

Today’s thing that caught my attention is digitalpsychology.io, a “free library of psychological principles and examples for inspiration to enhance the customer experience and connect with your users” (my emphasis).

Reader, I looked at the site and did I have Comments. It was full of examples from cognitive psychology and behavioral economics (many of which have failed to replicate). Here’s the first one that struck me, your basic regular anchoring example:

Like I say: does making a plan seem like a bargain enhance a customer’s experience?

There was more, of course. Another example was to use quantity limits, where adding a limit may increase the number of average items in a purchase:

Again, my incredulous face: how is doing this enhancing a customer experience? It felt awfully like doublespeak to me, where my customer experience would be very much enhanced by having just one more wafer thin mint that I really, really, really didn’t want to have.

Another example, then, this one about how you can use loss aversion to make people feel bad about not completing a purchase:

There are more, but my favorite one to include in this part is about how you can increase the response rate for cold emails. I like to call cold emails spam.

So far, so standard, right? Go see an annoying thing on the internet first thing in the morning before eating anything, take a skim through it and get Super Outraged, tweet about it, and after a few examples the structure of the bit requires that you point out these are all dark patterns, i.e. “tricks used in websites and apps that make you do things that you didn’t mean to, like buying or signing up for something.”

So off I go, Internet Crusader, Righteous Righter Of Moral Wrongs In The HTTPSpace and pull Daniel Stefanovic, the creator of the website, into my thread, asking him if he knows about dark patterns, pointing out that the top comment on Hacker News calls out the site for “using pop psychology to manipulate your customers into spending money and giving you personal information”, adding a link to an Association for Computing Machinery ethics case study on dark UX patterns, and just for good measure, a link to Evil By Design, a talk from IXD 2019.

Look at me, doing my internet call-out take-down of Someone Not Being Ethical On The Internet.

Now, I’m going to out myself here. I did the looking-up-a-person thing after I started the thread. Stefanovic puts his name and Twitter account on the site, but it didn’t look like he’d tweeted for over a year. He didn’t appear to be active anywhere else. And then I got a bit worried and sheepish: maybe the site, which I then found out had launched in 2018, hadn’t been updated because something had happened to Stefanovic?

No matter, I’d already decided to tag him in on the thread.

And then I go have breakfast, go have a call with the wonderful Sarah Szalavitz (of which more later, in the Next Part), and I open up Twitter on my phone and see this:

and this:

Which, honestly, blows my mind with all of the heart emojis.

Because, really, in 2020 I wasn’t expecting someone to listen to criticism (especially criticism in my mind that was tinged at least the tiniest bit with snark and a hint of meanness) in such a way and respond openly and thoughtfully.

So there you have it. A nice, heartwarming internet story. Lesson for me: less of the snark. Or, at the very least, not snark all the time.

1.2 It’s Simple And Complicated, But Ultimately Simple

I had an overdue conversation with Sarah Szalavitz on Friday about a bunch of things that turned out to be mostly related, but I can probably parcel out into discrete chunks.

Sarah and I started talking because I asked her for feedback on my thoughts on MIT’s search for a new Media Lab director; Sarah is a friend who was a fellow at the Media Lab in 2013, teaching a course on Social Design.

Sarah wasn’t able to talk, er, I guess what people call it at press time, but we ended up talking after she’d had time to gather her thoughts. My apologies to Sarah if I get anything wrong in writing about what we talked about.

One big piece of feedback that I got on my criticism of the job posting and description, from many people, was that the “would be considered for tenure at MIT” requirement was not-so-subtle coding for “we don’t want a Joi Ito again.”

Because Joi was one of those outsiders, he didn’t have that academic record (and yet, was able to earn honorary recognition during and after his time at the Lab, precisely because he was now at the lab).

And yet Joi by all accounts was tremendously successful at attracting funding for the Lab and MIT. He was—is—a consummate pitch man, selling a vision of the future in just the same way that Nic Negroponte did at the outset of the lab. But I think with maturity and hindsight now (at least for some of us; I fully believe that there were many people skeptical at the time who were not listened to), many of the promises of the Lab are convincing dreams that will probably never see the light of day. If I’m being generous (and I’m sympathetic to this), they are things to strive for that we may never reach, but that nonetheless may push us further in directions we might not otherwise have chosen to explore. If I’m not being generous, those visions are in the area of lies or distractions. I hate to equivocate about this, but I do think that where we are and need to be is, sigh, somewhere in between.

I very much regret not seeing the coding for not Joi in the academic requirements. I have to admit that I was quite upset (which is English for absolutely spitting furious) as I read the post and job description and missed that part. It makes absolute sense to me, and I figure it’s the kind of error that results from writing from the hip and just shooting something off. That’s something I’m thinking about as I practice my writing here and figure out what I want to do, and what I want to get better at.

But I digress: Sarah’s point about the academic requirement was that if you kept it, instead of what I was advocating for, then many of the most qualified people, those people with PhDs in the relevant subject areas for what the Media Lab should become, would likely be Black women. And yet removing that requirement would put them at a disadvantage.

So our conversation then went to what felt like the inevitable: does the Lab, and by extension MIT, actually want to change? It is, after all, an institution. The Lab is 35 years old now. MIT is positively ancient for an American educational institution, founded in 1861 and 159 years old now. If MIT wanted to change, then what would that look like? The make-up of faculty would be different. They would hire differently. They would make decisions instead of having committees, which is clearly not a trait specific to MIT, but one inherent to any institution old and mature enough to want to protect itself.

And why might the institution not want to change? Why not say the simple thing and not dissemble? Why not just say: we need a fundraiser. We need someone who can get out there and sell a vision and get money so we can do our thing. Because that is a very different job. And then you get into perhaps the complicated part, which is, well, when it comes down to it, which one is more important? The work and the outcome, or the money through which you have the means to deliver the work or the outcome?

(I’ve got a bit below where I argue this isn’t that complicated. It is only complicated when you choose to make it complicated).

There’s another point to this that was raised by Sarah and others who gave me feedback: maybe the Media Lab doesn’t need to exist? Certainly collaboration is good (such an anodyne statement that I can’t imagine anyone seriously arguing against it), but perhaps what’s needed is a more distributed Media Lab across, well, all of MIT?

Or perhaps MIT’s Media Lab doesn’t need to exist any more, and a thousand more need to bloom across the world?

The header for this part was originally “It’s Simple And Complicated”, but in the end I changed it to “It’s Simple And Complicated, But Ultimately Simple”. I updated it because what I feel is important to hang on to is the clarity. I think sometimes things are complicated because we wish not to make a difficult decision and we prefer the path of least resistance. It is actually simple to say that we want an institution that does not have to compromise itself in the way that the Lab and MIT does in terms of funding sources.

It is actually simple to say that we want an equitable institution that seeks to serve all and is serious about accountability and consequences.

It is not complicated. (I imagine there are lots of people getting ready right now to say that no, I’m not being a realist, the question of funding is very complicated). It might be hard and it might be difficult and it might require making some uncomfortable decisions, but those come from what is ultimately a simple decision. And I know it is easy for me to sit on the outside and throw commentary and opinion like this. I know that there is only so much money and only so many places that it may come from. But those sources — like Epstein — may not be ones we are prepared to use. That, I’d say, is a simple decision, and then the rest that follows from it is necessarily constrained. So we don’t have Epstein levels of money. Great. Then there are other things we can do, and we may be limited in resources so there are other things we cannot do.

I’ve been reading Emily Nussbaum’s collection of essays on television, I Like To Watch, which I can’t recommend highly enough. (It isn’t just about television at all). Nussbaum writes about poetry by Pearl Cleage, and Cleage’s push to find clarity and draw lines instead of blurring them when talking about how Miles Davis abused his wife. Here’s how Nussbaum writes about Cleage’s poetry:

[Cleage writes] “How can they hit us and still be our heroes?…Our leaders? Our husbands? Our lovers? Our geniuses? Our friends?”

She concludes with two sentences. The first is “And the answer is…they can’t.” The second is, “Can they?”

Some of us so frequently look for the can they and skirt past considering the much simpler they can’t. Why don’t we?

2.0 Some Smaller Things That Caught My Attention

I saw this tweet quoting Maciej Ceglowski about the longevity of data:

… and I think the problem is actually worse. The data collected about people has the potential to last a long time, similar to nuclear waste — or even longer. The data can outlive the institution that manages it not just because of its physical properties, but because right now it exists in a market environment that puts value on it. It will continue to be bought and sold and passed on and, unlike nuclear waste, copied. It can proliferate.

But I’d argue that in some ways, long-lived personal data is even worse than nuclear waste. While the data itself may live, the context which makes the data understandable and useful decays much more quickly because it likely [citation needed] has not been collected. Frequently what may make data valuable is the environment and context in which it was collected, and that context and metadata gives the collected fragment of personal data meaning. Otherwise it is potentially just a piece that can be misinterpreted because it is no longer in situ.

Devoid of context, or worse, misinterpreted into an inaccurate context, or one purposefully inaccurate, the longevity of discrete pieces of personal data might mean that its potential for harm actually increases over time as the context in which it was collected decays.

OK, that’s it for this episode. More later!
