Forget about hiding your disreputable Facebook party photos from your boss and your mum. What happens to those carefully selected profile photos that you post to your social media profiles, your email accounts and your work website? Although these vetted images might not reveal much in the way of juicy details about your life offline, they can circulate far beyond the original context in which they’re posted, which brings me to my predicament.
Over the weekend, I saw a robot wearing my face.
Let me explain. Last Saturday, a student of mine alerted me to the fact that there was a photo of me on technology website TechCrunch illustrating a story about a Twitter "bot" project.
Essentially, this project involves an automated, fake Twitter account going by the name of "Jason Thorton", which uses a small amount of code to comb Twitter’s public feeds for certain words and thereby creates its own tweets from a selection of other people’s. Apparently some benighted folk have mistaken the drivel this thing puts out for communication from a real person. Those who have done so have taken my face to be that of Jason Thorton.
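A bot like this is simple to sketch. The snippet below is a minimal illustration, not Merket’s actual code: it assumes a list of already-fetched public tweets (a real bot would pull these from Twitter’s API), filters them for a keyword, and splices two matching tweets into a "new" one.

```python
import random

def make_bot_tweet(public_tweets, keyword, seed=0):
    """Splice two keyword-matching tweets into a fake new one --
    roughly how a remix bot builds its posts from other people's."""
    rng = random.Random(seed)
    matches = [t for t in public_tweets if keyword.lower() in t.lower()]
    if len(matches) < 2:
        return None  # not enough raw material to remix
    a, b = rng.sample(matches, 2)
    a_words, b_words = a.split(), b.split()
    # First half of one real tweet, second half of another.
    return " ".join(a_words[: len(a_words) // 2] + b_words[len(b_words) // 2 :])

# Hypothetical sample data standing in for Twitter's public feed.
sample = [
    "Just had the best coffee of my life downtown",
    "Honestly coffee is the only thing keeping me awake today",
    "Cats are better than dogs, fight me",
]
print(make_bot_tweet(sample, "coffee"))
```

Even this toy version shows why people get fooled: the output is assembled entirely from real, human-written fragments.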
How did my photo end up adorning a bot?
Well, it’s a well-travelled image which I took a few years ago when I needed a profile picture for a project I was working on. Perhaps lazily, I’ve used the same photo for a whole lot of profile shots on blogs and social media. Currently, it adorns my personal web page at work and my author profile at other websites.
It’s also the photo that Andrew Bolt gratuitously stuck up on his blog when he decided to throw a hissy fit in my direction last year. I think he probably got it from a group blog I’m on or my work page. One of his commenters — many of whom were such bottom-feeders that even Bolt decided he couldn’t face the chore of moderating what they said — deemed my face "eminently punchable" on the basis of that shot. I presume its cameo on a high-traffic blog like Bolt’s is the reason that it turns up early in Google image searches for "Jason" and I can only guess that’s how the creator of this bot found a face for his creation.
It’s hard to be certain about any of this, though, as I haven’t talked to him. The creator, Ryan Merket, didn’t contact me to ask whether he could use my photo, and nor did Jason Kincaid, the journalist from TechCrunch. Neither party, evidently, considered that this might be a photo of a real human being. It’s pretty clear, when you click through from an image search, that I am an actual person, with a life and career. Who knows: I may have had an opinion about being forever associated with half-arsed AI japes.
What’s been interesting about this experience is the way that it’s brought home some more general issues around reputation, identity and ownership. This example is relatively benign and I’ve chosen to simply get some giggles out of Merket using my photo for his project.
However, other examples where images or video circulated without the subject’s permission or knowledge have been more damaging. Those who’ve really suffered from this kind of incursion include the Star Wars Kid and Gaspar Llamazares, the Spanish politician whose photo — readily available online — was used by the US Government as the basis of a mocked-up picture of Osama Bin Laden. Witness, also, the now ritual scouring of Facebook by the media for photos of accident or disaster victims.
This kind of thing, clearly, can happen to anyone, such that we can no longer be confident in our ability to control the images we (or others who know us) publicly post online. It’s very easy now to find and use images and even video of people without needing to think too much about who the person behind them might be, and once an image has appeared somewhere away from its original context, the damage might be done. It’s hard to take things back online, including the things you do to or with other people.
When images are easily aggregated and searched, we can lose sight of the idea that they might in any meaningful way be owned by someone else, and not just in the narrow sense that they might constitute a piece of intellectual property.
When I saw my own face staring back at me from a TechCrunch page, the moment of panic I felt had nothing to do with copyright. It had instead to do with losing control of the way I’m presented in the world, a sudden diminishment of what I might once have thought of as my privacy. I was also concerned that I might be casually mistaken for the bot on the basis of my picture: unlikely, perhaps, but the picture is out there and attached to my name in many places and the bot’s name is similar enough to mine.
These conceptions of privacy and reputation are difficult to reconcile with a global, networked communications environment in which images, text and video can be spread with ease by third parties and can reappear in strange contexts.
I’m sure public figures experience a version of these concerns from time to time but it’s precisely because I’m not a public figure that Merket was able to use my photo to illustrate his project. Clearly marked fake Twitter accounts corresponding with politicians or prominent journalists are a form of parody which has long been the price of entering into public life.
None of this is especially new; these issues have been bubbling away since before the world wide web came along. At least Merket’s effort isn’t malicious, as the republication of personal material online so often is.
The standard defence I’ve heard among those engaged in blogwars — who circulate images of their adversaries — is that any image posted online is fair game, and if people don’t want their images or words to be used by their opponents, they shouldn’t post them. This depends on an understanding of privacy as something with an on-off switch: a binary in which public and private are two mutually exclusive realms, wherein once something is posted, it belongs to everyone and anyone to use as they like. At best, this attitude works to constrain the way all of us use the internet, by asking us all to take reputation management as seriously as public figures do.
Perhaps, as writers like Daniel Solove suggest, we need to rethink our ideas about privacy in an era of highly spreadable media. Solove draws on the work of researchers on social networks to suggest that privacy shouldn’t be regarded as an on-off switch. Rather, he argues, it can be violated, and damage can occur when something spreads from the relatively closed social network for which it was reasonably intended to a much larger audience.
If that understanding were to inform a new set of norms around privacy, we might not need to be so concerned about what we post online. But norms are far slower to evolve than technology is. Perhaps, as Solove suggests, we may need to speed along their evolution with changes to our laws. It’s something to think about the next time your face turns up somewhere unexpected.
Donate To New Matilda
New Matilda is a small, independent media outlet. We survive through reader contributions, and never losing a lawsuit. If you got something from this article, giving something back helps us to continue speaking truth to power. Every little bit counts.