Week 11

The question of ethics in technology feels very daunting — it is such a challenge, if not outright impossible, to fully or precisely predict the consequences of a new technology (until it’s too late, most of the time). Sacasas’ reading captured this really well for me: “A hammer may indeed be used to either build a house or bash someone’s head in. On this view, technology is morally neutral and the only morally relevant question is this: What will I do with this tool?” The hammer is a simple example, a physical tool with limited features, and yet the question is already intriguing. The very same features or affordances that allow the hammer to fulfill its productive goals — its heaviness, its sturdiness, the way a well-designed hammer fits so well into one’s hand — can also be weaponized to do harm. There is no way around that; you can’t design a hammer that’s effective at driving a nail into a wall but not at breaking a human skull.

When we apply the same logic to digital tools with complex algorithms, the question becomes exponentially harder to answer, or even to conceptualize. What are the potential harms of each facet of a given social media platform, for example? We may be questioning some of them — the algorithms could be making us angrier, lonelier, sadder, more polarized, more insecure, and so on — but what else might be there that we can’t even foresee right now? Of course, just asking these questions — as Sacasas does so thoroughly with the 41 questions at the end of the article — is an important first step. The questions that most resonated with me were #2 (What habits will the use of this technology instill?) and #11–13 (What was required of other human beings/other creatures/the earth so that I might be able to use this technology?). Posing the questions this way lets us evaluate whether the technology is worth it; it reminds me of how I felt binge-watching Shark Tank at the beginning of the pandemic, seeing a novel product that solves a very minor problem and wondering, do we really need this?

Gebru’s talk was also really interesting, especially its mention of our automation bias (trusting automated systems to be right simply because they’re non-human). We can recognize and laugh at very overt examples of this — I immediately thought of the iconic scene from The Office where Michael drives into a lake (“THE MACHINE KNOWS, STOP YELLING AT ME”). But like any bias, the real danger is in its pervasiveness and perniciousness; how many times have we based decisions on incorrect data and never even noticed? How much inequality has been maintained or worsened by automated systems trained on data skewed by human flaws? Gebru’s mention of APIs whose skewed data sets reproduce racial inequalities was familiar to me, and yet it’s not something I think about often when using technology. By their nature, data sets — background information, invisible to users — are so easy to forget about until a terrible mistake, like labeling people as gorillas, makes the inequality blatantly obvious. And there’s also the question of user behavior informing the algorithm — if creators of color are significantly less promoted on YouTube or Instagram, for example, is it because the algorithm has always been preventing users from finding them? Or was the algorithm trained on users’ racial biases? Does one bias feed right into the other in a feedback loop?

The class discussion/lecture on different kinds of ethics, as well as pessimistic and optimistic imaginaries, seemed very relevant to my semester project. The ethics of reality television was already a topic I explored in my research, both from the perspective of the audience (what harmful stereotypes are viewers absorbing?) and of the participants (what abuse are they being subjected to, either on or off the show?). I think it’s extremely pertinent, and the concept of optimistic imaginaries shaping cultures fits into it really well — as soon as a show is perceived by the majority as too exploitative or not diverse enough, for example, the format has to change to keep viewers content. To use another example from The Bachelor: even though the franchise was surely aware of its lack of diversity in casting and the problems that could create (going so far as to produce a segment in which Black contestants recited and discussed the racial abuse they had received from viewers), it only turned its attention to diverse casting after the public pressure that followed the 2020 George Floyd protests. Even after that, contestants and even the show’s host have been repeatedly involved in scandals about race. I wonder at what point its reputation will be beyond apology and repair — when audiences feel that the harm is too great, or the drama more painful than entertaining.
