In a game of word association, the increasingly nebulous term technology might produce a range of familiar responses:
Social media.
Addiction.
Progress.
There are other phrases you’d be unlikely to hear:
Ideology.
Profiling.
Justice.
What we think of as technology has become so ubiquitously and seamlessly integrated into our lives that it’s easy to forget it’s there, and hard to imagine a time when it wasn’t. Social media and consumer electronics tend to dominate conversations about how technology and society interact. Rarely is tech framed in humanitarian, environmental, or existential terms. Especially for those of us born within the past two decades, it is often simply taken as a given.
Technological “progress” and consumer-facing advancements over the past twenty years have been so rapid and relentless, yet invisible, that it can be difficult not to see them as inevitable, almost natural. Technological harms, then, come to appear as mere hiccups on a path to societal betterment: side effects of a veritable cure for stagnation.
To fully understand what technology means for humanitarian and environmental issues, we must interrogate our individual and collective notions of what technology really is: what we mean when we say it, and what purpose we intend it to serve in our lives.
Technology is not Facebook, nor is it the iPhone 12, two-day Prime delivery, or Google Translate.
Technology is ideology.
Technology is not progress, but process.
Processes like technology or racism take on concrete forms, both inanimate and animate, like MacBooks or right-wing extremist groups, but they are much more than that. Processes are powerful in part because they are often reified: that is, they are mistaken for things. When we mistake a process (e.g., technology) for the ways it manifests in our lives (e.g., social media), it becomes easier to compartmentalize, and thus underestimate, its impact. Individual apps don’t threaten societies, but ideologies do. When power is disguised as progress, the threat looms even larger.
Why does this matter for activism, justice, and impact?
Reframing technology as ideology-by-process helps us see more clearly how technology can be, and is, developed and deployed toward an end: against our interests, without our consent, and beyond our control. Realizing that technology is neither natural in its origins and trajectory nor neutral in its impact reveals its underlying philosophies, ideologies that often mirror the worldviews and experiences of the overwhelmingly white, male, heterosexual individuals who have historically created such tools.
Every piece of software or hardware you’ve used was designed with a vision and purpose in mind. An individual or group with sufficient economic or social capital brought it into being, and its existence means something. Through the technology-as-ideology lens, it becomes clear that every digital product or service is a statement. It says, “this should exist, and it should operate and make its impact the way I’ve created it to,” because if not, then why was it made, and why was it made that way?
Technology doesn’t simply do things for us. It makes assumptions about who we are, what we want, and what we deserve. It takes data points, like zip codes and online transactions, and builds digital personas that enable or restrict us, determining, for instance, what credit limit our financial institution affords us. Overwhelmingly, it is developed in power and privilege, and often, it perpetuates them.
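To make this concrete, here is a minimal, hypothetical sketch of the kind of profiling logic described above: a toy credit-limit rule that scales income by a zip-code “risk” factor. Every zip code, rate, and the formula itself is invented for illustration; no real lender’s model is shown here, but proxy variables like zip code operate on exactly this principle.

```python
# Hypothetical illustration only: invented zip codes, rates, and formula.
# Historical default rates like these would encode decades of segregation
# and unequal investment, making zip code a proxy for race and class.
DEFAULT_RATE_BY_ZIP = {
    "10463": 0.04,  # hypothetical affluent neighborhood
    "10474": 0.12,  # hypothetical historically redlined neighborhood
}

def credit_limit(income: float, zip_code: str) -> float:
    """Toy rule: scale income down by the neighborhood's 'risk'."""
    risk = DEFAULT_RATE_BY_ZIP.get(zip_code, 0.08)
    return round(income * 0.25 * (1 - risk), 2)

# Two applicants with identical incomes receive different limits
# purely because of where they live.
print(credit_limit(60_000, "10463"))  # 14400.0
print(credit_limit(60_000, "10474"))  # 13200.0
```

Nothing in this sketch mentions race, yet the output reproduces geographic, and therefore often racial, disparity. That is the quiet work a “digital persona” does.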
Technology is an ideology about what is and what should be. As an ideology, tech promises an “easy fix,” and that attitude can deeply trivialize complex humanitarian issues.
Thinking of tech in this way moves us past narrower concerns of app addiction and screen time and toward deeper, systemic, existential concerns of algorithmic justice, discrimination, and equity. As Joy Buolamwini of the Algorithmic Justice League puts it, these issues aren’t just about bias in face classification; they’re about “any data-centric technology… who gets hired, who gets housing. The technology is being rapidly adopted and there are no safeguards” (Coded Bias, 2020).
What now?
When I started writing for Novel Hand last March, I planned to write about the “intersectional implications of technology and business in approaches to altruism”: humanitarian hackathons, say, or how nonprofits and social enterprises can use machine learning to scale their work. In retrospect, I was interested in the “cool” ways that technology can aid humanitarian work.
Some ten months later, I still believe technology has incredible potential to facilitate this kind of problem-solving, but I’m far more concerned with investigating the ways it’s being used right now to further harm, intentionally or carelessly, already-disadvantaged individuals and communities than with theorizing about ways future technology might be used for social good. As I explored in my article on Effective Altruism, not all ways of doing good are created equal, and as a mode of social change, technology is no different.
I’m also far more wary of promoting a “techno-solutionist” mindset to readers. First, technology doesn’t solve problems: people do. Beyond that, many humanitarian problems, such as global poverty, environmental injustice, and the school-to-prison pipeline, require legislation, political reform, and dedicated global attention first, with technology better suited as facilitator than “solution.” In all cases, bearing in mind the burden of privilege and assumption that technology carries, we must be aware of how it can aggravate issues rather than alleviate them.
“Technology doesn’t solve problems: people do”
In my next article, I’ll start digging into some of the most pressing topics at the intersection of altruism and tech, aiming to answer questions like “Where do I start?” and “What can I do?” For now, here are a few topical ways tech is implicated in humanitarian and environmental issues. If you have two spare minutes (we all do!) and want to get fired up about abuses of tech and power happening now, watch this trailer for Coded Bias, which premiered at the 2020 Sundance Film Festival.
Algorithmic Justice League: Library of resources (extensive & excellent)
Silencing dissent: Google employees speak out against unethical tech
Unionizing: Fed up, Google employees form workers’ collective
Tech-enabled civil rights violations: Algorithmic inequity in employment, housing, loans, education, and policing
2020 Pew Expert Report: Does tech create more problems than it solves?
Tech & Giving: Philanthropy’s Techno-Solutionism Problem
A Bright Spot: The New York Times’ 2020 Good Tech Awards