What social responsibility do you have as a software developer?

The state of the source

For Edmund Berkeley, the most important question about computer technology was ‘How shall it be used?’ He asked this question before there was even any such thing as software. Today, Berkeley’s question is still relevant.

In fact, this question has garnered a lot of attention in the open source world over the past year. As contributors and maintainers see their work being leveraged to commit human rights abuses at an alarming scale, they’re starting to become aware of the significant impact of the technologies they create on the broader world.

Open source software today is increasingly playing a critical role in mass surveillance, anti-immigrant violence, protester suppression, racially biased policing, and the development and use of cruel and inhumane weapons of war. And a growing number of open source practitioners are wondering how their work is contributing to these atrocities.

Open source purists argue that open source software is value-neutral. This attitude is widely accepted even outside of open source, with many technologists claiming that all technology is neutral: it’s just a tool whose utility for good outweighs the impact of its use for evil.

Cyberlibertarianism

Former MIT professor Dr Langdon Winner has argued against the neutrality of technology in a number of journal articles and books, linking it to “cyberlibertarian” politics.

He describes this political philosophy as focusing on essential technological freedom: freedom from government or industry regulation, freedom from market restraints, and indeed freedom from anything that might hinder unfettered technological advancement. True to core libertarian principles, practitioners of this philosophy hold individual liberty as the ultimate political achievement, even when it is at odds with the rights or well-being of other people: a so-called “radical individuality”.

Similarly, open source purists insist on the primacy of “software freedom”. Richard Stallman’s definition of Free Software begins with a premise that came to be referred to as “Freedom Zero”: the freedom to ‘run the program as you wish, for any purpose.’

This notion is also reflected in the Open Source Initiative’s de facto “Open Source Definition”, which states that an open source license ‘must not discriminate against any person or group of persons’, and ‘must not restrict anyone from making use of the program in a specific field of endeavor.’ In the annotated FAQ on the definition, they emphasize the point: ‘Giving everyone freedom means giving evil people freedom, too.’

The philosophy of open source purists seems to combine the naive beliefs that technology is ultimately value-neutral, and that its usefulness will always outweigh its potential for abuse. 

Winner argues that the cyberlibertarian ideology reveals “power fantasies” in which unconstrained technological advancement will lead to the radical reinvention of society in a favorable direction.

He goes on to state what is glaringly obvious to social scientists: that what matters the most is not the technology itself, but rather the social systems in which it is embedded.

Technology and society

The cyberlibertarian perspective also draws on the ideas and ideals of a philosophy called technological instrumentalism.

The technological instrumentalist position holds that technology has no inherent ends of its own, and exists only to further human ends. Proponents insist that technology is completely under the control of humanity, and that humanity, not technology, shapes history.

But what are the human forces that control technology? 

Lelia Green, professor of arts and humanities at Edith Cowan University in Perth, has written extensively about technology within the context of social science. In her book Technoculture: From Alphabet to Cybersex, she explores the connections between new technologies, society, and governance.

Green explains that the idea of technological neutrality only makes sense when technological advancement is considered outside of social systems. But this idea breaks down when we take into account that culture and society evolve in response to these advancements – and not always for the better.

She argues that technological advancements serve the priorities of a powerful social elite – whom she identifies as the “A, B, and C of social power”. “A” is the armed forces; “B” is the bureaucracy; and “C” is the corporations.

Green observes that these three powerful institutions, rather than the whole of society, take responsibility for how new technologies are implemented and deployed in our world. And indeed, these same “ABCs” are heavily invested in open source as it is practiced today.

Green also notes that the increased globalization of technology (which we see reflected in the adoption of open source) allows Western elites to export these “neutral” technologies to other societies around the world. And their use in these other countries influences the way the technology is used in the West. We learn from its abuse. We exploit it to the same ends back home.

This calls to mind how the same technological advancements that let us unlock our phones touch-free were leveraged by China to surveil its citizens, and how that abuse was then imported back into the US to aid the government in identifying and persecuting protesters.

Technology and human rights

The United Nations Global Compact is the world’s largest corporate sustainability initiative, with participation from more than 10,000 companies across 156 countries. Launched in 2000 by former UN Secretary-General Kofi Annan, the program represents an effort to align business with the goal of bettering society at large.

The second of its ten principles discusses the responsibilities of the business world with regard to human rights. It specifically calls on corporations to support and respect internationally proclaimed human rights, and to work to ensure that they are not complicit in human rights abuses.

Complicity in this case means being implicated in a human rights abuse that is being committed by another company, government, group, or individual. And it’s not enough to simply avoid direct responsibility. Companies must reflect on whether the leverage they bring is influencing or encouraging these abuses; and if so, how their leverage can instead be used to further the mission of universal human rights.

As technologists, we should be asking ourselves the same questions. We have outsized leverage over how technology is used in the context of our own society and societies around the world. And we have a moral responsibility to consider how to use this leverage for social good.

Awakening

In September of 2019, activist Shanley Kane posted a series of tweets calling out technology companies with contracts with ICE (Immigration and Customs Enforcement) and the Border Patrol, both agencies involved in atrocities against immigrants at the southern border of the United States. Her tweets were inspired by an ongoing nationwide campaign calling out the complicity of tech companies in human rights abuses, organized by the Latinx and Chicanx activist group Mijente under the “#NoTechForICE” hashtag.

One of the companies that Kane called out was Chef, which makes open source tools for server configuration and management. Seth Vargo, a former Chef employee and author of several open source tools designed to work with Chef, was horrified to learn that software he had written was being used in connection with human rights violations. 

Vargo immediately pulled his source code off GitHub, and his libraries out of distribution through RubyGems. But Chef insisted that they owned the copyright to the code, as he had written it while employed by them, and within hours the software was restored.

Vargo acknowledged the responsibility he felt for how his software was being used, but he had no agency to act on his convictions. The open source establishment not only failed to support his actions; it found itself in the position of having to side with the human rights abusers and revert his act of conscience.

This was a monumental ethical and moral failing on the part of open source, and it left many community members questioning whether the FLOSS (Free, Libre, Open Source Software) institutions would ever allow developers to act on their social responsibilities, however belatedly they come to accept them.

Vargo had to admit defeat in his courageous struggle. In the README on his original repository, he left us with these words: ‘I have a moral and ethical obligation to prevent my source from being used for evil.’

We’ll explore the impact of Vargo's words – and the fate of the locksmith – in part three of this series, which will be published on October 23.

Episode 01: The parable of the locksmith
Episode 03: From open source to ethical source