Ratzinger Thought of This First
in which Joseph Ratzinger says things in 1982 that pass for cutting-edge Silicon Valley thinking today
In the Fall 1982 edition of the theology journal Communio, Joseph Ratzinger wrote an article titled "Technological Security as a Problem of Social Ethics." When I encountered this article in 2022, I was first surprised to learn that he had written about technology at all, and then more surprised when I read it and saw clear parallels to what I had thought was cutting-edge Silicon Valley thinking about the impacts of large AI systems like ChatGPT and Stable Diffusion. It is a dense article, so I want to provide some commentary and show how his insights are valuable to those of us who think about technology. To read a publicly available version of this article, follow this link.
What I take away from this article are three temptations of building and using technology that are fundamental to what technology is. Because these are temptations, not errors or sins, their presence is not a problem in itself. Instead, they are something we ought to be wary of, just as anyone with power over others should keep watch against pride, and anyone with good food should keep watch against gluttony. Then, because they are intrinsic to technology, these are issues whose impact good design can reduce but never fully remove. "More technology"1 cannot dismiss these temptations - in fact, it usually increases them. Knowing these temptations helps establish the habits for ethical thinking about technology.
When trying to describe the three temptations, I've settled on three questions as a kind of mnemonic: what, who, and how?
What: artifacts set laws. Ratzinger points out the inflexibility of the machine and compares it to a 17th-century teaching condemning "tutiorism." Tutiorism is based on the Latin word "tutior," which means 'safer.' It is the idea that a decision-maker must select what "corresponds most surely" to the law in each case. This idea was rejected because it "denies a person's own capacity for ethical insights"2. The machine's inflexibility is like tutiorism: its design contains within it a set of laws, which cannot be adapted to new circumstances. Ratzinger says, "Capability for the ever-new ethical decision and its restraining function in the face of varying circumstances is not included."
The problem with that inflexibility goes deeper than inconvenience. It is not enough to point out that inflexibility can annoy your customers or slow things down. An inflexible system acts as if its rules were divine law that cannot be overridden. To be just, the mechanical system must be part of a larger context that permits ethical judgment. A simple (if imperfect) example of this is customer support: if a system does not permit what is due by justice - say, unforeseen circumstances caused many flights to be canceled - then it falls to customer service to supply, through ethical judgment, what fundamentally cannot be in the system. The analogy is imperfect because customer service can often be an inflexible system itself, but I like it because it is a common example.
I've found it reasonable to extend this principle more generally: the artifacts we build and use set laws. If we only ever follow those laws, we are under tutiorism and cannot adapt to new ethical situations - whether that's something as mundane as a product not working because one's last name is Null3, or something as grand as preventing nuclear war by asserting that an alarm was a false alarm4. In the context of AI, this comes up in discussions of bias in these models: artifacts set laws, and so using a biased model sets those biases as laws.
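To make the Null case concrete, here is a minimal sketch - my own illustration in Python, not anything from Ratzinger's article - of how a rule baked into software becomes one of these inflexible laws. The field names and the list of "missing value" sentinels are hypothetical:

```python
# A minimal, hypothetical sketch: many systems treat the literal string
# "NULL" as a missing value, so a person whose surname really is Null
# gets rejected. The rule is a law written into the artifact itself.

def normalize(value: str):
    """The 'law' baked in: certain strings are taken to mean 'no data'."""
    if value.strip().upper() in {"", "N/A", "NULL"}:
        return None  # the machine cannot ask whether "Null" is a real surname
    return value

def register(first_name: str, last_name: str) -> dict:
    last = normalize(last_name)
    if last is None:
        raise ValueError("last name is required")
    return {"first": first_name, "last": last}

print(register("Ada", "Lovelace"))  # accepted

try:
    register("Christopher", "Null")  # rejected: the law cannot adapt
except ValueError as err:
    print("rejected:", err)
```

No amount of tuning the sentinel list removes the temptation; what restores justice is a surrounding context - a human who can override the rule - which is exactly Ratzinger's point about ethical judgment.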
Who: artifacts set up dependencies on others. Ratzinger notes that more complex technologies "centralize human achievements". They allow specialization but also introduce dependence. Recent issues with supply chains illustrate how we depend for our daily goods on so many people who do not know us and therefore cannot love us - and conversely, how often our own work is for people we do not know. To give a more technological example, Thaddeus from Machina Ex Deo shares this regarding his experience fixing things on the family farm: "If it's purely mechanical, you can fix it yourself. If it involves some copper and electromagnets, you probably can fix it. If it involves silicon, tough luck, kiddo." More technology means more dependence on other people to fix complex things.
This dependency is a centralizing force, and it places responsibilities on the person who is depended upon. Christians are not libertarians - dependence on others is not a problem in itself. It is, however, the precondition for both love and abuse. The people who centralize power - the technologists, designers, and businessmen - will be tempted to use it for their own gain, and the people who depend on these technologies risk being used for others' gain. The connection to recent discussions of AI is that if these "foundation models" are used very, very widely, that gives companies like OpenAI massive amounts of power.
How: artifacts depend upon materials and humans. Ratzinger rounds out the trio by noting that as we develop technology, we draw more and more on both material resources and human capabilities. Why this is the case is obvious - artifacts are made out of materials and made by human labor. The ethical responsibility he places on us runs both to the natural world (which should be familiar given our call to till and to keep) and to other people, with special mention of future generations.
AI comes into the picture when you total up the energy used to collect the data, train the model, and run the model. Another recent Silicon Valley fad, cryptocurrencies, involved massive amounts of power: roughly half to one percent of global electricity usage went to crypto.5 AI also requires significant amounts of human labor in order to work: consider the pushback against an image-generation program like Stable Diffusion from the very artists whose work was the training data that made it possible.
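What does "totaling up" look like? As a toy illustration - with entirely made-up numbers, since I am not citing any measurement here - the accounting is just a sum over the phases of a model's life:

```python
# A back-of-envelope sketch with purely hypothetical figures; the point is
# the shape of the accounting, a sum over every phase of a model's life.

phases_kwh = {
    "collect and store the data": 50_000,   # hypothetical
    "train the model": 1_000_000,           # hypothetical
    "run the model for a year": 3_000_000,  # hypothetical
}

total = sum(phases_kwh.values())
print(f"total: {total:,} kWh")
for phase, kwh in phases_kwh.items():
    print(f"  {phase}: {kwh / total:.0%} of the total")
```

In this made-up example, the ongoing cost of running the model dwarfs the one-time cost of training it, which is why the total over the whole life cycle, not the training run alone, is the relevant number.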
These ideas have been an important starting point for my own thoughts on computing, and some of them I've applied almost directly, such as in the ChatGPT post. There's still so much to expand upon in these paragraphs, though. Consider a technology you design or use in your own daily life. What laws is it setting? What relationships is it creating? What materials and labor does it rely upon?
What do we even mean by "more technology"? It's a phrase, so I'm convinced it has meaning, but it is difficult to articulate that meaning.
This may be strange to hear given contemporary concerns about moral relativism, but note that the subject is not natural law or divine law, but human law.
https://www.wired.com/2015/11/null/ - the writer Christopher Null recounts some of the problems of code assuming his last name is some error.
The reference is to Stanislav Petrov, the Soviet officer who in 1983 judged his early-warning system's report of incoming missiles to be a false alarm.
https://www.whitehouse.gov/ostp/news-updates/2022/09/08/fact-sheet-climate-and-energy-implications-of-crypto-assets-in-the-united-states/