Moral Codes
Designing Alternatives to AI
By Alan F. Blackwell, PhD
TOPICS
Artificial Intelligence • What it means to be human and creative • Software design and engineering
The Big Idea
“So, how do we design software that will let us spend the hours of our lives more meaningfully, as attentive, conscious, and creative moral agents, rather than being condemned to bullshit jobs while the machines pretend to be human? My simple answer is that we need better programming languages and less AI.”
Blackwell explains the fundamental ideas of artificial intelligence, “emphasizing the aspects that are important from a human perspective.” His explanations often run counter to the mainstream views of the AI research community, and certainly contradict the promotional messages we are fed by the large AI companies.
“This book is about how to design software better—better for society, better for people, better for all people—even if the result might be slightly less efficient overall, or less profitable for some.”
Key Definitions
Two kinds of AI. “Although there are many kinds of AI system, addressing all kinds of problem, journalistic and political discussions tend to assume that they are all fundamentally the same. I want to make clear that there are two fundamentally different kinds of AI, when we look at these problems from a human perspective.”
Cybernetics or control systems [objective]. These are automated systems that use sensors to observe or measure the physical world, then control some kind of motor in response to what they observe. (E.g., home heating thermostat or automobile cruise control.)
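To make this first, “objective” kind concrete, here is a minimal sketch of a thermostat-style feedback loop in Python. The read_temperature and set_heater functions are hypothetical stand-ins for real sensor and actuator interfaces, not anything from the book; the control logic itself is the standard bang-bang technique.

    # A minimal sketch of an "objective" control system: a bang-bang thermostat.
    # read_temperature() and set_heater() are hypothetical stand-ins for a real
    # sensor and actuator.
    import random

    TARGET_C = 20.0      # desired room temperature (Celsius)
    HYSTERESIS_C = 0.5   # dead band, to avoid rapid on/off switching

    def read_temperature():
        """Pretend sensor: report the current room temperature."""
        return 20.0 + random.uniform(-2.0, 2.0)

    def set_heater(on):
        """Pretend actuator: switch the heater on or off."""
        print("heater", "ON" if on else "OFF")

    def control_step(current_c, heater_on):
        """Measure, compare with the target, act: no subjectivity involved."""
        if current_c < TARGET_C - HYSTERESIS_C:
            return True          # too cold: heat
        if current_c > TARGET_C + HYSTERESIS_C:
            return False         # too warm: stop heating
        return heater_on         # inside the dead band: keep current state

    heater_on = False
    for _ in range(5):           # five cycles of the feedback loop
        heater_on = control_step(read_temperature(), heater_on)
        set_heater(heater_on)

The whole system is defined by an objective, measurable goal; nothing in it imitates a person.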
Imitating human behavior for its own sake [subjective]. The goal is to imitate human behavior, not for some practical purpose like making a machine run more efficiently, but to explore the subjective nature of human experience, including our experience of relating to other humans. (E.g., a chatbot or LLM.)
Machine Learning. “Learning” here takes poetic license with the term: it bears very little resemblance to the workings of the human brain. Machine learning algorithms use statistical methods and modeling to predict the next most likely result based on what has been entered so far.
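As a toy illustration of that kind of statistical prediction, here is a deliberately tiny sketch of a bigram model that “predicts” the next word purely from counts of what followed each word in its training text. This is not how real LLMs are implemented; it only illustrates prediction from statistics.

    # Toy bigram "language model": prediction from frequency counts alone.
    # An illustrative sketch, far simpler than a real LLM.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count how often each word follows each other word in the training text.
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def predict_next(word):
        """Return the word most often seen after `word`, or None."""
        candidates = following.get(word)
        return candidates.most_common(1)[0][0] if candidates else None

    print(predict_next("the"))   # 'cat' -- it followed 'the' most often
    print(predict_next("cat"))   # 'sat' -- a tie, broken by first appearance

There is no understanding here, only statistics over the training text. Real LLMs replace the counting with neural networks and far longer contexts, but the task is the same: predict the next most likely token.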
Artificial General Intelligence. This is the speculative claim that machine ‘learning’ algorithms will “naturally evolve as they increase in power to think subjectively like humans, including emotion, social skills, consciousness and so on.” Blackwell argues this reasoning is fundamentally flawed, resting on circular logic.
“The claims that increasing computer power will eventually result in fundamental change are hard to justify on technical grounds, and some say this is like arguing that if we make airplanes fly fast enough, eventually one will lay an egg.”
MORAL CODES. The title of the book is a call for “More Open Representations, Access to Learning, and Control Over Digital Expression.”
It’s about “paying attention to what you say, investing attention in your own needs, and giving attention to those around you. A focus on moral codes is about being a conscious human, who is able to make moral choices, and not about trying to make a conscious computer.”
What’s the Significance?
Context is hidden
“When an LLM-based chatbot presents the content of the Internet via a fictional first-person conversation, the illusion that the whole Internet might have a single point of view, consistent with your own, is especially pernicious. The filter mechanisms become invisible, can’t be explicitly controlled, and depend on contextual factors that you can only imagine.” This undermines individual agency, making people more susceptible to manipulation and reducing opportunities for learning.
Built-in bias and racism
The very concept of measuring “intelligence” emerged from a period of global racism, when the desire was to prove the superiority of some races over others. Blackwell does not suggest “a mathematical algorithm is necessarily racist, even if it was originally invented for racist purposes, but we do need to be concerned that the word ‘intelligence’ is also part of the racist project of eugenics.”
He says that when he “realized this myself, a lot of the problems of AI suddenly became clear. It is well known that AI systems are routinely biased, making decisions in ways that are racist and sexist. Most AI researchers believe that this is an accident, and can be fixed through better (unbiased) training data, or even mathematical methods to identify racist bias as a statistical deviation to be corrected. However, scholars of technology who understand the history of race, in particular Ruha Benjamin, have fought against those naïve assumptions, demonstrating the many ways in which these systems are racist by design, not by accident. Indeed, racism itself is a kind of technology, invented to defend the industrial processes of slavery.”
Not inclusive
The computer science (and cognitive psychology) that AI grows out of is “based on studies of people in countries that were Western, Educated, Industrialized, Rich and Democratic” … aka WEIRD. It is important to consider the biases and assumptions that underlie the technology before accepting and employing its models at face value.
“If you believe that all good things in the world are created by WEIRD white people, and that the world’s problems will be solved by WEIRD white people continuing to tell poor Black people what to do, then I guess you might think that AI research is going in the right direction and will solve the world’s problems. On the other hand, if you have started to notice that some of our engineering advances have made the world worse instead of better, you might wonder whether more of the same is such a good idea.”
Dangers of magical thinking
Many experts and advocates “downplay the difference between the two kinds of AI: objective mechanical tools and imitation of human subjectivity. This fallacy encourages magical thinking, in which future AI systems are imagined as being able to do everything a human can do and more, but it has little practical impact today, other than where important decisions being made by investors and policy-makers might be misguided.”
Making things worse than when we started
At times the logic of software development “seems to lead to new business models, inequalities, bureaucracies, and dysfunctional societies, even worse than the ones we started with.”
“Decades ago, we were promised that robots and computers would take over all the boring jobs and drudgery, leaving humans to a life of leisure. That hasn’t happened. Even worse, while humans keep doing bullshit jobs, AI researchers work to build computers that are creative, self-aware, emotional, and given authority to manage human affairs. Sometimes it seems like AI researchers want computers to do the things that humans were supposed to enjoy, while humans do the jobs we thought we were building the robots for.”
Conclusions
Generative AI is not “creative” in the human sense; it is probabilistic. “There is no logic or reasoning in an LLM, other than knowing that certain words are likely to follow others.” Creativity requires intention, or else it is simply noise. Intention and creativity are fundamentally human activities: giving attention to ourselves, other people, and the world around us.
Consider the cost. LLMs require massive amounts of infrastructure, at immense cost to the environment and to human well-being, especially for those in exploitative working arrangements doing the basic but difficult labor the algorithm cannot do.
Don’t be frightened that AGI will take over the world (and humanity). It’s a fallacy to expect LLMs to learn to code themselves and evolve into super-intelligent, self-redesigning Artificial General Intelligence.
Watch for exploitation. Creating an algorithm is expensive, so many “automated” tasks are actually completed by a human behind the scenes, often for little or no pay.
Choose your AI with awareness and understanding. Various AI tools will undoubtedly provide tremendous benefit to people and organizations. But they are also problematic and over-hyped, promising results that may not be possible. Developing an understanding of how an AI tool actually works, and of the hidden costs behind it, will allow decision-makers to select the tools that align best with their objectives and values.
Blackwell says that his “alternative to AI does involve paying closer attention to the tools used to make software, and in particular to the opportunities and ways of thinking that tools provide.” He continues that “a more effective response is to ensure that more people know how these tools could be used, so that alternative actions are clear.”
“Technology is a thing that we do, not a thing that happens to us.”
Who should read this book?
Leaders providing policy guidance to their organization
The tech-curious
Anyone wanting to make business decisions that contribute to the well-being of workers and the environment