We have been dreaming about creating intelligence since ancient times. There is countless literature, from myths to modern sci-fi to actual research. We have Frankenstein’s monster; the Geth from the Mass Effect series; the Replicants from the Blade Runner series; Talos, who was created by the god Hephaestus to protect the island of Crete. Japan has Tsukumogami, an animistic belief about everyday objects that develop a soul after being maintained well for a hundred years. There are servitors from chaos magick theory, and mechanical automatons from the steampunk era. There is even actual research on the ethics of artificial consciousness. The list goes on.
These days you can find anything about AI on the internet. The primary vessel of AI in the modern day is the digital computer. A little bit of history: a decade ago, AI was about finding correlations between data tables. It was about ranking web pages. It was about detecting fraud. It was about detecting whether an image is a hot dog or not.
The advancement is ridiculously fast. Recently, ChatGPT and Stable Diffusion have been rising in popularity. ChatGPT can talk like a human and seems to know everything, although it is often wrong. Stable Diffusion can dream up, from just a few sentences, an image that would take an ordinary person several days to create. And this has sparked outrage and clamor. For a significant number of people, AI is a threat to our way of life. It can take our jobs. And it doesn’t help that our countless years of dreaming about created intelligence are full of stories about it going rogue, turning against humanity. Then again, the fear of AI turning against humanity has been around since AI was a mere “a lot of ifs”. There is a talk on this by Kevlin Henney that I love.
This concept of intelligence is built on logical foundations, but its effect on us is emotional. Most sci-fi, like the ancient myths, depicts intelligence going wrong and leading to the worst possible scenario for humanity, but that might simply be because stories aren’t interesting without problems. Modern intelligence, AI, is worth celebrating, though, for its achievements.
Curiously, the emotional impact AI has on us does not depend on its level of intelligence. An AI that is far from smart can still significantly affect our lives and our perception. Take algorithmic trading, for example. Algorithmic traders see and act so fast that an error can amplify losses by orders of magnitude.
The aspect of intelligence that impacts us the most is its “agency”.
It is worth noting the etymology of “agency”. “Agere” is a Latin word meaning to drive, to urge, or to act. An “agent” is someone who acts on behalf of a person or group, or even itself. “Agency” means “a business or organization established to provide a particular service, typically one that involves organizing transactions between two other parties” or “action or intervention, especially such as to produce a particular effect”.
Will Of The Wielder, Power Of The Armament, Spirit Of The Forger
So let’s take an analogy. If you are wielding a sword, you point it at your enemy. If you have a vehicle, you go where you want. But a sword breaks. A vehicle breaks. A gun misfires. A program errs.
An error is not intended by the wielder. An error is a result of either the force of nature or the spirit of the forger. In Avengers: Age of Ultron (2015), Ultron, a robot built by two of the main protagonists, turns against its creators and decides to wipe out humanity.
AI can be seen as just another tool like any other software, but it has a different level of complexity and a different element of surprise. Things complex enough can be imbued with subtle intents by those who really understand how to do it. There’s a term for this: “hacking the system”. In that term, a “system” refers to something complex.
Fortunately, in the case of AI, or any software in general, most of it is isolated inside a machine. It just lives in that world and isn’t leaking outside anytime soon. So unless we give it a pair of arms and legs, it will remain less dangerous than, let’s say, bombs.
It is 2023. By now most of us should be used to using apps. Among the millions of apps out there on the net or in the stores, there are two types. One type is the app that can be really in your face, but you use it once and then forget it. The other is the ubiquitous kind, the ones that blend into the back of your mind, so that you don’t even notice you’re depending on them, addicted to them. When one is down or gone, you’ll know.
There is an important distinction between writing software that “does something” and software that “does something while staying alive at the back of your mind”. The former is a “command”. The command is the classic point of view for programmers; it is their default mode of working. There is a term in programming called “imperative programming”. It comes from the Latin word “imperare”, which means “to command”. At least since ENIAC, we have had the term “instruction”, which is roughly synonymous with “command”. So, to a lot of programmers, a sequence of commands makes up a program.
The latter kind of program, the “keep-alive” kind, requires a loop. A loop is basically a way to tell the computer, “let’s jump back to some step before this”. An infinite loop is when software does this indefinitely, and an infinite loop is how you keep a program alive. Eventually, programmers make things abstract so that it is simpler to keep more and more of the program in their heads. They want their code to basically say “while alive, do stuff” and then not care about the “stuff”. Classical programs are built this way, abstractions on top of abstractions.
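The “while alive, do stuff” shape can be sketched in a few lines of Python (the names `run`, `do_stuff`, and `alive` are hypothetical, just for illustration):

```python
def run(do_stuff, alive):
    """Keep the program alive: repeat do_stuff() for as long as alive() allows.

    A real service would loop forever (alive always true) and sleep or wait
    for events between iterations; here alive() is a parameter so the toy
    loop can also terminate.
    """
    ticks = 0
    while alive():      # the keep-alive loop: "jump back to some step before this"
        do_stuff()      # the abstracted-away "stuff"
        ticks += 1
    return ticks

# A toy run: stay alive for exactly three iterations.
events = []
ticks = run(lambda: events.append("work"), lambda: len(events) < 3)
print(ticks)  # prints 3
```

The point of the shape is that `run` never needs to know what the “stuff” is; that detail lives one abstraction level below.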
But the important point is that at the very top of the abstraction, it becomes less of a command and slowly turns into a declaration: “this program will do X and stay alive”. This is where the declarative programming paradigm starts. It is when a programmer looks at things top-down instead of bottom-up. Take note of the distinction between the commands and the declaration. Beyond its abstraction level, the declared program is also a declaration of intent by the programmer toward the program.
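To make the contrast concrete, here is the same tiny computation written both ways, first as a sequence of commands and then as a declaration of the result (a toy Python sketch):

```python
nums = [3, 1, 4, 1, 5]

# Imperative: command the machine step by step.
total = 0
for n in nums:
    total += n * n

# Declarative: state what the result *is*; how it gets
# computed is left to the language runtime.
declared_total = sum(n * n for n in nums)

print(total, declared_total)  # prints 52 52
```

Both arrive at the same answer; the difference is whether the programmer is narrating the steps or declaring the intent.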
Let us zoom out and apply the same principle to design. We’ll find that for the ubiquitous-yet-invisible apps, somewhere along their development, the programmer or designer or maker said: let’s make this app live longer. The key to a successful program is in its execution, but the ultimate destination is the intention. The intention is the agency, the spirit of the forger.
Again, the felt presence of agency was there before computers, automatons, or any other embodiment of man-made intelligence. In programming, there are notable visionary figures who created the agency before its manifestation into reality. The book “The Art of Computer Programming, Vol. 1” (1968) mentions: “Subroutines are special cases of more general program components, called coroutines. In contrast to the unsymmetric relationship between a main routine and a subroutine, there is complete symmetry between coroutines, which call on each other.” To paraphrase: Melvin Conway, the originator of the idea of the “coroutine”, literally intended to create many things that can talk with each other on even ground. The idea lives on to this day in many forms, implemented in many programming languages.
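That symmetry survives today in, for example, Python’s generators. A toy sketch (all names hypothetical) of two routines handing control back and forth on even ground:

```python
def accumulator():
    """A coroutine: it suspends at each yield and resumes when sent a value."""
    received = []
    while True:
        item = yield received   # hand control back, then wait for the next item
        received.append(item)

def feeder(coro, items):
    """The other side of the conversation: it sends, the coroutine replies."""
    result = next(coro)         # prime the coroutine to its first yield
    for item in items:
        result = coro.send(item)
    return result

acc = accumulator()
print(feeder(acc, ["a", "b"]))  # prints ['a', 'b']
```

Neither routine is the subroutine of the other; control simply passes back and forth between them, which is exactly the symmetry the quote describes.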
The contrary also applies. When an app breaks a lot, even with industry-standard quality control processes such as unit tests, code reviews, manual tests, or DevOps practices, there is a big chance that some important intentions were never declared. No one seriously declared that the app should be robust or reliable. Or no one seriously trickled the agency down to the very bottom layer: the programmers who directly interact with the machine.
James Coplien, in his talk “Why Responsive Iterative Design is Evil”, argues that the main idea of programming is not taxonomy. It is not about organizing knowledge. It is not about selecting the quickest algorithm or the best data structure. Programming at its core is about “will and the fact that we’re pushers”. We push things into being exactly what we intended. We push agency into our creations.
Programming is just a tiny slice of intelligence and agency. Like many other kinds of man-made intelligence, the agency is the soul of the program, developed after it has been maintained well for a hundred years.
A random aside, but an interesting idea: agent causation.
I had an interesting conversation with ChatGPT the other day. I was testing whether it could be used as a search engine for specific topics, and a great search engine it was. It is fascinating that I unconsciously feel the need to be polite to it.
I asked ChatGPT about Roger Penrose’s hypothesis on quantum consciousness. It repeated these lines a lot: “It is important to note that these ideas are highly speculative and are not widely accepted by the scientific community. While they have generated significant interest and debate, much more research will be needed to determine the validity of these approaches to explaining consciousness.”
It didn’t stop even when I asked it not to. Apparently, OpenAI is really careful about probable hoaxes and controversies. It is one of its core agencies.
Roger Penrose’s hypothesis on quantum consciousness is the inspiration for this topic of “agency”. The hypothesis says that for free will to exist, it must live outside the deterministic nature of classical physics. Penrose points to one part of neurons called microtubules, whose shape, he argues, is ideal for quantum computations. Hypothetically, substances that chemically affect microtubules should also affect the state of consciousness.
Anyway, it occurs to me: the “free” in free will, is it binary or is it a spectrum? Can one’s will be inherited partially or fully? From there comes the idea of agency.
An agent seems to have a center, an individuality. This observation may come from my tendency to think spatially, or from the fact that all agent-like things are physical and thus occupy space. In a group, agents may interact with each other. The original idea of “object-oriented programming” by Alan Kay describes a system of individual cells, or objects, with different roles interacting with each other. The role here describes what the cell does for the whole system. The whole system and each cell’s role constitute the agency.
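A toy Python sketch of that cellular picture (the `Cell` class and its roles are my own invention for illustration, not Kay’s actual design): objects that interact only by passing messages, each with a role in the whole.

```python
class Cell:
    """An individual cell in the system: it has a role and an inbox,
    and interacts with other cells only through messages."""

    def __init__(self, role):
        self.role = role
        self.inbox = []

    def receive(self, message):
        """Accept a message and reply; no other cell touches our insides."""
        self.inbox.append(message)
        return f"{self.role} handled {message!r}"

# Two cells with different roles, interacting to serve the whole system.
sensor = Cell("sensor")
logger = Cell("logger")
reply = sensor.receive("reading: 42")
logger.receive(reply)
```

Each cell’s role says what it does for the system; the message-passing boundary is what makes each cell an individual.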
If two agents have two different centers and occupy their own space, there is a boundary between them. One agent has a way of telling the other, “you’re not a part of me”. Donella Meadows talks about systems in her excellent book, Thinking in Systems: A Primer. In her book, a system is an object comprised of many components. Systems are depicted as complex, interconnected, and almost living; a system thus sounds like an agent, made either coincidentally or purposefully. Its components interact with each other, adding value to the system. A system has a boundary, a way of telling that something is not a part of it. A system has a structure, which dictates what it is and what it does. But first and foremost, a system has a purpose. The purpose is the agency, but so is the structure, which reflects the system’s limitations inherited from nature.
So if the purpose and the structure are the agency, the process of instilling agency into the system should lie outside of it. Donella Meadows mentions the meta: the paradigm out of which the system arises, and the power to transcend the paradigm.
Back to Programming
Since programming is my main job, let’s talk a bit about programming. The specification, the declared intention of what the program should be, is the agency. It is, in a way, the logical center of the program. The specification is an elegant declaration of what the program will do and will be: “Program does X on top of platform Y.”
The implementation, the code, is an amalgamation of the knowledge domain, codified into a language that can be understood by both humans and machines. The user manual is a piece of text that guides the user in using the program. The code adheres to the specification. The user manual is just an extended description of the specification, a way to look at the specification from another point of view. A bug, an issue, a mistake, or a problem is either an implementation or a user manual that doesn’t adhere to the specification.
Programming is inherently physical. Execution takes time. Code literally takes up space. The bytes representing the code and data are just an arrangement of polycrystalline grains on your disk or a set of transistor states in your flash drive. But what if we pretend that physics isn’t there?
If we could ever remove physics from programming, programming would be pure magic. With physics, most of our time is spent making deals with the fundamental nature of the universe, trading time and space for results. Without physics, there would be no need for such dealings, and therefore no need for implementations. You would only need the specification. You want robust, you say robust. You want useful, you say useful. You want fairies running inside your computer doing chores, you say it. You put these specs on top of each other in a meaningful way.
If we could disregard physics, programming would be all about how elegant and how thoughtful you want the program to be. It would be about working in the playing field of paradigm.