“We walked around in circles singing whoa oh
I said we could walk around practically forever singing whoa oh
I said our heads were filled with things that
Didn't really matter anyway we're singing whoa oh
I said we could walk around for practically forever singing whoa oh”
~A. McCarthy, C. Doroschuk, I. Doroschuk, S. Doroschuk
Where Do the Boys Go? © Universal Music Publishing Group

Metroponus Statement of Purpose

What am I doing? Why? What is my purpose? I have a difficult time accepting that life alone is enough, particularly in the face of the damage humans are wreaking on the ecosystems that support vertebrates. This includes variations like a purpose of procreating and protecting my offspring. While protecting my offspring is a responsibility, I do not accept that it is my special purpose, despite the insight of Navin Johnson in the movie The Jerk:

special purpose

Unlike Steve Martin, I did not study much philosophy. This statement of purpose is a form of outsider art, a naïve philosophical framework that I've patched together to explain my place, well after my nerd fervor played out over a career in information technology and electronics. Looking back, I am not sure I would change that. I like the freedom I get as an outsider.

Before I can establish purpose, I need context: Where do I come from? How did I get here? What are the current problems? What are my strengths? What are my limitations? What interests me? While the answers to these questions are far too broad on their own, I am constraining their scope to what can help determine my purpose. This is not meant to be academically rigorous. It is my map of what I think, a summary of writing and thought in these areas. I don't expect the answers to be the same for any two people, but they might be generally interesting. Posting this in the form of a statement of purpose also has an aspect of commitment, in a Citizen Kane sort of way. The ideas discussed in this essay are frenetically described in detail via the links behind the icons at the bottom of this page.

One limitation humans have is seeing patterns where none exist. Consider popcorn ceilings, a textured ceiling treatment sprayed on for appearance and sound deadening, popular in the United States in the 1970s and 1980s, where I grew up. Often, when I've found myself staring off into space, I've stared at popcorn ceilings. Like clouds, if I let my clockwork conscious mind drift, the texture in the ceiling can easily morph into a dragon or a tree or whatever else my brain imagines. That is how my brain works. I make order out of chaos. This is part of what it means to be human. It is also one of our superpowers, but I do need to be wary of patterns, as they might not be there. I may be basing my purpose on an illusion. Another limitation is that humans have limited active focus. I can make out detail on an object I'm looking at directly, but objects at the edge of my vision have neither that detail, nor can I cognitively distinguish them in real time in the same way.

My unconscious mind also limits my ability to rationally evaluate my purpose. There are two aspects of this. First, I have unconscious desires that play out without my conscious awareness. Second, there are parts of my psyche where I share cultural artifacts of meaning with others I have never met. I experience shared cultural artifacts through dreams. Rooms and cars have similar meanings in the dreams of people who have never directly interacted. Some of these artifacts go back thousands of years and show up as mythical figures. These artifacts shape my perceptions, just as my human tendency to find patterns in chaos and my limited focus guide, filter, and transform my cognitive abilities. While I am mostly blind to the terrain of my unconscious mind, I am able to occasionally map the territory with words as I wake from dreams.

The limitations I have are both what I have observed in other humans and what I have observed personally. I don't think it is a fallacy, with the kinds of traits I'm discussing, to extend this to all humans as I try to make sense of how we got to where we are as a species. Dolphins, crows, or primates might share some of the same cognitive features as humans, but it is language, which perpetuates our rich culture, that sets us apart as a species. With spoken language our species was relatively stable for hundreds of thousands of years. We laid our hands on the same place on top of other human imprints on the same cave walls, in some cases for five thousand years. We grew our culture slowly with stories, an ebb and flow between geographically distant but occasionally intersecting groups of humans. We created art, reflections of the world we saw, reflections of our cultural memory as a foil to the flickering light in the caves we sheltered in.

It is written language that changed everything, exploding the balance. I'm including pictographs in my definition of written language. This corresponds to the earliest civilizations, starting six thousand years ago. Civilization requires written language because of the supply chains necessary to support large population centers. Even the simplest supply chain, storing wheat to survive through the winter, then planting in the spring, harvesting, and distributing, requires written language to scale to the needs of a city. We also use written language to create the rule of law. Documents capturing rights, freedoms, manifestos, and religious teachings are all woven into civilization. Like agriculture, science relies on written language in most cases, as it is the only way to collaboratively capture hypotheses and test predictions at any kind of scale.

Written language captures and improves chains of knowledge, the frameworks of tangible things. There is a weave back and forth, a leapfrogging between the supply chain and technological advancement, pushed forward by science. Written language changes over time to capture knowledge itself. Consider the word entropy. While it is true that different humans filter and understand the word differently, engineers and scientists have a more precise, roughly shared understanding of the concepts behind entropy. An engineer might use entropy to dismiss fraudulent claims about an energy machine. An astrophysicist might apply the concept to understand the formation of stars. While it is true that knowledge can be passed on verbally, the collaborative and explosive, exponential nature of written language makes it fundamentally different, just as concentrated energy from fossil fuels propels civilization forward in ways that energy from burning wood never could.

Models, like trees, graphs, and blueprints, perpetuate knowledge at scale, similar to how the written word can scale civilization. The idea of a tree is particularly important in studying the taxonomy of living things. Not only does the word mean something, but its placement in the model means even more. Consider the word Mustela. Mustela is Latin for weasel, and the word still carries this meaning in English, Spanish, and likely many more languages. More importantly, though, it is part of a taxonomy of animals, a genus that includes weasels, polecats, stoats, ferrets, and mink. If we look one branch up in the tree we find the family Mustelidae, which has a subfamily Lutrinae, of which the otter is a part. These words don't mean much on their own. It is placement in the tree that adds most of the meaning, along with translations to local language. In this case, we might infer the true nature of otters based on placement in the model, even though in our everyday language we don't couple weasels and otters. Otters are cute and cuddly, but weasels are... well, weasels. The reality is that otters act like weasels, and that can be inferred from the model.
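This kind of inference from placement can be sketched in a few lines of code: a tree where traits attach to interior nodes and every descendant inherits them. The node names follow the taxonomy discussed above, but the traits and the code itself are illustrative assumptions, not drawn from any real biological database.

```python
# A minimal sketch (assumed structure, not real biological data):
# each node names its parent, and traits attach to interior nodes.
taxonomy = {
    "Mustelidae": {"parent": None, "traits": {"carnivorous", "long-bodied"}},
    "Lutrinae":   {"parent": "Mustelidae", "traits": {"semiaquatic"}},
    "Mustela":    {"parent": "Mustelidae", "traits": set()},
    "otter":      {"parent": "Lutrinae", "traits": set()},
    "weasel":     {"parent": "Mustela", "traits": set()},
}

def inherited_traits(name):
    """Walk up the tree, accumulating traits from every ancestor."""
    traits = set()
    while name is not None:
        node = taxonomy[name]
        traits |= node["traits"]
        name = node["parent"]
    return traits

# Otters carry the mustelid traits through placement alone, even though
# the leaf node "otter" declares no traits of its own.
print(inherited_traits("otter"))
```

The point is that the leaf node contributes almost nothing; the meaning comes from where the node sits in the tree, which is exactly how the otter's weasel-like nature can be read off the taxonomy.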

Models are an important component of science. Science formed our understanding of the moon, aerodynamics, and harnessing energy to land on the moon. This created entire areas of technological advancement and supporting supply chains that persisted both as a way to generate particular material items, but also as knowledge that could create new technologies. The tree of knowledge itself, as a model, is woven in through our collective cultural artifacts. We understand the consuming aspect of knowledge, and the tree and snake emerge in our cultural memory, both as a warning and as a symbol of power. Myth, structures in our unconscious mind, and our ability to discern patterns all play into how we perceive knowledge. Our civilization grew on this mix, in concert with our knowledge base and shared cultural artifacts. And here we have some closure in the consuming circle, the ouroboros, the snake among the branches in the tree of knowledge, a symbol as old as civilization that appears in various forms across all cultures. We scaled civilization by reading, standing on the old words and bootstrapping to higher and higher levels, leveraging science and models, to the moon and beyond. This is the power of knowledge: knowledge of everything is at our disposal as we scale these last six thousand years. It is our goals, and our path to those goals, that we need to be conscious of.

We now live in a hyper-accelerated time, which began in the nineteenth century, when we started leveraging oil for energy and designed the first computer. Ada Lovelace wrote this about Charles Babbage's computer in 1843:

“The Analytical Engine has no pretensions whatever to originate any thing. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with. This it is calculated to effect primarily and chiefly of course, through its executive faculties; but it is likely to exert an indirect and reciprocal influence on science itself in another manner. For, in so distributing and combining the truths and the formula of analysis, that they may become most easily and rapidly amenable to the mechanical combinations of the engine, the relations and the nature of many subjects in that science are necessarily thrown into new lights, and more profoundly investigated.”
~Ada Lovelace, Note G, p. 722, Scientific Memoirs, Selections from The Transactions of Foreign Academies and Learned Societies and from Foreign Journals, edited by Richard Taylor, F.S.A., Vol. III, London: 1843, Article XXIX. Sketch of the Analytical Engine invented by Charles Babbage Esq. By L. F. Menabrea, of Turin, Officer of the Military Engineers. [From the Bibliothèque Universelle de Genève, No. 82, October 1842].

Oil provided energy to build out even more sophisticated technology and supply chains. Vacuum tubes and transistors accelerated knowledge even further, particularly with computers. With the help of computers and oil, supply chains could grow even more complex, and the synergistic build of knowledge domains gave us many of the current miracles of modern civilization. There is something more. There is artificial intelligence (AI). I'm not talking about Terminator AI, Skynet; I'm talking about artificial cognition towards goals. Let's set knowledge aside a bit and talk about cognition towards goals, and what that means. We have limited cognition as far as sight, see patterns (sometimes where none exist), and have various cultural artifacts and archetypes in our psyches that play out and emerge through our knowledge interplay in our cultures. It is shared intention, though, an unintuitively limited cognitive ability, that threatens our persistence as a species.

Unfortunately, our particular state of civilization cannot persist, and in more than the normal "everything changes" way. I could ask 10 different people why, and they would likely give me 10 different answers as to why civilization cannot persist in its current state. Perhaps it could be argued that civilization shouldn't persist, but that is out of scope for this essay. I think it is fair to say that the nature of the acceleration due to oil and computers, together with human cognitive limitations, brought us to where we are. The quote above from Ada Lovelace corresponds to the launch of computer science, but it also points to the idea that computers will not solve the problem of analysis and design of systems towards a goal. As the creators of computers and their rule sets, we humans need to program computers towards the correct goals.

Shared intention is similar to goals, but when it plays out in real time it is almost like mind reading. I use my assumptions about those I see, my knowledge of particular tools, and imagine how my shared intention either matches that of others I see or doesn't. It is a form of mind reading, and humans are surprisingly good at this within certain constraints. As an example, imagine that you are in a football game. You huddle, get some instruction, and then in real time you interact with the other team and establish shared intention of getting the ball across into the end zone. With football, there are no shared tools, just shared intention and a ball. Lacrosse adds a stick as a tool, but there are still two teams. Polo adds a mallet and a horse. Now, imagine that there are several tools and teams. It is hard to imagine that cognitively this is something we can manage in real time without technological help. For instance, if I had to understand how to respond with a mallet in one hand and a glove in the other, with amber and green balls, one I could only hit and one I could only catch, with a team of ten people on my own side, some interacting physically per the rules of the game and others who could only shout directions, and all of this with multiple teams, my cognitive abilities could not keep up in real time. And, while humans do have an advantage in this area as far as shared intention, it has limits.

Our cognitive ability does not give us global shared intention, then, as we can only cognitively play with a limited number of tools against one team for a shared goal. Even if everybody's team had the shared goal of world peace, we all have different sets of tools, and we can't deal with multiple teams. While we do build knowledge, we are limited in our shared understanding of that knowledge because of cultural differences. We have many strikes against us, then, so to speak, as we try to work collaboratively to win the game, to sustain our species in a way that is beneficial over time. Nobody *wants* to burn down all the forests, or pollute all the water, or flood our coastal towns. We just don't align, and we can't do it in real time. It gets worse. As we struggle, as we lose the game, we think our team is right, that our beliefs reflect reality. This bit from a TED talk helped me understand how we got to where we are:

“Think for a moment about what it means to feel right. It means that you think that your beliefs just perfectly reflect reality. And when you feel that way, you've got a problem to solve, which is, how are you going to explain all of those people who disagree with you? It turns out, most of us explain those people the same way, by resorting to a series of unfortunate assumptions. The first thing we usually do when someone disagrees with us is we just assume they're ignorant. They don't have access to the same information that we do, and when we generously share that information with them, they're going to see the light and come on over to our team. When that doesn't work, when it turns out those people have all the same facts that we do and they still disagree with us, then we move on to a second assumption, which is that they're idiots. They have all the right pieces of the puzzle, and they are too moronic to put them together correctly. And when that doesn't work, when it turns out that people who disagree with us have all the same facts we do and are actually pretty smart, then we move on to a third assumption: they know the truth, and they are deliberately distorting it for their own malevolent purposes. So this is a catastrophe.”
~Kathryn Schulz, from TED2011 talk On Being Wrong

For all of the marvels of human cognition and culture, we are crippled in several areas. We can create massive, sprawling, organic, consuming spreads of global supply chains, but we can't work towards the same goal with the same shared intention because any models are filtered through our culture and psyche, and real time shared intention is limited to two teams and one or two shared tools. To make things worse, our assumptions, as we attempt to communicate about what we see as a natural outcome of our knowledge, are often false. Humans don't naturally have shared intention at scale, nor an effective means to cognitively evaluate shared intention. And, in this failure, we vilify other teams and undermine what possible shared intention we had left.

That all seems pretty bleak, then, but there is hope. I've identified several problems above that can be addressed by knowledge if we are aware of our cognitive limitations. Think about the last conflict you had in a collaborative work session. It is likely that various people argued their stance, their narrative, but there was no real progress towards a decision for the group. There are many reasons for this. Sometimes people see teams; for instance, they might be on the visionary executive team and see the other team as techno-nerds focused on rabbit holes. One way forward is to focus on interests instead of stances, because people often have more shared interests than shared positions. This is an example of overcoming cultural artifacts and filters. Another way is to use alternative ways of sharing knowledge, like a tree or a graph. Sometimes simply writing down the understood requirements is sufficient to get agreement.

Let's go further, though. Let's scale the approach to the size of the problem using advancements in semantic techniques for capturing knowledge in relation to a goal. This forms the basis for some forms of AI. What might be a shared goal around the world? How about access to fresh water, sustaining food, and a wet bulb temperature of less than 32 °C? In order to have sustaining food, we need a sustaining ecosystem, which is also related to temperature and fresh water. A semantic technique forms relations between well-defined things, put in the shape of a simple sentence: the relations are subject, predicate, and object. Collections of knowledge in this form are called ontologies. For instance, there are ecological ontologies with entries like "An organismal quality inhering in a bearer or a population by virtue of the bearer's disposition to survive and develop normally or the number of surviving individuals in a given population." These are published definitions that anybody can use. Here is how this particular entity looks in a semantic graph:

example entity
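The subject-predicate-object idea can be made concrete with a toy triple store. This is a minimal sketch: the entity and relation names are illustrative assumptions tied to the shared-goal example above, not identifiers from any published ontology.

```python
# A tiny in-memory triple store: each fact is a
# (subject, predicate, object) tuple, like a simple sentence.
triples = {
    ("fresh_water", "supports", "ecosystem_health"),
    ("sustaining_food", "depends_on", "ecosystem_health"),
    ("ecosystem_health", "depends_on", "temperature"),
    ("wet_bulb_temperature", "constrains", "human_survival"),
}

def query(subject=None, predicate=None, obj=None):
    """Return triples matching the given pattern; None acts as a wildcard."""
    return [
        (s, p, o) for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# Ask: what depends on ecosystem health?
print(query(predicate="depends_on", obj="ecosystem_health"))
```

Real systems use the same pattern at scale, with globally published identifiers in place of these ad hoc strings, which is what lets independently built ontologies hook together.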

Because they are published definitions, they can build on each other. A pioneer in this area is Barry Smith. He coached scientists in using these techniques so that they captured meaning. This is knowledge, but it is exponentially more powerful than a model like a tree, as different domains can hook together. There is a side bit here, in that Tim Berners-Lee imagined web domains as domains of knowledge, so that relations between the domains would be possible via hyperlinks. Much of the current work in semantics is mirrored in the World Wide Web.

Computers can take these semantic relations and build models of the teams and available tools in relation to shared intention. It is possible to plug in various shared goals and figure out how to reach those goals. Unfortunately, although AI is currently used extensively in our civilization, the goals are not conducive to the persistence of civilization at this point. AI optimizes the global supply chain, maps out logistics for shipping containers, delivers products via drones and autonomous vehicles, maximizes revenue for social media platforms, and will play songs or buy products for us by understanding verbal commands. In short, almost all large systems of AI are used for profit or power. I am not aware of any general goal related to the persistence of civilization or global shared intention. The only global shared intention I can think of is a fictional scenario in the movie WarGames, where the military computer modeled global thermonuclear war and concluded that the only way to win was not to play the game.

The Internet, they say, was created by the military, but it grew and was financed by porn, at least in the beginning. I can believe that. And so, by extension, just because most AI is used in concert with the global supply chain and is divorced from global shared goals doesn't mean we can't harness the power of AI to further shared goals. Now, if you are running a large piece of the global supply chain, or a social media platform with the goal of profit, or even a search service with the goal of advertising revenue, it isn't particularly in your interest to change the AI. And so the natural cognitive limitations of humans play right into this. We devolve into us vs. them, with limited awareness of shared tools. Our limited models lead us to conclude malevolent purposes, when, really, it is a limit of cognition and the overall human cultural filter that keeps us apart. Meanwhile, the AI is used for different goals.

Within that context, then, I see the same patterns of usage and the same effects appear repeatedly. People generally split off into two teams. They usually use a limited set of tools in relation to a shared goal with a relatively local focus, or a focus that is ridiculously vague. The world is increasingly run by AI with goals that do not align with real global shared intention, and the AI itself is promoting this because of its programmed goals, in a feedback loop. For instance, attention is captured by division into sides and fed by tiny dopamine hits from likes. On the positive side, there is a large ecosystem of related tools available to model knowledge in more and more sophisticated ways, and the ability to create AI that has more universally beneficial goals. We are still in the midst of decades of relatively unrestrained growth of the global supply chain, with neglect of negative externalities, primarily because the goals were growth and profit, not the health of ecosystems. We are approaching the limits of that growth, and we will face the consequences. At the same time, we have the tools to more effectively determine our own goals and map them to existing ontologies.

I don't see the value of a statement of purpose that is negative. My conclusion should be within the positive things that I can do. While I have no illusions about just how far we have pushed the limits to growth, nor the extent of the ecosystem feedback loops that are now in play, I still think it is useful to create a positive statement of purpose:

I come from the human perspective, with limited cognition, as I described above. I have skills in computers, and can model knowledge using semantic techniques. I am interested in myth and the fabric of written words, how this has played out through the last 6,000 years, and how common structures and artifacts of our unconscious mind are revealed in dreams. What is needed in the world is an understanding of our shared cultural artifacts and the personal journey that reverberates through our perception, and of how knowledge can guide us towards shared, intentional goals. I continue to write both fiction and strange hybrids that layer archetypes and other cultural artifacts through my understanding of how the written word got us to the place we are, 6,000 years later. I publish journaling software to map our personal journeys, which helps us track and understand our unconscious mind. I continue to publish methods to create our own shared knowledge, with our own goals.

[icon links]
