THE CRAYON QUESTION: CREATION IN THE AGE OF AI

November 21st, 2024

Why are refrigerators plastered with crayon drawings in homes with young children? Are these drawings products for the parents and adults to consume? Perhaps. But phrasing it this way is a little ridiculous. It's infusing a situation that is somewhat devoid of capitalistic structures with the terminology of capitalism. So why do these crayon drawings exist? If the answer is obvious, keep it in mind.

 
There is much squabbling over AI art. Most of it can be safely ignored because almost all of it misses the point that should be the obvious answer to our crayon question. But the mere existence of this fussy, constipated and shit-slinging dispute is itself proof that the parties involved are blind to the answers to similarly fundamental questions with simple answers.
 
Much of this squabble is rooted in anxiety over financial stability. 
 
If the computer can produce a better image than I can, and do it ten thousand times faster than I can, then how can I possibly make a living as a designer? Replace "image" with almost any form of creation that can be seen or read on a screen and the concern is the same across industries. As I understand it, the majority of people connected to Hollywood for their livelihood are very nervous about the future of their professions.
 
Infusing capitalism into areas of human activity that we deem "art" radically alters the conversation. It comes to bear almost no similarity to our Crayon Question. The constraints of life and "making a living" pollute the answer with a vast network of incentive structures that are not present for the child who is plying a crayon to the paper. The child is not thinking "if I don't make a good enough drawing, then mommy won't give me food in exchange for it." Ponder for a moment how utterly brutal and heartbreaking it would be for a child to even conceive of this question. But this is essentially the question of adult artists, and the reality is that it makes one a "starving artist" simply because most output is not deemed valuable enough - unless of course the artist "makes it" and becomes plugged into one of the systems of mass distribution, be it traditional publishing, or Hollywood - or rather Netflix, etc.
 
One of the largest bottlenecks for the success of the starving artist is the amount of time and effort it takes to get good at something. Some people get lucky: they grow up in the right situation, with the right random proclivities, and speed-run this training period while "making a living" isn't yet a problem. This is rare, and rarely conscious: no one knows what they want to be when they are young, and many adults, remembering their own cluttered and haphazard upbringing, will say to such people: you're lucky you knew what you wanted to do at such a young age. Such a balance of intense proclivity with the accidental discipline it creates is rare, and this outlying situation doesn't really apply to the conversation.
 
Those striving in an artistic direction who were not lucky enough to train young have a far more difficult time, because now a training period that does not produce anything that supports a living has to be balanced with actually making a living. Time is the essential resource, and as more of it gets allocated to making a living, less and less of it is available for the training period, stretching that period out to be much, much longer. This weighs on the human psychology: progress is slower. Success feels further and further away, and the dream of "making it" often starts to feel more like a delusion than an actual, tangible possibility. Let's put it this way: if a parent said to a child, "if your crayon drawing isn't good enough, you don't get dinner," how many children would just give up right then and there and start crying? If you have any experience with children, you'll likely agree that the percentage is high. Very high. Again, this is essentially the psychological situation of your average "starving artist".
 
There ought to be a distinction made here about the degree of creativity in a given production. Perhaps controversial, but is the creative engine involved in writing an original script that is thought-provoking and incredibly entertaining the same as the creative engine involved in the graphic design for the movie's poster? This is a weird and uncomfortable question. Uncomfortable because it forces beloved activities into a hierarchy that may imply that one is better and one is worse. The reality is this isn't the right question to ask, but it is a relevant question in terms of the fear of AI. There is a hierarchy of tasks which AI is steadily climbing. The point is not to declare which creative "skill" is better or worse, but to say that there is an order in which they will be subsumed by AI.
 
This order of subsumption represents a spectrum of creativity, and at the end of this process of subsumption there is only one tiny piece of the spectrum of creativity that will remain. Let's consider a couple of examples: traditionally a sound engineer would be tasked with removing dead space from a conversation. Having recorded and produced over a thousand podcast episodes myself, I'm well acquainted with this drudgery. I was exceedingly happy when this process became automated and I could get it done in a couple of seconds instead of spending many, many minutes laboriously doing it myself. Compare this "creative" task, which is on the low end of our spectrum of creativity (i.e. it really doesn't require much creativity, but it is part of the creative process), to the complete opposite: me sitting with a blank page and dealing with the cognitive situation of: I want to write a short story, what should I write? Or even better, how about this very essay you are reading. This morning while lifting weights I had some thoughts about creativity and AI that felt novel, and decided that I needed to explore the ideas. Now how does AI relate to this part of the creative process? Should I ask AI: Hey, I have an idea for an essay about art in the age of AI and I think the title might be something like "The Crayon Question", can you write that essay in the style of Tinkered Thinking for me?
 
If anyone thinks this is a good idea then I'd like them to consider a couple analogous questions:
 
Hey AI, can you eat my food for me?
 
Hey AI, can you do my bench press for me?
 
 
Hopefully the point is obvious: even if the AI is hooked up to some kind of robotic mouth where food can be physically placed and "eaten", this process is completely ridiculous because you'd fucking starve... since you aren't actually eating the food. Or if the AI were hooked up to robotic actuators that could lift your bar, it's useless because it isn't your muscles using ATP to do it. The same applies to things AI can do which are truly creative. AI cannot run the neurological process in your mind necessary for producing something truly creative. It may be able to produce a similar outcome, but your brain will not change the way it would had you gone through the process of creating it yourself.
 
When I had the thought that I would like to write this essay, I knew from years of experience writing over a thousand essays and short stories that the experience would yield things that are simply not possible for an AI to accomplish. I know, and have known for a long time, that the actual process of writing an essay or a short story is a process of discovery. This process doesn't just exercise my mind, it organizes and sharpens my thoughts. I get just as surprised by the next sentence as you do, because the reality is, I can't predict my next thought, I can only have that next thought, write it down and then review it. AI can never replace that process, and that process is exactly what's going on when the child is plying crayon to paper. While the initial urge might be "I want to make a drawing for mommy," or it might be an afterthought, "I have a nice drawing, I'll give it to mommy," the literal action of creating the drawing is one of identical self-discovery. The child might have a topic or subject in mind just like I did while working out, but which line will be the 3rd one drawn or written? I have as little idea of that as the child does. Neither of us knows until we actually get there, and it's the experience of the act and the changes it makes to us as a person which drives the behavior at a core level.
 
Yes, this fundamentally core reason gets corrupted in a capitalistic framework. But it's entirely ignored in the current discourse because it unveils a very unsettling truth: much of the creative process in creative industries involves jobs that are the equivalent of color-by-numbers. This isn't to say there isn't skill involved. Sure there is, but it's not a skill which is unique.
 
Unique is probably the only component of how people use language that grates on my soul. Almost all misuse of language I can understand and often appreciate: if someone understands what someone else is trying to communicate, then they are using language correctly, bad grammar and novel constructions be damned. But saying "very unique" is uniquely concerning, because using an adverb to modify an adjective whose definition categorically excludes adverbs of this kind is to spout actual nonsense. Saying something is very unique is like saying that the color blue smells very century. Sorry, what? Yes, exactly. A tangent on the word unique might seem uncalled for, but it's vital for a discussion of AI and its impact on artistic productions. AI might be able to produce incredible output that is commercially viable, but what it can never do is provide an artist with the unique experience of creatively exploring and discovering something new based on their unique perspective. The fact that AI can and will subvert the commercial viability of the final product misses the point of why art exists in the first place. We do art to engage in a highly personal process of exploration and cognitive development. The fact that we need to "make a living" is not a fundamental reason for making art; it's a supremely inconvenient variable that pollutes incentives by linking our output to the procurement of the basic necessities needed to operate a functioning human body in a tribalistically oriented society.
 
Anxiety ensues as AI ramps up to rob craftspeople of tasks connected to creative activity. But why does AI have to come after the fun things? Why can't it do my dishes instead of replacing me as a designer at work? Again we need to revisit the Order of Subsumption. AI can't do a janitor's job because AI doesn't have a body. AI currently only exists on a screen, and anything that can exist on a screen (writing, a picture of a painting, etc.) can be part of the training for AI. Now AI is starting to subsume digital tasks: agents, as they are called, which can write and respond to emails - more color-by-numbers tasks that are far closer to drudgery than they are to true creative exploration.
 
What the discourse on AI, when it comes to art and jobs, seems to lack is an imagination that can extrapolate to definitive conclusions. The range of imagination on such topics seems like a weak lantern in a very dark field. People can imagine changes they can see, and that's it, and they regard those who can extrapolate to logical extremes as fanciful and unrealistic: Foomers and Doomers, as they are termed. This is evidenced by the worker who is very anxious about AI taking their job, but doesn't really care or think about AI eliminating all of humanity, which one might think is the logical extrapolation for such an anxiously-focused person.
 
The most prevalent question in the discourse is: Well if AI takes all of our jobs, what are we going to do!? This question is further evidence of a total inability to imaginatively extrapolate. Your AI can't live your life for you. But it can and will replace you if you're doing a lot of monotonous work. But still, it can't live your life. And if your life garners the majority of its meaning from a job that is ultimately monotonous, then brace yourself for a very cold and very hard, spiritually infused slap in the face. I say this as someone who was laid off from a job that most people think is impervious to AI because I was replaced by an AI. I am not speaking from some protected pedestal claiming that everyone should eat cake. 
 
The human ability to adjust to new circumstances is, well, ridiculous, because it results in two entirely polar aspects of perspective. A reader of this essay might anticipate me claiming that "we'll adapt!" And, eh, sure, but that's always true. I bring it up because of how fast and completely we resettle into new circumstances, to such a degree that we're blind to relative improvements. The best example of this is the Louis C.K. bit about being on a plane and hearing for the first time over the intercom that it was a Wifi-enabled flight. The Wifi inevitably crashes within a few minutes because it's a brand new system, and the guy next to him on the flight says "This is bullshit." To which Louis C.K. acts out a mock reply to this person: YOU ARE IN A METAL TUBE HURTLING THROUGH THE SKY AND YOU GOT INTERNET ACCESS. It's funny because the guy calling bullshit is so completely and thoroughly located in his current situation that he fails to realize how utterly incredible that situation is when you compare it to say... a 14th century peasant who has worked the same field for 25 years straight. This ability to adjust to new circumstances is both a blessing and a curse. We adjust, which is often uncomfortable and sometimes painful and requires us to grow and change, but once the change is complete, we settle in, and we do so with an intensity of laziness that is ultimately debilitating - even crippling - to our perspective. It often requires another forced change due to abruptly altered circumstances in order to shake our heads free from our own assholes that are so tight they deprive our brains of the oxygen necessary to power an imagination capable of having a novel thought.
 
Even after reading this, most people are still going to be unutterably chained to the question: Well if AI takes all of our jobs, what are we going to do? I'll respond to the question with a question:
 
Think about the 14th century peasant and Louis C.K.'s airplane companion who thought broken wifi was bullshit. In a couple of centuries, or perhaps even in a couple of decades, Louis C.K.'s airplane companion will be the new 14th century peasant. Imagine what needs to change about technology and society and people so that a guy complaining about wifi on a plane seems like a 14th century peasant.
 
Instead of focusing on what AI might take away from you, invert the concern and think about all that AI might come to be able to do for you. Imagine if the entire food production system became controlled by AI, and it could even perform maintenance on itself in any physical capacity. Imagine billions of robots that could do all the tasks that we don't like but which we do because we have to. The costs for such a fully autonomous system eventually trend to zero, because all that's needed is the energy to run it, which we'll get for free from the sun. It's just a matter of getting the requisite atoms into the correct configuration. The beginning of this process will be very expensive, but as the process moves forward, it will take over its own construction.
 
The same can and may happen to an industry like housing: imagine, instead of hiring a contractor who has to buy all the materials and extra labor, you have time to design the home yourself, you have the time to learn principles of design, to create a home in VR and walk around in it, make changes to it, study the principles of Christopher Alexander, and build a living space that is so well attuned to your personality and your family that it is itself a unique work of art. Robots show up with materials that have been harvested and generated by automated systems and these robots build your house in a matter of hours, or a couple days. And when you want a change, robots show up again and renovate. 
 
But who will pay for all of this?
 
Ultimately, the sun. But the start-up costs for getting this all going are huge, and rooted in dollars.
 
Utopia is essentially, at its core, a coordinated set of automated systems that provides for humans the way parents provide for children, thus giving all humans the freedom to explore the way we endeavor to give children the freedom to explore and develop. If this seems fanciful and far-fetched, please ask yourself how bright the lantern of your imagination is in this dark field of future unknowns. Certainly things could go wrong, be it a nuclear war that blasts us back to the Stone Age, or a paper-clip maximizer that turns the galaxy into a pile of paper clips.
 
The utility of discussing extremes is to try and induce some yoga on a mind that is too narrowly focused on local anxieties like: AI is going to take my job!
 
But what about now, and tomorrow? Instead of the next decade or next century. All of that utopia shit sounds great but how do I pay for groceries while I wait for heaven to materialize on earth?
 
My best advice is: get weird. Embrace new technologies and try lots of things with them.
 
I have a good friend who is a film director, and I peppered this friend with questions to try and get at the root of concerns around AI, the industry, and making a living. The conclusive thought of this conversation was the realization that big studios with all the money control distribution, and it's this business-and-distribution issue that's at the core of the bottleneck for individual creatives, who really shy away from thinking about "business stuff". I responded by saying: you could build a website, with payments and maybe subscriptions, and just start making scrappy movies and put them on this website. It's not a global release in theatre chains, but it is global distribution! YouTube has a near-monopoly on this, and sure, it could be part of the springboard. Perhaps release the first few minutes for free on YouTube and gain traction that way, but convert interested people into paying customers on your own website. Would this work? Maybe? The point is, it's never been easier or cheaper to run this experiment, and the sooner a person does it, the more time they'll have to grow it.
 
Tinkered Thinking recently released www.printedthinking.com, which is a Blog -> Book platform. I've had this idea for years, and realized a few months ago just how fast I could build and launch it now that AI effectively functions like a small team of software developers for me. Just yesterday I launched another product unrelated to Tinkered Thinking - an idea I had a few weeks ago which might be useful. I have a couple more ideas lined up which I plan to build and launch within the next few months. This speed of development and depth and range of experimentation was simply impossible a few years ago. Will these make money? Well, Printed Thinking has paying customers. But the real answer is: you have to build it and launch it to find out. Same as any business.
 
The ground has started to shift under our feet, and it may dissolve into a veritable ocean, where many may drown if they are not quick enough to realize the change, stubbornly keeping their feet firmly planted where there no longer is any ground. Some are building their own sailboats and skiffs, some have arks ready for the coming flood, and some will be quick enough to assemble rafts from driftwood in the swells. And while that all might seem terrifying, there's a good chance that after the initial flood we'll find floating islands in the wake, lush with a way of living that may even be incomprehensible from our current perspectives.
 
What is fundamental to understand is this: 
 
As employment opportunities contract due to technology, personal agency will expand due to technology.
 
How you uniquely use these new technologies will be completely up to you, and that's not something an AI could ever replace.







VASA SYNDROME

October 6th, 2024

The Vasa was an enormous and beautiful Swedish warship that sailed about 4,200 feet, and then sank. Building a ship, especially in the 1600s, before the Industrial Revolution, is no small feat. It requires a staggering amount of elbow grease, from cutting the trees down, to shaping the wood, to making the rope, to nailing everything in place - even the nails had to be made by hand.

 
Apparently the thing wasn't designed correctly, which means it was designed differently from older designs that had stood the test of time. This isn't by any means an indictment of the new; rather it's a critique of how we explore the new in relation to our connections to the past. Loads of resources can safely be poured into a proven design. This doesn't mean that new designs shouldn't be explored, only that the resources we allocate to new designs should be proportional to the degree to which they have proven their worth. Though even this doesn't seem correct. Many radical new innovations required enormous amounts of tinkering in order to get right. Thomas Edison, for example, is famously said to have gone through 10,000 iterations before he finally got the lightbulb right: that's an enormous amount of resources poured into something completely unproven. But, that being said, with each iteration he did not make the largest possible lightbulb. So it's not simply all or nothing when it comes to resources, but a matter of which resources we allocate heavily and which we are sparing with.
 
A full-sized ship is an enormous amount of wood. But a radical new design can likely be tested with a much smaller model if the ratios and proportions are correctly calculated. The Vasa was unbelievably unstable, with most of its weight in the upper structure of the hull, making it top-heavy. When a wind stronger than a breeze heeled the ship over, it sank. One could have figured this out with a tiny model of the ship, and yet.
 
Many art projects (novels especially) and even start-ups can suffer from Vasa Syndrome. When founders raise unimaginable gobs of money for a product that could be prototyped and tested with customers on an incredibly slim budget, the practice seems more akin to building a Vasa. Why amass so much money and dedicate so much time to something that might not work?
 
Let's compare the novel with Edison's lightbulb. Both take an enormous amount of time, i.e. a huge amount of resources. But there's a crucial difference: Edison is getting feedback, the aspiring novelist is not. The naive novelist is much like the designer of the Vasa: imagining something radically new and envisioning it will be a triumph on the day it is finally launched into the world - only to find that no one wants to read the book, and the few who do manage to crack its pages find little to hold their attention. Edison is more like the short story writer, each iteration of the lightbulb a new little story. Each time he tries to turn on a given iteration of the lightbulb is like publishing a short story for all to see and read and give feedback on. Someone reads it and loves it and shares it? It's akin to the bulb flickering on briefly. A publisher reads a couple of stories and offers a book contract? Well, now that bulb is glowing brightly and steadily.
 
Oddly, the hockey stick of exponentials is prevalent here. Whether that exponential goes up or down depends on how we go about our projects. A short story writer, or an inventor of a lightbulb, can see small gains with consistent feedback, and it seems linear - much like exponentials look in the early stage of the curve. But then, seemingly overnight, the effectiveness of the writer or the honed design of the lightbulb turns on and takes off.
 
Not so for the novelist who cloisters their effort from feedback, or the founder who fails to acquire or interact with customers. The lack of feedback creates a totally different kind of linear trend, one that leads to a total flop - and in the case of the Vasa, a literal flip: a sunk ship.
 
The moral of Vasa Syndrome is to seek consistent feedback. Don't work on the idea until it's perfect; let reality have its say about how the design should evolve.







LINGUISTIC PACIFIER

September 11th, 2024

What do we say when we don't know what to say? 

 
There's a feckless panoply to pick from. Thought-Terminating Clichés reign supreme in this unproductive arena. It is what it is. That's life. So it goes. This too shall pass. It could be worse. Here we go again. It will all work out. Their effect is emotive and mechanical. They give off the impression that something profound has been said. But the profundity is a ruse, an illusion created by a sense of being dumbfounded by an inability to respond. These sentences function mechanically like punctuation - punctuation without a sentence. Even a question mark is impossible to respond to if there is no substance preceding the punctuation. Such linguistic implements are the equivalent of turning and walking away from a conversation. They not only fail to provide a means to further discussion, they emphatically kill the possibility.
 
Defenders might squawk about intentions: there's good intention behind saying such things. It's a comfort to be told "It will all work out."
 
There are two problems with such buffoonery. Specifically regarding "It will all work out": the simple truth is that it certainly does not all work out. Every time someone says this to me, I point out that it did not work out for the malaria-ridden child who just died of starvation in sub-Saharan Africa. This, of course, is quite negative, and comes off as offensive because it's a backhand to the good intentions of the person sputtering clichés. But perhaps a backhand is deserved, in the now-memed old-school-Batman-comic slap way. Why? Because these linguistic pacifiers are a searing indication of cognitive laziness, which highlights the other problem with such buffoonery:
 
 
Good intentions only matter so much. If good intentions consistently do not match actual outcomes, then good intentions become increasingly meaningless. If the disconnect between intentions and outcomes remains unchanged, then it's a sign that either this person is incapable of changing in response to the feedback from reality that they are not having the desired effect, or they simply do not care enough to dedicate the time and attention required to observe, understand, learn and change in adequate measure to resolve the insidious disparity. In short: a person is either too stupid or just doesn't care. Or worse: both. However, chances are it's only the latter. Unproductive discussions about intelligence aside, it's a robust fact of life that if someone cares about something, it generates a nearly inexhaustible well of energy to draw from in order to learn and understand: even the stupidest person can change when their heartstrings are sufficiently plucked by some unintended consequence.
 
It's likely that the majority of language we employ is the result of habit. One need only wonder and ask: how can someone with many, many decades behind them be such a bad communicator? Doesn't so much history force practice? Unfortunately the answer is no. The years require only a habitual way of communicating in order to get through all that time. Improvement only comes from conscientious practice, and most habit is unconscious automata. Most communication is a set of automatic linguistic patterns. After years of lukewarm communication, the rails of expression are more like ruts of habit. The consistent disparity between intention and outcome is not resolved, but it's manageable, at least in an emotional sense - the consequences are not so bad and they fail to bother heartstrings. Even the emotional fallout of poor communication can become just another part of the habitual pattern. Here we go again. And in these ways whole populations can spend many thousands of hours practicing without ever advancing beyond the skill of a simple novice.
 
 
The question at the beginning should now carry with it an appropriate amount of horror: What do we say when we don't know what to say? The consequences of how we each individually answer this question have a tremendous and far-reaching impact. The answer to this simple question may readily define the health of all our relationships. And if at this point you, dear reader, find yourself grasping in frustration: Well, what are you supposed to say?! If you don't know what to say, and nothing comes to mind, and you have good intentions and you want to provide some comfort, what do you say?
 
There's one root issue weaving between, around and underneath this whole topic. It's silence. It makes us uncomfortable. I'll always remember asking my grandmother: why do you always have the T.V. on? Her answer was so candid I don't think she registered the magnitude of what it meant. She said something to the effect of "When Harry was dying I didn't want to think about anything, so I put the T.V. on so I didn't have to think about it, and then after he was gone, it was just comforting to have the sound."
 
Sound. That's it. Why is it quiet in libraries? Because people are trying to think. Sound, particularly human voices, hijacks thought. All these linguistic pacifiers merely fill the space, striking a compromise between communicating in a way that really doesn't help (and may even truly hurt) our relationships and staving off the horror of silence.
 
As Blaise Pascal once said "All of man's problems arise from his inability to sit quietly in a room alone."
 
I'd take it one step further and say many of man's shit relationships arise from his inability to think quietly in the presence of a loved one until something better to say comes to mind. 
 
The answer to that question: what to say when you don't know what to say, is to not speak, but sit with the issue. Allow your mind to explore the topic in a deeper and broader way. 
 
Often in conversation we are tailoring our own mind to try and see the point of the other - which is a good thing, one of the very best things. And so when a distressed loved one comes to the fraught terminus of their concern, we arrive with them at a confusing juncture.
 
But a good listener doesn't just follow the trail of their companion in conversation. A great listener understands that healthy conversation benefits most from a dynamic set of perspectives. 
 
I see where they are coming from. But what can I see that perhaps they haven't considered?
 
We've all had the experience of offering one or two points to consider and being immediately shot down. Again, it's the emotive aspect that is the problem. We feel shut down instead of realizing: gee, I'm talking to a relatively intelligent person who has clearly spent a LOT more time thinking about this than I have, should it really be a butt-hurt surprise that they've already considered the points I bring up?
 
Again, the answer is to use silence as a tool. It creates a surprising amount of space. Neil Gaiman, when questioned about how he thinks up all his ideas for stories, has said: I just allow myself to be bored. After a while the mind begins producing ideas to entertain itself, and I just write them down.
 
Sit with silence, sit with the issue, and, if you care, new ideas will arrive. But it's important to realize that they never arrive with the alacrity we expect. We've been habituated by linguistic patterns to expect responses to come at a quick interval, like a volley of tennis. Silence in tennis means the game is over. But the problem is that conversation, despite its usual similarity to the back-and-forth of tennis, is not tennis.
 
Good conversation is chess. The main object of chess is to try and see something about the situation that your opponent hasn't realized. This is exactly what the object of conversation is when we don't know what to say. The answer is not to say the first innocuous thing that comes to mind. That would be like blundering a game of chess by moving any old piece just for the sake of hearing the sound of the piece hit the board when you place it on a new square. The answer is to sit with the conundrum in silence, to focus on it from many angles, to consider all of its parts and its possible directions. To work hard to try and find some aspect your companion in conversation has failed to see, something that might truly help that person you care deeply about.







MESSY

September 10th, 2024
"Cleanliness is next to Godliness" 

                - John Wesley
 
 
 
"A spotless home often has cluttered closets." 
 
                 - Tinkered Thinking
 
 
 
A perennial debate rages between the tidy and the disheveled. Steve Jobs was apparently famous for demanding beauty on the inside of the products at Apple - not just the outside. Order is valorized and we seek to use it to tame nature, whether in perfectly manicured lawns, immaculate rows of spotless corn, or our own homes presented with the veneer of a museum. Even programmers are infected with this debate, with the loudest worshipping "clean code", as opposed to the derogatorily termed "spaghetti code". It's exactly what it sounds like: code that is intertwined with itself in countless, innumerable, and untraceable ways. Well, almost untraceable. 
 
There are a couple of key distinctions that don't often enter the debate. One is that spaghetti code is worst when it's written by someone else. Clean code is necessary when working in teams. It's mostly about readability and quick comprehension. In fact, clean code is a declaration that humans simply suck at understanding and following complexity. We don't have particularly powerful short-term memories, and clean code is the answer to that: it's easier to understand, so it's quicker for someone new to the code to read it, understand it, and successfully make changes to it. 
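To make the distinction concrete, here's a toy sketch (my own invented example, not from any real codebase): two Python snippets that compute exactly the same order total, one terse and tangled, one broken into named steps. The computer runs both identically; only the human reader notices a difference.

```python
# "Spaghetti" style: compressed and cryptic. Fine for the machine,
# hostile to the next person who has to change it.
def t(o):
    return sum(p * q * (0.9 if q >= 10 else 1.0) for p, q in o)

# "Clean" style: the same logic with named constants and small,
# documented steps, so a newcomer can read and safely modify it.
BULK_THRESHOLD = 10   # items needed to qualify for the discount
BULK_DISCOUNT = 0.9   # 10% off bulk line items

def line_total(price, quantity):
    """Total for one line item, applying the bulk discount if it qualifies."""
    discount = BULK_DISCOUNT if quantity >= BULK_THRESHOLD else 1.0
    return price * quantity * discount

def order_total(order):
    """Total for an order given as (price, quantity) pairs."""
    return sum(line_total(price, qty) for price, qty in order)

order = [(2.50, 12), (4.00, 3)]
print(t(order), order_total(order))  # both compute the same total
```

The point of the sketch is that the two versions are behaviorally indistinguishable: any difference between them exists purely for human eyes.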
 
Perhaps the most important distinction that never enters the conversation about clean code vs spaghetti code is whether the computer cares or not. The computer absolutely does not give a flying fuck whether or not the code is "clean". The computer doesn't care, period. It simply pushes electricity through circuits as arranged by the code. That electricity either successfully makes it through the maze, or it gets hung up and crashes. In theory, a certain arrangement of spaghetti code might be much more efficient than human-readable "clean" code. So, which is better?
 
Depends on one's priorities. If one's work reputation is on the line, then leaving behind code that is incredibly difficult to deal with is not exactly something to aspire to. Hence why so many valorize cleanliness: their incentives are very clear. They don't want to put up with more spaghetti code.
 
But what if you're working on your own? Well, this is a totally different scenario, and while it's a common humorous meme to liken one's own old code to hieroglyphics, moving fast and making a mess has its benefits. Those who toss aside concerns about clean code have different priorities because they have different incentives. A solo hacker trying to build a small software business cares about one thing above all: does it work for the customer? The customer is a bit like the computer in this respect. The customer doesn't give a flying fuck how pretty the code is, the customer just cares if the product works or not, because, naturally, they're trying to use it for some specific useful end.
 
Looking at other solo creatives we often see something far different from lifeless museum organization. Albert Einstein's desk at the time of his death was famously a disaster. (Google an image of it.) Or, pull up a picture of the complete human circulatory system - to highlight the o.g. creator - and ask whether it looks like clean code. It literally looks like spaghetti molded into the shape of a human.
 
So what's the deal with this debate? Tidiness is mostly a form of communication to other people, and it exists because our oh-so-powerful brains are actually quite allergic to complexity. We interpret what we don't understand as chaos, so we seek to make the chaos orderly, and often, as a result, we drain the magic that was once contained within. There are, of course, subtler forms of organization. Things like permaculture, for example, which seeks to strike not just a balance between the chaos of nature and the order we humans desire, but a true symbiosis that yields a greater result than can be achieved by either rampant "untamed" chaos or deathly museum-like order. Such virtuous cycles require a different understanding, one that doesn't eschew chaos, but seeks to understand it without destroying it, and by doing so, glimpse untapped leverage hidden within reality.

 







ANTI ECHO CHAMBER

September 8th, 2024

An echo chamber occurs when all the inputs are equal to or reinforcing of the internal belief system. Human psychology appears to have some hardwiring that impels most people to seek out and create echo chambers for themselves. It's a simple equation of seeking out what you like, and what you agree with generally accords with what you like. The echo chamber is an aspect - or perhaps the chief mechanism - of tribal psychology. The group functions like its own cognitive organism, seeking to maintain its own harmony, which means keeping all of its constituents in general agreement. 

Belief grouping occurs, meaning that if someone from group A believes in belief #3 and group A also cherishes belief #4, then that same someone is almost certainly an adherent of belief #4. This is disturbing. A real-life example might help: if someone is pro-guns then you can guess with extremely high accuracy what their position on raising taxes will be. But the two topics aren't particularly related. Within the context of a larger group belief system they can be made to relate via an interpretation of government authoritarianism. Raising taxes and prohibiting guns both smell of authoritarianism, so perhaps these two beliefs have some alignment. But if the modus operandi of such a group is limiting government control of citizens' lives, you'd expect that group to be very pro-choice. The reasoning is built into the name; you'd expect someone who wants to limit government control to want to give as much freedom of choice to the individual as possible. But this is not the case. Group belief systems (like individual belief systems) are rife with contradiction and hypocrisy. 

 
People who are die-hard adherents of their "group" often surround themselves with reinforcing inputs. They listen to people they agree with. And there's something eerily satisfying about this. It pets the fragile animal of certainty that seeks to thrive in all of us. It's the reason why people rage when they hear positions they disagree with: it threatens the comfort of that fragile animal.
 
It's surprisingly productive to think of emotions and belief systems as their own organisms. They fight for their own survival, and seek to thrive by spreading - by memetically replicating themselves in the minds of others through getting you to speak.
 
This memetic survival impulse is very real, so real that it can have the most dire consequences imaginable. Here's a quote from Practicing Radical Honesty: "After studying suicide notes left behind and examining all the stories and interviews with friends and families, they found a theme that seemed to apply in all cases. They concluded that every suicide can be explained as 'an attempt to maintain or enhance the self.' The mind is maintained at the expense of the life of the being. The mind survives by being right. The mind would rather be right and die than be wrong and live."
 
Suffice it to say, our ideas are so powerful they can kill us. There are far less depressing examples that are nonetheless just as tragic: sacrificing one's own life for the lives of loved ones. Such an act is undertaken on the notion of what those other people mean to the person who is self-sacrificing. All this to say that we are little else other than our beliefs.
 
But that little else can make all the difference. What we have, beyond the beliefs we already hold, is the ability to decide what we will pay attention to, and what we pay attention to determines whether our beliefs persist, evolve, or die off and get replaced by new beliefs. Echo chambers make it very hard to pay attention to anything other than what a person already believes in.
 
Enter the Anti-Echo Chamber.
 
"Years ago, I found that I listened to all sorts of people whose perspectives and beliefs I really liked and appreciated; now, I find I listen almost exclusively to people I disagree with."
 
If it were a joke, the speaker might reveal that they're still listening to all the same people. And for some this is the case. But let's examine the latter practice a bit more.
 
Listening exclusively to people with whom one disagrees will cause some degree of rage in most people. But if emotion can be well regulated, and snap reactions done away with, something interesting can happen. Like Michelangelo finding beauty by removing what shouldn't be there, the friction of disagreement can hone and clarify a person's beliefs. This is not likely to happen in an echo chamber. Presented with a sea of opinions a person mostly agrees with, how will they react to the opinions they don't necessarily agree with? Compared to someone who is not a part of such a belief-tribe, they will likely let those slide and not worry about being associated with something that is "pre-approved" by their tribe. Compare this to the inverse. In an anti-echo chamber, where a person listens exclusively to people and positions they don't agree with - if emotional reactivity can be set aside - how will the mention of something they DO agree with strike them? Likely as a surprise. I say this because my diet of podcasts has evolved from an echo chamber to an anti-echo chamber. I now quite regularly listen to people I don't particularly like and who I think are lazy, poor thinkers. It might sound like an unpleasant experience, but natural selection rarely is. At least, natural selection is the ideal that rational thinkers would presumably like to achieve when it comes to the robustness of their beliefs and ideas. One can easily argue that's exactly what's happening at the level of global culture: whether it's capitalism vs communism, or this religion vs that religion, all are merely belief systems vying for continued longevity... often at the expense of people's lives. 
 
A cognitive trick that humans haven't yet figured out is how to be wary of ideas that are both seductive and dangerous. There are some belief systems that are old but still vigorous, meaning they are seductive and sticky and they memetically replicate in new minds with efficient alacrity, but which also result in a lot of death and misery. Such belief systems are akin to the charming serial killer, who lures a victim into a false sense of security, of certainty, and only when it's too late does the real danger of the situation become evident. 
 
Death only really has three tools: disease, accidents, and bad ideas.