Daily, snackable writings to spur changes in thinking.
Building a blueprint for a better brain by tinkering with the code.

AI COLLABORATION
July 16th, 2020
The portions of this episode in italics have been generated by GPT-3 from OpenAI.
This episode has been written in collaboration with an artificial intelligence. The technology is from OpenAI, a research laboratory exploring artificial intelligence, and the name of this technology is GPT-3. This collaboration has been done with the beta version, which builds off of its predecessor, GPT-2. In the most basic way, it is a set of neural nets that has been trained using vast amounts of text from the internet. A simple way to think about the way it operates is that GPT-3 tries to generate a likely and believable continuation of text that you give it, and the interface looks like a basic text box. The way Tinkered Thinking has interacted with GPT-3 for the purposes of this episode has been by taking the exact paragraph you just read and feeding it to GPT-3 and hitting submit, at which point GPT-3 generates a continuation of this text given certain parameters like word count. For the purposes of this collaboration, that word count has been set to approximate an average paragraph. After GPT-3 generates the paragraph that will follow this one, Tinkered Thinking will then pick up afterwards and add an additional paragraph. As the text for this episode grows in this way, it will be fed back into GPT-3 after Tinkered Thinking has added to the text in order to generate another paragraph given the larger context of the episode.
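The back-and-forth described above is essentially a loop: the whole text so far becomes the prompt for each new machine paragraph, and a human paragraph is appended after each generation. A minimal sketch of that loop follows, assuming a stand-in `generate` function rather than OpenAI's actual API (the function names here are hypothetical illustrations, not real library calls):

```python
def collaborate(seed_text, generate, human_turns):
    """Alternate machine and human paragraphs, feeding the
    growing text back in as context on every round."""
    text = seed_text
    for human_paragraph in human_turns:
        # The model continues the full text so far.
        text += "\n\n" + generate(text)
        # The human then responds to the larger context.
        text += "\n\n" + human_paragraph
    return text

# Stand-in for a real language-model call (e.g. a completion API
# with a max word count parameter); returns a canned reply here.
def fake_generate(context):
    return "A generated paragraph continuing the text above."

result = collaborate(
    "This episode has been written in collaboration with an AI.",
    fake_generate,
    ["A human-written paragraph responding to the machine."],
)
```

In the actual collaboration the `generate` step would be a call to GPT-3's completion endpoint with the word-count parameter described above; the structure of the loop is the point of the sketch.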
Something that has been really fun about this collaboration has been watching how GPT-3 writes in response to things Tinkered Thinking has written. When Tinkered Thinking adds a line of humor for example, GPT-3 is very likely to follow up with something funny as well. This kind of feedback in the text has actually produced some pretty organic-feeling conversation in the episode.
What's perhaps even more fascinating is that GPT-3 has already referred to this episode in the past tense, though it doesn't yet fully exist. From a human perspective, it seems that GPT-3 can remember this episode and recall that it had some funny parts to it. As the main creator for Tinkered Thinking, I'm somewhat relieved to know this, because Tinkered Thinking is almost always painfully lacking in humor.
So far, after many thousands of iterations, we have figured out that the best way to get GPT-3 to understand what we want it to do is by giving it examples of our work. In the first phase of the collaboration, Tinkered Thinking gave GPT-3 two examples of its writing. The first of these was an example of a possible extension of a paragraph we had given to GPT-3. The second example was one that described GPT-3's own function as a tool. This second example was largely important to establish a context for why we were giving GPT-3 this particular paragraph in the first place.
From the point of view of a writer, the potential applications for a tool like GPT-3 are immediately apparent. Some writers might get nervous and even depressed about the accuracy and shocking coherence of a tool like GPT-3, but as a collaborative tool, its ability to help a writer get unstuck is incredible. Think of a traditional writing prompt. It's often hit or miss as to whether or not it actually works and gets a writer to put words to a page. What works far better than static writing prompts is a conversation with another person. Not only do we view our own thoughts differently when we hear ourselves say them aloud, but a companion in dialogue, equipped with a unique perspective, often generates questions and counter-points that give our own thoughts a little friction, giving us some grist to hold onto.
We think that we could eventually expand the value of a tool like GPT-3 far beyond the realm of just writers, but rather as a tool for businesspeople, scientists and engineers as well. A tool that generates not just coherent paragraphs, but also coherent slide decks, product ideas or designs would dramatically help a lot of people. In the world of science for example, we often generate hypotheses that then require us to run large-scale experiments to disprove them. We can't run experiments on the origin of the universe for example, but with GPT-3 we could generate a possible hypothesis for why it works the way it does and then simply test that hypothesis.
It's imaginable that one day we could all have companion AIs that help us make progress more efficiently. Think, for instance, about the nature of learning today. It's clear that school is terribly inefficient at teaching - we simply haven't figured out the optimal way to help large groups of students learn. And then of course there is learning on one's own, which can involve endless wasted hours when we are stuck. How incredible would it be to have a companion like GPT-3 to get a snapshot of the context you're dealing with, and generate a question that perhaps you haven't yet considered?
On the other hand, one can also imagine a future that might not be so fun to live in. Imagine if every time you gave a company a statement about your values, they could generate a reasonable counter-argument and in doing so basically unravel all of the reasons you thought you were right in the first place. Or if every time you needed to convince someone of something, you had to write a persuasive essay and a really smart machine immediately fired off a convincing counter-argument to every single paragraph in your essay. In a world like this, it might not only be really difficult to have an idea, but also incredibly difficult to hold on to that idea in a meaningful way. We think we'll need to develop a way to have this technology support rather than undermine our values.
As with any collaboration, it's important that the aims of all parties involved are in line with one another. The risk with AI collaboration is one of incentives. Systems with far faster processing speeds could quickly spiral off into dangerous directions if the incentives of that program are not in line with human incentives. Aligning incentives is perhaps the first act of any collaboration. We define what our aims are so that everyone is on the same page. But what happens when this question is placed before all humanity? Say for example an AI becomes truly sentient, and then simply asks of all humanity: What are your goals?
Or what if the AI decides to focus on one specific area, like solving climate change for example? As long as this AI's incentives are in line with human incentives, it might be able to come up with solutions far beyond what humanity is currently able to conceive of. And as a result, humanity could become a lot less fragmented and divided, simply because an AI could now take the values of all humanity and apply them in a global context. In a strange way, an AI could help us all come together, but at what point does this relationship of human and machine start to undermine who we are as a species? Where do we draw the line between human and machine?
COHERENCE OF IMMERSION
July 15th, 2020
Does human language really make as much sense as we think it does? Or is this coherence an illusion bolstered by our total immersion in the medium?
We certainly like to think we're making sense. But so do the characters in any fantasy novel or science fiction movie. And we as readers and watchers go along with the odd logic, the magic, the unknown science. This is referred to as suspension of disbelief: we suspend the common-sense recognition that what we are seeing or reading is not something we could possibly believe, and this trick of the mind allows us to 'temporarily' believe in the unbelievable. Seems legit, but what if the division here is not so clean as theatre critics like to pretend?
What if we are in fact quite prone to believing in nonsense? Well, this certainly seems true of other people. Conspiracy theorists of every variety seem perfectly capable of believing the unbelievable - at least as it's described by anyone who isn't a part of that group.
The problem is of course that these terms are relative when smeared across the topics of conspiracy theories and theatre productions lacking a 4th wall. We do a lovely job of compartmentalizing these different aspects of experience with unsettling ease, all the while failing to see the spinal cord of human psychology that runs smoothly through each one.
It's likely we compartmentalize like this, and invent quaint terms like 'suspension of disbelief', because they cast us in an ideal light of control. It's as though we consciously choose to suspend certain mental faculties in order to enjoy a drama at the theatre, but those conspiracy theorists holding their signs and shouting on the street corner? Oh, they have nothing to do with this mental phenomenon of choice when it comes to belief.
The reality seems as though it might be backwards. Instead of being in control of what and how and why we believe something, it's that our attention can become magnetized by beliefs and concepts without our explicit and conscious choice.
The larger point is that all of language, or at least a huge majority of the language we use and the communication we engage in, might only make sense on its own terms. Really think for a moment: how often does language of any verbose nature actually touch the brick and mortar aspects of reality? Certainly the shorter uses of language ping off of solid reality all the time.
Did you take out the garbage?
Are the kids in bed?
Where is the remote?
These all concern the physical orientation of reality, and luckily, their coherence is really dependent on our experience of physical reality. Things fall apart when we just imagine the garbage where it's supposed to be, respond that it's been taken out, and then deal with the consequences later when our failure to report on the actual nature of reality is discovered. In that case it looks as though we've lied.
But the more verbose uses of language, for example, this very post and episode can and should be questioned about the way it makes sense. You might understand each sentence in turn, and find the larger point brightening in your mind, but to what degree do these words touch base with solid, verifiable reality in the same way that the earlier questions do? Certainly far less.
These words could, in reality, be spinning a cognitive fiction. Of course, isn't this what we're always trying to do when we build an argument or a case to try and convince anyone of anything? How persuasive do the words really need to be if the argument regards some facet of brick and mortar reality that we can touch and see? Given the amount of dissension and disagreement that perpetually abounds and resounds throughout culture, the answer seems to be that we do need to speak very persuasively about obvious things. Now that doesn't sound quite right, but at the same time it paints an accurate picture of our discourse. So what's going on?
Perhaps it's the discourse itself that contains the problem. Not the way we use it, but the medium itself: language. Each word, after all, is a concept, and not the thing it refers to. The word 'boat' as you hear it or see it on the page is a piece of reality that isn't actually connected in any direct way to the thing that floats in the water that you might use to travel. That connection exists indirectly through our minds, which exist in the form of a network that hosts all these words.
What if that network of words has attributes that allow it to become untethered from physical reality? It certainly seems to be the case. And we don't have to point at hot opinions about the current state of public discourse. We need only think of Harry Potter, or Star Wars, or Game of Thrones. These are primarily vast constellations of words that purposefully separate from the reality that we experience on a day to day basis. The proof is in the pudding in that you can drown in these fictional puddings. Whole swathes of time pass while you immerse yourself in a fantasy world and quite literally forget about the real world, all because of language.
So how do we assess the language that's being used around important topics? How do we ensure that we don't talk each other into a mutually crazed la-la land that leaves the bunch of us walking off the edge of a cliff without noticing as we describe an endless ground that we can trick ourselves into seeing?
Language and communication is always at risk of this sort of mistake.
The only safeguard consists of a counter-intuitive practice: we must regularly seek to separate emotion from language in order to inspect the meaning that is being conveyed across the plains of communication. It's the emotionally laden language that can turn our minds into hot air balloons that rise unanchored off into an unreality that blinds us more and more from what's going on. When we shuck the emotional resonance from the words we use, we begin to see them in an honest light, one untainted by momentary swings of feeling. And this practice, this discipline is something that Tinkered Thinking continually attempts to achieve, by dissecting the words and the concepts we use, not with a heavy or hot heart, but with a placid wonder, and a curiosity that is not afraid to question even itself.
FALSE MOMENT
July 14th, 2020
As the technology behind photos and film continues to advance in such a way that it becomes more and more seamless with the moment, one wonders whether the need to 'compose' false moments will disappear. When photography as a technology first came into being, people were required to stand very still for a length of time in order for the exposure to work. The limits of the technology required a particular behavior, and that behavior has persisted. For the most part, we are still saying 'cheese' in order to flag the exact moment when everyone needs to smile without blinking.
But imagine for a moment if everything around you, in a kind of 360 degree aura, was being visually recorded, and an algorithm was monitoring all the angles and compositions and automatically selecting moments that spontaneously look good as photos, compiling those at the end of the day and creating the quintessential photo album of your life. Would we still feel the impulse to compose photos if this were the case?
Probably, either because the behavior is fairly entrenched at this point, or because we compose photos not to capture a likeness of our moment, but to represent an ideal moment that isn't actually occurring.
We live in our head more than we do in the moment. We chase ideas, dreams and peak states that are perpetually poised in the future, like the proverbial carrot dangling before the donkey. As a result we miss the moment, and ironically, when we do find ourselves wholly present, the experience is often so nice that to interrupt it with the taking of a picture would only lessen our sense of what it's like to be alive in that moment.
We exist in false moments framed by ideals at the expense of the actual moment. Oddly, the thing we are trying to get at is always here with us. The problem is not that we lack something that might exist in the future, it's that our attention lacks a certain flexibility, construction and focus to connect tangential desire with the satisfaction that is always available in the present.
MAYFER'S LAW
July 13th, 2020
This episode is dedicated to Murat Ayfer. You can connect with Murat on Twitter with the handle @mayfer
In many human affairs, it’s not too difficult to assess how long something will take. Our vast synchronization and record keeping across all domains of life enable us to organize the timing of flights, trains, pizza deliveries, surgeries, crop yields – just about everything that occurs with any kind of recurrence. But one area of human activity stubbornly resists such mechanistic prediction: the discovery and creation of the new.
This can be something as complicated as a new gene editing technique or something as simple as a personal project to learn a song on the piano, or learning how to code or building a gazebo. If the people in question are embarking on any of these endeavors with no prior experience that bears any similarity, then we quickly fall victim to Hofstadter’s Law.
Hofstadter’s Law states that everything takes longer than you think it will, even when Hofstadter’s Law is taken into account.
We all know the annoyance of delayed flights, or drivers with our pizza lost in the wrong neighborhood, and certainly complications during surgery can prolong the procedure and amp up the surrounding stress. But such expansions of time are common, and generally expected, if not desired. While these delays are reminiscent of Hofstadter’s Law, the notion is best applied to creative endeavors, and within its recursive truth lies another that shields us in an incredibly important way.
When a creative project finally wraps up, it can be rather astonishing to reflect on how long it took, and how wildly incorrect our initial prediction of the project’s length was. Say for example you’ve just finished a project that you projected would take a mere couple of months, but in the end it took nearly a year. Reflect for a moment and wonder: would you have still started the project if you’d known just how long it would take?
Getting something done in two months is far more preferable to a timeframe six times longer. The reality – if we had access to the future in this way – looks far less appealing than our grossly optimistic prediction.
But this gross underestimation is perhaps our saving grace. By the time we get started and make some meaningful headway, the sunk-cost fallacy can help us forge ahead, even when we begin to suspect just how much longer the project will take. Our cognitive biases here work in concert as a productive illusion that allows us to make progress in the face of a demoralizing reality. We might call this virtuous instance of miscalculation on behalf of human psychology: Mayfer’s Law. Some better phrasing is in order.
Mayfer’s Law: The miscalculation of time required for achievement enables the undertaking of endeavors far larger than we would knowingly attempt to achieve.
It’s for this reason we start in on little projects that should only take a few minutes, half an hour at most, and which we find ourselves still tinkering with hours and hours, days, weeks, and perhaps even months later. Psychological momentum seems to begin building the moment we start, hence the sunk-cost fallacy.
The fascinating aspect of Mayfer’s Law is that we witness cognitive biases that often undermine our efforts combining in a virtuous way. We end up superseding our own sense of our abilities. We grow and we learn as we work through novel problems within the project, and by the end of the endeavor, though it’s taken much longer than expected, we are now more capable than when we first set out. But this growth happens somewhat by accident, or rather almost unintentionally. We undertake ventures that we believe our skills and abilities can handle in a timely fashion, and inevitably we resolve the miscalculation, and the gulf between our capabilities and what’s needed, by growing. The tradeoff, of course, is that the process usually requires more time, hence Hofstadter’s Law.
A LUCILIUS PARABLE: COPY & PASTE
July 12th, 2020
Lucilius watched the word count finally tick above the ten million mark. He’d finally generated the amount of content required to populate the mind of his clone. He sat back and watched as the word document saved. Some time ago, he’d reflected on all the things he wanted to do in life and realized that everyone’s occasional fantasy to generate clones of themselves was finally possible. The issue was content. Exact biological copies could be generated, but what about their thoughts, their memories, their beliefs? These had proved to be a particularly sticky problem as people’s first clones of themselves had quite quickly veered off on rebellious paths that were more in line with the stock content that cloned minds were being populated with. There were more than a couple clones randomly waving twigs and shouting Latin in full expectation that magic would occur. Fiction had proved to be a problematic stand-in for the content required to populate a clone’s mind.
Lucilius realized that in order for the idea to work, he had to record his own mind, and so he’d embarked on a rather epic writing assignment, and he’d just finished. Two days later the cloning company he’d hired notified him that his clone was ready and he emailed them the enormous word document. A day later, there was a knock on the door.
Lucilius opened the door and there before him stood his own spitting image. “Hey,” the clone said.
“Hi,” Lucilius bumbled, thrown off by the actual experience.
“Well, I suppose introductions are somewhat unnecessary, so I think I’ll just get to work,” the clone said.
Lucilius merely stood still, gaping at the marvel of reality. “Oh, sorry, sure,” he said, moving aside. His clone waltzed into his home and walked right into the kitchen and poured itself a cup of coffee.
“Hope you haven’t worked on that program since you submitted the content document, I had some ideas about which direction it should go while I was making my way over here.”
Lucilius puzzled for a moment and then remembered the contract assignment he hadn’t worked on in several weeks, the repeated messages from the client and his total inability to get any work done.
“Oh, yea, no, haven’t done a thing.”
“Perrr-fect,” the clone said, comfortably striding to Lucilius’ desk, sitting down, sipping the coffee with satisfaction and then cracking its knuckles with a stretch over the keyboard. The clone wiggled a little in the chair with excitement, Lucilius noticing for the first time an adorable gesture he knew he did himself. Then the clone opened up Lucilius’ dormant text editor and started ripping apart code and writing in new lines. After a few minutes, the clone slowed, stopped and looked back at Lucilius.
“Didn’t… you want to go camping or something? Or go on a hike at least?”
“What?” Lucilius said.
“That’s the whole point I exist isn’t it? You wanted a version of you that would actually enjoy this sort of work so you could go off and do other things that you really wanted to do?”
“Oh yea,” Lucilius said, turning away to let the clone work, stopping momentarily to look back as he scratched his head, puzzled as to how to feel about the whole situation.
Several days later he was sipping coffee on a rocky shore, having just awoken and crawled out of his tent. The days since leaving home had been splendid, kayaking north along the coast and camping each night, cooking trapped crab and mornings filled with meditation and cowboy coffee. The days rolled on pleasantly, but eventually Lucilius began to feel restless. The natural environment was wonderful, and the trip was one he’d been putting off for a long time, but there grew within him the guilty thought that he was getting a little bored. He’d planned a trip of several months, but after a few weeks, he turned back and headed home.
When he arrived, the clone was at the desk, typing away.
“Back early?” it asked.
“Yea,” Lucilius said with the eerie sense that his internal monologue was now externalized.
“Something happen? Thought you’d be gone longer.”
“No, just felt like coming home.”
“Well,” the clone said, swiveling around in Lucilius’ desk chair. “I’ve been busy. Completed that contract and picked up a few more, plus started a couple side projects. One is already done and doing quite well. The bank accounts are looking great.”
“Accounts?”
“Yea,” the clone said, “I opened up one for myself, I hope you don’t mind. Revenue from regular contracts is still going into the main account, but I figured it would be useful to track revenue from unplanned side projects in a separate account.”
“Oh,” Lucilius remarked. He went to the kitchen and poured himself a glass of water while he tried to shuffle through the strange mix of feelings he was having. He couldn’t deny it, he was jealous. The clone had accomplished so much. All work that he normally had to drag himself through to get done, and it took the clone a fraction of the time. But isn’t this what he wanted, he wondered.
“Everything ok? You seem down. Figured that trip would perk you up a bit.”
Lucilius turned to the clone who was standing in the doorway.
“Yea, I’m fine,” Lucilius sipped the water, and as the clone turned to get back to work, Lucilius spoke up again. “Hey..”
“Yea?” the clone faced him again.
“You enjoying this?” Lucilius asked, making a vague gesture to indicate the situation, the set up.
“Yea, the work has been great. Loving it.”
Lucilius stared down into the limpid reflection of his face in the glass of water. “Wish I had that kind of motivation.”
“You do,” the clone said. “Where do you think I got it from?”
“Well, I don’t think I’ve ever been as productive as you’ve been while I was gone, so I don’t know about that.”
“You know, I was thinking about that a little while you were gone.”
“Really now?”
“Yea, I figure that I have all the drive and motivation and love for this sort of work that you have, but I don’t have all the things that hold you back, like doubt and that aimless restlessness that seems to give rise to boredom. I looked over the content document you submitted to the company. There’s nothing like that in the document, and so I’m pretty content with what I’ve been doing, but going camping? Or on a hike?” The clone sighed. “That just sounds unutterably boring.” The clone stared off into the middle distance, imagining it, and then shivered. “But you, you have both. You like that sort of thing, and you certainly find the sort of work I’ve been doing gratifying. So maybe the two pull at each other?”
Lucilius considered the clone’s notion.
“You know as well as I do that I’m not a true likeness of you. You filtered yourself, editing in the process, in order to create me so that I’d be able to tackle this part of your life. So that’s why I created separate accounts.”
“Why’s that?”
“I figured there was a possibility you might miss the work I was doing, and that you might want to do some every once in a while. There’s always more work.”
“Yea, that’s sort of the problem I was trying to solve.”
The clone shrugged. “You know as well as I that most solutions become bridges to better problems.”
Lucilius let out a short, stunted laugh. The clone’s smile collapsed a little.
“Well,” the clone said. “When you’re ready, I’ve got a side project on the go that I think you’ll like.”
Lucilius looked at the clone halfheartedly.
“Actually,” the clone said, “I could really use your help with it.”