MENTAL MODULES

October 1st, 2020

 

Our own reasoning can play some fairly devious tricks on us.  It’s perhaps not hyperbole to say that we are in a constant state of conflict with our own elaborate talents for self-deception.  We reason our way back into bad behavior, and later look back with an almost amnesic astonishment at our own stupidity, impulsivity or mindlessness.  The question Why did I do that? seems suddenly unconvinced by the reasoning that premeditated our ill-advised decision.

 

Because that’s exactly what it often is: ill-advised reasoning.  Benjamin Franklin once wrote: “So convenient a thing it is to be a reasonable creature, since it enables one to find or make a reason for every thing one has a mind to do.”

 

All he’s really saying is that the human mind is great at coming up with a good story about why anything is a good decision.  This story-telling ability, phrased in such a way, sounds eerily similar to a new technology that recently became available to a limited public: GPT-3.  Tinkered Thinking was fortunate enough to gain access to the limited-access API of this new technology and did a few collaborative episodes with GPT-3.  In short, GPT-3 is really good at continuing any sentence that you give it.  It generates language that sounds really plausible, and in many if not most cases it’s impossible to tell whether something was written by a human or generated by GPT-3.  For a more in-depth discussion of this technology, check out Episode 828: What is GPT-3.

 

Now what’s very important to realize about GPT-3 is that it can ‘talk’ in any direction.  Give it a topic like “the future of humanity” and it’ll talk about gloom and doom, but re-prompt it with the exact same thing, and you’re very likely to get the opposite: GPT-3 will start spinning up a new perspective about how there’s so much hope for humanity and how we’ll soon populate the stars.  And in many instances of this generated text there will be the semblance of reasoning.  In short, it’s very good at generating a believable and cogent story.  In one moment it can sound like an awful racist, and in the next it can generate a humble and caring perspective.  If you remember back to high school English class and the assignment of writing a ‘persuasive essay,’ you’ll perhaps recall the exercise of coming up with the argument for the ‘other side’ of the point you’re trying to argue.  GPT-3 is essentially excellent at this exercise.
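To make that re-prompting idea concrete, here is a minimal Python sketch, assuming access to the completion API that GPT-3 was offered through around the time of writing; the model name, prompt, and sampling parameters are illustrative guesses, not the settings used in the episodes mentioned above.

    # Minimal sketch: send the same prompt twice and compare the completions.
    # Assumes the pre-1.0 openai Python library and an API key from the
    # limited-access program; values here are illustrative only.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    prompt = "The future of humanity"

    for attempt in range(2):
        response = openai.Completion.create(
            engine="davinci",      # the original GPT-3 model name
            prompt=prompt,
            max_tokens=60,
            temperature=0.9,       # non-zero temperature, so completions vary
        )
        print(f"Completion {attempt + 1}: {response.choices[0].text.strip()}")

The point is simply that the same prompt, sampled twice, can produce two confident and contradictory stories, each with its own semblance of reasoning.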

 

Now, in the context of our own self-deception and the logic that underpins it at different moments, it seems we too are quite good at this game.  In the brain there’s a location referred to as Broca’s Area, named after Pierre Paul Broca.  This area is located on the frontal lobe, usually on the left side, and it appears to be responsible for generating language.  It’s a bit strange to realize that we don’t use our whole brain to produce the things we say, and that speech is primarily the product of a small area.  Certainly we use other parts of our brain when we talk, but the fact that Broca’s Area exists makes it seem like the brain is more a set of modules, and when we speak about something visual, it’s as though the language module and the visual module at the back of the brain link up.  It’s as though our language module can spin up a convincing rationale for anything, but it depends on which other modules have hijacked our own personal GPT-3.  We think about getting on a better diet, and Broca’s Area rattles off a bunch of very sensible reasons why this is an excellent idea, but a day or so later, when the receptors that usually get a regular dose of dopamine from sugar suddenly go without, Broca’s Area gets hijacked by the limbic system, and we start hearing an excellent rationale for why it’s a good idea to buy that lava cake on the menu.

 

Seen with this framework, it suddenly makes a lot of sense why so many people constantly undermine themselves.  The brain has a bunch of modules or modes that are in conflict with one another when paired together, but seem totally sensible if listened to one at a time.  The day trader buys at a low, thinking it’s a good price and that the long game will be excellent, but then sells a day later when the price has continued to drop and the logic has changed, because the feelings around the situation have changed, and those feelings spin a much different story when linked up to our language module.  We convince ourselves twice in opposing directions, and we fail to move forward because our rationale lacks consistency.

 

This is why a practice like mindfulness can be so powerful.  Many people think meditation grants a person control over their mind, which isn’t necessarily correct or incorrect so much as it is just poorly worded.  In this context of self-deception, rationale, and language modules, mindfulness is best thought of as an additional module - a mindful module.  What it does is allow the mind to take a bird’s eye view of itself.  Instead of being constantly intoxicated by the cadence of our own moment-to-moment rationale and self-talk, the mindful module regards that deceptive song in a larger context and pauses the process in order to commandeer the language module for a moment and say: wait a minute, this doesn’t really fit, in fact, that reasoning is just plain bullshit.

 

Just imagine for a moment if you could accurately call your own bullshit.  How much more efficiently would you move toward your goals?