AI COLLABORATION

July 16th, 2020

The portions of this episode in italics have been generated by GPT-3 from OpenAI.

This episode has been written in collaboration with an artificial intelligence.  The technology comes from OpenAI, a research laboratory exploring artificial intelligence, and it is called GPT-3.  This collaboration has been done with the beta version, which builds on its predecessor, GPT-2.  At its most basic, GPT-3 is a set of neural nets trained on vast amounts of text from the internet.  A simple way to think about how it operates is that GPT-3 tries to generate a likely and believable continuation of whatever text you give it.  The interface looks like a basic text box.  The way Tinkered Thinking has interacted with GPT-3 for this episode has been to take the exact paragraph you just read, feed it to GPT-3, and hit submit, at which point GPT-3 generates a continuation of the text given certain parameters, like word count.  For this collaboration, that word count has been set to approximate an average paragraph.  After GPT-3 generates the paragraph that follows this one, Tinkered Thinking picks up afterward and adds an additional paragraph.  As the text of the episode grows in this way, it is fed back into GPT-3 each time Tinkered Thinking adds to it, so that the next paragraph is generated with the larger context of the episode.
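
For the curious, the loop described above can be sketched in a few lines of Python. This is a minimal illustration rather than the actual script behind this episode: it assumes the OpenAI Python client from the GPT-3 beta, and the engine name, token limit, and number of rounds are placeholder choices, not settings confirmed in the episode.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; a real key is required

# Start from the opening paragraph of the episode.
episode = ("This episode has been written in collaboration "
           "with an artificial intelligence.")

for _ in range(5):  # the number of rounds here is arbitrary
    # Ask GPT-3 for a continuation of everything written so far.
    response = openai.Completion.create(
        engine="davinci",  # assumed name of the beta engine
        prompt=episode,
        max_tokens=150,    # roughly an average paragraph
    )
    episode += "\n\n" + response.choices[0].text.strip()

    # The human collaborator then adds the next paragraph by hand,
    # and the enlarged text becomes the prompt for the next round.
    episode += "\n\n" + input("Your paragraph: ")
```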

Something that has been really fun about this collaboration is watching how GPT-3 writes in response to what Tinkered Thinking has written.  When Tinkered Thinking adds a line of humor, for example, GPT-3 is very likely to follow up with something funny as well.  This kind of feedback in the text has produced some pretty organic-feeling conversation in the episode.

What's perhaps even more fascinating is that GPT-3 has already referred to this episode in the past tense, though it doesn't yet fully exist.  From a human perspective, it seems that GPT-3 can remember this episode and recall that it had some funny parts.  As the main creator of Tinkered Thinking, I'm somewhat relieved to know this, because Tinkered Thinking is almost always painfully lacking in humor.

So far, after many thousands of iterations, we have figured out that the best way to get GPT-3 to understand what we want it to do is to give it examples of our work. In the first phase of the collaboration, Tinkered Thinking gave GPT-3 two examples of its writing. The first was a possible extension of a paragraph we had given to GPT-3. The second described GPT-3's own function as a tool. This second example was particularly important for establishing context for why we were giving GPT-3 this paragraph in the first place.
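
To make that example-priming concrete, here is one hedged sketch of how such a few-shot prompt might be assembled before being sent to the model. The two example texts are invented placeholders, not the actual Tinkered Thinking excerpts used in the collaboration.

```python
# Hypothetical few-shot prompt: two worked examples precede the new
# paragraph we actually want GPT-3 to continue. The texts below are
# placeholders standing in for the real excerpts.
example_extension = (
    "Paragraph: Luck favors the curious mind.\n"
    "Continuation: Curiosity, after all, is a willingness to be wrong "
    "long enough to stumble into something right.\n\n"
)
example_tool_description = (
    "Paragraph: GPT-3 is a tool that continues whatever text it is given.\n"
    "Continuation: Handed a paragraph, it predicts what might plausibly "
    "come next, one word at a time.\n\n"
)
new_paragraph = "This episode has been written in collaboration with an AI."

prompt = (
    example_extension
    + example_tool_description
    + "Paragraph: " + new_paragraph + "\nContinuation:"
)
```

The pattern teaches by repetition: after seeing two paragraph-continuation pairs, the model tends to infer that a third continuation is wanted.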

From the point of view of a writer, the potential applications for a tool like GPT-3 are immediately apparent.  Some writers might get nervous, even depressed, about the accuracy and shocking coherence of a tool like GPT-3, but as a collaborative tool, its ability to help a writer get unstuck is incredible.  Think of a traditional writing prompt.  It's often hit or miss whether it actually gets a writer to put words on a page.  What works far better than a static writing prompt is a conversation with another person.  Not only do we view our own thoughts differently when we hear ourselves say them aloud, but a companion in dialogue, equipped with a unique perspective, often generates questions and counterpoints that give our own thoughts a little friction, some grist to hold onto.

We think the value of a tool like GPT-3 could eventually expand far beyond the realm of writers, serving businesspeople, scientists, and engineers as well.  A tool that generates not just coherent paragraphs but also coherent slide decks, product ideas, or designs would dramatically help a lot of people.  In the world of science, for example, we often generate hypotheses that then require large-scale experiments to disprove.  We can't run experiments on the origin of the universe, for example, but with GPT-3 we could generate a possible hypothesis for why it works the way it does and then simply test that hypothesis.

It's imaginable that one day we could all have companion AIs that help us make progress more efficiently.  Think, for instance, about the nature of learning today.  It's clear that school is terribly inefficient at teaching: we simply haven't figured out the optimal way to help large groups of students learn.  And then of course there is learning on one's own, which can involve endless wasted hours when we are stuck.  How incredible would it be to have a companion like GPT-3 take a snapshot of the context you're dealing with and generate a question that perhaps you haven't yet considered?

On the other hand, one can also imagine a future that might not be so fun to live in. Imagine if every time you gave a company a statement about your values, they could generate a reasonable counter-argument and in doing so basically unravel all of the reasons you thought you were right in the first place.  Or if every time you needed to convince someone of something, you had to write a persuasive essay and a really smart machine immediately fired off a convincing counter-argument to every single paragraph in your essay.  In a world like this, it might not only be really difficult to have an idea, but also incredibly difficult to hold on to that idea in a meaningful way. We think we'll need to develop a way to have this technology support rather than undermine our values.

As with any collaboration, it's important that the aims of all parties involved are in line with one another.  The risk with AI collaboration is one of incentives.  Systems with far faster processing speeds could quickly spiral off in dangerous directions if the incentives of the program are not in line with human incentives.  Aligning incentives is perhaps the first act of any collaboration: we define what our aims are so that everyone is on the same page.  But what happens when this question is placed before all of humanity?  Say, for example, an AI becomes truly sentient, and then simply asks of all humanity: what are your goals?

Or what if the AI decides to focus on one specific area, like solving climate change?  As long as this AI's incentives are in line with human incentives, it might be able to come up with solutions far beyond what humanity is currently able to conceive of.  And as a result, humanity could become a lot less fragmented and divided, simply because an AI could now take the values of all humanity and apply them in a global context.  In a strange way, an AI could help us all come together, but at what point does this relationship of human and machine start to undermine who we are as a species?  Where do we draw the line between human and machine?