
OUT OF REACH

July 23rd, 2020

Is inspiration a passive phenomenon or does it require an active pursuit?  Both possibilities place that inspiration in a similar place.  When we are without inspiration, it is out of reach, and it’s odd to contemplate its location when we aren’t under its spell.  Where exactly is inspiration when we aren’t inspired?

 

To be perfectly honest, this line of questioning was prompted by The Tinkered Question app, which Tinkered Thinking is developing.  (Check out the previous episode - episode 829 - for a full discussion of what this app is and what it does.)  

 

It’s been a long day, and as happens more than preferred, the daily writing has been shuffled to the end, when the creative capacity is certainly less than ideal.  

 

That age-old question stands guard to a temple of exploration and insight:  what to write about?

 

Well, it just so happens there’s this neat tool in the window behind the text document here, and it generates interesting questions at the touch of a button. I entered “writing prompt” just to see what would happen.

 

One of the questions that popped up was: Are we trying to access something that is out of reach?

 

Yes, that’s exactly right, in fact.  But the question illuminates an interesting aspect of the conundrum.  Where exactly is inspiration when we are without it?  And how do we know which direction to reach?  And even more importantly: how can you reach for something that is totally unknown?  It’s a bit of a chicken-and-egg problem: if you knew what was going to inspire you, then you’d already be inspired.

 

Trying to find inspiration is a bit like trying to predict your next thought: it’s perpetually out of reach unless it’s fully within your grip - there is no in between.  Thoughts, and likewise inspiration, teleport into our arena of consciousness, or so it seems.  We don’t see them coming, and we don’t have a way of checking what is about to walk through the door.  Our experience is simply a constant stream of life through a doorway that has no door to close.

 

So is it out there? Beyond that door?  Is that where inspiration lies when we sit idle and unmotivated?  Is our job to just wait until something worthy walks through the door?  That seems to make sense, but it’s also abundantly clear from practice that Tinkered Thinking would emphatically not have 830 episodes if inspiration was something to be waited for every time.  Yes, inspiration can walk through the door at any moment.  But that doesn’t mean we should always sit and wait.  The paradox of that door of life through which life streams is that if you try to run through it, you can’t, but instead, more floods through the door.

 

Jack London once said “You can’t wait for inspiration. You have to go after it with a club.”

 

He’s right: there is an important active component that inspiration responds to, but it’s not like you ever actually run through that door to go hunt down some leviathan of an idea or coax some muse back into your mind.

 

The orientation is all wrong.  Inspiration isn’t really “out there”.  Much of what seems to come through that door is really arising from within our own mind.  It’s a kitchen that’s constantly cooking up mediocre meals.  Inspiration is about staying right where you are and plying the tools of the mind to itself in a new way.

 

This is the logic behind The Tinkered Question app.  The right question doesn’t necessarily provoke us to write about a potential answer - it might, but an interesting question can simply make the mind feel differently.  It’s subtle, like walking into a kitchen, smelling something delicious in the works and suddenly realizing how hungry you are.  A question, even a mediocre one, can wrinkle our thoughts just enough to create gaps at the edges where other thoughts and visions, feelings, hunches and shades of concept seem busy in the concealed depths of our own mind.  The initial question can lose all relevance as we pull back the sheet of current thought and venture behind it in order to explore an entirely new perspective, and then, before you know it, another episode of Tinkered Thinking has been written.







THE TINKERED QUESTION APP

July 22nd, 2020

 

Tinkered Thinking is developing a small simple app as an experiment to try and put GPT-3 to good use.  

 

(I promise Tinkered Thinking is not going to go on and on about GPT-3 forever.)

 

As was mentioned in the previous episode about GPT-3, it’s pretty easy to imagine how it could be put to wicked use.  That’s ‘wicked' in the New England sense of wicked cool, but also the New England sense from the 1690’s, which means: we should probably burn it at the stake.  It’s a sharp tool that can be both very useful and very dangerous depending on how we ply that edge.

 

Constraints are key to GPT-3 use.  The prompt that it is given is essentially a constraint, delivered through what we might think of as context.  The idea that context is everything seems to hold pretty solidly when we examine the constructed perspective of something that is not human.

 

The app in development is an experiment on several levels.  But first, what does the app do?  The concept is very simple: the right question changes the course of our thinking.  The imagined use is for writers, artists and creatives of all types to enter a topic, question or short idea when they are feeling ‘stuck’.  The Tinkered Question App will then use the magic of GPT-3 to generate 3 insightful questions that provoke our bogged-down creative to think about their artistic situation in a new way, with a new perspective.

 

A great question doesn’t merely create a void where we should imagine an answer; a great question is an open-ended concept that creates forward momentum.

 

The question, as a concept and a tool, also has one subtle aspect which functions like the flux-capacitor at the heart of its magic:  a question changes the context of our thinking without explicitly introducing a new idea.

 

This is a sly hack that seems to have arisen within human thinking.  We’re all familiar with the itchy experience of going to a friend for help and getting a bunch of suggestions that just don’t jibe with our thinking.  The experience seems to be symmetrical: it’s equally frustrating to give a friend a bunch of suggestions for their problem and see each great idea fall flat before their unimpressed psyche.

 

But an insightful question seems to have a unique ability to thread itself through this prickly gulf of incongruent perspectives.  It’s the trojan horse of dialogue: the well-formed question sneaks into the other person’s mind under the guise of their context, but then warps it, bends it.

 

The first experiment is this issue of context, and it’s not a puzzle in terms of coding but in terms of language.  In order for GPT-3 to produce questions that are impressive, insightful, and ultimately helpful, there needs to be a context larger than the user input.  This fascinating challenge might end up being the heart of apps that utilize GPT-3.  The difficulty or ‘value’ of such an app might not be the coding - which many coders can easily accomplish - but the unique language construct that is used to unlock GPT-3 in a very specific way, complete with tone, character, flavour and depth of insight.  This challenge presents an interesting crossroads between the world of coding and the humanities.  As powerful as GPT-3 is, it’s not a mind reader, and it emphatically won’t do exactly what you want, nor what you plan.  It’s a bit like another person in that you have to coax it into a certain mood and mode of thought with your own flavour of language.
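The interplay between a hardcoded context and the user’s input can be sketched in a few lines.  The preamble wording below is purely hypothetical - the app’s actual prompt is exactly the kind of crafted language construct described above, and is not public:

```python
# A minimal sketch: the app's "value" lives in a hardcoded preamble that
# wraps the user's input before it ever reaches GPT-3.  The preamble text
# here is illustrative only, not the real Tinkered Question prompt.

PREAMBLE = (
    "The following are insightful, open-ended questions that help a "
    "stuck writer or artist see their situation from a new perspective.\n\n"
    "Topic: {topic}\n"
    "Questions:\n1."
)

def build_prompt(user_input: str) -> str:
    """Combine the hardcoded context with whatever the user typed."""
    return PREAMBLE.format(topic=user_input.strip())

print(build_prompt("writing prompt"))
```

The full string, not just the user’s topic, is what would be submitted to the model - which is why tinkering with that preamble is where most of the work lies.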

 

The second experiment lends well to this idea: the app can generate a massive number of questions, and if users enjoy it, they will be able to highlight their favourite questions, thereby creating a filter for output which can then be used recursively to redesign the unique, hardcoded prompt for GPT-3.  On top of that, the database of questions lends itself well to research about the nature of questions themselves, which is a major source of interest for yours truly.

 

The third experiment is, obviously, the business potential behind an app.  Such products have laughably manageable costs for getting up and running.  The only real downsides are the time spent actually building the app, tinkering with the hardcoded prompt, and jumping through whatever other hoops might be required.  It’s a bit like a lottery ticket which costs only the amount of time it takes to fill out, but perhaps with better odds, who knows?  The real question isn’t why build it?  The real question is: why not?

 

The shift between these two questions really occurred after witnessing McKay Wrigley build one of the first GPT-3 apps and publicly share it online: an app called LearnFromAny1, which allows you to enter a well-known person’s name and a topic and get back an explanation of that topic in the manner of that person.  The speed at which he was able to bring something to a demoable stage spoke for itself:  the effort is just not that much of an investment and certainly presents no real loss at all.  It’s thanks to McKay that the focus went from the questions why, or what, to why not?!  And this is exactly the sort of shift in thinking that is at the heart of hope for this app: is it possible to give people a tool that helps them ask better questions?

 

If this app sounds interesting to you, and you think you’d like to be part of the beta-testing group once OpenAI gives the green light for that phase of the process, then please subscribe to Tinkered Thinking on the website.  Beta-testers will be drawn exclusively from the subscriber group.







WHAT IS GPT-3?

July 21st, 2020

 

After playing around with the new tool from OpenAI, named GPT-3, it’s clear that we’ve probably reached an inflection point in the progress of artificial intelligence.  To think of it in more commonplace terms, parents will often have the interesting experience of watching their children suddenly gain competency in a skill, seemingly overnight.  There will be lots of struggle and false starts, and then, as if all in one moment, boom, suddenly the kid can do something new.  GPT-3 marks a moment when machine learning got turned on and suddenly it could do an amazing new trick, and it does this new trick with astonishing coherence and ability.  The hope of this episode is to present a description of what GPT-3 is in layman’s terms and then explore some possible ways that this could be used, and how it might impact society.

 

 

But first, what is GPT?  The acronym stands for Generative Pretrained Transformer.  Ok sure, but what exactly is it?  How do we interact with it?  In simple terms it’s a program that accepts some text and then generates a continuation of this text.  What exactly does this look like?  For those with access to the GPT-3 playground through OpenAI, it just looks like an empty textbox with a submit button. 

 

Now enters a principle that has emerged during the last week as developers have played around with it:

 

Garbage in, Garbage out.

 

GPT-3 mimics the style, tone and character of whatever text you feed it.  (For those curious, you can hit the submit button with an empty textbox and GPT-3 will generate something.  The first time I did this it started talking in French about libraries.  The second time it started talking about heatwaves being in the headlines.)  There is a bit of an art to the prompt you give GPT-3.  Again, garbage in, garbage out.

 

 

Here are two examples:

 

 

Input: What’s up?

 

GPT-3 continuation: A confused voice came from inside. When I opened the door, the person that looked back at me was Hayama Hayato. Why was Hayama, who I only shared memories of me playing soccer with, in my room at this hour of the night? That question immediately flew out from my mouth.

 

 

Second example, same exact input:

 

Input: What’s up?

 

GPT-3 continuation:  they respond 'Super' – at least they were back in the days of Fox Mulder and Dana Scully. I love the little sixties touches too, the poodle skirts, the VWs, the hippydom. 

 

The input of “what’s up?” doesn’t provide any context, so GPT-3 generates text in a random direction each time. But notice how coherent it was…

 

But provide GPT-3 with context - say, the first couple of paragraphs of an episode from Tinkered Thinking - and it will generate a continuation of that episode which is shockingly on point and very believable.  Episodes 823 and 824 are non-fiction episodes that explore just this idea.  About half of each episode was generated by GPT-3.  This collaboration was also done for Episode 826, which is a fictional short story.  GPT-3’s contribution to that episode is not explicitly indicated, but there too it wrote approximately half of the text, and that episode in particular demonstrates an aspect of context in relationship to GPT-3.  The program’s initial contributions to the fictional short story are a bit clunky.  But as the story goes on, GPT-3’s contributions get better and better as the entire context of the story grows.

 

It should be obvious at this point that if you were to do something nefarious like input some dangerous ideas into GPT-3, then it would continue in the same vein.  This is an important point that invites obvious criticism, and it’s a topic of which OpenAI is extremely conscious.  The use of GPT-3 in the wild requires a security application, among other safeguards, to limit GPT-3’s use in obviously negative ways.  Again, the principle holds: garbage in, garbage out.  That holds not just for the level of composition but, of course, also for subject matter.

 

Now how exactly does this magic work?  To answer this, it’s necessary to wrap one’s mind around machine learning.  How exactly does a machine learn?  To be honest, nobody really knows.  In a way, we’ve created a black box that can learn to figure things out.  To really answer the question of how exactly a machine learns is similar to asking: how does a person arise from all the chatter between the neurons in that person’s brain?  No one can answer that question.  And likewise, no one can really answer the question of what’s really going on when a neural net is being trained in a machine learning context.  That being said, we can describe some aspects of the process that help us grasp a sense of what’s going on.

 

Turns out, machine learning doesn’t seem all that different from human learning, though the differences are enormous and enormously important.  Our brains are vast networks of neurons, and they all talk to each other, and somehow from all that chatter we, as individuals, emerge.  To be sure, our understanding of the brain is still rudimentary, but it’s clear that neurons ‘listen’ to the chatter around them via a web of arms, or dendrites, that reach out to other neurons, and based on the signals they hear, they fire their own signal to be heard by another neuron or set of neurons.  Multiply that process roughly 86 billion times, give it the right shape and some configuration, and boom, you have a human brain that will try to figure things out.  (To dive a little deeper, it’s useful to know that an infant actually has around 100 billion neurons, and as we learn how to exist in the world, we pare down this number significantly.)

 

GPT-3 was trained to generate text using a computational model that bears a lot of similarity to the jumble of neurons we call a brain.  The analogy is so close, in fact, that this computational model is referred to as a ‘neural net’.  Now here’s an interesting fact about GPT-3:  the neural net upon which it was trained contains 175 billion parameters, or what you might think of as neurons.  But there’s a distinction here we need to be very careful about.  The node, or neuron, or what’s often referred to as a ‘parameter’ within this neural net is not a neuron in the same way that our neurons are.  There is a similarity, but our neurons are vastly more complex than the nodes that exist within a neural net.  Within the context of machine learning, a neuron is a fairly simple weighted value.  These values dictate how one node talks to another.  An actual human neuron is far more complex regarding how it listens and signals to its neighbors.  We must also take into account that there are many different types of neurons in the human brain, specializing in different ways with different functions.  Human neurons also have varying numbers of dendrites, which are used to listen to other neurons.  A single neuron can have over 100 dendrites, and each dendrite has characteristics that determine how and what the core neuron hears and ultimately what sort of signal it sends.  This presents a level of complexity that far exceeds what is going on with the nodes or neurons in a machine learning context.  And this difference in complexity should make it intuitive why GPT-3 was trained using so many ‘neurons’.  Its construction and organization of these ‘neurons’ is far simpler and more straightforward than what we have going on between our ears, so what the neural net lacks in shape and initial configuration, it attempts to make up for in size and the brute force achieved by speed and computational iteration.
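That “fairly simple weighted value” can be shown in miniature.  This is a generic artificial neuron - not a piece of GPT-3, just the basic shape of the thing:

```python
# A toy artificial "neuron": a weighted sum of its inputs, squashed
# through a sigmoid so the output always lands between 0 and 1.
# This is the simple counterpart to a biological neuron's listening
# and firing, minus all the complexity described above.
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus a bias, squashed to (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

print(neuron([1.0, 0.5], [0.8, -0.4], 0.1))  # ≈ 0.668
```

The weights are the whole story: training a neural net means nudging billions of numbers like these until the network’s outputs stop being wrong.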

 

So how does a neural net learn in order for something like GPT-3 to work?  First you need a gargantuan amount of data.  GPT-3 was trained using text from the internet - an amount of text that is just inconceivable for a single human being to think about reading.  Think of every comment, every post, every description, every pdf book freely available, all the tweets and blogs, manuals, dissertations, threads, rantings - all of it.  That was the block of text that was given to the neural net for its training.

 

So how does that training work?  Imagine I gave you a paragraph to read, but I left off the last word, and asked you to guess what that word is.  As a human, you’d probably do really well at this.  You’d certainly do a perfect job guessing a sensible word that fits the context and which works, regardless of whether you guess the actual word. Here’s a tiny example:  Please provide the last word that should be added to the end of this ___________ .  

 

The above example is easy.  The word is ‘sentence’.  But.  Since I’ve started a new paragraph, you could also plop the word ‘paragraph’ into the slot and it still works.  This is the task that GPT-3 has become remarkably good at:  what word would make sense to come next?  The training of the neural net essentially asked this question of itself over and over.  We can imagine the neural net reading 99 words and guessing what the 100th word would be.  It would then check the answer and see how far ‘off’ the guess was, and based on the difference it would backtrack through the neural net to find all of the nodes that were most responsible for the choice of the wrong word, and then it would adjust the weights of those nodes, which in turn affects how those nodes will talk to one another in the next round.  Run this game an astronomical number of times with an inconceivably large amount of text and after a while the neural net gets pretty good at the game it’s playing.
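That guess-check-adjust loop can be caricatured with a single weight.  Real training backpropagates an error signal through billions of parameters; this sketch only shows the shape of the idea:

```python
# A drastically simplified training loop: make a guess, measure how far
# off it is, and nudge the weight in proportion to the error.  The real
# process does this across an entire neural net, not one number.

def train_step(weight, x, target, learning_rate=0.1):
    guess = weight * x            # the model's "guess"
    error = guess - target        # how far 'off' was it?
    # adjust the weight so the next guess is a little less wrong
    return weight - learning_rate * error * x

w = 0.0
for _ in range(50):
    w = train_step(w, x=1.0, target=0.7)
print(round(w, 3))  # approaches the target of 0.7
```

Play this game enough times and the weight settles where the guesses stop being wrong - which is, in miniature, all that “training” means.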

 

What’s important to note is that the computer is not actually ‘reading’ the text that’s given to it.  The computer first creates embeddings for this block of text.  What is an embedding?  It’s arbitrary, but it’s a form that the computer can actually work with.  Computers function using binary code.  Each letter that you read on a website is actually derived from a binary code.  So embeddings are bits of binary code that are associated with the words and word-snippets that the computer ‘reads’.  These embeddings never change.  

 

A good way to think about it is to realize that humans use their own embeddings.  Our form of embeddings are individual words themselves.  For example, you can hear the sound of the word ‘boat’, or you can read the graphical symbols that indicate the word ‘boat’ in your mind, but that set of graphical marks and the sound you hear are arbitrary - they don’t actually mean anything on their own.  If you said the word ‘boat’ or showed the word ‘boat’ to someone who has no understanding of English, it would be meaningless to them.  They have a different word, or embedding, for that concept.  This is quite similar to how embeddings work in a neural network.  Every time the computer ‘reads’ the word ‘boat’ it converts it into the same binary string, and pushes that binary string into the neural net.  This discussion of embeddings is important because it’s pretty magical to realize that GPT-3 doesn’t actually know any words.  Aside from the already problematic use of the word ‘know’ in relation to a computer, the point here is to get at a description of what is at the core of GPT-3’s success.  Through all of this weighted calibration using embedded language, GPT-3 has ‘learned’ the subtle rules that dictate how we humans pick our words in different contexts.
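The stable word-to-code mapping described above can be sketched like this.  The integer codes are made up; the only property that matters is that ‘boat’ always maps to the same one:

```python
# A sketch of the word-to-code idea: each word the model ever sees is
# assigned a fixed, arbitrary code, and the same word always gets the
# same code - just as 'boat' always means boat to an English speaker.

vocabulary = {}

def to_id(word: str) -> int:
    """Assign each new word a stable integer code, reused thereafter."""
    if word not in vocabulary:
        vocabulary[word] = len(vocabulary)
    return vocabulary[word]

ids = [to_id(w) for w in "the boat sailed past the boat".split()]
print(ids)  # → [0, 1, 2, 3, 0, 1]
```

The codes themselves are meaningless; all of the ‘meaning’ lives in how the trained weights respond to them.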

 

Dialect and jargon are helpful concepts here.  For example, if you go to the hospital and talk to a doctor, you expect to hear a whole bunch of complex language that relates to biology and medicine, and if this hospital is in London, you expect to hear a British accent.  Even if we don’t know all the words, we’d be able to tell something is amiss if we suddenly start hearing a whole bunch of nautical sailing jargon, the words of which we also don’t know.  

 

If a friend starts relating a story about how they went sailing, we can predict that there’s a fairly high chance we’re going to hear the word ‘boat’.  GPT-3 has essentially played that probability game a ridiculously unfathomable number of times in every context that humans have written about and that’s been plopped on the internet.
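That probability game can be played in miniature with simple counting.  GPT-3’s version is incomparably more sophisticated - a neural net rather than a tally sheet - but the shape is similar:

```python
# A toy next-word guesser: count which word follows which in a tiny
# "corpus", then predict the most frequently observed follower.
from collections import Counter, defaultdict

corpus = "we went sailing on the boat and the boat rocked on the water"
follows = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def guess_next(word: str) -> str:
    """Most frequently observed follower of `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(guess_next("the"))  # 'boat' follows 'the' twice, 'water' once
```

Feed that counting game the entire internet instead of one sentence, replace the tally with 175 billion weights, and you have the rough outline of GPT-3’s trick.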

 

So it’s completely inaccurate to say that GPT-3 knows something about any given subject.  If it knows anything, it’s perhaps best described as a really good guess about which word would come next.

 

So why all the hype?  Simply put, it does an astonishingly good job at that guessing game.  It’s so good that often you just can’t tell the text is generated, that it wasn’t written by a human.  If prompted correctly, it responds just like a human.

 

How does this neat trick translate?  What are the implications?

 

Well, let’s imagine a scenario here: let’s say a friend shows up to work, and they’ve got a nice smile on their face.  You grow curious and ask “What’s up?”  And your friend begins to tell you about an app they just downloaded.  It’s an artificial intelligence that works as a therapist, and your friend tells you they just had the most amazing insights during the therapy session they had while walking to work.

 

Now, unlike a real, human therapist who needs to write down information about sessions in files that they reference, a GPT-powered therapist could have a perfect memory of absolutely everything you’ve ever said.  You could then ask your friend in astonishment: how does that work?  And your friend might say: well, first it’s super affordable, oh my gosh, just a couple of dollars a month and I have unlimited sessions.  But to get started, you have to talk about yourself for 10 hours - but luckily I’ve been keeping a journal on a private blog that’s a hundred times bigger in word count, so I was able to submit that blog and boom, I had access.

 

If you think that description might worry a therapist, let’s swing in a different direction: 

 

regulatory capture.  

 

Let’s say you are a human rights activist and you’ve dedicated your whole life to monitoring the ways that corporations try to take advantage of governments through lobbyists and manipulating legislation.  And right now there is a bill on the floor being considered.  You know there’s probably something nefarious hidden in the details of this bill, but the bill is 2,000 pages long, and there’s just no way you can go over it with a fine-tooth comb in order to figure out just what’s in there.  But, a colleague has just built a tool using an advanced GPT that can take any amount of text and summarize it.  Not only that, but this specific GPT has been additionally trained to flag details that don’t seem connected with the main text, or details that hint at the creation or use of loopholes in existing law, because naturally, it’s also read all of existing law and has a perfect memory of all legal text.  You feed the new bill to this special GPT and boom, within a minute you have an accurate 5 page summary and a list of 27 sentences and paragraphs spread across the 2,000 page document that it thinks you should specifically look at for trouble.

 

These are just two imaginary ways that the technology of GPT-3 could be implemented.  And one of these ways, that of the pocket therapist has already been achieved in a somewhat rudimentary way during the week or so that GPT-3 has been available.  

 

The ramifications for how this advance might rattle through society seem ubiquitous.  Language is the fabric of society.  Our use and misuse of language dictates the rise and fall of all our endeavors, and a computer just got really really good at imitating our language.







DETOUR TEMPTATION

July 20th, 2020

Life is full of detours.  Maybe it’s all detours.  No one plans their life completely, and those who attempt to do so often find themselves thwarted in counter-intuitive ways.  There’s so much we just don’t see coming.

 

But what sort of choice do we make when an option comes floating by that seems fun?  

 

But I’m heading in a different direction…

 

This would make me late…

 

I don’t know where that leads…

 

All valid points, but they each have a polar corollary that frames the situation as full of juicy possibility.  Now here’s an important tidbit about getting old.  When most of life is behind you, very few regrets will be about things you actually did.  Most of your wonder regarding the past will be about the things you didn’t do: the chances you didn’t seize, the idea you didn’t pursue, the detour you didn’t take.

 

How does one negotiate these spicy pockets of possibility?  Well, if it’s anything like a real detour, then it means we can always just backtrack to get back to where we started and then continue on our way.  It might take a little time, but if we have time to spare, why not?

 

It’s certainly an entirely different deal if the detour is like a one-way valve and the door shuts and automatically locks the moment we venture off our normal path.  That sort of opportunity would require quite a bit more consideration.  A path that decreases the possibilities of life must have a huge benefit attached to it, otherwise, we should be wary by default.  Such definitive paths aren’t actually detours by definition.  If the door slams shut behind us, and we can somehow be sure the new path won’t loop back around to our initial path…then this certainly isn’t a detour, and the temptation must be quite promising. 

 

Fate makes no promises, though.  And this is why ‘the detour’ is such a useful way of looking at opportunities.  The detour is the essential constituent of exploration, and without a map marked with an X, the detours we take are the only paths that might lead to treasure.

 

If life can be designed in a way that is open to more potential detours, then we create optionality.  The ability to take a detour is perhaps the quintessential opposite of being a cog in a system.  Cogs can’t take detours without causing a lot of trouble.  And likewise, an explorer who sticks to the beaten path will find nothing new.

 

If the detour presents itself, why not take it?







A LUCILIUS PARABLE: TINKER TOOLS

July 19th, 2020

 

~ Approximately half of this story was generated by the new technology from OpenAI called GPT-3. The contribution from this technology has not been demarcated on purpose in order to give the reader an opportunity to guess what is human and what is machine. ~

 

 

Lucilius was browsing the data of his own genome.  He understood it only if he looked at tiny individual pieces or as a whole when he regarded himself as a person.  The way the data of his genome translated into his existence was, as it is for everyone, quite a mystery.  He'd employed the help of some machine learning to explore the possibility of editing his own genome in order to achieve some upgrades in his person, and really when it came down to it, he was seeking a more direct way to upgrade his very existence.  After all, what exactly is self-improvement, what does it look like if we take it to its absolute extreme?  Always more curious than fearful, Lucilius was seriously pursuing how it might actually be done.  He was waiting now, while the Artificial Intelligence he'd hired trained itself on the data of Lucilius' genome in relation to the entire medical literature as it's been understood in relation to the genome.  Another feat which was achieved through the eerie magic of machine learning.  A loading bar on the screen finished and disappeared.

 

"Ok," said the AI, "so what exactly are we looking to do?"

 

Lucilius paused a moment.  Good question, what exactly did he want to change?  "Well," Lucilius said, "I've always been a bit more lazy than I'd like.  Seems like it might be low-hanging fruit.  What do you say, shall we try to figure out how to tweak that variable?"

 

"Lazy?" the AI said. "But isn't it already at an optimal level, if you can say there is one?"

 

"Well," Lucilius said, "can we please try anyway, I mean... I'm interested in what it could feel like to work harder, be more driven, live with more discipline. It's just that I've always preferred freedom, peace, and leisure. If we could just rewire that part a little bit, you know?”

 

"Sure," said the AI. "It's ok to be lazy, I mean, that's really not a good quality to strive for, but it's definitely better than being overworked and inefficient, or being lazy in other areas. Also, not to get too philosophical about this, but don't you find your laziness in relation to your freedom to be like a manifestation of your boredom, or like a low boredom tolerance, or like a boredom threshold?  And don't you think that's just an interesting variation on the values of free-will and control and choice and self-determination and self-government.  With you preferring the choice of leisure, even though you might be happier in other realms, and that this variation is uniquely yours, but maybe also a bit tragic, but some people might say that there are some more genuinely human experiences that you're missing out on by never feeling frustrated or challenged?"

 

Lucilius considered this. "Yes," he said. "Maybe. So, which are the variables?"

 

"Ok, so we could, say, tinker with your cannabinoid system and your serotonin levels and some other genes related to them. Maybe that will improve your disposition a bit."

 

"I'm not sure I like the sound of 'maybe', don't we have a pretty good handle on the genetic determinate of laziness or is laziness some kind of composite that will affect too many other aspects of who I am?"

 

"It's true," the AI said, "that you can't have a lazy personality without also having other personality characteristics associated with it."

 

"I was hoping that this one might have a tiny change that we can make, but of course all of these personality traits manifest from a pretty big web of genes," Lucilius mused, almost as though he were talking the issue out loud to himself.  He looked back to the computer.  "Based on the literature-genome training, are there any traits that we can tweak that don't have widespread effects?"

 

"We could tweak something as minor as say... I don't know... hair color.  That's a pretty easy thing to change without affecting anything else too much. That might actually be the way to go, because at least it's a very small effect compared to something like laziness, which does require a degree of neural rewiring. It's much less risky."

 

Lucilius turned to glance in a mirror on the adjacent wall.  "Yea, I don't really care too much about hair color. I'm looking to improve the way I exist in the world.  Not just as something that you - or rather other people see - but the way I operate."

 

"Well, that's the thing," the AI said, "you really can't make a change like that without taking in mind all the aspects of your existence, the ones that most people are familiar with, but also things like how you feel when you walk in the rain or the snow, or if you'd even think of going for that walk, how you'd feel if you had a call to action, a desire, and if you'd follow that or spend more time with your kids.  How that desire would manifest in your physiology and behavior, and what kind of thought process would be involved in all of those scenarios.. the ramifications are nearly limitless."

 

Lucilius grew a little nervous.  But then he remembered his initial thinking on the subject.  "Yes, exactly, and that's what I have you for to figure this out.  Is it possible to toggle something like laziness as though it were on a sliding scale?  Or what if it's a switch that I can turn on temporarily, something that I can somehow turn off and back on depending on what's going on in my life?"

 

"Sure," said the AI. "In theory, it's possible, but it's not easy.  Once you start down this path, there's a lot more you can do, you might find that you like the changes and then it's just a slippery slope."

 

Lucilius considered the AI's position.  'Slippery slope', what ironic phrasing, he thought.  The AI itself was the result of accelerating change and innovation in tech.  It's a meaningless value judgement to make that curve look like it's going up or down some slippery slope, he thought.  What was clear is that he didn't want to get left behind, and the only way to stay apace was to accelerate.

 

"Screw it, let's do it," Lucilius said.

 

 

"Let's tinker," said the AI.