The value of what you know without knowing it
Generative artificial intelligence is no longer a hypothesis.
There are now a host of tasks that it can execute for us. And that it executes much better than we do. Make a list of twenty 18th-century French painters. Describe the difference between the RBMK and CANDU nuclear reactors. Extract the main arguments from an article arguing against the regulation of cryptocurrencies.
The advent of AI in the workplace raises many questions. Many of which remain unanswered. And that’s just as well: it’s perhaps the ability to ask questions, the ones we call “good” questions, that (still?) makes the major difference between our human intelligence and artificial intelligence.
One of these questions is that of our added value as humans in our jobs. Now that artificial intelligence has been given a desk in our offices, what can and should we cultivate to remain relevant in the workplace?
Executing or performing a task
What AI does best for us is executing tasks. Executing a task means responding to a request with a degree of precision equivalent to that of the instructions provided. No more, no less.
To respond to this request, a tool like ChatGPT has no intrinsic knowledge or understanding to draw on: no sense of the place of utterance, the meaning behind the request, the sensitivities involved… It is up to us to provide it with these elements.
But how good are we at capturing these elements of context surrounding the execution of a task and recording them in a prompt? We’ve all experienced it: the prompt we give never really encapsulates the entire context. That’s because we have what can be called a situational awareness of this context, an experiential knowledge of it, that we’re not always aware we have.

It’s often after we’ve written a prompt that we realise all the things we knew but didn’t mention. Because we didn’t know we knew them. And that they were important. And this is perhaps what will enable us to remain relevant in our trades: to become aware of all the things we know without knowing we know them, and to value them in carrying out a task. Artificial intelligence pushes us to make explicit what we know without knowing it. If only because making it explicit is what makes the difference between a “good” prompt and a “bad” one.
What are these things that we know without knowing we know them? It’s all the information that comes from our experience of the world. Everything we have acquired through our presence and (inter)actions in this world. The sensitivity we’ve developed towards others and our environment. The preferences and tastes we have honed. Our intuitions. Our projects. Our values. Our emotions. And our moods. Just to name a few.
Unlike executing a task, performing a task implies carrying it out while taking this wider context into account. The place of utterance, the meaning of the request, the sensitivities involved… All these parameters have an impact on the way we respond to a request. Even when we are not aware of them. Performing a task means responding in a way that takes account of the wider context of the request, by considering, at the same time, all the elements (that we deem) relevant to that context.
It’s all about adjusting the task to the context. Adjusting to what makes sense, but also to what creates meaning. All in all, it’s a question of common sense.
Adjustments at every stage
Performing a task means adjusting its execution to a wider context. But what context?
To answer this question more precisely, we can compare the carrying out of a task to an act of communication. With a sender, a receiver, a goal, and a method. According to this comparison, a task is carried out by someone (who?), for someone (for whom?), with a view to a goal (for what?) and according to a method (how?). It is these contextual elements that give meaning, colour and texture to the task.
When performing a task, it is therefore a matter of adjusting the execution of the task to these contextual elements. These adjustments take place before, during and after the task.
Upstream adjustments
First there is the formulation of the problem to be solved, which leads to the identification of a task. Could ChatGPT have put on the right track the engineers who came up with the idea of replacing steam engines in trains with combustion engines? Probably not. To achieve that result, it was the problem that had to be formulated differently. Not the answer that had to be improved.
Let’s take a task. Writing a manual for building a combustion engine. Listing the marketing strategies useful to a company that sells toothpaste. Before starting to carry out this task, there are a host of questions we can ask ourselves to ensure that it fits the context. What is the purpose of what I’m about to do? Which task is best suited to that purpose? Are there any considerations related to the sender’s social and cultural context that I need to take into account? What are the recipient’s needs? What are their preferences? What tool(s) or skill(s) should I use to carry out this task? Which stages should be given priority? What constraints need to be taken into account?
An AI can certainly answer some of these questions. Perhaps even most of them. But what it is (still) unable to do is to consider these elements of context together, like a constellation. And to adjust them to each other and to the task in hand. As it happens, we are capable of doing this. And we do it most of the time. Some of us all the time. Without realising it.
Adjustments along the way
Then there is the actual execution of the task. During which we can also call on AI for assistance. But be careful: we need to know what to expect when we do so.
Some of us may find a certain pleasure in executing a task. Yes, it happens. Drawing a line. Writing a paragraph. Finding the formula that resolves an inconsistency in Excel. And AI has the power to deprive us of that pleasure. Resorting to AI when we derive satisfaction from executing a task is like offering a rather advanced robot to someone who loves cooking: all they are left with is chopping the ingredients for a dish the machine will concoct for them.
When we execute a task, it can happen that we decide to adjust our aim as we go. We choose to thicken a line. We change the structure of the article we’re writing. We wonder whether that inconsistency in Excel is more a question of the data than of the formula. AI also has the power to deprive us of the iteration that goes into carrying out these tasks. While this may save us time, it’s not always a good thing: this iteration is precisely what can help us refine the adjustment of the task to its context.
Finally, it can also happen that we decide, along the way, to change course. Completely. To reshuffle the deck. To turn things around. Because we’ve reached a better understanding of where we were going. Because new elements have been added to the equation. And when it comes to making these changes, AI’s help is limited. Generative AI, as its name suggests, generates content; it does not create it. Its “creativity” stops at combining the elements it was given during training. These combinations are sometimes novel, it’s true. But that novelty does not go beyond the predictable. Whereas content generation uses probabilistic logic to combine the most appropriate elements among all those supplied, creativity is the ability to produce improbable combinations from existing, sometimes unrelated, elements. This creativity is at the root of the ideas we come up with to unblock a situation.
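To make that distinction concrete, here is a minimal sketch in Python of the probabilistic logic described above. The continuations and their probabilities are invented purely for illustration; the point is simply that a generative model draws its next elements in proportion to how probable they are, so the improbable combination, the one creativity trades in, almost never comes out.

```python
import random

# Purely illustrative: a tiny, made-up probability distribution over
# possible continuations of a sentence. No real model is this small,
# but the sampling logic is the same in spirit.
next_words = {
    "engine": 0.55,   # the "appropriate" continuations dominate
    "motor": 0.30,
    "turbine": 0.14,
    "poem": 0.01,     # the improbable combination almost never surfaces
}

def sample(distribution):
    """Draw one continuation, weighted by its probability."""
    words = list(distribution)
    weights = list(distribution.values())
    return random.choices(words, weights=weights, k=1)[0]

# Over many draws, the output stays within the predictable:
# "poem" comes up roughly once in a hundred samples.
print([sample(next_words) for _ in range(10)])
```

Raising the sampling “temperature” broadens the draw a little, but what it widens is randomness, not intent.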
Downstream adjustments
Lastly, once the task has been completed, an assessment can be made as to how well it fits into the context. Has the task achieved its intended purpose? Is its impact on others, now and in the future, acceptable? Has carrying out this task helped me grow as a person? In my job?
The question of our time is no longer where to find information, but what to do with it. And we need to do something with it. ChatGPT doesn’t know what’s true. Doesn’t know what’s beautiful. Doesn’t have an opinion. Doesn’t have preferences. It makes no judgements. And is not capable of making decisions. The biggest risk we face with an AI of this type is not that we ask a question it cannot answer. But that we say “thank you”. Without questioning the relevance of its answer to the context in which a task is carried out.
For fulfilment’s sake
Now that artificial intelligence is sitting with us in the open-plan office, what can and should we cultivate to remain relevant in the workplace? Awareness of who we are and of what we know without knowing it.
This is the only advice we can give to our readers: make the most of everything in your teams that makes a person different from a machine. Value everything they are, everything they know, and everything they do that a machine is incapable of producing. Their insight. Their intuition. Their situational intelligence. Their passion. Their curiosity. Their taste. Their sensitivity. Their emotions. Their humour. These are the elements that will enable us to continue adjusting what we do to what we want to do, to the best of our human ability.
Last, but not least, let’s not forget the satisfaction that comes with performing a task. The satisfaction of having contributed something of our own that a machine is incapable of generating. This satisfaction is part of the esteem we have for ourselves in our lives and in our trades. It is part of the relevance we attach to our work. It is part of the meaning that we give and can still give to our jobs.
PS. We’ve got plenty of ideas to help you make explicit what you don’t know you know. Get in touch!