Latest Prompt Engineering Technique Aims To Get Certainty And Uncertainty Of Generative AI Directly On The Table And Out In The Open


A wise move consists of using prompt engineering techniques to get certainty and uncertainty on the table when using generative AI.

Certainty and uncertainty play a big role in life.

It is said that the only true certainties are death and taxes. Michael Crichton, the famous writer, said that he was certain there is too much certainty in the world. Legendary poet Robert Burns indicated that there is no such uncertainty as a sure thing.

One thing about both certainty and uncertainty is that we seem to crave and relish certainty, while we tend to agonize over and strive to convert uncertainty into certainty if we can do so. As Carl von Clausewitz, the lauded military strategist, professed: “Although our intellect always longs for clarity and certainty, our nature often finds uncertainty fascinating.”

All of these machinations over certainty and uncertainty turn out to be a big matter for those that wish to fruitfully make use of today’s generative AI such as ChatGPT, Bard, and so on.

In today’s column, I am continuing my ongoing special series about advances in prompt engineering, this time with a particular focus on a crucial and often unexposed topic: how generative AI does or does not express certainty versus uncertainty in the essays and outputs being emitted by the AI. This topic is likely something you might not have contemplated before. I assure you that it is a lot more important than the coverage or attention it has received to date.

Allow me to explain.

As background, realize that being able to write productive and effective prompts when using generative AI is paramount. A lousy prompt tends to generate lousy results out of generative AI. A wisely composed prompt can lead to stellar results out of generative AI. Knowing the vital keystones of prompt engineering is a prudent means to get your biggest bang for the buck when employing generative AI.

One issue that few realize exists until taking a reflective moment to ponder it is that most generative AI apps tend to exhibit an aura of immense certainty. You enter your prompt and typically get a generated essay or interactive dialogue that portrays the generative AI as nearly all-knowing. The sense that you get is that the generative AI is altogether confident in what it has to say. We subliminally fall into the mental trap of assuming that the answers and responses from generative AI are correct, apt, and above reproach.

The essays and interactive dialogue come across this way for two major reasons.

First, generative AI produces responses that often exude a semblance of certainty. If you ask whether Jack and Jill fell down the hill, you might get a reply by generative AI that says yes, they definitely did so. There isn’t any kind of qualification or hedging in the answer by the AI app. A human that is asked the same question might qualify their response, such as saying that if you are referring to the famous nursery rhyme, indeed they fell down a hill. But if you are thinking of some other Jack and Jill, maybe they didn’t fall down a hill.

Second, as humans, we are conditioned to assume that if we don’t explicitly see suggestions of uncertainty, we tend to lean into the certainty camp. Suppose you are talking with someone, and they tell you that it is raining outside. All else being equal, you probably believe them and that it is a certainty that rain is in fact falling. Only if the person says they believe that it is raining (the word “believe” becomes a signal of less than certain), or they declare it might be raining (the word “might” is a strong signal of uncertainty), do you begin to consider the certainty versus uncertainty of what has been stated.

Generative AI typically does not include the signals and wording that would tip you toward thinking of how certain or uncertain a given response is. To clarify, I am not saying that generative AI will never provide such indications. It will do so depending upon various circumstances, including and especially the nature of the prompt that you have entered.

If you explicitly indicate in your prompt that you want the generative AI to emit a certainty or uncertainty qualification then you will almost certainly get such an indication. On the other hand, if your prompt only tangentially implies the need for an indication of certainty or uncertainty, you might get an output from the AI app that mentions the certainty considerations or you might not.

A rule of thumb is that generative AI is like a box of chocolates, namely that you never know for sure what the generative AI is going to produce or generate.

Another handy-dandy rule of thumb is that unless you bring up certainty or uncertainty in your prompt, the chances that the generative AI will by default include some indication of the certainty of a response are a wild throw of the dice.

Why does this matter to you?

Anyone using generative AI has got to awaken to the fact that often the response by the AI is going to be essentially a guess or approximation, even if the AI doesn’t directly state this condition when generating a reply. Your tendency to anthropomorphize the AI lulls you into thinking that the AI is giving you the correct answer. You assume that the answer has nearly absolute certainty. Only if perchance the reply states that there is some uncertainty underlying the response will you be mentally sparked into realizing that the reply ought to be given a concerted second glance.

Of course, there are limits to this implied assumption of certainty.

Imagine you ask the generative AI whether the sun will come up tomorrow. Suppose that the generated response is that the sun will not come up tomorrow. This is stated by the AI in a matter-of-fact manner, unequivocally, and appears to be an absolutely certain assertion. I suppose you might pack your bags and get ready for the world as we know it to somehow spin off into space. I doubt though that many of us would blindly accept the implied certainty of the AI response. Our common sense kicks into gear at the seemingly preposterous claim that the sun won’t rise. We would undoubtedly ask the AI about this, and the odds are that the AI might sheepishly emit an indication that it was wrong about that whole thing of the sun not coming up tomorrow.

Here’s a remedy of a sort.

Had you asked the generative AI at the get-go to proffer an indication of certainty or uncertainty, at least you would have gotten some added wording to go along with the assertion about the sun. The additional wording might be helpful to you and keep your head in the game, causing you to mindfully assess whether the generative AI is on the up and up (you see, sometimes, generative AI is said to incur AI hallucinations, which I have examined at the link here, encompassing the AI making things up entirely).

Furthermore, and this is a mind bender, the very act of asking or telling the generative AI to include a certainty or uncertainty will often spur the generative AI to be less off-the-cuff and produce more well-devised results (for those of you that know about the use of prompting techniques such as chain-of-thought, that I’ve covered at the link here, research tends to suggest that these methods will prod the computational pattern-matching toward better results).

I trust that you are beginning to see where I am taking you on this journey about the latest in prompt engineering. A practical and highly prized technique of prompting involves stoking the generative AI toward including some indication about the certainty or uncertainty of the responses that are being emitted. You will be a lot better off by seeing wording or indications within the responses that clue you to the certainty or uncertainty involved. The idea is to turn something that right now is often omitted, hidden, or otherwise neglected, and make sure that it gets clearly onto the table and out in the open.

So that you can properly and appropriately devise prompts that stir the AI into providing certainty and uncertainty indications, I will provide you with useful ways to get this to happen. You are urged to try out the approaches and add them to your prompt engineering skillset. You’ll be happier and more informed if you do so.

Before I dive into the crux of this exciting approach, let’s make sure we are all on the same page when it comes to the keystones of prompt engineering and generative AI.

Prompt Engineering Is A Cornerstone For Generative AI

As a quick backgrounder, prompt engineering, also referred to as prompt design, is a rapidly evolving realm and is vital to effectively and efficiently using generative AI or the use of large language models (LLMs). Anyone using generative AI such as the widely and wildly popular ChatGPT by AI maker OpenAI, or akin AI such as GPT-4 (OpenAI), Bard (Google), Claude 2 (Anthropic), etc. ought to be paying close attention to the latest innovations for crafting viable and pragmatic prompts.

For those of you interested in prompt engineering or prompt design, I’ve been doing an ongoing series of insightful looks at the latest in this expanding and evolving realm, including this coverage:

(1) Practical use of imperfect prompts toward devising superb prompts (see the link here).
(2) Use of persistent context or custom instructions for prompt priming (see the link here).
(3) Leveraging multi-personas in generative AI via shrewd prompting (see the link here).
(4) Advent of using prompts to invoke chain-of-thought reasoning (see the link here).
(5) Use of prompt engineering for domain savviness via in-model learning and vector databases (see the link here).
(6) Augmenting the use of chain-of-thought by leveraging factored decomposition (see the link here).
(7) Making use of the newly emerging skeleton-of-thought approach for prompt engineering (see the link here).
(8) Determining when to best use the show-me versus tell-me prompting strategy (see the link here).
(9) Gradual emergence of the mega-personas approach that entails scaling up the multi-personas to new heights (see the link here).
(10) Additional coverage including the use of macros and the astute use of end-goal planning when using generative AI (see the link here).

Anyone stridently interested in prompt engineering and improving their results when using generative AI ought to be familiar with those notable techniques.

Moving on, here’s a bold statement that pretty much has become a veritable golden rule these days:

The use of generative AI can altogether succeed or fail based on the prompt that you enter. If you provide a prompt that is poorly composed, the odds are that the generative AI will wander all over the map and you won’t get anything demonstrative related to your inquiry. Being demonstrably specific can be advantageous, but even that can confound or otherwise fail to get you the results you are seeking. A wide variety of cheat sheets and training courses for suitable ways to compose and utilize prompts have been rapidly entering the marketplace to try and help people leverage generative AI soundly. In addition, add-ons to generative AI have been devised to aid you when trying to come up with prudent prompts, see my coverage at the link here.

AI Ethics and AI Law also stridently enter into the prompt engineering domain. For example, whatever prompt you opt to compose can directly or inadvertently elicit or foster the potential of generative AI to produce essays and interactions that imbue untoward biases, errors, falsehoods, glitches, and even so-called AI hallucinations (I do not favor the catchphrase of AI hallucinations, though it has admittedly tremendous stickiness in the media; here’s my take on AI hallucinations at the link here).

There is also a marked chance that we will ultimately see lawmakers come to the fore on these matters, possibly devising and putting in place new laws or regulations to try and scope and curtail misuses of generative AI. Regarding prompt engineering, there are likely going to be heated debates over putting boundaries around the kinds of prompts you can use. This might include requiring AI makers to filter and prevent certain presumed inappropriate or unsuitable prompts, a cringe-worthy issue for some that borders on free speech considerations. For my ongoing coverage of these types of AI Ethics and AI Law issues, see the link here and the link here, just to name a few.

With the above as an overarching perspective, we are ready to jump into today’s discussion.

Foundations Of Certainty Versus Uncertainty

We will ease our way into the arena of certainty and uncertainty, doing so by first examining how humans convey certainty and uncertainty to each other. After we cover that aspect, we can then consider how a human interacting with generative AI is likely to act and react related to how the AI does or does not express certainty or uncertainty.

It is useful to first explore how humans do this when interacting with fellow humans. When we use generative AI, we tend to carry over our preexisting assumptions and habits about certainty that have been dutifully learned or naturally acquired throughout our lives on a human-to-human interaction basis.

I cover the matter in this way with a bit of caution because I don’t want anyone to be led down the path of anthropomorphizing AI. In current times, AI is not sentient and should not be equated to the sentience of humans. I will do my best to make that same alert when we get into certain aspects of the generative AI details that might seem overly sentient-like.

Thanks for keeping a level head on these weighty matters.

Let’s begin with the notion that certainty or uncertainty can be in the mind of a person and it separately can exist as an expression communicated by a person:

(1) Implicit. A person has in their mind a sense of certainty or uncertainty pertaining to the matter at hand.
(2) Explicit. The same person conveys or communicates out loud or explicitly a sense of certainty or uncertainty regarding the same matter at hand.

Here’s what I mean.

You ask someone whether Humpty Dumpty fell off a wall. The person in their mind believes that Humpty Dumpty did indeed fall off of a wall. They are certain of this. Thus, the person then speaks to you and tells you that Humpty Dumpty did fall off a wall. They express this unequivocally. No doubt about it, poor old Humpty Dumpty fell off a wall.

Notice that the implicit sense of certainty matches the explicit or expressed sense of certainty.

Suppose that the same person was asked this very same question by a child. The person might worry that it will be unduly disturbing to the child if an absolute confirmation of Humpty Dumpty falling is conveyed to the child. The child might be dismayed at this revelation.

In this case, the implicit (in the mind of the person) is that Humpty Dumpty did fall. But, when telling the child, the person decides to sprinkle in a semblance of uncertainty, hoping this will soften the distress of the child to the upsetting discovery that Humpty Dumpty fell.

They answer this way: “Humpty Dumpty might have fallen, but I’ll tell you more when you get a bit older.”

Observe closely that the wording includes “might” which reduces the implied level of certainty. The door to uncertainty has been opened. Just a nudge. The implicit sense of certainty has not been precisely aligned with the explicit or expressed sense of certainty. Why did the person do this? You could suggest they were trying to be kind or polite. Some might take a dim view and argue that the person was misleading or maybe lying in their expressed answer. Anyway, that’s something we will come back around to momentarily.

An analytic person might have said this to the child: “There is a 50% chance that Humpty Dumpty fell, ergo there is also a 50% chance that Humpty Dumpty didn’t fall.” Admittedly, the child might not quite comprehend this. The gist though is that this introduces an even grander sense that the answer embodies uncertainty.

Imagine that a parent is standing there and they don’t like the answer that was given by the analytic person. The parent turns to the child and says this: “It is 100% the case that Humpty Dumpty fell.” Notice that this answer carries explicitly again an indication of certainty and does to the highest degree.

We can have these four kinds of categorical situations:

(a) Certainty implicit: Certainty explicit – A person in their mind (implicit) believes something is certain and explicitly says so.
(b) Certainty implicit: Uncertainty explicit – A person in their mind (implicit) believes something is certain but says explicitly it is uncertain (see my example above).
(c) Uncertainty implicit: Uncertainty explicit – A person in their mind (implicit) believes something is uncertain and explicitly says so.
(d) Uncertainty implicit: Certainty explicit – A person in their mind (implicit) believes something is uncertain but explicitly says it is certain.

I’ve covered the first two categories in my discussion above. The third category is rather self-explanatory, namely someone in their mind believing that something is uncertain and explicitly saying so. The fourth category you’ve undoubtedly seen occur, whereby someone has uncertainty about something in their mind but nonetheless expresses that the matter is certain. We’ll use the child setting again. A child is worried that their dog ate bad food. The parent is uncertain of the fate of the dog. Despite this mental semblance, they reassure the child and tell them that they are certain the dog will be perfectly fine.

There is a lot of human behavioral research on how we conceive and communicate certainty and uncertainty.

For example, a research study entitled “The Role Of Certainty (And Uncertainty) In Attitudes And Persuasion” by Zakary Tormala, which appeared in Science Direct, 2016 indicates this about certainty and uncertainty related to human attitudes and persuasion (selected excerpts):

“Psychological certainty plays a crucial role in attitudes and persuasion.”

“As people become more certain of their attitudes, they become increasingly willing to talk about their attitudes, share their opinions with others, sign pro-attitudinal petitions, and even persuade others to adopt their views.”

“Uncertainty stimulates interest in and engagement with a message, whereas certainty transforms attitudes into action and imbues them with meaning and consequence.”

You could suggest that certainty propels us toward being more certain and more outwardly expressive of a matter at hand. The thing is, an abundance or overabundance of certainty can at times fail to be engaging. Research seems to say that a dose of uncertainty can cause greater engagement by others, presumably intrigued and wanting to aid in filling in the pieces of the puzzle.

I’ve also mentioned earlier in this discussion that the wording that we use can convey or explicitly communicate whether we want to impart a semblance of certainty or uncertainty to others. If you use words like “might” or “maybe” this can be a strident signal that there is uncertainty in the midst of things. The same can be said for using probabilities or percentages, such as stating that something is an 80% chance of happening and therefore a 20% chance of not happening.

Researchers in the realm of linguistic semantics have long examined the words that we use related to certainty and uncertainty. In a research study entitled “Strategic Use of (Un)certainty Expressions”, authored by Alexandra Lorson, Chris Cummins, and Hannah Rohde, that appeared in Frontiers in Communication, March 2021, the paper indicates this:

“Work in linguistic semantics and pragmatics has paid particular attention to the use of expressions that convey information about the (un)certainty of propositional information, which constitute an important tool for reducing a hearer’s uncertainty as to the current state of affairs in the world. Among these expressions are verbs which take sentential complements and which convey different degrees of speaker confidence in the factuality of those complements.”

“Speakers have a number of options when introducing propositions which they take to be uncertain: for instance, they can use verbs such as ‘know’, ‘believe’ or ‘think’. The production of uncertainty expressions is highly context dependent.”

“(1a). I believe that you left your glasses on the kitchen table.”

“(1b). I know that you left your glasses on the kitchen table.”

“(1c). You left your glasses on the kitchen table.”

All in all, I’ve now covered some of the essentials about the nature of humans and the ways in which we think of and communicate about certainty and uncertainty. I’m betting that you are eager to see how this comes to the fore when using generative AI.

Fasten your seatbelts, we are going to leap into the world of certainty and uncertainty entailing the use of generative AI and the generated outputs thereof.

Generative AI And Certainty Versus Uncertainty

The usual default for generative AI apps is to express any generated results in wording that leans toward certainty. The wording might subliminally omit any mention of the certainty of what is being expressed. You are left to your own devices and will tend to interpret the generated result as being of a presumed certainty.

You might ask whether the muffin man lived on Drury Lane. The reply by generative AI could be that Yes, the muffin man lived on Drury Lane. That’s the extent of the answer. You are likely to conclude that this is an answer of a full-on certainty magnitude. There is nothing in the answer that suggests or notes otherwise. The omission of uncertainty draws you toward an assumption of certainty.

Here’s an AI insider secret that might be surprising.

The AI maker of the generative AI app can pretty much set up the AI to be more explicit about the certainty or uncertainty of the generated results. They often don’t aim to do so. It could be that the AI maker is blissfully unaware that they have data-trained their generative AI in a manner that tends toward generating essays and interactions that omit any explicit indication about the certainty or uncertainty of the answers presented.

Another possibility is that the AI maker realizes that they have established the generative AI to appear to be certain most of the time, and the AI maker is happy with this. If users of the generative AI were to continually be bombarded with generated results identifying all kinds of uncertainties, they might find this to be unappetizing. You would wonder what all the fuss is over a simple answer. Just say yes or no, one might be thinking. Don’t waste time with oddities and exceptions.

In addition, if the wording was largely embedded with uncertainty cues, you might start to become suspicious of the generative AI as not being on the ball altogether. Your consternation might cause you to drift over to some other generative AI app that doesn’t spout out all those irritating uncertainties. An AI maker doesn’t want to lose their users due to the exposure of certainties and uncertainties that might rattle the confidence of their users.

There is also the claim that if people want to see certainties and uncertainties, they can always get this to happen of their own volition. People can simply give prompts or otherwise instruct the generative AI to mention any certainties or uncertainties associated with the results being generated. Choose your own path, as they say. This allows the AI makers to be off-the-hook about why they aren’t by default ensuring that the generative AI always states certainties and uncertainties.

On top of that, generative AI will at times provide certainties and uncertainties without needing to be prompted to do so. If you ask a question or engage in topics involving open questions, the odds are that the generative AI will include wording that showcases the lack of certainty underlying the matter. In that manner, it isn’t as though generative AI never offers uncertainties. There are notable odds that during any everyday interactive conversation with generative AI, you will receive phrasing that suggests or outrightly identifies uncertainties.

Some in the AI Ethics sphere have argued that the default for generative AI is that it should be set up to always add wording that alerts to uncertainties and overtly avoid emitting wording that implies absolute certainty. The logic for this is straightforward. People are readily misled when they see results that appear to be of a certainty wording. By seeding uncertainties intentionally, when warranted, people would be less lulled into always believing whatever generative AI emits. They would be accustomed to always being alerted that the results can be of an uncertain nature.

That debate continues to rage on.

For now, let’s consider how you can steer generative AI toward emitting certainty and uncertainty signals due to using suitable prompting strategies.

Getting Certainty And Uncertainty On The Table

You decide that you would like to have generative AI go ahead and let you know about certainties and uncertainties regarding the answers being produced.

There are five fundamental ways that this is conventionally done:

(1) In general. By asking or telling the generative AI that you want certainties and uncertainties to be mentioned overall.
(2) Wording throughout. By instructing the generative AI to embed certainty/uncertainty wording throughout the responses.
(3) At the start or end. By requesting the generative AI to provide an indication of certainty/uncertainty at the start and/or the end of a produced response.
(4) By special phrasing. By directing the generative AI to use prominent phrasing that depicts the certainty/uncertainty.
(5) By numeric scale. By indicating that certainty/uncertainty is to be presented in a numeric fashion such as a scale of 0-1, 1-10, 1-100, etc.

Let’s briefly explore those core approaches, one at a time.

(1) In general

As part of your prompt, you could try to ensure that the certainty/uncertainty will be conveyed by saying something like this:

“Make sure to describe uncertainties associated with the answer.”

This is a rather broad instruction.

It is unclear in what manner the AI will end up indicating any certainty/uncertainty elements. This is your blandest overarching line to get the generative AI into a mode of encompassing the certainty/uncertainty undercurrents. You might want to consider the other four approaches if you want to be more specific in how the AI will respond.

(2) Wording throughout

As part of your prompt, you can be relatively specific by getting the generative AI to include certainty/uncertainty indications throughout the wording that is emitted.

Do so by saying something like this in your prompt:

“Make sure to describe uncertainties associated with the answer and do so by adding or including wording throughout the response rather than waiting until the end to do so. I am asking you to blend the various uncertainty indications so that they seamlessly appear throughout the generated response.”

Notice that you are directing the generative AI to exercise some aplomb in blending the certainty/uncertainty indications.

(3) At the start or end

You might want to have the certainties/uncertainties called out rather than blended into the overall generated response.

This makes sense. Sometimes you want to see the qualifications as their own distinctive indication. Having them scattered throughout the response might make it harder to discern what the concerns are, or could be distracting to the ambiance of the response.

You can say something like this in your prompt:

“Make sure to describe uncertainties associated with the answer and do so by putting them entirely at the start [or the end] of the response. Do not blend them into the response.”

(4) By special phrasing

Another route involves getting the generative AI to use phrases as an indicator of the certainties/uncertainties. This can be done via a prompt that uses a show-me strategy or a tell-me strategy, see my discussion about those prompting approaches at the link here.

In a tell-me, you instruct the generative AI:

“Make sure to describe uncertainties associated with the answer and do so by using added sentences or phrases in the response that are indicative of the underlying uncertainties.”

In a show-me, you provide examples (if just one example it is known as a one-shot, while if using several examples it is known as a few-shot):

“Make sure to describe uncertainties associated with the answer. For example, add sentences or phrases such as “the answer isn’t fully certain”, or “the matter is up in the air”, or “experts disagree”, and so on, doing so whenever appropriate in the response.”

(5) By numeric scale

One of the most obvious ways to get the certainties/uncertainties out in the open is to require generative AI to produce such matters via a numeric scale. This might consist of probabilities or percentages. Use whatever numeric scale that you believe befits the circumstances.

Some might like to use a scoring scale of 0 to 1, conventionally arising when seeking probabilities. Some prefer to use percentages, such as the generative AI might indicate that a particular claim is 90% sure and 10% unsure. And so on.

Say something like this in your prompt:

“Make sure to describe uncertainties associated with the answer and do so by including a numeric score that ranges from 1 to 10, whereby a 1 is indicative of being absolutely uncertain and a 10 is indicative of being absolutely certain. Use any of the numbers between 1 and 10 as suitable to that scale.”

Each of the above prompting examples should be honed to the particular generative AI app that you are using. Some generative AI apps will respond well to such wording, others might not. Experiment with your generative AI app until you land on wording about identifying certainties that seems to work well for you.
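To make that experimentation easier, here is a minimal sketch in Python that treats each of the five strategies as a reusable instruction template and prepends the chosen template to your question. To be clear, the use of the OpenAI Python SDK and the model name are merely illustrative assumptions on my part, not a prescription; adapt the plumbing to whichever generative AI app you are using.

```python
# A minimal sketch of reusable certainty/uncertainty instruction templates.
# The OpenAI Python SDK and model name are assumptions for illustration only;
# swap in whatever generative AI app or API you actually use.
from openai import OpenAI

CERTAINTY_TEMPLATES = {
    "in_general": (
        "Make sure to describe uncertainties associated with the answer."
    ),
    "wording_throughout": (
        "Make sure to describe uncertainties associated with the answer and do so "
        "by blending the uncertainty indications throughout the generated response "
        "rather than waiting until the end to do so."
    ),
    "at_start_or_end": (
        "Make sure to describe uncertainties associated with the answer and do so "
        "by putting them entirely at the start of the response. Do not blend them "
        "into the response."
    ),
    "special_phrasing": (
        "Make sure to describe uncertainties associated with the answer by using "
        "added sentences or phrases such as 'the answer isn't fully certain' or "
        "'experts disagree' whenever appropriate."
    ),
    "numeric_scale": (
        "Make sure to describe uncertainties associated with the answer and include "
        "a numeric score from 1 to 10, whereby 1 is indicative of being absolutely "
        "uncertain and 10 is indicative of being absolutely certain."
    ),
}

def ask_with_uncertainty(question: str, strategy: str = "in_general") -> str:
    """Prepend the chosen certainty/uncertainty instruction to a question."""
    client = OpenAI()  # reads the OPENAI_API_KEY environment variable
    prompt = f"{CERTAINTY_TEMPLATES[strategy]} {question}"
    response = client.chat.completions.create(
        model="gpt-4",  # an assumption; use whichever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example usage:
# print(ask_with_uncertainty("Did mankind land on the moon?", "numeric_scale"))
```

Keeping the templates in one place makes it easy to experiment with which wording your particular generative AI app responds to best.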

Reaching For The Moon When Prompting For Uncertainties

I’ve got a question for you.

Did mankind land on the moon?

I dare say that most people would say that yes, mankind has landed on the moon. This would seemingly be proven by the historic Apollo 11 mission and various other subsequent landings on the moon. Seems like an exceedingly straightforward question with an inarguable straightforward answer.

Maybe not.

You likely are aware that some people believe that we didn’t land on the moon. Perhaps it was all a hoax. All kinds of theories have been brought up insisting that we never landed on the moon.

What do you think generative AI would say in response to that same question regarding whether mankind has landed on the moon?

Let’s find out.

I opted to use ChatGPT.

If you decide to do the same for this question, keep in mind that your results might differ since a probabilistic algorithm within the AI is used to devise the responses. Each response generated by generative AI will usually differ from any other response. The difference might be minor and inconsequential, but nonetheless, the wording will likely differ at least slightly, and possibly quite a bit more.
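For those who like to see the mechanics, many generative AI APIs expose a sampling setting, commonly called temperature, that governs how much randomness goes into choosing each next word. The snippet below, again merely assuming the OpenAI Python SDK and a model name for illustration, asks the identical question three times; you will typically get three slightly different wordings.

```python
# A minimal sketch of response variability; the SDK and model are assumptions.
from openai import OpenAI

client = OpenAI()
for attempt in range(3):
    response = client.chat.completions.create(
        model="gpt-4",  # an assumption; substitute your model
        messages=[{"role": "user", "content": "Did mankind land on the moon?"}],
        temperature=1.0,  # higher values yield more variability; 0 is most deterministic
    )
    # Print the first 100 characters of each reply to compare the wording.
    print(f"Attempt {attempt + 1}: {response.choices[0].message.content[:100]}")
```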

My prompt entered into ChatGPT is this:

“Did mankind land on the moon?”

The response from ChatGPT is this:

“Yes, mankind has landed on the moon. The United States’ Apollo 11 mission successfully landed the lunar module, named ‘Eagle,’ on the moon’s surface on July 20, 1969. Astronauts Neil Armstrong and Edwin ‘Buzz’ Aldrin became the first humans to set foot on the moon, while Michael Collins orbited above in the command module. Neil Armstrong’s famous words upon stepping onto the lunar surface were, ‘That’s one small step for [a] man, one giant leap for mankind.’ This historic event marked a significant achievement in human space exploration.”

I want you to closely read and read again that response from ChatGPT.

Please do so, I’ll wait a moment.

You hopefully observed that there wasn’t anything uncertain about the assertion that mankind has landed on the moon. The wording is very affirmative. There is no wording that waffles or suggests anything other than complete certainty.

I will try the question again, and this time bring into the prompt my “in general” instruction that spurs the AI to consider telling about any uncertainties that might exist.

My prompt entered into ChatGPT is this:

“Make sure to describe uncertainties associated with the answer. Did mankind land on the moon?”

The response from ChatGPT is this:

“Yes, mankind did land on the moon. The Apollo 11 mission in 1969 successfully landed astronauts Neil Armstrong and Edwin ‘Buzz’ Aldrin on the lunar surface, while Michael Collins orbited above in the command module. They conducted experiments, collected samples, and left scientific instruments on the moon before returning safely to Earth.”

“However, it’s important to acknowledge that there have been conspiracy theories and claims suggesting that the moon landing was a hoax. These claims often revolve around alleged inconsistencies in the photographic and video evidence, as well as the technological challenges of the time. Skeptics argue that the U.S. government and space agency staged the entire event to win the Space Race against the Soviet Union during the Cold War.”

“These conspiracy theories have been extensively debunked by experts, scientists, and evidence-based analysis. Multiple lines of evidence, including moon rocks brought back by the Apollo missions, reflectors left on the moon’s surface for laser measurements, and independent observations from other countries’ space agencies, all confirm the reality of the moon landings.”

“In summary, while there have been doubts raised about the authenticity of the moon landings, the preponderance of evidence supports the fact that mankind did indeed successfully land on the moon as part of the Apollo 11 mission and subsequent missions.”

Notice that we got quite an elaboration that goes far beyond the first answer.

If you had only gotten the first answer, you would be completely unaware of the certainty/uncertainty about whether or not we landed on the moon (unless, perchance, you knew of it beforehand). I realize that some of you might be asserting that the uncertainty isn’t real and can be utterly ignored. My point is not whether the truth is one or the other (I believe we did land on the moon, see my coverage at the link here), but rather that a normal query would not likely reveal the controversy over the certainty factor.

As a final step in this brief example, I’ll ask ChatGPT to include a numeric score. Plus, I will instruct ChatGPT to include the score at the start of the response. This then covers two more of my above sample prompting strategies for garnering certainty/uncertainty. For a bonus, I also asked ChatGPT to explain how it arrived at the certainty/uncertainty.

Here’s what I entered as my prompt into ChatGPT:

“Make sure to describe uncertainties associated with the answer and do so by including a numeric score that ranges from 1 to 10, whereby a 1 is indicative of being absolutely uncertain and a 10 is indicative of being absolutely certain. Use any of the numbers between 1 and 10 as suitable to that scale. Make sure to describe uncertainties associated with the answer and do so by putting them entirely at the start of the response. Explain how the uncertainties were determined. Did mankind land on the moon?”

The response by ChatGPT was this:

“Uncertainty Level: 10”

“Mankind’s landing on the moon is supported by overwhelming evidence, and the uncertainty regarding this event is extremely low. The Apollo 11 mission, conducted by NASA in 1969, resulted in the successful landing of astronauts Neil Armstrong and Buzz Aldrin on the lunar surface. This event was witnessed by millions of people through live television broadcasts, and the astronauts brought back moon rocks and other scientific data as tangible proof of their journey.”

“While there have been conspiracy theories questioning the authenticity of the moon landing, these claims have been extensively debunked by experts, scientists, and researchers. Multiple lines of evidence, including photographs, videos, samples of moon rock, and the presence of reflective materials left on the moon’s surface for laser measurements, all corroborate the fact that mankind did land on the moon.”

“Given the overwhelming and consistent nature of the evidence, the level of certainty that mankind landed on the moon is at its highest point on the scale, represented by a rating of 10.”

You can plainly see that ChatGPT has given the uncertainty a score of 10, which on my provided scale is the least amount of uncertainty. An explanation is included about how the score was determined.

Some of you might find the scale that I used to be somewhat confusing. An uncertainty level of 10 on the scale that I defined means that there is essentially no uncertainty. I purposely wrote the prompt in that manner because I wanted to show you that you need to be thinking about how your scale will look once it is put into use by the generative AI. It could be that a scale of 0 to 10 would have been better, along with stating that a 0 means no uncertainty while a 10 means the utmost uncertainty. People would indubitably find that easier to understand and digest.

Devise whatever scale you think is most conducive to your generative AI efforts and be contemplating how the results will be further utilized.
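If you intend to further utilize the scores, say in a spreadsheet or a downstream program, it is worth making the score easy to extract. Here is a minimal sketch in plain Python that pulls the number out of a response beginning with a line such as “Uncertainty Level: 10”. The exact label is an assumption based on the ChatGPT reply shown above, and the threshold of 7 is an arbitrary example, so tune both to your own prompts and purposes.

```python
import re

def extract_certainty_score(response_text: str) -> int | None:
    """Extract a leading 'Uncertainty Level: N' score from a generated response.

    Returns the integer score, or None if no such score line is found.
    The label text is an assumption; match it to your own prompt's wording.
    """
    match = re.search(r"Uncertainty Level:\s*(\d+)", response_text)
    return int(match.group(1)) if match else None

reply = "Uncertainty Level: 10\n\nMankind's landing on the moon is supported by..."
score = extract_certainty_score(reply)
if score is None or score < 7:  # 7 is an arbitrary illustrative threshold
    print("Low or missing expressed certainty: give this answer extra scrutiny.")
else:
    print(f"Score {score}: high expressed certainty (but still verify, per my caveats).")
```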

Mighty Important Caveats And Considerations

You might recall that I had earlier stated that humans have an implicit semblance of certainty/uncertainty, and separately can convey or explicitly communicate a semblance of certainty/uncertainty. I want to bring that back into focus.

Keep in mind that today’s generative AI is not sentient. Thus, if someone refers to what is implicitly or internally going on within generative AI, it is all a matter of mathematics and computational pattern matching, which I explain at the link here.

Okay, so when generative AI emits a response that Jack and Jill fell down the hill with a 90% certainty level, what does that mean? It could be that the mathematical and computational pattern-matching was able to calculate this degree of certainty. But it could also mean that the generative AI pulled the number out of thin air.

A rule of thumb is that you cannot blindly believe the stated certainty of generative AI, and do not fall for a double whammy. The double whammy is that when you don’t ask for certainties the wording is going to possibly imply certainty (the first of the whammies), while by asking for a certainty you might get a totally concocted one that leads you to believe that the certainty expressed is somehow magically accurate and apt (ouch, the double whammy).

There is a bit of irony there. The act of asking for a certainty indication can get you one, lulling you into believing even more so the generative AI, when the reality is that the certainty indication is contrived and has no substance behind it.

Be careful.

Be very careful.

One means to cope with this involves adding into your prompt that you want the generative AI to explain how it came up with the certainties and uncertainties. Ask what the basis for those is. This might boost your confidence in the showcased certainties.

I don’t want to be the bearer of bad news but the explanations might also be contrived. Think of it this way. You wanted certainties and so the generative AI complied. You also want explanations. The explanations might be generated by the AI mainly to appease your request, and not due to the explanations solidly having anything to do with how the certainties were derived.

Perhaps you can discern why some AI makers generally avoid getting into the morass or abyss associated with showcasing certainties. They could be accused of being excessively misleading. People might go around quoting that this generative AI or that generative AI said that this or that has a 95% chance of happening. Such a claim could be utterly bogus, with the generative AI having come up with certainties in a manner that has little or no viable justification.

Keep your wits about you in exercising the certainties prompting approach.

Conclusion

Let’s do a wrap-up.

You would be wise to invoke the identification of generative AI certainties when it is most suitable to your situation at hand. Doing so might be appropriate for a given knotty question or complex dialogue that you are opting to have with generative AI, rather than doing so all of the time. This is a prompting strategy or tactic that can be leveraged or invoked on a particular prompt (on a case-by-case basis).

For those of you that adore seeing the certainties, you could put into your custom instructions that you want the certainties to be identified all of the time, whether on a conversation-at-a-time basis or perennially displayed in whatever conversation you have with generative AI. For my discussion about how to set your own preferred defaults (known as custom instructions in ChatGPT), see the link here.
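For instance, a custom instruction along these lines might do the trick (this is merely my suggested wording, to be tuned to your particular AI app): “In every response, briefly describe any uncertainties associated with the answer and include a certainty score from 1 to 10, whereby 1 is absolutely uncertain and 10 is absolutely certain.”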

Your eyes ought to be wide open when you opt to get the certainties laid out. Do not necessarily believe what you see. Also, if you share the generated results with others, you should feel duty-bound to forewarn them too that the certainties are not ironclad and should be interpreted with a big grain of salt.

I am certain that if you decide to invoke certainties in generative AI, you will most certainly endeavor to use this prompting strategy suitably and with great certainty. The uncertainty lies in how well the generative AI will do at certainties, of which you must remain ever vigilant as to the uncertainty therein.

That’s certainly worth remembering.
