You’ve probably heard this term: “a prompt for ChatGPT” or “prompt engineering.” What does it mean?
First, let me frame this as an SAT-style analogy: “Prompting is to a generative AI system what a command is to a traditional computer programme.”
Now that I have not only made everything murkier but also given you mild PTSD from your high school days, I’ll explain.
In traditional computer programming, a command is an order (an instruction) given to a programme for it to execute something, like “turn right 90 degrees” (I’m using LOGO here for my example, because I am a 1980s kid). A command has to be provided in the specific syntax the programme expects, meaning that even a typo would derail things and the command would fail.
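To make that concrete, here is a minimal sketch of a command interpreter in Python (a hypothetical dispatcher, not real LOGO), showing why traditional commands demand exact syntax:

```python
# A tiny table of commands a hypothetical programme understands.
COMMANDS = {
    "RIGHT": lambda degrees: f"turned right {degrees} degrees",
    "FORWARD": lambda steps: f"moved forward {steps} steps",
}

def run(command_line: str) -> str:
    name, argument = command_line.split()
    if name not in COMMANDS:
        # A single typo derails everything: the programme cannot guess intent.
        raise ValueError(f"unknown command: {name}")
    return COMMANDS[name](int(argument))

print(run("RIGHT 90"))    # the exact syntax works
# run("RIHGT 90")         # would raise ValueError: the typo is fatal
```

A prompt, by contrast, would let you write “please turn a bit to the right” and leave the interpretation to the system.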
Prompting is the term we use for issuing commands to generative AI systems. From a lexical perspective, the term “prompting” is apt: it is less a literal, direct order than an attempt at steering the system in a certain direction.
And the reason it is less direct and literal is that prompting takes place in natural language, which, in computing terms, is the language of humans (whatever tongue they happen to speak). The upside is that the human interacting with the programme does not have to speak a computer language, nor think through the order of operations for carrying out the prompt.
What it does mean, however, is that we are shifting some of the work to the computer system. A command in traditional computing leaves no room for ambiguity. But prompting an AI means the system first has to parse your prompt, run it through its engines to “understand” it, and only then execute it.
You can readily appreciate that this leaves many opportunities for misunderstanding between what the human asks for and what the system does as it tries to figure out the request.
As a result, the art of prompting is really one of disambiguation: taking what humans would normally infer from shared context and spelling it out in language complete and unambiguous enough for the computer to properly understand what is being asked.
Sometimes, more advanced prompt engineering involves spelling out the way-station tasks the AI should perform to carry out the prompt. For example, when Ippen.Media looked for ways to include quotes in their summaries, they broke the task down: first extract the quotes from the text and keep them in memory, then summarise and weave those quotes in where appropriate. Asking the system to do everything in one prompt proved a non-starter. But breaking the job into a series of simpler, more discrete prompts allowed Ippen to tackle the challenge.
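Ippen’s actual prompts and tooling are not public, but the shape of that two-step approach can be sketched in Python, with the model calls replaced by plain string handling (the functions and the sample article below are invented for illustration):

```python
import re

def extract_quotes(article: str) -> list[str]:
    # Step 1: pull direct quotes out of the text and keep them aside,
    # so they can later be reinserted verbatim rather than paraphrased.
    return re.findall(r'"([^"]+)"', article)

def summarise_with_quotes(article: str, quotes: list[str]) -> str:
    # Step 2: in a real pipeline this would be a second prompt, e.g.
    # "Summarise the article and include these exact quotes: ...".
    # As a stand-in, the 'summary' here is the first sentence plus the saved quotes.
    lead = article.split(". ")[0] + "."
    return lead + " " + " ".join(f'"{q}"' for q in quotes)

article = 'The mayor opened the bridge. She said, "This took ten years." Crowds cheered.'
quotes = extract_quotes(article)
print(summarise_with_quotes(article, quotes))
# → The mayor opened the bridge. "This took ten years."
```

The point is the structure, not the string tricks: each step is simple enough for the system to get right, and the output of one step feeds the next.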
For more: Here’s an in-depth dive into prompting, from Microsoft (written for non-technical humans).
If you’d like to subscribe to my bi-weekly newsletter, INMA members can do so here.