Will Prompt Engineering Die Soon?

Sorab Ghaswalla
Published in Chatbots Life

6 min read · Sep 25, 2023

I was among the first to talk in detail about the new craft of prompt engineering and even prompt hacking in my newsletter back in February, a few months after the launch of ChatGPT.

Since then, I’ve observed its transformation from a niche interest to a widely embraced field and a mainstream profession, with salaries purportedly as high as New York’s Empire State Building. In its essence, prompt engineering is the science of instructing AI models. It’s about crafting that “perfect” question or command that lets AI generate meaningful responses — like a key turning in a lock, unlocking the vast potential of AI.

But a few days ago, the company that started it all, OpenAI, released a new version of DALL-E, claiming it would be the death knell for prompt engineering. That is in line with OpenAI CEO Sam Altman's view that prompt engineering is a temporary phase in the gen-AI journey. Given that this prediction comes from the very people who set the field in motion, we should take it seriously and look at the state of prompt engineering more closely.

DALL-E 3, its designers claim, understands significantly more nuance and detail than its earlier siblings, which means it translates ideas into far more accurate images than before.

So we have an “intelligently superior” version of DALL-E. And soon, we may also have a sophisticated version of the rest of the gen-AI tools.

Personally, I tend to agree, to a degree, with Altman. Here’s why:

Let me give you my own example: I use gen-AI tools every day—from text to image to video generators—for which, of course, I need to input my instructions to the machine. But very rarely, maybe just 1%, have I used a templated prompt for this.

I get the fact that the more nuanced and “in context” the prompt, the better the output. And so I often use 2–3 commands to get what I want. Almost every time, the output is fairly decent. And things only keep getting better as the “machine” “understands” me over time.

Despite the growing interest in generative AI, most people like me haven’t even created a single professional prompt. But if giving instructions to the machine is also one definition of prompt engineering, then, of course, we all have done it.

One of the tools I use, and I must say I am extremely satisfied with, is Microsoft Designer. In one of its iterations, it introduced a feature where the AI itself suggests a “professional” prompt based on your initial inputs.

Two things here:

a) The machine is self-writing a prompt

b) The outputs from my initial instructions and from the machine-written “professional” prompt are not vastly different
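The idea in point (a) can be sketched in a few lines of code: a short user instruction is expanded into a fuller, more detailed prompt before it ever reaches the image model. To be clear, the template below is entirely made up for illustration; it is not how Microsoft Designer (or any real product) actually writes its prompts.

```python
def expand_prompt(user_input: str, style: str = "photorealistic") -> str:
    """Turn a terse instruction into a fuller, more 'professional' prompt.

    Hypothetical template for illustration only -- not the mechanism
    any real product uses.
    """
    return (
        f"{user_input.strip().rstrip('.')}. "
        f"Render in a {style} style, with balanced composition, "
        f"natural lighting, and a high level of detail."
    )

# The user types a short idea; the machine fleshes it out.
print(expand_prompt("a lighthouse at sunset"))
```

If the model has been trained to handle sparse input well, the expanded prompt and the terse one produce similar results, which is exactly point (b) above.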

I don’t want to bore my readers with the technical stuff, but prompts are nothing more than instructions given in human language to an AI. Unlike conventional computing, gen-AI does not require complex, code-based input every time you want an output. That is thanks to natural language processing (NLP), which translates human talk into computer lingo, saving the human the time, energy and effort of learning “code”.
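The contrast is easy to see side by side. In the sketch below, the same request appears both as the kind of structured query traditional software needs and as the plain-language prompt gen-AI accepts; `ask_model` is a stand-in stub of my own (a real call would go to an API such as OpenAI’s, or to a local model).

```python
def ask_model(prompt: str) -> str:
    # Stub: a real implementation would send `prompt` to an LLM API.
    return f"[model response to: {prompt!r}]"

# Traditional computing: the human adapts to the machine's format.
structured_query = {"table": "sales", "filter": {"year": 2023}, "agg": "sum"}

# Gen-AI: the machine adapts to the human's language.
natural_prompt = "What were our total sales in 2023?"

print(ask_model(natural_prompt))
```

A prompt, in other words, is just a string; the NLP layer does the translating that a programmer once had to do by hand.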

I often wonder why prompt engineering became such a “big deal” for the mainstream. I mean, at the very least, the very idea of using gen-AI is to have an assistant, an ally, or even a “smarter” colleague to help you in your pursuit of creative and professional work. So giving instructions to a machine should be as easy as talking to a fellow human being, right? At least, that’s what the theoretical idea is. Of course, for now, the communication between Man and Machine is nowhere close to that between humans, but we seem to be getting there.

When we use computing devices as laypersons, we are not expected to use any form of code to communicate. Most of us, even today, do not know how to write HTML, C, C++, C#, Java, or whatever. So why should there be any form of input engineering for AI, which is a far more sophisticated piece of technology than anything we have ever had?

In light of all I’ve said above, it is only natural to ask: is prompt engineering teetering on the brink of obsolescence?

For me, it is still too early for the answer to be a simple yes or no.

Prompt engineering, for now, remains an integral part of AI’s functionality. It’s the compass guiding the neural networks through the vast seas of human language, helping the model generate coherent, contextually accurate responses. But as we sail further into the future, rapid advances in technology do suggest that the tide could turn.

One such advancement is the move towards autonomous learning systems. These AI models are designed to learn independently, without explicit instructions or prompts. They mimic the human brain’s ability to absorb, process, and react to information, reducing the need for human intervention. If these models eventually become a reality, it just could be that prompt engineering as a science would lose its relevance.

No matter how sophisticated the current crop of machines gets, they still lack the intuition and creativity inherent to humans, and that will ensure some form of prompting remains, at least until Artificial General Intelligence (AGI) arrives. So, while gen-AI might learn to operate independently, the nuanced understanding of language, context, and culture, a feat currently achieved through prompting, may still prove elusive. Therefore, it’s likely that prompt engineering will evolve rather than become irrelevant. It might transform from crafting explicit instructions to instilling an understanding of implicit cues in AI models.

Using precise prompts, we teach AI to grasp context, deduce meaning, and produce coherent, pertinent responses. However, there may come a time when just a simple sentence will suffice for the machine to comprehend your intentions completely.

Moreover, the idea of complete autonomy in AI raises ethical and safety concerns. As machines grow more independent, the risk of misuse or unintended consequences increases. Prompt engineering, thus, could serve as a regulatory mechanism, ensuring the responsible use of AI technology. In this regard, the role of prompt engineers might shift towards safeguarding the ethical boundaries of AI applications.

Should Scientists Focus More on Problem Formulation Than on Prompt Engineering?

Oguz A. Acar, Chair in Marketing at King’s Business School, introduced an interesting angle to this debate. Writing in the Harvard Business Review, he asks, “Should scientists invest more energy in problem formulation than in prompt engineering?”

Problem formulation, in essence, is the art of defining the questions that AI should answer or solve. It’s about identifying the gaps, defining the boundaries, and setting the course for our AI-driven solutions. In contrast to prompt engineering, which is more about instructing AI on how to respond, problem formulation focuses on what problems AI should tackle in the first place.

When we view AI through the lens of problem formulation, we shift our perspective from instruction to inquiry. We ask, “What issues can AI help us solve?” rather than “How do we make AI respond appropriately?” This shift requires a deep understanding of both AI capabilities and human needs. It demands an interdisciplinary approach, blending technology with sociology, psychology, economics, and more.

Indeed, some argue that this holistic, problem-focused approach could drive more impactful advancements in AI. Rather than focusing narrowly on refining the prompts we feed into AI systems, we might achieve more by broadening our vision and addressing larger, more complex societal problems. The potential for AI to revolutionize healthcare, education, environmental conservation, and myriad other areas is immense. Of course, to realize this potential, we must first define the right problems for AI to solve. But that’s a different story altogether.

(A confession: Some help was taken from a machine to write/re-write bits and portions of this newsletter.)


An AI Communicator, tech buff, futurist & marketing bro. Certified in artificial intelligence from the Univs of Oxford & Edinburgh. Ex old-world journalist.