AIhub.org
 

Prompt engineering: is being an AI ‘whisperer’ the job of the future or a short-lived fad?


08 September 2023




By Cameron Shackell, Queensland University of Technology

As generative AI settles into the mainstream, growing numbers of courses and certifications are promising entry into the “hot job” of prompt engineering.

Having skills in using natural language (such as English) to “prompt” useful content out of AI models such as ChatGPT and Midjourney seems like something many employers would value. But is it as simple as doing a short course and riding the wave to a six-figure salary?

The prompt engineering hype

A Washington Post article published in February did a lot to seed the notion that prompt engineers are “AI whisperers” who “program in prose”. It dropped some big salary numbers and quoted a job ad by Silicon Valley company Anthropic calling for people who have “a creative hacker spirit and love solving puzzles”.

Similar articles in Time, Forbes and Business Insider further fuelled the frenzy.

And to complete the transition from geek to chic, several influencers jumped on board to portray prompt engineering as a gold rush open for anyone willing to study and learn a few tricks.

Are there really that many jobs?

That Anthropic ad is still hanging around. Six months later, it seems more like a corporate publicity stunt than a search for talent.

As many commentators predicted, prompt engineering hasn’t exploded as a standalone career. At the time of writing this article, there wasn’t a single advertisement for a “prompt engineer” role on the main job sites in Australia. And only four listings mentioned prompt engineering in the job description.

The situation seems better in the United States. But even there, the new profession has largely been subsumed into other roles such as machine learning engineer or AI specialist.

There are few reliable statistics on the growth (or lack of growth) in prompt engineering. Most data are anecdotal. The reality is further clouded by consulting firms such as Deloitte promoting it as “the dawn of a new era” as part of their AI business drive.

What’s the reality?

A lot of the confusion about whether prompt engineering is useful comes from not recognising that there are two different types of value creators: domain experts and technical experts.

Domain experts

The germ of truth in the “anyone can do it” narrative is that experts in a particular subject are often the best prompters for a defined task. They simply know the right questions to ask and can recognise value in the responses.

For example, in branding and marketing, generative AI is taking off for what I have dubbed generic or “G-type” creative tasks (such as making the Pepsi logo in the style of Picasso). When advertising experts start hacking away at prompting, they quickly invent ways to do things even the most skilled AI gurus can’t. That’s because technical gurus often don’t know much about copywriting or marketing.

Technical experts

On the other hand, tech gurus who grapple “under the hood” with the enormous complexity of AI models can also add value as prompt engineers. They know arcane things about how AI models work.

They can use that knowledge, for example, to improve results for everyone using AI to obtain data from a company’s internal documents. But they typically have little domain knowledge outside of AI.

Both domain expert and technical expert prompt engineers are valuable, but they have different skill sets and goals. If an organisation is using generative AI at scale, it probably needs both.

Why is prompting hard?

Generative AI ultimately produces outputs for people. Advertising copy, an image or a poem is not useful or useless until it succeeds or fails in the real world. And in many real-world scenarios, domain experts are the only ones who can judge the usefulness of AI outputs.

Nonetheless, these evaluations are ultimately subjective. We know 2 + 2 = 4. So it’s simple to test prompts that stop AI from hallucinating that the answer is 5. But how long does it take to work out if an AI-designed ad campaign is more or less effective than a human-designed one (even if you do have a domain expert on hand)?
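The "simple to test" half of this contrast can be made concrete. The sketch below shows automated known-answer testing of prompt wordings; `ask_model` is a hypothetical stand-in (not a real API) that a practitioner would replace with an actual call to a generative AI service.

```python
# A minimal sketch of "known-answer" prompt testing: when the correct output
# is objective (like arithmetic), a prompt wording can be checked mechanically.
# `ask_model` is a hypothetical placeholder for a real chat-model API call.

def ask_model(prompt: str) -> str:
    # Stub only: a real implementation would call a generative AI API here.
    if "2 + 2" in prompt:
        return "4"
    return ""

def passes_known_answer_test(prompt_template: str) -> bool:
    """Return True if the model gives the objectively correct answer."""
    prompt = prompt_template.format(question="What is 2 + 2?")
    return ask_model(prompt).strip() == "4"

# Two candidate wordings for the same task:
plain = "{question}"
strict = "Answer with a single number and nothing else. {question}"

print(passes_known_answer_test(plain), passes_known_answer_test(strict))
```

No comparable harness exists for the ad-campaign case: there is no single correct answer to assert against, which is exactly the evaluation gap described above.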

In my past research, I have suggested the evaluation of generative AI should move closer to semiotics – a field that can connect natural language to the real world. This could help narrow the evaluation gap over time.

Is prompt engineering worth learning?

Beyond playing with some tips and tricks, formally learning how to write prompts seems a bit pointless for most people. For one thing, AI models are constantly being updated and replaced. Specific prompting techniques that work now may only work in the short term.

People looking to get rich from prompt engineering would be better advised to focus on pairing AI with problem formulation in their area of expertise. For example, if you’re a pharmacist you might try using generative AI to double-check warning labels on prescriptions.

Along the way you’ll sharpen your expository writing, acquire the basic generative AI skills (which employers might appreciate), and maybe strike gold with a killer application for the right audience.

Eventually, boasting that you know how to prompt AI will become resumé furniture. It will be comparable to boasting you know how to use a search engine (which wasn’t always so intuitive) – and may paint you as a dinosaur if mentioned.

Cameron Shackell, Sessional Academic and Visitor, School of Information Systems, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.




The Conversation is an independent source of news and views, sourced from the academic and research community and delivered direct to the public.



