AIhub.org
 

Prompt engineering: is being an AI ‘whisperer’ the job of the future or a short-lived fad?


08 September 2023




By Cameron Shackell, Queensland University of Technology

As generative AI settles into the mainstream, growing numbers of courses and certifications are promising entry into the “hot job” of prompt engineering.

Having skills in using natural language (such as English) to “prompt” useful content out of AI models such as ChatGPT and Midjourney seems like something many employers would value. But is it as simple as doing a short course and riding the wave to a six-figure salary?

The prompt engineering hype

A Washington Post article published in February did a lot to seed the notion that prompt engineers are “AI whisperers” who “program in prose”. It dropped some big salary numbers and quoted a job ad by Silicon Valley company Anthropic calling for people who have “a creative hacker spirit and love solving puzzles”.

Similar articles in Time, Forbes and Business Insider further fuelled the frenzy.

And to complete the transition from geek to chic, several influencers jumped on board to portray prompt engineering as a gold rush open for anyone willing to study and learn a few tricks.

Are there really that many jobs?

That Anthropic ad is still hanging around. Six months later, it seems more like a corporate publicity stunt than a search for talent.

As many commentators predicted, prompt engineering hasn’t exploded as a standalone career. At the time of writing this article, there wasn’t a single advertisement for a “prompt engineer” role on the main job sites in Australia. And only four listings mentioned prompt engineering in the job description.

The situation seems better in the United States. But even there, the new profession has largely been subsumed into other roles such as machine learning engineer or AI specialist.

There are few reliable statistics on the growth (or lack of growth) in prompt engineering. Most data are anecdotal. The reality is further clouded by consulting firms such as Deloitte promoting it as “the dawn of a new era” as part of their AI business drive.

What’s the reality?

A lot of the confusion about whether prompt engineering is useful comes from not recognising that there are two different types of value creators: domain experts and technical experts.

Domain experts

The germ of truth in the “anyone can do it” narrative is that experts in a particular subject are often the best prompters for a defined task. They simply know the right questions to ask and can recognise value in the responses.

For example, in branding and marketing, generative AI is taking off for what I have dubbed generic or “G-type” creative tasks (such as making the Pepsi logo in the style of Picasso). When advertising experts start hacking away at prompting, they quickly invent ways to do things even the most skilled AI gurus can’t. That’s because technical gurus often don’t know much about copywriting or marketing.

Technical experts

On the other hand, tech gurus who grapple “under the hood” with the enormous complexity of AI models can also add value as prompt engineers. They know arcane things about how AI models work.

They can use that knowledge, for example, to improve results for everyone using AI to obtain data from a company’s internal documents. But they typically have little domain knowledge outside of AI.

Both domain expert and technical expert prompt engineers are valuable, but they have different skill sets and goals. If an organisation is using generative AI at scale, it probably needs both.

Why is prompting hard?

Generative AI ultimately produces outputs for people. Advertising copy, an image or a poem is neither useful nor useless until it succeeds or fails in the real world. And in many real-world scenarios, domain experts are the only ones who can judge the usefulness of AI outputs.

Nonetheless, these evaluations are ultimately subjective. We know 2 + 2 = 4. So it’s simple to test prompts that stop AI from hallucinating that the answer is 5. But how long does it take to work out if an AI-designed ad campaign is more or less effective than a human-designed one (even if you do have a domain expert on hand)?

In my past research, I have suggested the evaluation of generative AI should move closer to semiotics – a field that can connect natural language to the real world. This could help narrow the evaluation gap over time.

Is prompt engineering worth learning?

Beyond playing with some tips and tricks, formally learning how to write prompts seems a bit pointless for most people. For one thing, AI models are constantly being updated and replaced. Specific prompting techniques that work now may only work in the short term.

People looking to get rich from prompt engineering would be better advised to focus on pairing AI and problem formulation in their area of expertise. For example, if you’re a pharmacist you might try using generative AI to double check warning labels on prescriptions.
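As a rough illustration of what "pairing AI and problem formulation in your area of expertise" might look like, here is a hypothetical Python sketch of a pharmacist encoding domain checks into a prompt. The function name, checklist and wording are all invented for illustration, and the actual call to a generative AI model is deliberately left out:

```python
# Hypothetical sketch: a pharmacist turning domain knowledge into a prompt.
# The checklist items below are illustrative, not an authoritative standard.

def build_label_check_prompt(drug: str, label_text: str) -> str:
    """Compose a prompt asking a generative AI model to double-check
    a prescription warning label against common safety points."""
    checks = [
        "dosage instructions are unambiguous",
        "interaction warnings are present where expected",
        "storage requirements are stated",
    ]
    check_list = "\n".join(f"- {c}" for c in checks)
    return (
        f"You are assisting a pharmacist. Review this warning label for {drug} "
        f"and flag anything missing or unclear. Confirm that:\n{check_list}\n\n"
        f"Label text:\n{label_text}"
    )

prompt = build_label_check_prompt(
    "amoxicillin",
    "Take one capsule three times daily. Complete the full course.",
)
print(prompt)
```

The value here comes from the checklist, not the prompting trick: only someone with pharmacy expertise knows which safety points belong in it, which is exactly the domain-expert advantage described above.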

Along the way you’ll sharpen your expository writing, acquire the basic generative AI skills (which employers might appreciate), and maybe strike gold with a killer application for the right audience.

Eventually, boasting that you know how to prompt AI will become résumé furniture. It will be comparable to boasting you know how to use a search engine (which wasn’t always so intuitive) – and may paint you as a dinosaur if mentioned.

Cameron Shackell, Sessional Academic and Visitor, School of Information Systems, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.




The Conversation is an independent source of news and views, sourced from the academic and research community and delivered direct to the public.











 













©2026.01 - Association for the Understanding of Artificial Intelligence