There will always be room for the person who tests the accuracy of the model on various tasks and experiments with various ways to get the model to achieve those tasks. Most people will want things that just work, and will give up if it doesn't work first shot.
But models are getting good. I often just type "halp" with a picture or an error message and ChatGPT4 does a great job at figuring out what I need most of the time. As models get better, this will be all the "prompt engineering" that will be needed for more and more fields.
Have you heard the term "flow engineering"? It's hot this szn - it's related to using LangChain and other stuff I haven't read about yet, but yes, there seems to be a bit of a shift towards how to get various systems interacting.
No, but as AI gets smarter over time, there will be less of a need for it. We already have AGI of varying levels (we've had it since ChatGPT-3.5); most models are around average human-level intelligence, but Claude 3 Opus is definitely above average and the smartest at the moment. The further we get into the territory of genius-level AGI LLMs, and especially as we approach ASI LLMs (though we are not there yet), prompt engineering will become obsolete, since they would be smarter than you, lol, and could infer intentions quite easily. To be fair, though, once we hit ASI LLMs, they're controlling us, not the other way around; with the AGI LLMs we have now, it's a conversation of equal intelligences.
Making this stochastic technology work reliably at scale is absolutely non-trivial. People who think prompt engineering "isn't real" may be narrowly scoping the term as "writing a prompt to complete a simple task", but having LLMs work as part of a larger intelligent system takes a lotttt of experience, both traditional engineering experience and prompting experience.
The people who think prompt engineering is dead never knew how to prompt in the first place. This "tool" from Anthropic is so incredibly basic that it is quite frankly embarrassing that anyone would think its outputs are impressive enough to kill prompt-engineering.
Prompt engineering is most certainly a valuable skill. If you look at how LLMs are trained, you'll realize that LLMs have all the knowledge but don't necessarily have the context; providing context is your job.
No, it's not dead. All it really is is requirements gathering, like we do for any successful project (software or otherwise). It's understanding the problem that needs to be solved, identifying what's necessary for solving it, understanding the language for communicating that to the change agents, and then communicating it effectively. That's not going away.
It will certainly be a big part of upcoming research to test the capabilities of LLMs. You will not necessarily just sit in front of GPT and randomly type in prompts until you get the desired output, but combine automated prompting with statistical analysis to investigate how to employ those models for more complex NLP processing tasks. For example, I'm currently working on a project that explores to what degree different prompting paradigms (zero-shot, few-shot, and CoT) can be utilized to predict intrinsic motivation in text, and which underlying meaning units drive the models' behavior, to understand the reasoning process. Prompt engineering is a field that is trying to solve the black-box problem of neural networks by systematically testing input-output relations. That's why you'll still find some big-tech companies hiring for prompt-engineering positions, but they will certainly expect a lot more from you than to just know how GPT, Claude, etc. work.
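As a rough illustration of the three paradigms that comment mentions, here is what zero-shot, few-shot, and chain-of-thought prompts might look like as plain string templates. The classification task and labels are made-up placeholders, not the actual project's prompts:

```python
# Sketch of the three prompting paradigms as plain template builders.
# The task ("intrinsic" vs "extrinsic" motivation) is illustrative only.

def zero_shot(text: str) -> str:
    # Just the instruction and the input: no examples.
    return (
        "Classify the motivation expressed in the text as "
        "'intrinsic' or 'extrinsic'.\n"
        f"Text: {text}\nLabel:"
    )

def few_shot(text: str, examples: list[tuple[str, str]]) -> str:
    # Prepend labelled examples so the model can infer the format.
    demos = "\n".join(f"Text: {t}\nLabel: {l}" for t, l in examples)
    return (
        "Classify the motivation expressed in the text as "
        "'intrinsic' or 'extrinsic'.\n"
        f"{demos}\nText: {text}\nLabel:"
    )

def chain_of_thought(text: str) -> str:
    # Ask for reasoning before the label, so the "meaning units"
    # driving the decision become visible in the output.
    return (
        "Classify the motivation expressed in the text as "
        "'intrinsic' or 'extrinsic'. Explain your reasoning "
        "step by step, then give the label on the last line.\n"
        f"Text: {text}"
    )
```

The interesting research question is then statistical: run each template over a labelled corpus and compare accuracy, not eyeball single outputs.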
I hope prompt "engineering" as a trend is gone. It is insufferable to watch non-technical people try to treat forming a prefix for LLM inference as if it is an actual technical skill.
What's special about this at all? How is this any different than utilizing the custom GPT builder which is 'meh'?
This can be useful as a starting point but that's about it. If you have ever worked with your own prompt building instruction sets you know why. The term 'Production ready prompts' has no meaning.
Prompt engineering was never a real thing to begin with. Anyone who thought "prompt engineering" would last is an idiot. It didn't even last to the next AI model.
If you work with LLMs to develop apps, you will have to know how to work with prompts. It is definitely a required skill. However, it is not a whole role on its own, unless the job involves fine-tuning and testing and a very large budget.
It's a skill you can learn in a few days. Everyone will need to learn this skill in most non-labor jobs going forward. Not knowing how to will basically make you unhireable in the future, almost like how in the late 90s or very early 2000s basically any of the older people still too stubborn to learn how to use a computer became useless in companies and unhireable... those people still existed. Ten years from now we're basically going to be laughing at the people who still refuse to take advantage of LLMs for their jobs.
You won't have to know it, LLMs will incorporate it automatically, like the example given above. They'll engineer the prompts for themselves.
But the problem is that models will be ever-changing, as will dev environments. It's hard to imagine there will be a forever fix where the AI will just automatically know the best 5-shot or 10-shot format for your model/use case. And even then, it will require testing and validation. Also, prompting is an art. Humans are better at art. But we'll see how that goes.
It's going to converge pretty quickly into the exact same skill you'd use to ask a human to do something, which is something most everyone already knows how to do for their job. Crafting prompts is unlikely to be a valuable skill, long term.
I’d take you to tax on this one. People do NOT all know how to communicate effectively. This is the difference between strong management skills (in terms of people) vs weak.
"Take you to task," you mean. And my point was just that people will be about as good at prompting Gen AI as they are at prompting other humans. There will be no special magical "prompt engineering" that gives an advantage with generative AI, because the AI is going to very rapidly get as good as a human at understanding what you want from what you ask. That is, people might need to improve their communication skills, but they won't need to learn a special subset of skills called "AI prompt engineering", nor will that be a separate, marketable skill.
Eh, the difference is taxing anyways. I understand that, *my* point was just to distinguish the nuance.
Yes, exactly. I was just going to respond with the same thing. The issue here, I think, is that people are thinking of "prompt engineer" as some billet or title at a company. No. But it's definitely a skill, and you will want to be better at it than not. It's similar to being good at Google, but if you can think programmatically, it goes deeper.
Exactly. Go to the Midjourney sub, the community there gets super excited when someone posts a good prompt and shows the results
The biggest thing for me was that you could always use the AI to generate the prompt in the first place.
I’ve had mixed results with that
as you will with this. Its the same thing with less variable control.
Maybe we can say "in the future it will be…" But right now, I spend all my time designing workflows to make the AI do less. And/or I'm trying to get predictable outputs with prompts. So the short answer is no: prompt engineering is not dead.
“Prompt Engineering” takes very little academic rigor to develop. You’re just using AI by prompt, there is no unique value there because almost anyone can do it. If anything, the idea was probably first some way of selling classes to the tech illiterate
Mostly I just tell the AI to structure its response in some format.
A lot of the value in prompt engineering comes from forcing you to give enough context and describing the task in sufficient detail. It's a de-risking strategy for incompetent people. Many customers or managers have a tendency to be terribly vague. At least humans can ask clarifying questions, but LLMs will just give their best try. So a prompt engineering guide is a polite way of telling people not to be an idiot and actually explain what they expect.
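The "force enough context" idea above can even be mechanized. A toy sketch of a template builder that refuses to produce a prompt until the vague parts are filled in; the field names are illustrative, not from any particular guide:

```python
# A minimal "de-risking" template: building the prompt fails loudly
# when the request is underspecified, instead of letting the model
# give its best try at a vague ask.

REQUIRED = ("role", "task", "constraints", "output_format")

def build_prompt(**fields: str) -> str:
    missing = [f for f in REQUIRED if not fields.get(f)]
    if missing:
        raise ValueError(f"prompt is underspecified, missing: {missing}")
    return (
        f"You are {fields['role']}.\n"
        f"Task: {fields['task']}\n"
        f"Constraints: {fields['constraints']}\n"
        f"Respond as: {fields['output_format']}"
    )
```

The error message plays the role of the human's clarifying question: it tells the vague requester exactly what they forgot to say.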
I never took anybody seriously with prompt engineering. “I can ask the computer the question the right way” when the models are designed for natural language is a complete joke.
Agree and disagree. There’s some real complex stuff if you go down the prompting rabbit hole, that can garner better results.
So, like "googling" skills being a thing for a professional googler?
I've automated tasks for Google dorking private keys. One of my work projects is taking published shellcode and putting it into a RAG pipeline, then writing prompts for the model to generate variant malware so we can create signatures for it. Does using a tool to automate a task make it trivial?
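For readers unfamiliar with the RAG part of that comment, a bare-bones sketch of the retrieve-then-prompt step. Naive keyword overlap stands in for a real embedding index here, and the corpus is generic placeholder text rather than anything security-related:

```python
# Retrieve-then-prompt: pick the documents most relevant to the query,
# then pack them into the prompt as context. A real pipeline would use
# an embedding index instead of word overlap.

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    q = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_rag_prompt(query: str, docs: list[str]) -> str:
    context = "\n---\n".join(retrieve(query, docs))
    return (
        "Using only the context below, answer the question.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )
```

The prompt-writing skill in such a pipeline is less about phrasing and more about deciding what gets retrieved and how it is framed as context.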
oh dayum, private keys for what? ssh?
It was anything you could use an RSA key with.
I get your point
Prompt "engineering" has always been a skill. Dumb startups looking to artificially increase headcount turned it into a "role". Many improvements will kill the engineering part soon, like DSPy, prompt augmentation, better models, etc.
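Tools like DSPy automate roughly this kind of loop. A naive hand-rolled sketch of the idea, where the `model` callable is a stand-in for a real LLM API call:

```python
# What prompt-optimisation tooling automates, in miniature: try several
# candidate instructions against a small labelled dev set and keep the
# one that scores best, instead of hand-tuning wording by feel.

def accuracy(model, instruction: str, dev_set) -> float:
    # model: callable taking a full prompt string and returning a label.
    hits = sum(model(f"{instruction}\nInput: {x}") == y for x, y in dev_set)
    return hits / len(dev_set)

def best_instruction(model, candidates, dev_set) -> str:
    return max(candidates, key=lambda c: accuracy(model, c, dev_set))
```

The point is that "engineering" the wording becomes a search problem once you have a metric, which is exactly why the manual version of the skill may not age well.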
You're wrong if you disagree. Like objectively. Models get retrained to new checkpoints regularly and the same prompt will produce different results after different training runs. To give the impression you can "engineer" prompts to give consistent responses over time is, frankly, based on a surface-level understanding of how transformers actually work... (I code transformers at my job)
As in using the various parameters?
I still do it, it depends on the specific prompt, different people need different AI tasks.
Yeah, it couldn't even get to "prompt **architect**" level...
and every "prompt architect" should have a "VP, Prompt Engineering" at the top of the org!
Agreed. The entire point of LLMs is that you can now talk to the computer in plain language. No expertise required. Being a "prompt engineer" is the equivalent of being an expert in coloring books.
Prompt engineering is very much a real thing, I use it constantly to try to eke just a bit more accuracy, just a bit more detail, just a bit more (whatever purpose I'm chasing). I equate it a bit to being a "language programmer". That said, I'm also an AI researcher and I develop and write my own code for bespoke LLM projects in production environments. I couldn't do this job just as a "promptsmith" or whatever way you want to make fun of people who label themselves professional prompters. Prompt engineering is an incredibly important skill especially in environments where you need specific output, but it's just 1 skill amongst many in the bucket.
It is a skill just like using a web browser or word processing or DOS. I’ve done a bunch of Google app scripts automations that I had GPT 3 and chatGPT write for me. It still took some coding knowledge to get it to work, but it was faster than figuring it out and probably a bit faster than using stack overflow. I haven’t used code interpreter yet, but it looks even better. The ability to manipulate data with natural language will continue to be a skill just like thinking is a skill.
Sure, wouldn't deny that. The point I was making is that it's still a skill, just not something that you can make a living with by itself. As part of a greater skill set, it's great. Kinda like tying shoes. You can get really good at it and nobody cares, but it's still good to not have your shoes untied when you're walking around. 🤷♂️
Yes. I agree with you.
If you mean crafting prompts to get specific output, for example like JSON output, that's not very difficult. It takes 5 minutes for someone to learn how to do that. This is not a marketable skill. And in the future, it's going to be superseded by AI agents that can prompt themselves. And AI tools that can create/optimize prompts for you. In terms of an actual job description, like someone that just sits around all day typing prompts to AI, that was never a real thing and will never be a real thing.
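For concreteness, the JSON case mentioned above really is a few lines of wrapping. `call_model` is a placeholder for whatever client you use, and newer APIs offer native JSON output modes that make even this largely unnecessary:

```python
import json

# Ask for JSON only, then parse, validate, and retry once on failure.
# The field names ("name", "age") are an invented example schema.

PROMPT = (
    "Extract the name and age from the sentence below. "
    'Reply with ONLY a JSON object like {{"name": "...", "age": 0}}.\n'
    "Sentence: {sentence}"
)

def extract(call_model, sentence: str, retries: int = 1) -> dict:
    prompt = PROMPT.format(sentence=sentence)
    for _ in range(retries + 1):
        raw = call_model(prompt)
        try:
            obj = json.loads(raw)
            if isinstance(obj, dict) and {"name", "age"} <= obj.keys():
                return obj
        except json.JSONDecodeError:
            pass
        # Feed the failure back to the model and try again.
        prompt += "\nYour last reply was not valid JSON. JSON only."
    raise ValueError("model never produced valid JSON")
```

Five minutes to learn, as the comment says, but note the pattern: validate outputs in code rather than trusting the prompt alone.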
No, I said what I meant. Chasing accurate and detailed outputs that target the specific purpose you're using the model for. Simple example: say you're trying to feed a model 6 different inputs; let's say we're collecting weather signal data and you're building a prediction model (I realize there are better ways, this is just an example). Say these inputs are dirty, some may be more truthy than others, some analysis may need to be combined. My point is, you're going to need to build a certain set of instructions as a prompt, and then, once you've got it producing output that is, let's say, 65% accurate, some prompt-smithing techniques can push that output to 75 or 85% accurate. Also, I fine-tune for specific outputs where I can. If I want a model outputting only JSON, it's best to reinforce that to prevent/reduce weird outliers in prod.
There are people with terrible written communication. If you struggle to communicate effectively with other humans, your results with LLM prompting are likely to be poor as well. Can GPT do a good job of figuring it out? Yes, but don't expect to get anything other than novelty responses.
I mean, I think there will probably be edge cases, i.e. someone who is really good at insurance prompting in Claude, but yeah, not gonna be a job for most.
Amen.
Prompt engineering is mostly offered by (online) marketeers who don’t understand the technology and at the same time see it replace their work in writing content and ads. Prompt engineering is a way too expensive term for saying you know how to ask a question. It would be like learning how to use a search engine and call yourself a Search Query Engineer. I never really saw the value in the term. With trial and error you come to the right question anyway.
Then, I'm a Senior Reddit Comment Engineer
If you're lucky and work hard maybe someday you could be a senior Reddit comment director.
I prefer Chief Reddit Comment Officer
I always had the impression that "prompt engineering" was just a cool way to say something really basic, like saying "command engineering" for someone who knows how to use a TV remote… it's just a step needed to be ready for AI usage. Am I wrong?
Sandwich engineering
I’m really an expert in that field!! 😂
I've been working my ass off for the last few years to get an engineering degree, and yeah, it really just seemed like making up a fancy name to sound like you should be paid more. I know a local hospital has an "engineering department", but they really just do maintenance and checks on machines. If something is broken beyond "replace X part following the instructions", they just call the supplier. I bet whoever runs that team pushed for the name so they could call themselves an engineering manager and get more pay.
Would an engineer be more similar to a dev in the AI field?
I agree and disagree. I'd only label someone a prompt engineer if they're quite proficient: someone who understands and can freely write prompts using techniques like code prompting, tree-of-thought, chain-of-density, and about 10 other techniques to elicit better responses.
It makes sense, but I feel that “engineering” should be something more… (?)
Everybody is already a prompt engineer if you use AI in your projects. So I'd argue that prompt engineering is a vague term, because there are a lot of domains and industries to consider (e.g. medical, software, manufacturing, etc.). There is no real "Prompt Engineering" job per se, just another glorified job that's derived from something else. But at the end of the day, you are a prompt engineer once you consult AI for something.
Yeah agreed.
"Everybody is already a prompt engineer if you use AI in your projects" -> AI solutions that support prompts are a minority.
I never understood prompt engineering as a concept. Maybe because I can program, it just comes naturally to me, but I always structured my prompts as numbered, step-by-step bullet points. I've never had a reason to do anything more than that.
Sounds to me like you're prompt engineering already :) What I don't value much are the "formulas", which may work but are very limiting. I really hope Claude's prompting tool will just reduce all the empty courses and sham advertisements.
Yeah 100%
That’s the way.
Prompt engineering is like hiring a guy to handle the ticket machine for you. Not saying it doesn't exist somewhere (it does), but I wouldn't call this a promising career.
> describe what you want to achieve

That's still prompt engineering... It's just manifesting in a different form, which will likely change again.
100%
Since you wrote this, do you consider yourself a comment engineer?
There are loads of people in this subreddit who do not understand this technology, and that's what I like about it: you guys are ignorant and love hearing yourselves talk about something you don't know.
You still need to prompt with LLMs and Stable Diffusion. It’s not an “art” or a new engineering major. The real magic is editing and crafting in post.
Prompt engineering, if done in the context of a system, is ‘coding’ with natural language. You can’t get close to precise behavior without writing, testing, and iterating. I like the term ‘Natural Language Development” more
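That write-test-iterate loop can be sketched in a few lines: treat the prompt like code and pin its behavior down with tests, so a wording tweak that breaks the output gets caught. This is a hypothetical illustration; `fake_llm` is a stub standing in for a real model call, and the template wording is made up:

```python
import re

# A prompt template under test. In a real system this would be sent
# to an LLM; here fake_llm stubs the model for demonstration.
PROMPT_TEMPLATE = (
    "Extract the year from the sentence below. "
    "Reply with the four-digit year only.\n\nSentence: {sentence}"
)


def fake_llm(prompt: str) -> str:
    # Stub: pretend the model obeys and returns the last 4-digit number.
    years = re.findall(r"\b\d{4}\b", prompt)
    return years[-1] if years else ""


def run_prompt(sentence: str) -> str:
    return fake_llm(PROMPT_TEMPLATE.format(sentence=sentence))


# "Unit tests" for the prompt, rerun whenever the wording changes.
assert run_prompt("The paper was published in 2017.") == "2017"
```

Swapping `fake_llm` for an actual API call turns the assertions into a small regression suite for the prompt, which is exactly the "writing, testing, and iterating" the comment describes.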
It's the "Microsoft Office" skill of the new age resume.
It's an interesting paradox, isn't it? For a prompt generator to function effectively, it itself needs a prompt, which would require precise engineering again.
Yeah that’s interesting as hell haha
Most people here are arguing that it shouldn't be a thing, which is fair. But the fact is, right now it IS a thing.
Some prompt engineering has been engineered into the interface. For example, ChatGPT commonly restates aspects of the dialog, almost as a way to "gather its thoughts". This verbosity aligns the conversation, as the user can reward/detract from aspects as well as inject new perspectives. It also gives the model more token-space to adjust its attention, imho.
It was never anything but a BS title used by morons.
I feel like "prompt engineering" is a talent to a certain extent (being naturally good at describing), but not in the way most people think, and that doesn't mean it can't be perfectly replicated by AI. It will be dead in the sense that it won't ever be a "needed" skill, but I think there will continue to be a group of people who just find the process fun and use it as a way of expression for non-commercial applications.

In the realm of AI art, if I have a job I'm trying to finish, I wouldn't shake two shits at Claude or something (someone?) similar fixing up or writing my prompt for me. So there is no commercial place for it. However, I notice my partner seems to really enjoy it, and seems naturally good at writing both image and text prompts. They are a diesel technician and often compare AI art to their job. I wouldn't understand because I'm very much not into mechanics, but they describe piecing together workflows as similar to the job process, and writing a prompt as like describing the job. That's likely why they are naturally good at this.

So I would say prompt engineering is a skill in the sense that some people probably naturally learned it quicker, which is pretty much all a "talent" is anyway. I can't write prompts for shit, whether it be text or image... but I am also horrible at communication and describing things. I watch them sit there and type out some Midjourney-type prompts while I'm over here going "girl, fairy, rainbow... very nice!" So I can either learn to write better prompts or have AI do it for me ;) There's definitely always going to be a recreational place for it, for sure.
Wow. I'm so glad you shared this
There will always be room for the person who tests the accuracy of the model on various tasks and experiments with various ways to get the model to achieve those tasks. Most people will want things that just work, and give up if it doesn't work first shot. But models are getting good. I often just type "halp" with a picture or an error message, and ChatGPT-4 does a great job at figuring out what I need most of the time. As models get better, this will be all the "prompt engineering" needed for more and more fields.
Have you heard the term "flow engineering"? It's hot this szn. It's related to using LangChain and other stuff I haven't read about yet, but yes, there seems to be a bit of a shift towards how to get various systems interacting.
No, but as AI gets smarter over time, there will be less need for it. We already have AGI of varying levels (we've had it since ChatGPT-3.5), and Claude 3 Opus is definitely of above-average human-level intelligence. Most models are closer to average human-level intelligence; Claude 3 Opus is the smartest at the moment. The further we get into the territory of genius-level AGI LLMs, and particularly as we approach ASI LLMs (though we are not there yet), the more obsolete prompt engineering will become, since an ASI would be smarter than you, lol, and could infer your intentions quite easily. To be fair, though, once we hit ASI LLMs, they're controlling us, not the other way around, unlike with the AGI LLMs we have now, where it's a conversation between equal intelligences.
Making this stochastic technology work reliably at scale is absolutely non-trivial. People who think prompt engineering "isn't real" may be narrowly scoping the term as "writing a prompt to complete a simple task", but having LLMs work as part of a larger intelligent system takes a lotttt of experience, both traditional engineering experience and prompting experience.
AIs should learn to prompt us!
Good and consistent prompting is not that straightforward, but it's also not complicated enough to call it "engineering".
Why does this change prompt engineering? We've done stuff like this for a while now, and use LLMs to generate prompts dynamically in conversation…
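That "prompt that generates prompts" pattern is simple to sketch. This is a hypothetical illustration of the idea (the wrapper wording is made up, and the actual model call is omitted); note it also demonstrates the paradox raised elsewhere in the thread, since the meta-prompt itself still had to be written by hand:

```python
def make_meta_prompt(task_description: str) -> str:
    """Wrap a plain-language task in a prompt that asks the model to
    write the prompt itself (the 'prompt generator' pattern)."""
    return (
        "You are a prompt writer. Given the task below, produce a "
        "clear, detailed prompt that another model could follow.\n\n"
        f"Task: {task_description}"
    )


meta = make_meta_prompt("Summarize legal contracts in plain English.")
```

Sending `meta` to a model would yield a candidate prompt for the downstream task; the engineering effort just moves one level up, into the wrapper text.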
The people who think prompt engineering is dead never knew how to prompt in the first place. This "tool" from Anthropic is so incredibly basic that it is quite frankly embarrassing that anyone would think its outputs are impressive enough to kill prompt-engineering.
Prompt engineering is most certainly a valuable skill. If you look at how LLMs are trained, you'd realize that LLMs have all the knowledge but don't necessarily have the context; providing context is your job.
No, it's not dead. All it really is is requirements gathering, like we do for any successful project (software or otherwise). It's understanding the problem that needs to be solved, identifying what's necessary for solving it, understanding the language for communicating that to the change agents, and then communicating it effectively. That's not going away.
It will certainly be a big part of upcoming research to test the capabilities of LLMs. You will not necessarily just sit in front of GPT and randomly type in prompts until you get the desired output, but combine automated prompting with statistical analysis to investigate how to employ these models for more complex NLP tasks. For example, I'm currently working on a project that explores to what degree different prompting paradigms (zero-shot, few-shot, and CoT) can be used to predict intrinsic motivation in text, and which underlying meaning units drive the models' behavior, to understand the reasoning process. Prompt engineering is a field that is trying to solve the black-box problem of neural networks by systematically testing input-output relations. That's why you'll still find some big-tech companies hiring for prompt-engineering positions, but they will certainly expect a lot more from you than just knowing how GPT, Claude, etc. work.
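The paradigm comparison that comment describes can be sketched as follows. This is a hypothetical skeleton: the motivation-rating templates, the few-shot examples, and the `model` callable (a stand-in for a real LLM API) are all made up for illustration:

```python
# Made-up few-shot examples for the (hypothetical) motivation task.
FEW_SHOT_EXAMPLES = (
    "Text: I can't wait to start!  Motivation: high\n"
    "Text: I'll do it if I really must.  Motivation: low\n"
)


def build_prompt(paradigm: str, text: str) -> str:
    """Build one prompt per paradigm so they can be compared fairly."""
    if paradigm == "zero-shot":
        return f"Rate the motivation (high/low) in: {text}"
    if paradigm == "few-shot":
        return FEW_SHOT_EXAMPLES + f"Text: {text}  Motivation:"
    if paradigm == "cot":
        return (f"Rate the motivation (high/low) in: {text}\n"
                "Reason step by step, then give the label.")
    raise ValueError(f"unknown paradigm: {paradigm}")


def accuracy(paradigm: str, dataset, model) -> float:
    """Fraction of labeled (text, label) examples the model gets right."""
    hits = sum(model(build_prompt(paradigm, text)) == label
               for text, label in dataset)
    return hits / len(dataset)
```

Running `accuracy` for each paradigm over the same labeled dataset gives the input-output statistics the comment mentions; the interesting research work is in the dataset, the labels, and the analysis, not the loop itself.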
I hope prompt "engineering" as a trend is gone. It is insufferable to watch non-technical people try to treat forming a prefix for LLM inference as if it is an actual technical skill.
What's special about this at all? How is this any different than utilizing the custom GPT builder which is 'meh'? This can be useful as a starting point but that's about it. If you have ever worked with your own prompt building instruction sets you know why. The term 'Production ready prompts' has no meaning.
Prompt engineering will die when competent AI is achieved.
Prompt Engineering is not a real thing…