How to Become an AI Prompt Engineer: The Skills You Need


  • You don’t need a computer-science background for the sought-after job of an AI prompt engineer.
  • Teodora Danilovic became one shortly after graduating with a philosophy degree.
  • She used her knowledge of logic and linguistics, but had to gain some technical skills on the job.

I studied philosophy at King’s College London because I was passionate about critical thinking and analytic questioning. Prompt engineers didn’t exist in the UK when I started my degree in 2019, but four years later, it feels like the best combination of my education and skills.

I joined AutogenAI, a company that has built a language model that companies use for bid and tender writing, in July 2022.

Being a prompt engineer involves finding the best inputs to use in our software, known as “Genny,” so that users get the best responses to their requests.

For bids and proposals, a request could involve expanding text, summarizing something to a specific word count, or finding and incorporating a crucial piece of evidence to support a claim within the bid.

AutogenAI has button functions such as rephrase, summarize, and tone of voice that users can employ to adapt their own text.

My job is to ensure the software delivers the best results

As a prompt engineer, I need to make sure the code and the more detailed prompts behind these buttons produce accurate, consistent results, and that engineers can adapt them as the AI system changes.
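To make that concrete, here is a rough sketch of what a prompt template behind a button such as “summarize” could look like. This is not AutogenAI’s actual code; the function, template, and parameter names are invented purely for illustration.

```python
# Hypothetical prompt template behind a "summarize" button.
# Invented for illustration; not AutogenAI's real code.

SUMMARIZE_TEMPLATE = (
    "Summarize the text below in at most {word_limit} words. "
    "Keep every factual claim and figure intact, write in plain English, "
    "and do not add information that is not in the original.\n\n"
    "Text:\n{user_text}"
)

def build_summarize_prompt(user_text: str, word_limit: int = 100) -> str:
    """Fill the template with the user's text and the requested word limit."""
    return SUMMARIZE_TEMPLATE.format(word_limit=word_limit, user_text=user_text)

# Example: the user highlights a draft paragraph and clicks "summarize".
print(build_summarize_prompt("Our team has delivered 40 similar projects since 2015.", word_limit=60))
```

A prompt engineer’s job is to keep refining wording like this so the instruction holds up across many different inputs.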

I always associated this type of generative AI with computer science or something technical, but there is actually an important linguistic element that made it a perfect job for me.

I was born in London, but my parents are Serbian. Serbian was my first language, which meant I learned English when I started preschool.

At university, I focused mostly on analytic philosophy, such as mathematical logic, formal logic, philosophy of science, and linguistics.

This helped me gain a deep understanding of how we use language — the nuances of tone, expression, finding the right words to convey meaning, and ultimately, how language is not just a passive medium to exchange ideas, but something that impacts the physical world. It has causes and effects like any other thing in the universe does.

During my last year at university, I worked at a cryptocurrency company, where I wrote legal documentation and recruited a data team.

Once I graduated, my former boss recommended me to the chief executive of AutogenAI because the executive was looking for philosophy-type graduates.

I assumed you had to have a background in computer science or machine learning to do this job

I was always interested in AI and cutting-edge technology but hadn’t considered it as a career option. 

I assumed you had to be technically proficient or have a background in computer science or machine learning to work in this sector.

One area I did cover in my degree was the ethics of AI. I studied the problems of consciousness, identity, truth, inherent bias, how creativity and work affect society, and more, but had little to no understanding of what a language model was before starting in my role.

It is fundamental to understand that when prompting a large language model, you are, in some way, communicating with it.

Many of the problems people encounter when using generative tools are due to the user assuming the models automatically understand intuitive social, contextual, and intentional cues.

My three main requirements for prompts are that they must be unambiguous, direct, and relevant.

The command, “make this paragraph better,” assumes the language model knows whether that means longer, shorter, clearer, or less boring.

Though this may be intuitive to you, it isn’t explicitly clear to the language model. It would be better to say, “improve the paragraph above by removing all grammatical errors.”
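As a hypothetical illustration of that difference, here are the two prompts side by side; the paragraph and wording are invented for the example.

```python
# Hypothetical example of a vague prompt versus an unambiguous one.

paragraph = "We has delivered the project on time, and our team are very experienced."

# Vague: the model has to guess what "better" means.
vague_prompt = f"Make this paragraph better:\n\n{paragraph}"

# Unambiguous, direct, and relevant: the instruction states exactly what to change.
explicit_prompt = (
    "Improve the paragraph below by removing all grammatical errors. "
    "Do not change its meaning, tone, or length.\n\n"
    f"{paragraph}"
)
```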

Ensuring the outputs of all our software’s features — such as rephrasing, summarizing or expanding, writing in a specific tone of voice, translating, or incorporating statistics — are consistent, optimal, and diverse is a continuous process of improvement. When new versions of language models that our tools are based on are released, their capabilities change as well.

When GPT-4 came out, we had to retest all our current prompts against it.

GPT-4 turned out to be incredibly useful in some ways, such as its ability to follow detailed instructions, but it can be very verbose in its responses.

Long answers responding to requests to “summarize” or “rephrase” aren’t great for bids, tenders, or proposals.
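Here is a minimal sketch of what that kind of retesting might look like; the prompts, word limits, and model hook are invented for illustration and are not our actual test suite.

```python
from typing import Callable

# Hypothetical prompt regression check run whenever a new model version ships.

TEST_CASES = [
    ("summarize", "Summarize the text below in at most 60 words: ...", 60),
    ("rephrase", "Rephrase the text below, keeping it under 120 words: ...", 120),
]

def check_verbosity(call_model: Callable[[str], str]) -> None:
    """Run each stored prompt through the model and flag over-long replies."""
    for name, prompt, word_limit in TEST_CASES:
        reply = call_model(prompt)
        words = len(reply.split())
        status = "OK" if words <= word_limit else f"TOO LONG ({words} words)"
        print(f"{name}: {status}")

if __name__ == "__main__":
    # Stand-in model that always replies at length, so the check can be run.
    check_verbosity(lambda prompt: "word " * 200)
```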

What I learned on the job

An understanding of language is a useful foundational tool for being a prompt engineer, but a technical understanding of the language model and its architecture is necessary to create great prompts.

Though I am not a developer, I picked up basic programming on the job. Understanding code helps me suggest and make edits myself.

I have also built up knowledge of how natural-language processing and machine learning work within AI, and of the research and development process that developers follow to build a language model.

I’ve heard there are prompt engineers in Silicon Valley getting paid up to $300,000 per year.  

I am not getting anywhere near that much, but I do get a respectable salary. Of course, the more experience and expertise you have, and the more disciplines your skills cover, the more you can earn. 

There are obviously fears that AI will take people’s jobs; I think it is helping our clients so they can work on more exciting tasks and get the more menial things out of the way.

Prompt engineering is just one new role that’s emerged from this ever-changing landscape, and it’s a role you may already have the skills for.


