Automation has always been something I’ve worked in and around since I started back on the help desk over twenty years ago… dealing with repeatable and repetitive tasks is what drives humans to find ways to make those processes easier and more efficient… it’s at the heart of innovation and has been throughout human history. Not to get too deep and philosophical here, but from the wheel, to the loom, to the printing press, to the car, to the rise of computing, there have been evolutionary steps along the road of human efficiency.

Today, with the rise of Generative AI, almost all of us are able to achieve outcomes that might have been far out of reach before this technology became mainstream and accessible.

A Throwback to 2007:

My first real example that sticks in my head of assisted or “generated” development was with Microsoft Exchange 2007 which, for those that remember, gave you the PowerShell output of the command you were using through the UI and made that available for copy and paste. From that starting point I was able to really start to learn how to construct PowerShell, build on top of the given code, and enhance my level of automation to the point where I could provision a tenant and all their new mailboxes directly from PowerShell. As the tweet below from 2018 shows, when VMware brought this into vSphere it triggered memories of learning through this feature all those years ago.

Fast forward 15+ years and we have gone through many iterations of tooling that has helped platform engineers, DevOps practitioners, and software developers create code that can be automatically completed, abstracted (think Terraform for infrastructure provisioning), and made more efficient. But with the rise of Generative AI, and with companies inserting AI into almost every common application and platform, we have reached a point in time that’s as scary as it is exciting.

The Dawn of the Prompt Engineer:

I posted this on X earlier this week after spending an hour or so interacting with and prompting ChatGPT to help me perform a fairly rudimentary task on a Veeam Backup & Replication server via the new, enhanced set of APIs in our 12.1 release. In essence, I was able to Prompt Engineer my way to a solution that automates a task that would normally be a multi-step process through a UI. Granted, this could also be done with PowerShell (to tie back to the example above), but in this case I wanted to leverage the APIs through some Python code.
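To give a flavour of what that back-and-forth produced, here is a minimal sketch of the sort of Python the prompting landed on: authenticate against the Veeam Backup & Replication REST API and list the configured jobs. The host name, credentials, port, endpoint paths, and x-api-version header are my illustrative assumptions rather than a definitive implementation, so check them against the 12.1 REST API reference before running anything like this.

```python
import requests

# NOTE: host, port, endpoint paths and the x-api-version value are assumptions
# for illustration -- verify against the Veeam B&R 12.1 REST API reference.
VEEAM_HOST = "https://vbr01.lab.local:9419"      # hypothetical server
API_VERSION = {"x-api-version": "1.1-rev1"}      # assumed header for 12.1

def get_token(username: str, password: str) -> str:
    """Authenticate with the REST API and return a bearer token."""
    resp = requests.post(
        f"{VEEAM_HOST}/api/oauth2/token",
        headers=API_VERSION,
        data={"grant_type": "password", "username": username, "password": password},
        verify=False,   # lab server with a self-signed cert; verify properly in production
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def list_jobs(token: str) -> list:
    """Return the backup jobs configured on the server."""
    resp = requests.get(
        f"{VEEAM_HOST}/api/v1/jobs",
        headers={**API_VERSION, "Authorization": f"Bearer {token}"},
        verify=False,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

if __name__ == "__main__":
    token = get_token("administrator", "changeme")   # placeholder credentials
    for job in list_jobs(token):
        print(job.get("name"), "-", job.get("type"))
```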

What used to take hours of tinkering, and sometimes just absolute defeat at the end of the day, can now be completed with relative efficiency and ease. This is what Generative AI and Prompt Engineering allow us to do today. You don’t need to be a dev to dev!

So… what is a Prompt Engineer?

A Prompt Engineer, in the context of Generative AI, is a professional who specializes in crafting queries or prompts that guide AI models, like ChatGPT or Midjourney, to produce specific, desired outcomes. These outcomes could range from generated text, code, and images to any other content type the AI is designed to create. This new role involves a deep understanding of the AI model’s capabilities, its nuances, and how different prompt structures can affect the output. The goal is to maximize the effectiveness and efficiency of interactions with the AI, whether for creative tasks, problem-solving, or data analysis.

There are many elements to this, but in essence, to be a Prompt Engineer there still needs to be some understanding of the desired outcome. The key to reaching that outcome is how you turn the stepped process that’s naturally in your head into questions to ask an LLM such as ChatGPT, and then receive some form of output that can be executed to test the success or failure of that outcome. From there you need to be able to work with the LLM to troubleshoot and then, maybe, even enhance the original output until you’re satisfied that it’s doing the job you want.
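To make that loop a little more concrete, here’s a rough sketch of the prompt → execute → feed-the-error-back cycle using the openai Python client. The model name, the system prompt, and the idea of pasting the traceback back in as the next question are my own illustrative choices, not a prescribed workflow.

```python
from openai import OpenAI

client = OpenAI()   # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str, history: list) -> str:
    """Send a prompt plus prior context and return the model's reply."""
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(
        model="gpt-4o",            # illustrative model choice
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

history = [{"role": "system", "content": "You write small, working Python scripts."}]

# 1. Turn the stepped process in your head into a question.
code = ask("Write a Python script that lists backup jobs via a REST API "
           "and prints any that failed in the last 24 hours.", history)

# 2. Run the output; if it breaks, feed the error straight back in and iterate.
error = "TypeError: string indices must be integers"    # example failure message
fixed = ask(f"That script raised: {error}. Please correct it.", history)
```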

You Still Need Base Level Skills to Prompt!

When looking at code generation, or even the wider array of skills that Generative AI with LLMs allows professionals in different industries to gain efficiencies in, you still need to sort of know what you are doing. Whether it be a marketer trying to craft a messaging document or an artist trying to create new artwork for a project, what Prompt Engineering doesn’t allow for is a lack of understanding of what you are trying to get out of the platform.

To wrap this up, I have gone ahead and asked ChatGPT to produce five key points to becoming a successful prompt engineer in the area of technology… some good points… because I prompted and guided the model to answer the request based on my inputs, style, and tone, as well as the blog post to this point… the irony isn’t lost, is it?

  • Leverage Technical Expertise to Understand AI Models: Utilize your background in technology to deeply understand the mechanics of generative AI models. This includes getting familiar with the architecture, training process, and algorithms that power these models. Technology professionals can leverage their understanding of software development, data science, and machine learning principles to grasp the nuances of AI behavior and performance.
  • Enhance Prompt Crafting with Programming Skills: Use your programming skills to automate and optimize the prompt engineering process. This could involve writing scripts to test multiple prompts automatically, analyze the effectiveness of different phrasing, or even use machine learning techniques to predict the success of certain prompt styles (see the sketch after this list). Familiarity with APIs for AI models, such as OpenAI’s GPT or similar platforms, can be particularly beneficial.
  • Adopt an Agile Approach to Experimentation: Borrowing from the principles of agile development, apply an iterative approach to experimenting with AI prompts. This means rapidly prototyping prompts, conducting A/B testing, and using feedback loops to refine your strategies. Technology professionals are well-versed in the agile methodology, making it a natural fit for the process of trial and error inherent in prompt engineering.
  • Stay Ahead with Continuous Learning and Networking: The tech industry is fast-paced, with constant innovations. Apply the habit of continuous learning to stay updated on the latest AI advancements. Participating in forums, online communities, and attending conferences not only helps in learning but also in sharing knowledge and networking with peers who have similar interests. Platforms like GitHub or Stack Overflow can also be valuable resources for exchanging ideas and finding inspiration.
  • Broaden Skills in Relevant Technologies: For technology professionals, expanding your skill set to include areas directly and indirectly related to AI can be a game-changer. This could mean delving deeper into natural language processing, computer vision, or even understanding the ethical implications of AI. Skills in data analysis, UX/UI design for creating human-centered AI applications, and cybersecurity to protect data and privacy in AI interactions are also valuable.
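Taking the second point as an example (flagged above), here is a rough sketch of what “writing scripts to test multiple prompts automatically” could look like. The prompt variants and the scoring function are placeholders I’ve invented for illustration; a real harness would score against whatever “good output” means for your task.

```python
from openai import OpenAI

client = OpenAI()   # assumes OPENAI_API_KEY is set in the environment

# Hypothetical prompt variants for the same task, compared head-to-head.
VARIANTS = {
    "terse": "List three risks of running unpatched backup servers.",
    "role-based": "You are a security auditor. List three risks of running unpatched backup servers.",
    "stepwise": "Think step by step, then list three risks of running unpatched backup servers.",
}

def score(answer: str) -> int:
    """Placeholder metric: count lines that look like list items."""
    return sum(1 for line in answer.splitlines()
               if line.strip().startswith(("1", "2", "3", "-", "*")))

results = {}
for name, prompt in VARIANTS.items():
    reply = client.chat.completions.create(
        model="gpt-4o",   # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    results[name] = score(reply)

# Rank the prompt styles by the placeholder score.
for name, points in sorted(results.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {points}")
```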