
10 Ways Prompt Engineering Transforms Development Work

AI can transform software engineering work, and this article provides ten examples of how.

ChatGPT, Large Language Models (LLMs), Generative Pre-trained Transformer (GPT), OpenAI, Natural Language Processing (NLP).

Does the list ever end?


Since the launch of ChatGPT in November 2022, we’ve encountered these terms daily; the buzz hasn’t stopped. As a software engineer reading this article, you’ve probably wondered whether Large Language Models and artificial intelligence will replace your job at some point, haven’t you?


There is one truth about LLMs: they can replace you if you don’t know how to use them to their full potential. The people who aren’t losing their jobs to AI are the ones using AI to keep them.


Everyone can use ChatGPT, but if you can do what anyone can do with Large Language Models, what makes you stand out?

Most of us have a general sense of what artificial intelligence can and cannot do. We know, for instance, that ChatGPT can generate and interpret images, understand our voices, and respond accordingly.

But here’s what most people don’t know about these models: they must be prompted accurately and precisely to deliver the exact results you want and need.


‘Prompting’ in this context means providing instructions to the models to trigger a response or action. If you can use ChatGPT, you can prompt. It’s not just about prompting; it’s about framing the prompt in a way that elicits the desired response.

LLMs are powerful and innovative, but they aren’t mind-readers. You must learn how to craft prompts that cater to your needs.

Don’t worry; I’m not here to mock or judge you, and I’m certainly not here to scare you. I’ll simply show you how to be a better software engineer by leveraging the power of artificial intelligence to do exactly what you want and more.


My aim is to teach you how to improve your prompting techniques to get the best results from LLMs. Every scenario below applies directly to your daily work as a software engineer.

What is prompt engineering?

As I explained earlier, prompting is the simple act of interacting with the model. Prompt engineering is the art or practice of intentionally designing these prompts or instructions to produce optimal results.

The best way to illustrate this is by giving an example. Let’s check out two prompts:

🔴 Bad prompt:

Create an ideal weekly schedule for a remote software engineer.

🟢 Good prompt:

Design an ideal weekly schedule for a remote software engineer that includes time blocks for deep work, meetings, breaks, exercise, and personal time. Ensure the schedule accommodates walking my dog, working on my side project, checking my kids’ homework, spending quality time with my family, and cooking meals. Prioritize productivity and work-life balance, while considering typical software development tasks such as coding, code reviews, and project management.

Poorly crafted prompts mislead LLMs, leading to misinterpretations and, in many cases, irrelevant responses. Bad prompts lack specific details and context, making it difficult for the model to understand exactly what you want.

Plus, if you’re not using the right keywords or if you use vague or overly complicated language, the AI model may produce irrelevant or incorrect information. In contrast, good prompts are specific, clear, and grounded in context, ensuring the best results.

Bad prompts limit our control over LLMs. For instance, if a user asks ChatGPT, “Where can I purchase a shirt?” the chatbot may give an unhelpful response. However, if the user refines their prompt and asks, “Where can I purchase a shirt in Lagos, Nigeria, specifically Lekki? Provide the three nearest locations that may have t-shirts near my location,” the chatbot will provide more relevant and useful results.


Now that we’ve highlighted the differences between good and bad prompts, let’s explore how software engineers can use LLMs in their daily work to boost productivity, save time, and increase efficiency.

10 Ways Software Engineers Can Write Effective Prompts

Here are ten ways the art of prompting AI can transform your daily work as a software engineer or developer.

1. Be as specific as possible

Provide as much context as possible when using an LLM. Be as specific as you can when prompting. Include details about the code error, possible solutions you’ve tried, and the methods involved.

Details, details, details!

I’m a big advocate of using the voice feature on ChatGPT rather than always typing, because articulating your thoughts verbally forces you to include more detail. Once you’ve articulated your request, you can edit or tweak the transcribed text. As a software engineer, I know it can be tempting to just dump the code in the chatbox and ask ChatGPT to fix it.


Instead of doing this, take this approach:

  • Start by telling the LLM what you’re trying to achieve. Walk it through the problem you’re facing, including any bugs or error messages.
  • Explain the solutions you’ve already tried to resolve the issue.
  • Ask the LLM to fix the problem. As it does, request that it walks you through the solution. Asking an LLM to walk you through the solution can help you identify where the error originated or where the LLM may have missed something.


For example, Oluebubechukwu wants to build a calendar feature in Rails. Below are two prompts, one bad and one good:

🔴 Bad prompt:

How do I build a calendar in Rails?

This prompt is too vague. The LLM doesn’t know what kind of calendar Oluebubechukwu is trying to build, what her requirements are, or what constraints she has.

🟢 Good prompt:

I’m developing a scheduling feature in my Rails application where I need to implement a custom calendar. I don’t want to use any third-party libraries because I need full control over the functionality and design to match our existing design system, which uses Tailwind CSS. The calendar should allow users to create, edit, and delete events, and it needs to integrate with our existing event models. I’ve set up the basic models for events, but I’m unsure how to render the calendar view and handle date selections effectively. Could you walk me through, step by step, how to create a custom calendar view in Rails that integrates with our existing models and design system?

See the difference? By providing detailed context—Oluebubechukwu’s goals, constraints, the technologies she is using, and where she is stuck—she gives the model everything it needs to provide a precise and helpful response.

2. Synonyms are your best friend

Sometimes, your first prompt may not be clear to the model. Instead of asking for clarification, it may generate what it thinks you want. Many people give up on AI at this point, assuming it can’t handle the task. All you need to do is reword or rephrase your statement so the AI can better understand your request.

Here is an example using JavaScript exception handling:

🔴 First attempt:

My JavaScript code throws an error when I try to loop through data. Can you help me fix it?

This prompt gives a little context—there’s an error when looping through data—but it’s still too vague. The model doesn’t have enough specific information to provide a meaningful answer.

🟢 Rephrased prompt using synonyms:

I’m encountering a TypeError in my JavaScript code: “Cannot read property ‘length’ of undefined” when trying to loop over an array using a for loop. It seems like the array might be undefined at the point of iteration. Here is the code snippet:

for (let i = 0; i < myArray.length; i++) {
  console.log(myArray[i]);
}

Could you help me understand why this exception is occurring and how to resolve it?

You provide clarity by rephrasing and using synonyms like “encountering” instead of “throws an error,” and specifying the exact error message. The LLM can now offer a more accurate and helpful response.

3. Give the LLM room to think

Let’s examine these two prompts:

🔴 Bad prompt:

How can I improve my code?

This is too broad and doesn’t give the LLM any specific area to focus on.

🟢 Good prompt:

I’m working on optimizing a Python script that processes large datasets for machine learning. The current code takes too long to run and consumes a lot of memory. Could you outline strategies for improving the script’s performance, considering factors like algorithm efficiency, data structures, and parallel processing? Then suggest the most effective approach based on these factors.

The good prompt allows the model to consider specific factors like algorithm efficiency, data structures, and parallel processing and encourages a thoughtful, in-depth response.
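To make that concrete, here is a minimal sketch of one strategy such a prompt might surface: streaming a large file with a generator instead of loading it all into memory. The file name and column layout here are hypothetical.

import csv

def stream_rows(path):
    """Yield rows one at a time instead of materializing the whole file."""
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader, None)  # skip the header row
        for row in reader:
            yield row

# Only one row is held in memory at a time, so memory use stays flat
# no matter how large the file is. The file and its numeric second
# column are hypothetical.
total = sum(float(row[1]) for row in stream_rows("measurements.csv"))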

Give the model room to think.

4. Ask the LLM to adopt a persona

Asking an LLM to adopt a persona is known as role prompting. Role prompting aims to get the AI to approach or solve problems by taking on a particular role or character.

There are three simple steps to using role prompting:

  • Determine a character or role that will be relevant to the problem or question.
  • Set a scene or introduce the role so the AI understands the context.
  • Once the context and role are established, ask the question or give the task. Ensure it’s directly related to the chosen role.

Here is an example:

You are a senior Python developer with 10 years of experience working in a fintech company that emphasizes code readability and compliance with PEP 8 standards. I’ve written a Python script that successfully processes financial transactions but am unsure if it adheres to our company’s best practices and coding standards. Could you review the following code and suggest improvements to align it with PEP 8 guidelines and our internal code quality standards?

#!/usr/bin/python
# -*- coding: utf-8 -*-
import threading
import logging

logging.basicConfig(
    filename="transactions.log",
    level=logging.INFO,
)

class TransactionProcessor:
    def __init__(self, transactions):
        self.transactions = transactions
        self.lock = threading.Lock()

    def process_transaction(self, transaction):
        # Process the transaction
        if transaction["amount"] > 0:
            transaction["sender"]["balance"] -= transaction["amount"]
            transaction["receiver"]["balance"] += transaction["amount"]
            logging.info("Transaction successful: %s", transaction)
        else:
            logging.error("Invalid transaction amount: %s", transaction)

    def run(self):
        threads = []
        for transaction in self.transactions:
            t = threading.Thread(
                target=self.process_transaction,
                args=(transaction,),
            )
            threads.append(t)
            t.start()

        for t in threads:
            t.join()

transactions = [
    {
        "sender": {"name": "Alice", "balance": 1000},
        "receiver": {"name": "Bob", "balance": 500},
        "amount": 200,
    },
    {
        "sender": {"name": "Charlie", "balance": 800},
        "receiver": {"name": "Dana", "balance": 300},
        "amount": -50,
    },
]

processor = TransactionProcessor(transactions)
processor.run()

By setting the role and context, you guide the LLM to provide an answer that’s tailored to your specific needs.
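Role prompting also works outside the chat UI. If you call a model programmatically, the persona usually goes in the system message. Below is a minimal sketch using OpenAI’s Python SDK; the model name, file path, and API key setup are assumptions, not part of the original example.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical path to the script shown above.
code_under_review = open("transaction_processor.py").read()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder: use whichever model you have access to
    messages=[
        {
            # The system message carries the persona and its context.
            "role": "system",
            "content": "You are a senior Python developer with 10 years "
            "of experience at a fintech company that emphasizes code "
            "readability and compliance with PEP 8 standards.",
        },
        {
            # The user message carries the actual task.
            "role": "user",
            "content": "Review this code and suggest improvements:\n\n"
            + code_under_review,
        },
    ],
)

print(response.choices[0].message.content)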

5. Encourage step-by-step solutions

Asking the LLM to provide step-by-step solutions to a problem is called Chain-of-Thought (CoT) prompting: you explicitly tell the model to reason step by step before arriving at the final answer. This encourages the LLM to explain the reasoning behind its solution, helps it stay focused on the problem, and prevents it from going off track.

Certain phrases encourage a detailed breakdown of a solution:

  • Walk me through the process step-by-step.
  • Break down the problem into smaller tasks.
  • Consider all factors, such as …
  • Outline each step clearly as you approach finding the bug.

Here is an example of Segun debugging a complex database issue:

🟢 Good prompt:

I’m working with a distributed PostgreSQL database cluster that handles high-volume transactions for a financial application in real-time. Recently, we’ve been experiencing intermittent data replication lag and occasional data inconsistency across nodes. I’ve tried optimizing the network configuration and increasing the replication slots, but the problem persists. As a senior software engineer with over 15 years of experience in debugging complex database systems, could you walk me through, step by step, how to diagnose and resolve this replication lag and data inconsistency issue? Please consider factors like network latency, write conflicts, and potential configuration pitfalls.

By requesting a step-by-step solution, Segun encourages the LLM to reason carefully about the problem at hand and provide a comprehensive answer.

6. Request best practices and standards

You can never go wrong by asking the model to adhere to best practices and standards. This ensures the model’s answer aligns with set procedures, minimizing potential errors.

Here’s an example of following coding standards in Python:

You are a seasoned Python developer working in a healthcare tech company that requires strict adherence to coding standards for security and compliance reasons. I’ve developed a Python module that handles patient data encryption and decryption. While the code works, I’m not certain it meets our company’s security protocols and best practices. Can you review the code and suggest improvements to align it with industry standards like PEP 8 and our internal security guidelines?

By specifying the need for best practices and standards, you guide the LLM to provide an answer that is not only functional but also compliant with important guidelines.

7. Ask for alternative perspectives

This is as simple as it sounds: ask the model for alternative perspectives. Doing so promotes creative responses that you might not have considered and opens the door to thinking beyond a single approach.

Imagine we want to evaluate server-side rendering in our app:

🔴 Bad prompt:

Should we use server-side rendering in our React app?

This closed question doesn’t encourage the LLM to explore different viewpoints.

🟢 Good prompt:

Consider both the frontend developers’ and backend developers’ perspectives on implementing server-side rendering (SSR) in our React application. From each viewpoint, discuss SSR’s potential benefits and drawbacks in terms of performance, development complexity, and user experience.

By asking for multiple perspectives, you get a more rounded understanding of the potential solutions.

8. Set a problem-solving framework

Beyond mastering individual prompts, you can also guide the model to follow a structured problem-solving framework. By setting a clear approach (defining the problem, generating potential solutions, evaluating options, and selecting the best course of action), you guide the model to think critically and systematically through complex tasks.

For instance:

🔴 Bad prompt:

How do I make my code better?

This prompt is too generic and doesn’t provide any direction.

🟢 Good prompt:

I’m working on a Python codebase where I’ve noticed several functions performing similar operations with minor differences. I want to refactor the code to adhere to the DRY (Don’t Repeat Yourself) principle. Using this principle as a problem-solving framework, could you outline each step I should take to identify redundant code, abstract common functionality, and update the codebase to be more maintainable and efficient?

By setting the framework, you help the LLM provide a structured and effective solution.
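To illustrate the kind of refactor this prompt is asking for, here is a small made-up before/after sketch: the function names and discount rates are hypothetical, but the pattern, collapsing near-duplicate functions into one parameterized helper, is the DRY step the prompt describes.

# Before: three near-identical functions differing only in a constant.
def price_with_student_discount(price):
    return round(price * (1 - 0.20), 2)

def price_with_senior_discount(price):
    return round(price * (1 - 0.30), 2)

def price_with_member_discount(price):
    return round(price * (1 - 0.10), 2)

# After: the common logic is written once; the differences become data.
DISCOUNTS = {"student": 0.20, "senior": 0.30, "member": 0.10}

def discounted_price(price, customer_type):
    rate = DISCOUNTS.get(customer_type, 0.0)
    return round(price * (1 - rate), 2)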

9. Let the model ask you questions

A good way to ensure the model clearly understands your needs is to ask it to pose clarifying questions before providing a solution. This way, the model can address any gaps in its understanding, leading to a more tailored response.

For example, when optimizing a slow SQL query, instead of using a vague prompt, try this:

🔴 Bad prompt:

My SQL query is slow. Fix it.

This prompt lacks context and doesn’t give the LLM enough information to provide a meaningful solution.

🟢 Good prompt:

I’m experiencing performance issues with a SQL query in my application, causing significant slowdowns during peak usage times. The query involves multiple JOIN operations across large tables, and despite indexing the primary keys, the performance hasn’t improved. Before you provide a solution, please ask me any questions you need to better understand the query and the database schema.

By encouraging the model to ask clarifying questions, you ensure that the assistance you receive is tailored to your specific problem.

10. Incorporate examples and analogies

Don’t just ask the AI for a solution without giving it examples or analogies.


Analeia has a problem. She is trying to optimize a recursive function. Here is the prompt she uses:

I’m trying to optimize a recursive function in Python that calculates Fibonacci numbers, but it’s extremely slow for large inputs. For example, calculating fib(50) takes an impractically long time. Here’s the code I’m using:

def fib(n):
    if n <= 1:
        return n
    else:
        return fib(n-1) + fib(n-2)

Could you explain how I might improve the performance of this function, perhaps by using memoization or converting it to an iterative approach?

By providing a concrete example and even including the code, Analeia helps the LLM understand exactly what she’s dealing with, making it easier to provide a useful and accurate solution.
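For reference, here is a minimal sketch of the two fixes Analeia’s prompt mentions. Both replace the exponential blow-up of the naive recursion with linear-time computation.

from functools import lru_cache

# Memoized version: each fib(n) is computed once, then cached.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n <= 1:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Iterative version: constant memory, no recursion depth to worry about.
def fib_iter(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_memo(50))  # 12586269025, returned almost instantly
print(fib_iter(50))  # same result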

These examples will help LLMs better understand your concerns and provide useful responses.

Limitations of LLMs

While AI is fantastic and makes our lives easier, LLMs have limitations. These include:

1. LLMs are limited by computational constraints

When working with LLMs, it’s important to remember that they have limits on how much text they can process at once. The term “tokens” refers to how LLMs measure text. On average, one token is about 4 characters. Various LLMs have different token limits.

In practice, this means that if you attempt to paste a multi-page document into an LLM prompt, you may get an error message if it exceeds the token limit for that particular model.


The solution to this is to break down the text into smaller chunks before entering it into the LLM’s prompt.
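A rough sketch of that chunking step, using the four-characters-per-token estimate mentioned above (a proper tokenizer, such as OpenAI’s tiktoken library, would be more accurate):

def chunk_text(text, max_tokens=2000, chars_per_token=4):
    """Split text on paragraph boundaries, keeping each chunk under a
    rough token budget. A single paragraph longer than the budget still
    becomes its own (oversized) chunk."""
    max_chars = max_tokens * chars_per_token
    chunks, current = [], ""
    for paragraph in text.split("\n\n"):
        if current and len(current) + len(paragraph) + 2 > max_chars:
            chunks.append(current)
            current = paragraph
        else:
            current = f"{current}\n\n{paragraph}" if current else paragraph
    if current:
        chunks.append(current)
    return chunks

# Each chunk can then be sent to the model as a separate prompt.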

2. LLMs sometimes hallucinate

LLMs can be ineffective at times because they tend to “hallucinate.” This means they may generate incorrect, false, or misleading information that might seem plausible or realistic. While the responses might be grammatically correct, they can be factually inaccurate.

AI models hallucinate for several reasons:

  • A lack of correct or updated data in the training dataset.
  • Misinterpretation of ambiguous inputs or prompts, leading to inaccurate results.

The solution to this limitation is to ask the LLM to double-check its output or manually verify the information provided to ensure accuracy.

3. LLMs have limited knowledge and can’t update their knowledge base

LLMs are not natively connected to the internet and cannot automatically learn or acquire new knowledge beyond their training data. This means that LLMs are essentially a snapshot of the world’s knowledge at the time of their last update. They cannot independently learn new information or update their knowledge base in real-time.

This limitation means that when using LLMs, particularly for rapidly evolving subjects, they may be out-of-date or lack knowledge of the latest developments. As a result, they may not always provide the most current or accurate information.

4. LLMs fail to retain knowledge across sessions

AI models do not retain knowledge passed to them by users across different sessions. For example, if you tell the AI your favorite game and TV show during one conversation, it will not remember this information in a subsequent conversation. It essentially “forgets” everything it’s told after the session ends.

This “forgetting” happens because AI models are basically stateless inference machines. They make predictions based on their pre-trained knowledge, but they aren’t updating their models with each interaction.

LLMs cannot learn individual preferences, remember personal information, or expand their knowledge base based on ongoing interactions. Therefore, they can become outdated or limited when compared to continually updated systems.
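In practice, this is why API-based chat applications resend the entire conversation on every turn: the “memory” lives in your code, not in the model. A rough sketch, again assuming OpenAI’s Python SDK and a placeholder model name:

from openai import OpenAI

client = OpenAI()
history = []  # the conversation state lives here, not inside the model

def ask(question):
    history.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=history,  # the full history is resent on every call
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

ask("My favorite game is chess.")
# Answers correctly only because the first turn was resent above.
print(ask("What is my favorite game?"))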

Takeaway

It’s one thing to write an AI prompt, and it’s another to get the results you want.


A good way to prompt models effectively going forward is to keep in mind that the model knows nothing about you or your problem before your request.

This is the art of prompt engineering: understanding that the LLM is a blank page, shaping its responses entirely from the information and guidance you offer.

Understand every bug

Uncover frustrations, understand bugs and fix slowdowns like never before with OpenReplay — the open-source session replay tool for developers. Self-host it in minutes, and have complete control over your customer data. Check our GitHub repo and join the thousands of developers in our community.
