Prompt driven engineering

YD Jin

· 3 min read
[Image: "Maybe it is just another pun, but wouldn't hurt to check out"]


Since ChatGPT was released, "prompt engineering" has become a hot topic. It can be a complex subject if you delve deep into it, and to some it may sound like a pun, but it is obviously useful, so there is no reason to hate it either. Here are some tips I have briefly summarised.

  • Add context
    Detailed context improves ChatGPT's responses. Provide context at the beginning of the conversation, or after receiving a response, to steer it toward the information you want.
  • Provide detailed examples.
  • Keep it concise
    ChatGPT can only remember a limited amount of context (about 4,096 tokens for the original model), so it is important to keep questions concise. You can use translation tools such as Google Translate or Papago to translate your questions and answers.
  • Assign roles
    You can assign roles to ChatGPT to get different answers. The "act as" keyword lets you give ChatGPT a role such as a marketing expert, a senior developer, or a successful CEO. You can also state your own role, for example that you are a front-end engineer with 4 years of experience.
  • Reuse others' prompts 🙂 https://github.com/f/awesome-chatgpt-prompts
  • Use parameters
    Some parameters change the responses that ChatGPT generates. The explanations below were generated by ChatGPT, so if anything is wrong, blame it :-)
    1. Top-k: A parameter that limits the selection of words to the top k most likely candidates at each step, leading to more coherent but potentially repetitive text. In front-end development, this could be used to generate pre-defined responses for a chatbot, limiting the number of possible responses to a few relevant ones.
    2. Temperature: A parameter that controls the level of randomness in the output, balancing creativity and coherence. In front-end development, this could be used to generate product recommendations for an e-commerce website, allowing for some randomization while still ensuring the recommendations are relevant.
    3. Max length: A parameter that determines the maximum length of the generated text, preventing excessively long outputs. In front-end development, this could be used to generate summaries of articles or reviews, limiting the length of the summary to a specific number of characters.
    4. Diversity: A parameter that controls the diversity of the output by encouraging the model to generate more diverse responses. In front-end development, this could be used to generate alternative product recommendations for users who have already seen a similar product, ensuring a wider range of options.
    5. Beam width: A parameter that controls how many candidate sequences the model keeps at each step; a narrower beam is faster but can sacrifice quality. In front-end development, this could be used to generate auto-completion suggestions for a search bar, limiting the suggestions to a few of the most likely ones.
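To make top-k and temperature concrete, here is a toy, self-contained sketch of how they shape sampling. This is not ChatGPT's actual implementation, and the candidate scores are made up for illustration:

```python
import math
import random

def sample_next_token(logits, k=3, temperature=1.0, seed=None):
    """Toy sketch of top-k sampling with temperature.

    logits: dict mapping candidate token -> raw score.
    k: keep only the k highest-scoring candidates (top-k).
    temperature: < 1.0 sharpens the distribution (more deterministic),
                 > 1.0 flattens it (more random).
    """
    rng = random.Random(seed)
    # Top-k: keep only the k most likely candidates.
    top = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)[:k]
    # Temperature: rescale scores before the softmax.
    scaled = [(tok, score / temperature) for tok, score in top]
    # Softmax (shifted by the max score for numerical stability).
    m = max(s for _, s in scaled)
    weights = [math.exp(s - m) for _, s in scaled]
    tokens = [tok for tok, _ in scaled]
    return rng.choices(tokens, weights=weights, k=1)[0]

logits = {"coffee": 3.2, "tea": 2.9, "juice": 1.1, "soda": 0.4}
# With k=1 only the top candidate survives, regardless of temperature.
print(sample_next_token(logits, k=1, temperature=0.1))  # always prints: coffee
```

Raising `k` and `temperature` together makes the output more varied; lowering them makes it more predictable, which matches the descriptions above.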
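The role tip and the parameters above come together in a single chat request. A minimal sketch assuming the OpenAI Chat Completions request format; the model name, role text, and parameter values are illustrative, not from the original post:

```python
# Build a request payload that assigns roles and sets sampling parameters.
payload = {
    "model": "gpt-3.5-turbo",  # assumed model name for illustration
    "messages": [
        # "Act as" role for ChatGPT, given via the system message...
        {"role": "system",
         "content": "Act as a senior front-end developer reviewing code."},
        # ...and your own role stated in the user message for extra context.
        {"role": "user",
         "content": "I am a front-end engineer with 4 years of experience. "
                    "How should I structure a large React app?"},
    ],
    "temperature": 0.7,  # moderate randomness
    "max_tokens": 300,   # cap the response length
}

print(payload["messages"][0]["role"])  # prints: system
```

This payload would then be sent to the chat completions endpoint; here it is only constructed, so the sketch runs without an API key.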

Then, happy prompting 👋

Author

YD Jin

Registered nurse, humble learner, arrogant challenger, heavy sugar consumer, and programmer 🙂

Copyleft - No rights reserved except personal information © 2023
Github link