this post was submitted on 26 Jun 2023
21 points (100.0% liked)

Actually Useful AI


TL;DR (by GPT-4 🤖):

Prompt Engineering, or In-Context Prompting, is a method used to steer Large Language Models (LLMs) toward desired outcomes without changing the model weights. The article discusses techniques such as basic prompting, instruction prompting, self-consistency sampling, Chain-of-Thought (CoT) prompting, automatic prompt design, augmented language models, retrieval, and the use of programming languages and external APIs. The effectiveness of these techniques can vary significantly among models, necessitating extensive experimentation and heuristic approaches. The article emphasizes the importance of selecting diverse and relevant examples, giving precise instructions, and using external tools to enhance the model's reasoning skills and knowledge base.

Notes (by GPT-4 🤖):

Prompt Engineering: An Overview

  • Introduction
    • Prompt Engineering, also known as In-Context Prompting, is a method to guide the behavior of Large Language Models (LLMs) towards desired outcomes without updating the model weights.
    • The effectiveness of prompt engineering methods can vary significantly among models, necessitating extensive experimentation and heuristic approaches.
    • This article focuses on prompt engineering for autoregressive language models, excluding Cloze tests, image generation, and multimodal models.
  • Basic Prompting
    • Zero-shot and few-shot learning are the two most basic approaches for prompting the model.
    • Zero-shot learning involves feeding the task text to the model and asking for results.
    • Few-shot learning presents a set of high-quality demonstrations, each consisting of both input and desired output, on the target task.
  • Tips for Example Selection and Ordering
    • Choose examples that are semantically similar to the test example.
    • Keep the example set diverse, relevant to the test sample, and in random order to avoid majority-label and recency biases.
  • Instruction Prompting
    • Instruction prompting involves giving the model direct instructions, which can be more token-efficient than few-shot learning.
    • Models like InstructGPT are fine-tuned with high-quality tuples of (task instruction, input, ground truth output) to better understand user intention and follow instructions.
  • Self-Consistency Sampling
    • Self-consistency sampling involves sampling multiple outputs and selecting the best one out of these candidates.
    • The criteria for selecting the best candidate can vary from task to task.
  • Chain-of-Thought (CoT) Prompting
    • CoT prompting generates a sequence of short sentences that describe the reasoning logic step by step, leading to the final answer.
    • CoT prompting can be either few-shot or zero-shot.
  • Automatic Prompt Design
    • Automatic Prompt Design involves treating prompts as trainable parameters and optimizing them directly on the embedding space via gradient descent.
  • Augmented Language Models
    • Augmented Language Models are models that have been enhanced with reasoning skills and the ability to use external tools.
  • Retrieval
    • Retrieval supports tasks that require knowledge more recent than the model's pretraining cutoff, or access to an internal/private knowledge base.
    • Many methods for Open Domain Question Answering depend on first doing retrieval over a knowledge base and then incorporating the retrieved content as part of the prompt.
  • Programming Language and External APIs
    • Some models generate programming language statements to resolve natural language reasoning problems, offloading the solution step to a runtime such as a Python interpreter.
    • Other models are augmented with text-to-text API calls, guiding the model to generate API call requests and append the returned result to the text sequence.
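The zero-shot vs. few-shot distinction above boils down to prompt assembly. A minimal sketch, assuming a classification task; the function names and templates are my own, not from the article, and a real LLM call would consume the returned string:

```python
# Sketch of zero-shot vs. few-shot prompt construction for a hypothetical
# sentiment task. Only the prompt strings are built here; no model is called.

def zero_shot_prompt(task: str, text: str) -> str:
    # Zero-shot: only the task description and the input, no demonstrations.
    return f"{task}\n\nText: {text}\nLabel:"

def few_shot_prompt(task: str, demos: list[tuple[str, str]], text: str) -> str:
    # Few-shot: prepend high-quality (input, desired output) demonstrations.
    shown = "\n\n".join(f"Text: {i}\nLabel: {o}" for i, o in demos)
    return f"{task}\n\n{shown}\n\nText: {text}\nLabel:"

prompt = few_shot_prompt(
    "Classify the sentiment of each text as positive or negative.",
    [("I loved this movie.", "positive"), ("Terrible service.", "negative")],
    "The food was great.",
)
print(prompt)
```

Note that the demonstrations use the same template as the final query, so the model can pattern-match the expected output format.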
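Self-consistency sampling can be sketched as a majority vote over sampled answers. Here `samples` stands in for the final answers extracted from several temperature-sampled generations of a hypothetical model:

```python
# Sketch of self-consistency sampling: draw several reasoning samples and
# keep the final answer the most chains agree on.
from collections import Counter

def self_consistency(samples: list[str]) -> str:
    # Majority vote is the common selection criterion; as noted above,
    # other tasks may use task-specific criteria instead.
    return Counter(samples).most_common(1)[0][0]

# Final answers from three hypothetical sampled chains; "18" wins the vote.
print(self_consistency(["18", "22", "18"]))
```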
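The two CoT variants differ only in what gets put into the prompt. A sketch, assuming the zero-shot trigger phrase popularized by Kojima et al.; the demo format is illustrative:

```python
# Sketch of zero-shot vs. few-shot chain-of-thought prompt construction.

def zero_shot_cot(question: str) -> str:
    # Zero-shot CoT: append a trigger phrase to elicit step-by-step reasoning.
    return f"Q: {question}\nA: Let's think step by step."

def few_shot_cot(demos: list[tuple[str, str, str]], question: str) -> str:
    # Few-shot CoT: each demo is (question, reasoning chain, final answer),
    # so the model imitates the worked reasoning before answering.
    parts = [f"Q: {q}\nA: {chain} The answer is {a}." for q, chain, a in demos]
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

print(zero_shot_cot("If I have 3 apples and eat 1, how many remain?"))
```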
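The retrieve-then-prompt pattern for open-domain QA can be sketched in a few lines. Naive word overlap stands in for a real retriever (BM25, dense embeddings); the corpus and prompt template are invented for illustration:

```python
# Sketch of retrieval-augmented prompting: fetch the most relevant passages,
# then paste them into the prompt as grounding context.

def retrieve(question: str, corpus: list[str], k: int = 1) -> list[str]:
    # Score each document by how many question words it shares (a crude
    # stand-in for a real retrieval model).
    q = set(question.lower().split())
    ranked = sorted(corpus, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def rag_prompt(question: str, corpus: list[str]) -> str:
    context = "\n".join(retrieve(question, corpus))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

corpus = ["The internal wiki says deploys happen on Tuesdays.",
          "Cats sleep most of the day."]
print(rag_prompt("When do deploys happen?", corpus))
```

This is how private or post-cutoff knowledge ends up in the prompt without touching the model weights.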
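Offloading the solution step to a runtime can be sketched as follows. The "generated" program is hard-coded here; in practice an LLM emits it, and executing untrusted generated code requires proper sandboxing, not a bare `exec`:

```python
# Sketch of PAL-style offloading: the model writes Python statements for the
# arithmetic, and the interpreter, not the model, computes the answer.

generated = "apples = 23 - 20 + 6\nanswer = apples\n"

def run_offloaded(program: str) -> int:
    # Execute the generated statements and read back the `answer` variable.
    scope: dict = {}
    exec(program, scope)
    return scope["answer"]

print(run_offloaded(generated))  # 23 - 20 + 6 = 9
```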
1 comment
sisyphean 9 points 1 year ago (last edited 1 year ago)

As AI hype approaches fever pitch, "Prompt Engineering" has become another buzzword, with an insane number of guides and tutorials cropping up on the internet. Unfortunately, a large portion of these resources offers little more than cookie-cutter strategies, contributing to a growing skepticism around the term itself.

It's easy to dismiss it as just another fad, but doing so overlooks the genuine engineering behind effective communication with LLMs. This guide shows some strategies that really work and are based on sound principles instead of guesswork by AI-bros compiled into yet another useless infographic.

I hope it will be just as useful to you as it was for me.