The Prompt and the Model: Selecting Your AI Brain

The Prompt and the Model: Selecting Your AI Brain

Choosing the right engine for the job. Learn how to navigate the Bedrock model catalog and master the art of Prompt Engineering.

Precision in Selection

Once you have your data, the next stage of the AI lifecycle is choosing your "Inference Engine." In the world of Amazon Bedrock, this means picking the specific Foundation Model (FM) that balances Cost, Speed, and Accuracy.

But picking the model is only half the battle. You then have to "Tune" your instructions. This is Prompt Engineering.


1. Choosing the Model: The Trade-off Triangle

You cannot have everything. You must choose two:

  • Accuracy (Opus-class models): Deep reasoning, massive knowledge, very accurate.
  • Speed (Haiku-class models): Near-instant responses, great for live chat.
  • Cost (Small Language Models): Fractions of a cent, great for high-volume tasks.

Evaluation Criteria:

  • Task: Is it simple (summarize) or complex (coding/logic)?
  • Modality: Do you need images, text, or both?
  • Context Length: Is the document 1 page or 500 pages?

2. The Art of Prompt Engineering

A "Prompt" is the instruction you give to the AI. A well-engineered prompt can turn a hallucinating AI into a precision business tool.

Effective Prompting Techniques:

  1. Persona: Give the AI a role. ("You are a senior lawyer with 20 years of experience").
  2. Context: Give the AI a specific background. ("Based on this provided contract...").
  3. Constraint: Tell the AI what NOT to do. ("Do not use legal jargon. Keep the answer under 200 words").
  4. Few-Shot Prompting: Give the AI 2-3 examples of a "Great" answer before you ask your question.

3. Delimiters: The Secret to Security

Use delimiters (like ### or --- or <text>) to separate your instructions from the data. This helps the AI understand where the "Order" ends and the "Work" begins, reducing the risk of prompt injection.

  • Bad Prompt: Summarize this text: The company is fine.
  • Good Prompt:
    You are an analyst. Please summarize the text contained within the `<data>` tags.
    <data>
    The company is fine.
    </data>
    

4. Visualizing the Selection Loop

graph TD
    A[Requirement] --> B{Task Complexity?}
    B -->|Simple| C[Model: Claude Haiku / Mistral 8B]
    B -->|High| D[Model: Claude 3.5 Sonnet / Llama 70B]
    
    C & D --> E[Initial Prompt Design]
    E --> F[Test Result]
    F -->|Bad Result| G[Prompt Tuning: Add Examples/Constraints]
    G --> E
    F -->|Good Result| H[Final Model/Prompt Pair]

5. Summary: Experiment Early

Model selection is an Empirical process. You should try your prompt on 3 different models in the Amazon Bedrock Playground before you commit to one for your application. You might find that a cheaper model is actually "Good enough" for your specific task!


Exercise: Identify the Prompt Technique

A user wants the AI to format its output as a valid JSON object. They provide the AI with a prompt that says: "Here are two examples of the output I want. Example 1: {x:1, y:2}. Example 2: {x:3, y:4}. Now, process this new data: x=5, y=6."

What is this technique called?

  • A. Zero-shot prompting.
  • B. Few-shot prompting.
  • C. Negative prompting.
  • D. Fine-tuning.

The Answer is B! Providing "Examples" (shots) in the prompt is Few-shot prompting.


Knowledge Check

?Knowledge Check

Which factor is most important when selecting a foundation model for a real-time customer support chatbot where response speed is the top priority?

What's Next?

We have a model and a prompt. But is it right? In the next lesson, we see how to measure "Truth." Find out in Lesson 3: Testing and evaluating AI outputs.

Subscribe to our newsletter

Get the latest posts delivered right to your inbox.

Subscribe on LinkedIn