From Triplets to Text: The Graph-to-Prompt Logic

How to tell the AI what you found. Learn the specific techniques for translating raw graph triplets into descriptive natural language sentences that maximize the LLM's understanding.

A graph database returns data in a structured format: (Node)-[Relationship]->(Node). But an LLM doesn't "think" in graph math; it thinks in natural language. If you dump a raw JSON list of relationships into a prompt, the model can get confused by the symbols and keys. To get the most out of Graph RAG, you must translate the graph into a narrative.

In this lesson, we will look at the Serialization Strategy. We will learn three different ways to present graph facts to an LLM: the Triplet List, the Sentence Narrative, and the Relational Table. We will see which format is best for different reasoning tasks and how to use "Template Injection" to make the context feel natural to the AI.


1. The Raw Triplets Format (Compact)

The Input: {"s": "Sudeep", "p": "WORKS_AT", "o": "Google"} The Prompt Format:

FACT: (Sudeep) --[WORKS_AT]--> (Google)
FACT: (Google) --[LOCATED_IN]--> (NYC)

Pros: Very token-efficient. Good for models trained on Cypher/Code. Cons: Can be "Noisy" for smaller models that struggle with logic symbols.
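
If you are assembling this format in Python, a small helper can turn the raw JSON triplets into FACT lines. This is a minimal sketch; the function name and the shape of the input dicts are illustrative assumptions, not a fixed API.

```python
def triplet_to_fact(triplet: dict) -> str:
    """Render one {"s": ..., "p": ..., "o": ...} dict as a compact FACT line."""
    return f"FACT: ({triplet['s']}) --[{triplet['p']}]--> ({triplet['o']})"

triplets = [
    {"s": "Sudeep", "p": "WORKS_AT", "o": "Google"},
    {"s": "Google", "p": "LOCATED_IN", "o": "NYC"},
]

# Join the lines into the prompt block shown above.
fact_block = "\n".join(triplet_to_fact(t) for t in triplets)
print(fact_block)
# FACT: (Sudeep) --[WORKS_AT]--> (Google)
# FACT: (Google) --[LOCATED_IN]--> (NYC)
```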


2. The Sentence Narrative (High Accuracy)

The Input: Same as above.

The Prompt Format:

```
The following facts are verified from our knowledge graph:
- Sudeep works at the company Google.
- Google is located in the city of NYC.
```

Pros: Maximum understanding for the LLM; it reads like a story.
Cons: Uses more tokens per fact and requires a Python "translation" layer (shown in section 4 below).


3. The Relational Table (Comparison Tasks)

If you are comparing 10 different projects, a table is best.

| Project Name | Lead | Budget |
|--------------|------|--------|
| Titan        | Jane | $10M   |
| Alpha        | Sudeep | $5M   |

Pros: Excellent for aggregation and comparative reasoning.
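
As a sketch of how such a table can be built programmatically (the record shape and the helper name records_to_markdown_table are illustrative assumptions, not a fixed API):

```python
def records_to_markdown_table(records: list[dict]) -> str:
    """Render a list of flat dicts (all sharing the same keys) as a Markdown table."""
    headers = list(records[0].keys())
    lines = [
        "| " + " | ".join(headers) + " |",
        "|" + "|".join("---" for _ in headers) + "|",
    ]
    for rec in records:
        lines.append("| " + " | ".join(str(rec[h]) for h in headers) + " |")
    return "\n".join(lines)

projects = [
    {"Project Name": "Titan", "Lead": "Jane", "Budget": "$10M"},
    {"Project Name": "Alpha", "Lead": "Sudeep", "Budget": "$5M"},
]
print(records_to_markdown_table(projects))
```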

The overall flow looks like this: a parser sits between the graph result and the prompt, and emits whichever of the three formats fits the task.

```mermaid
graph TD
    DB[(Graph Result)] --> P[Parser]
    P --> T1[Triplet List]
    P --> T2[Narrative]
    P --> T3[Markdown Table]

    T1 & T2 & T3 --> LLM[LLM Prompt]

    style DB fill:#4285F4,color:#fff
    style LLM fill:#34A853,color:#fff
```

4. Implementation: A Triplet-to-Sentence Function in Python

```python
def serialize_triplet(subj: str, pred: str, obj: str) -> str:
    """Translate one (subject, predicate, object) triplet into a natural sentence."""
    # Map known relationship types to natural-language verbs.
    VERB_MAP = {
        "WORKS_AT": "is employed by",
        "LEADS": "is the manager of",
        "LOCATED_IN": "is based in"
    }

    # Fall back to a readable version of the raw predicate, e.g. "WRITTEN_IN" -> "written in".
    verb = VERB_MAP.get(pred, pred.lower().replace("_", " "))
    return f"{subj} {verb} {obj}."

print(serialize_triplet("Sudeep", "WORKS_AT", "Google"))
# Result: "Sudeep is employed by Google."
```

5. Summary and Exercises

Translation is the "Final Mile" of Graph RAG.

  • Compact formats (Triplets) save money but risk confusion.
  • Narrative formats (Sentences) are the most reliable for reasoning.
  • Structural formats (Tables) are best for numbers and comparison.
  • Context Templates provide the "Source of Truth" wrapper for the AI.

Exercises

  1. Serialization Task: Take the following triplet: (Python)-[:WRITTEN_IN]->(C). Write it as a "Narrative Sentence."
  2. Format Choice: If you have 50 facts, should you use the "Sentence" format or the "Triplet" format? Why?
  3. Visualization: Draw a 3-node graph. Now, write the "Narrative Paragraph" that describes it.

In the next lesson, we will look at quality: Neighbor Ranking: Selecting the Best Context.
