Researchers in Shanghai say “context engineering” can improve AI performance without retraining models.
Testing shows that richer prompts improve relevance, consistency, and task completion rates.
This approach builds on prompt engineering and extends it to the full situational design of human-AI interaction.
A new paper from Shanghai AI Lab argues that large language models don’t necessarily need more training data to get smarter; they just need better instructions. The researchers found that carefully designed “contextual prompts” allow AI systems to generate more accurate and useful responses than generic ones.
Think of it as setting the scene so the whole story makes sense, and as a practical way to make the AI feel like a helpful friend rather than a mindless robot. At the heart of context engineering is carefully crafting the information you give an AI so that it can respond more accurately and usefully.
People are not just isolated individuals. We are shaped by our environment, relationships, and situations, or “context.” The same goes for AI. Machines often fail because they don’t see the big picture. For example, if you ask an AI to plan a trip, it might suggest a luxury cruise without knowing you’re on a budget or traveling with kids. Context engineering fixes this problem by including these details up front.
The researchers acknowledge that the idea is not new; its origins date back more than 20 years to the early days of computing, when people had to adapt to clunky machines with strict rules. Today’s powerful AI platforms can use natural language, but the right context must still be designed to avoid “entropy” (here, the confusion that builds up when a prompt is ambiguous or overloaded).
How to context engineer prompts
The paper shows how to make AI chats more effective today. It builds on “prompt engineering” (crafting good questions) but goes further, focusing on the full context. Here are some user-friendly tips with examples.
Start with the basics: who, what, and why
Include background to set the stage. Instead of “Write me a poem,” try: “You’re a romantic poet and I’m writing this for my anniversary. The theme is eternal love. Please keep it short and sweet.” This reduces misunderstandings.
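As a rough illustration, here is a minimal Python sketch of how that “who, what, and why” context might be assembled into the common system/user chat-message format. The function name and message structure are illustrative only; nothing here is tied to a specific provider.

```python
# A minimal sketch of a "who, what, why" contextual prompt.
# The system/user message format is the common chat convention;
# build_poem_request() is a made-up helper, not a library call.

def build_poem_request() -> list[dict]:
    """Assemble role, audience, theme, and length constraints up front."""
    return [
        {"role": "system",
         "content": "You are a romantic poet."},               # who the AI should be
        {"role": "user",
         "content": ("I'm writing this for my anniversary. "   # why
                     "Theme: eternal love. "                    # what
                     "Please keep it short and sweet.")},       # constraints
    ]

print(build_poem_request())
```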
Layer information like a cake
Build context in levels. Start broadly, then add details. For coding tasks: “I’m a novice programmer. First, please explain the basics of Python. Now help me debug this code: [paste code]. Context: it’s for a simple game app.” This lets the AI handle complex requests without becoming overloaded.
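For readers who drive a model from code, a hedged sketch of the same layering idea: keep a running message list and append each new layer of context as its own turn. The list format mirrors common chat APIs; the buggy snippet is a placeholder.

```python
# A sketch of layered context: start broad, then add detail in stages,
# so each turn carries only the new information.

conversation = [
    {"role": "user",
     "content": "I'm a novice programmer. Please explain the basics of Python first."},
    # ...the model's explanation would be appended here as an "assistant" turn...
]

# Layer 2: the concrete task, with its own scoped context.
buggy_code = "def add(a, b): return a - b  # placeholder snippet"
conversation.append({
    "role": "user",
    "content": (
        "Now help me debug this code.\n"
        f"Code:\n{buggy_code}\n"
        "Context: it's for a simple game app."
    ),
})

for turn in conversation:
    print(turn["role"], "->", turn["content"][:60])
```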
Use tags and structure
Organize your prompts with labels to make them easier to follow, such as “Goal: Plan a budget vacation; Constraints: Under $500, family-friendly; Preference: Beach destination.” This is like giving the AI a roadmap.
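A small sketch of the same idea in code: keep the labelled fields in a dictionary and join them into the prompt, so the structure stays consistent across requests. The field names simply mirror the example above.

```python
# A sketch of a tagged prompt: labelled fields act as a roadmap for the model.

fields = {
    "Goal": "Plan a budget vacation",
    "Constraints": "Under $500, family-friendly",
    "Preference": "Beach destination",
}

prompt = "\n".join(f"{label}: {value}" for label, value in fields.items())
print(prompt)
# Goal: Plan a budget vacation
# Constraints: Under $500, family-friendly
# Preference: Beach destination
```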
Incorporate multimodal elements (such as images or history)
If your query includes visuals or past chats, say: “Based on this image [describe or link], suggest outfit ideas. Previous context: I like casual style.” For long tasks, summarize the history: “Resuming from last session: we discussed marketing strategy. Now add some social media tips.”
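For the history part, a hedged sketch: instead of resending a long transcript, prepend a short summary of the last session to the new request. The summarize() helper below is a stub; in practice the summary might be written by hand or generated by the model itself.

```python
# A sketch of carrying history forward with a short summary
# rather than the full transcript.

def summarize(previous_session: list[str]) -> str:
    # Placeholder: a real version might ask the model to summarize the transcript.
    return "Resuming from last session: we discussed marketing strategy."

previous_session = [
    "Discussed target audience and brand voice.",
    "Drafted a rough marketing strategy.",
]

prompt = summarize(previous_session) + " Now add some social media tips."
print(prompt)
```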
Remove noise
Include only what matters. Test and tweak: if the AI goes off track, add an instruction like “Ignore irrelevant topics and focus only on health benefits.”
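A quick sketch of noise removal in code, assuming the context items have already been tagged by topic (the tags here are invented for illustration): filter out anything off-topic and state the focus explicitly in the prompt.

```python
# A sketch of trimming noise: keep only context items relevant to the question,
# and tell the model explicitly what to focus on.

context_items = [
    ("health", "User is training for a 10k run."),
    ("health", "User prefers vegetarian meals."),
    ("unrelated", "User's favourite movie is a sci-fi classic."),
]

relevant = [text for topic, text in context_items if topic == "health"]

prompt = (
    "Focus only on health benefits and ignore irrelevant topics.\n"
    "Context: " + " ".join(relevant) + "\n"
    "Question: What should my weekly routine look like?"
)
print(prompt)
```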
Think ahead and learn from mistakes
Anticipate needs: “Infer my goals from my past fitness queries and suggest a workout plan.” Store errors in the context for correction: “Last time you suggested X, but it didn’t work because Y. Please adjust accordingly.”
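Finally, a hedged sketch of keeping an error log in the context so the next answer can be adjusted. Here the log is just a list of strings kept between sessions; where it is stored (file, database, memory) is up to you.

```python
# A sketch of feeding past mistakes back into the context.

error_log = [
    "Last time you suggested a 6-day lifting split (X), "
    "but it didn't work because I only have 3 free days a week (Y).",
]

prompt = (
    "Infer my goals from my past fitness queries and suggest a workout plan.\n"
    "Known issues with earlier suggestions:\n"
    + "\n".join(f"- {e}" for e in error_log)
    + "\nPlease adjust accordingly."
)
print(prompt)
```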