16 Jul 2025
by Sean Williams

Linguistic Engineering and Context

Ideas, even good ones, are plentiful. Everyone has ideas, and often, the same concept strikes multiple minds at the same time. Leibniz and Newton invented calculus independently. Darwin and Wallace independently developed the theory of natural selection.

So it is perhaps not surprising that the wider artificial intelligence community is collectively coming around to the importance of context—a topic we began calling linguistic engineering at AutogenAI in 2023, based on our research into deploying large language models in proposal writing.

Context is vital to understanding meaning in natural language. Shouting “fire” at an archery competition means something very different to shouting “fire” in a crowded theatre. It is not possible to understand meaning without understanding context. No amount of compute or intelligence alone can uncover meaning. To answer the question, “Is east to the left or right of me?” you need to know if you are facing north or south.

Large language models, like humans, need context to understand what they are being asked to do. They need explicit instructions to deliver anything useful: if you want the right answer, you need to ask a very specific question and provide all of the relevant additional information. This is no trivial task.

Humans possess vast amounts of both implicit and explicit knowledge about context. We know whether we are at an archery contest or a theatre performance. Large language models, however, can only understand their context if it is explicitly provided in the input, or prompt, they receive. LLMs are autoregressive models that predict the next likely token based on the input they are given, so, to state the obvious, the input plays the most crucial role in shaping the output. Humans rely on experience, intuition, sensory cues and a vast array of other factors to interpret context; LLMs have only the input data they are given. This limitation means they depend on precise and specific prompts to deliver accurate and relevant responses.
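The point can be made concrete with the compass question above. A minimal sketch, assuming a hypothetical `build_prompt` helper (not any real API): the question "Is east to my left or my right?" is unanswerable on its own, but becomes determinate once the prompt explicitly states which way the asker is facing.

```python
def build_prompt(question: str, context: dict[str, str]) -> str:
    """Prepend explicit context lines to a question before sending it to an LLM."""
    if not context:
        # No context: the model must guess or refuse.
        return f"Question: {question}"
    context_lines = [f"- {key}: {value}" for key, value in context.items()]
    return "Context:\n" + "\n".join(context_lines) + f"\n\nQuestion: {question}"

bare = build_prompt("Is east to my left or my right?", {})
grounded = build_prompt(
    "Is east to my left or my right?",
    {"facing": "north"},  # facing north, east is to the right
)
```

The two prompts go to the same model; only the grounded one gives it enough information to answer correctly.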

A lot of early writing on LLMs focused on fine-tuning: changing the weights of the underlying neural network to improve performance. Fine-tuning adapts a pre-trained model to perform better on specific tasks by training it further on a narrower dataset, and it is resource-intensive, requiring significant computational power and time.

The importance of context in LLMs has led to a shift in focus from fine-tuning to prompt engineering, or as some now prefer to call it, context engineering.

Shopify’s CEO, Tobi Lutke, posted on X last month:

“I really like the term “context engineering” over prompt engineering. It describes the core skill better: the art of providing all the context for the task to be plausibly solvable by the LLM.”

This was retweeted by ex-Tesla, ex-OpenAI AI leader, Andrej Karpathy, who wrote:

“+1 for "context engineering" over "prompt engineering". People associate prompts with short task descriptions you'd give an LLM in your day-to-day use. When in every industrial-strength LLM app, context engineering is the delicate art and science of filling the context window with just the right information for the next step.”

This shift in terminology reflects a broader understanding of what it takes to make LLMs effective in real-world applications. Linguistic, or context, engineering is not just about crafting a single sentence or paragraph to guide the model. It is about curating and structuring all of the relevant information the model needs to perform a task effectively.

A good ‘prompt’ might be thousands of lines long, containing a mix of short-term and long-term contextual information. It could comprise hundreds of elements: short-term memory that retains the context of the ongoing interaction; long-term memory that stores persistent knowledge, such as user preferences or summaries of past interactions; external data retrieved via APIs; and internal and external data drawn from documents.
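The assembly of such a prompt can be sketched in a few lines. This is an illustrative toy, not a production design: all names are invented, real systems would count tokens rather than characters and rank sources by relevance, but it shows the shape of combining long-term memory, retrieved documents and recent conversation into a single context window under a budget.

```python
from dataclasses import dataclass, field

@dataclass
class ContextAssembler:
    """Toy context-window builder: merges memory sources into one prompt."""
    budget_chars: int = 4000
    long_term: list[str] = field(default_factory=list)   # persistent facts, preferences
    short_term: list[str] = field(default_factory=list)  # recent conversation turns

    def remember(self, fact: str) -> None:
        self.long_term.append(fact)

    def record_turn(self, turn: str) -> None:
        self.short_term.append(turn)

    def assemble(self, task: str, retrieved: list[str]) -> str:
        sections = [
            ("Long-term memory", self.long_term),
            ("Retrieved documents", retrieved),
            ("Recent conversation", self.short_term),
        ]
        parts = []
        for title, items in sections:
            if items:  # skip empty sections entirely
                parts.append(f"## {title}\n" + "\n".join(items))
        parts.append(f"## Task\n{task}")
        prompt = "\n\n".join(parts)
        # Crude truncation from the front, keeping the most recent material.
        return prompt[-self.budget_chars:]
```

Everything the model will "know" for this step is whatever `assemble` returns, which is the point of the article: the quality of that string, not the model alone, bounds the quality of the answer.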

The sophistication, relevance and accuracy of the response depends directly on the completeness of the context. When people ask which provider has the best models (OpenAI, Cohere, Anthropic and so on), they are increasingly missing the point. Any model is only as good as the input it is given. The science of building these inputs (“linguistic engineering” as we call it at AutogenAI, or “context engineering” as Lutke and Karpathy call it) is what truly determines the value and performance of AI systems.

Linguistic engineering is where the magic happens.


Authors

Sean Williams

Founder and CEO, AutogenAI

Sean Williams is the Founder and CEO of AutogenAI, the world's leading proposal-writing software. AutogenAI leverages cutting-edge natural language processing technology to assist companies in crafting proposals, marketing copy and much more, ultimately saving time, reducing costs, and boosting success rates. Supported by top-tier investors such as Blossom Capital, Spark Capital, and Salesforce Ventures, AutogenAI has secured over $60 million in funding.

With a background in research, policy, business development, and operational management, Sean has worked with some of the largest and most successful public service providers globally. He has designed and managed large-scale public service contracts overseeing businesses with revenues exceeding $100 million and leading teams of over 900 employees.

Sean previously founded and served as CEO of Corndel Ltd, where he scaled the business from the ground up to a team of 350. In November 2020, he successfully sold Corndel to THI Holdings for $60 million.

Sean is passionate about artificial intelligence, the future of work, business creation, evidence-based policy, sustainable social enterprise, systems, incentives, technology, human potential, ideas, and execution.
