Ideas, even good ones, are plentiful. Everyone has ideas, and often, the same concept strikes multiple minds at the same time. Leibniz and Newton invented calculus independently. Darwin and Wallace independently developed the theory of natural selection.
So it is perhaps not surprising that the wider artificial intelligence community is collectively coming around to the importance of context — a topic we called linguistic engineering and began writing about at AutogenAI in 2023, based on our research into deploying large language models in proposal writing.
Context is vital to understanding meaning in natural language. Shouting “fire” at an archery competition means something very different to shouting “fire” in a crowded theatre. It is not possible to understand meaning without understanding context. No amount of compute or intelligence alone can uncover meaning. To answer the question, “Is east to the left or right of me?” you need to know if you are facing north or south.
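The compass question makes the point concretely: the answer is simply not computable until the missing context is supplied. A toy sketch (the function and its names are illustrative, not from any real system):

```python
# Toy illustration: "Is east to my left or my right?" has no answer
# until the missing context -- which way you are facing -- is supplied.

def side_of_east(facing: str) -> str:
    """Return which side east is on, given the direction you face."""
    mapping = {"north": "right", "south": "left"}
    # Without a value for 'facing', there is no correct answer to return.
    return mapping[facing.lower()]

side_of_east("north")  # → "right"
side_of_east("south")  # → "left"
```

No amount of cleverness inside the function can substitute for the `facing` argument; the same is true of an LLM and its prompt.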
Large language models (like humans) need context to properly understand what they are being asked to do. They need explicit instructions to deliver anything useful: if you want the right answer, you need to ask a very specific question and provide all of the relevant additional information. This is no trivial task. Humans possess vast amounts of both implicit and explicit knowledge about context. We know whether we are at an archery contest or a theatre performance. Large language models, by contrast, can only understand their context if it is explicitly provided in the input, or prompt, they receive. LLMs are autoregressive models that predict the next likely token based on the input they are given. To state the obvious, the input plays the most crucial role in shaping the output. Humans rely on experience, intuition, sensory cues and a vast array of other factors to interpret context. LLMs have only the input data they are given. This limitation means they depend on precise, specific prompts to deliver accurate and relevant responses.
Much of the early writing on LLMs focused on fine-tuning: changing the weights of the underlying neural network by training a pre-trained model further on a narrower, task-specific dataset to improve its performance on that task. Fine-tuning is resource-intensive, requiring significant computational power and time.
The importance of context in LLMs has led to a shift in focus from fine-tuning to prompt engineering, or as some now prefer to call it, context engineering.
Shopify’s CEO, Tobi Lutke, posted on X last month:
“I really like the term ‘context engineering’ over prompt engineering. It describes the core skill better: the art of providing all the context for the task to be plausibly solvable by the LLM.”
This was retweeted by ex-Tesla, ex-OpenAI AI leader, Andrej Karpathy, who wrote:
“+1 for "context engineering" over "prompt engineering". People associate prompts with short task descriptions you'd give an LLM in your day-to-day use. When in every industrial-strength LLM app, context engineering is the delicate art and science of filling the context window with just the right information for the next step.”
This shift in terminology reflects a broader understanding of what it takes to make LLMs effective in real-world applications. Linguistic/context engineering is not just about crafting a single sentence or paragraph to guide the model. It is about curating and structuring all the relevant information the model needs to perform a task effectively.
A good ‘prompt’ might be thousands of lines long, containing a mix of short-term and long-term contextual information. This could include hundreds of elements: short-term memory that retains the context of the ongoing interaction; long-term memory that stores persistent knowledge, such as user preferences or summaries of past interactions; data retrieved from external sources via APIs; and internal and external documents.
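As a rough illustration of layering these elements into a single input (the function and section names are hypothetical, not AutogenAI's implementation), context assembly might be sketched as:

```python
# Hypothetical sketch of assembling a context window for one LLM call.
# All names and the section layout are illustrative assumptions.

def build_context(task: str,
                  short_term: list[str],
                  long_term: list[str],
                  retrieved: list[str]) -> str:
    """Combine task instructions with layered context into one prompt."""
    sections = [
        ("Instructions", [task]),
        ("Conversation so far", short_term),    # short-term memory
        ("Known user preferences", long_term),  # long-term memory
        ("Reference material", retrieved),      # API / document retrieval
    ]
    parts = []
    for title, items in sections:
        if items:  # omit empty sections entirely
            parts.append(f"## {title}\n" + "\n".join(f"- {i}" for i in items))
    return "\n\n".join(parts)

prompt = build_context(
    task="Draft a one-paragraph executive summary of the proposal.",
    short_term=["User asked for a formal tone earlier in this session."],
    long_term=["User's organisation: a UK healthcare provider."],
    retrieved=["Tender requirement: responses must be under 200 words."],
)
```

In a production system each of these lists would itself be the output of a pipeline (summarisation, retrieval, ranking); the point is that the final prompt is an engineered artefact, not a sentence typed by a user.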
The sophistication, relevance and accuracy of the response depends directly on the completeness of the context. When people ask which provider has the best models (OpenAI, Cohere, Anthropic and so on), they are increasingly missing the point. Any model is only as good as the input it is given. The science of building these inputs, “linguistic engineering” as we call it at AutogenAI or “context engineering” as Lutke and Karpathy are calling it, is what truly determines the value and performance of AI systems.
Linguistic engineering is where the magic happens.
Sean Williams is the Founder and CEO of AutogenAI, the world's leading proposal-writing software. AutogenAI leverages cutting-edge natural language processing technology to assist companies in crafting proposals, marketing copy and much more, ultimately saving time, reducing costs and boosting success rates. Supported by top-tier investors such as Blossom Capital, Spark Capital, and Salesforce Ventures, AutogenAI has secured over $60 million in funding.
With a background in research, policy, business development, and operational management, Sean has worked with some of the largest and most successful public service providers globally. He has designed and managed large-scale public service contracts overseeing businesses with revenues exceeding $100 million and leading teams of over 900 employees.
Sean previously founded and served as CEO of Corndel Ltd, where he scaled the business from the ground up to a team of 350. In November 2020, he successfully sold Corndel to THI Holdings for $60 million.
Sean is passionate about artificial intelligence, the future of work, business creation, evidence-based policy, sustainable social enterprise, systems, incentives, technology, human potential, ideas, and execution.