AutogenAI boosts proposal win rates
In financial markets, in technology, and in life more broadly, false certainty can often lead to bigger problems than lack of knowledge. This concept is encapsulated in the phrase often misattributed to Mark Twain: “It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.”
Take the 1987 stock market crash. It stemmed largely from an overconfidence in portfolio insurance strategies. Investors believed they had developed a foolproof mechanism to prevent large losses. The strategy relied on stop-loss orders where computers would automatically sell shares if prices fell. However, when the market dipped, liquidity vanished as all the computers rushed to sell simultaneously. The plan disastrously misfired, triggering a cycle of additional sell-offs further reducing prices and intensifying the market crash.
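The feedback loop described above can be made concrete with a toy simulation. Everything here is illustrative: the prices, the stop-loss levels, and the assumption that each forced sale depresses the price by a fixed percentage are invented for the sketch and do not model actual 1987 market conditions.

```python
# Toy model of a stop-loss cascade: an initial dip triggers automatic
# selling, and the selling itself pushes the price down far enough to
# trigger the next tier of stop-loss orders. All parameters are invented.

def simulate_cascade(price, stops, impact_per_sale=0.02, shock=0.05):
    """Return the price path after an initial shock sets off stop-loss selling."""
    price *= (1 - shock)                 # the initial market dip
    path = [price]
    remaining = sorted(stops, reverse=True)
    while remaining and remaining[0] >= price:
        # Every holder whose stop level is at or above the price sells now...
        triggered = [s for s in remaining if s >= price]
        remaining = [s for s in remaining if s < price]
        # ...and each forced sale depresses the price a little further.
        price *= (1 - impact_per_sale) ** len(triggered)
        path.append(price)
    return path

path = simulate_cascade(price=100.0, stops=[96, 94, 92, 90, 88])
print([round(p, 2) for p in path])
```

In this sketch a 5% dip is enough to set off every stop-loss order in turn, turning a modest fall into a loss of roughly 14%: the "insurance" mechanism is exactly what amplifies the decline.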
Fast forward to 2008, and we saw a similar story. The financial crisis was largely caused by the belief that collateralised debt obligations (CDOs) were low-risk. The reasoning was based on the mathematical principle that combining uncorrelated risks reduces overall variance. Subprime mortgages were bundled into CDOs under the assumption that defaults were independent events. In reality, the risks were highly correlated, especially during a housing market downturn. When house prices fell, repossessions surged, and these supposedly low-risk instruments defaulted on a massive scale. Coupled with extreme leverage, this nearly caused the collapse of the entire global financial system.
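The mathematical principle at stake can be written down directly. For n equally weighted, equally risky loans with pairwise correlation rho, the variance of the average loss is sigma^2/n * (1 + (n-1)*rho). A minimal sketch, with invented numbers rather than real CDO figures, shows why the independence assumption mattered so much:

```python
# Variance of the average loss across n identical loans with pairwise
# default correlation rho:  Var(mean) = sigma^2 / n * (1 + (n - 1) * rho)
# Numbers below are purely illustrative.

def portfolio_variance(sigma2: float, n: int, rho: float) -> float:
    """Variance of the mean loss of n equally weighted, equally risky loans."""
    return sigma2 / n * (1 + (n - 1) * rho)

sigma2 = 0.04   # variance of a single loan's loss
n = 1000        # loans pooled into the instrument

independent = portfolio_variance(sigma2, n, rho=0.0)
correlated = portfolio_variance(sigma2, n, rho=0.9)

print(f"independent: {independent:.6f}")  # risks cancel: variance ~ sigma^2/n
print(f"correlated:  {correlated:.6f}")   # risks move together: ~ sigma^2
```

With zero correlation the pooled variance is a thousandth of a single loan's; with correlation near one, pooling achieves almost nothing. The diversification benefit the ratings rested on evaporates precisely in the scenario, a broad housing downturn, where it was needed.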
Today, a comparable narrative is unfolding, not in financial markets but in technology. The spotlight is on large language models (LLMs), advanced AI systems capable of generating essays, answering questions, and excelling in tests with astonishing proficiency. The excitement is understandable; these systems are achieving remarkable milestones. However, a balanced perspective is needed. LLMs are going to transform work and society. But they are not intelligent, they will not lead to mass unemployment, and they are not about to replace humans at the top of the cognitive hierarchy.
Why have some people got it so wrong about super-intelligent computers being just around the corner?
Much like the misplaced assumptions behind stop-loss orders and CDOs, there is an underlying belief that improved performance on written tasks correlates directly and consistently with increased intelligence. We can see computers getting ever better at mathematics, verbal reasoning and general knowledge tests; therefore, the reasoning goes, they must be getting more intelligent.
The assumption that increasing test performance means increasing intelligence has permeated sections of Silicon Valley, the venture capital community and broader society. But it is wrong. Intelligence is far more multi-faceted than written test performance. There is a vast literature on this from the social sciences, cognitive psychology and neuroscience (Sternberg, Gardner, Mayer, etc.). As a very simple illustration, AI can neither change a lightbulb nor slice a tomato.
LLMs are also susceptible to basic conceptual errors that a human child would not make. My colleague James Huckle and I wrote a paper, "Easy Problems that Large Language Models Get Wrong", exposing some glaring examples of LLMs reasoning incorrectly. For example: "You have six horses and want to race them to see which is fastest. What is the minimum number of races needed to do this?" GPT-4.5 tells us that the answer is three. (The correct answer is one: simply race all six horses against each other.) Such examples remind us that these systems, impressive as they are, don't really think.
Those objections aside, it’s important to recognise the transformative potential of LLMs and AI more broadly. These systems are already revolutionising industries, enhancing productivity, and opening up possibilities previously unimaginable.
In my own field, that of AI-enabled proposal writing, we are already seeing clear and independently verified gains from generative AI. A recent report from MH&A showed that organisations using AutogenAI's software "showed a consistent and positive trend in revenue performance when compared to a sample of comparators not using AutogenAI. On average, the organizations using AutogenAI saw an increase in their revenue of approximately 12.4% between FY23 and FY24, while comparator non-users saw a decline in their revenue of approximately 7.1%."
This is but one example from an increasing evidence base of compelling real-world use cases. The key is to utilise these tools effectively, with a comprehensive understanding of their pros and cons. It's vital to have well-defined use cases and success criteria to guide this process.
The larger lesson here is not to dismiss innovation but to approach it thoughtfully. The greatest risks arise when we stop questioning our assumptions, when we become overconfident in a model, system, or theory. We should always be asking “What if we’re wrong?” or, perhaps more constructively, “How can we ensure this works as intended?”
Author: Sean Williams, Founder and CEO, AutogenAI
techUK is proud to once again be taking part in London Tech Week. The event is a fantastic showcase for UK tech, and we’re pleased that techUK members and colleagues are represented on the agenda. You can find details on our speaking engagements and activities across the week, here.
We're also pleased to be working with eight techUK members at London Tech Week to highlight the positive impact of the tech industry in the UK.