15 Mar 2024
by Terry Minford

Defending against AI’s dark side

Guest blog by Terry Minford, Practice Manager specialising in digital trust at BSI

Whilst artificial intelligence (AI) is still new and exciting, it's no different from any other technology: if you decide to implement it within your organization, make sure you understand the risks it can expose you to.

We’re well aware of AI's benefits, but with 35 percent of businesses now embracing it, the focus must shift to preparing for when something goes wrong. That means understanding how AI can be exploited to deceive, manipulate, or harm organizations, and knowing which tools can help defend against and mitigate those risks.

AI being used disingenuously

A major risk associated with AI is that it lets people pass themselves off as something they’re not. For instance, AI can make CVs (or resumes) look fantastic and speed up the process of writing them. In an increasingly competitive job market, graduates are using tools such as OpenAI’s ChatGPT to write cover letters alongside their CVs. This helps some advance through recruitment screenings, but businesses are finding that when a candidate is called for an interview, there are disparities between the qualifications on paper and the person sitting across the desk.

Similarly, financial institutions often use online forms and AI to decide whether to grant someone a loan or credit. Automating processes like this means that companies aren’t always meeting applicants face to face, making these processes a prime target for those wanting to exploit the system.

In a twist on traditional whaling attacks (a type of spear-phishing attack targeting senior executives), there are recent reports of fraudsters using AI-generated deepfakes to impersonate a Chief Financial Officer (CFO) and issue fraudulent requests in their name.

These examples highlight the need for businesses to be cautious, implement robust screening processes, and provide stakeholder training.

Unethical business practices

AI can sharpen a business's competitive edge through online dynamic pricing. Ninety-four percent of shoppers compare product prices while shopping online, and algorithms monitor that behaviour to offer personalized pricing based on spending habits. However, businesses may be tempted to engage in deceptive pricing strategies, using those same algorithms to gauge each consumer's willingness to pay rather than offering an appropriate price.
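To make that mechanism concrete, here is a minimal Python sketch of how a personalized-pricing algorithm might weight behavioural signals to infer willingness to pay. Every name, signal, and weight below is hypothetical, invented purely for illustration rather than drawn from any real pricing system.

```python
# Hypothetical sketch of behavioural personalized pricing.
# All signals and weights are illustrative, not from any real system.
from dataclasses import dataclass

@dataclass
class ShopperProfile:
    past_average_spend: float  # observed average basket value
    price_comparisons: int     # price checks made this session
    repeat_views: int          # return visits to the same product page

def personalized_price(base_price: float, profile: ShopperProfile) -> float:
    """Adjust a base price using behavioural signals.

    This uplift logic is exactly the kind of inference that can cross
    an ethical line: charging more to shoppers who appear less
    price-sensitive, instead of offering everyone the same price.
    """
    multiplier = 1.0
    if profile.price_comparisons > 3:     # frequent comparison = price-sensitive
        multiplier -= 0.05                # discount to keep the sale
    if profile.repeat_views > 2:          # repeated views = strong intent
        multiplier += 0.08                # mark up against inferred urgency
    if profile.past_average_spend > 100:  # high spend = higher willingness to pay
        multiplier += 0.05
    return round(base_price * multiplier, 2)

# Two shoppers see different prices for the identical item.
bargain_hunter = ShopperProfile(past_average_spend=40, price_comparisons=5, repeat_views=1)
keen_buyer = ShopperProfile(past_average_spend=150, price_comparisons=0, repeat_views=4)
print(personalized_price(50.00, bargain_hunter))  # 47.5
print(personalized_price(50.00, keen_buyer))      # 56.5
```

Even this toy example shows how quickly behavioural inference becomes discriminatory pricing: the shopper who compares prices pays less than the one who shows clear intent, for the same product.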

This manipulation extends beyond price adjustments. Companies could employ sophisticated algorithms to predict and influence consumer behaviour, potentially crossing ethical boundaries by capitalizing on individual preferences or vulnerabilities.

Insider and third-party risks

Insider threats add another layer of complexity: disgruntled employees with access to AI algorithms could sabotage operations or compromise sensitive data. By intentionally feeding confidential data into generative AI systems, employees could expose organizational secrets to potential hacking, leaving businesses and their clients facing significant security risks. In early 2023, a global electronics company banned employees from using AI after it was identified that sensitive internal information had been leaked by an employee using AI for work-related purposes.

Many companies depend on third-party providers for essential data and services. However, this partnership introduces risks, as the third party may have different biases and a risk tolerance that doesn’t align with the company's expectations or standards. This mismatch can lead to vulnerabilities, including rushed development that lacks adequate security measures and increased susceptibility to manipulation.

Risk defence

Security rests on three principles: confidentiality, integrity, and availability; every control put in place exists to protect them. As attack techniques become better at undermining those principles, defences must advance in step. Companies can mitigate risks through:

  • Comprehensive defence strategy: Businesses must vet and monitor AI systems, assess the reliability of third-party involvement, and defend against a wide array of potential threats, including those posed by disingenuous users and corrupted algorithms.
  • Responsible governance and disclosure: Cybersecurity threats and ethical hazards alike demand balanced governance. Without proactive measures, the result could be not just reputational damage but an erosion of trust in entire industries.
  • Responsible AI practices: From developers to businesses, responsible AI practices such as human-centred design, privacy and security of data, transparency, and accountability must be ingrained at every stage of the value chain.
  • Regulatory compliance: Stay up to date with evolving regulations and standards related to AI and cybersecurity, such as ISO/IEC 27001 or the National Institute of Standards and Technology (NIST) Cybersecurity Framework, and ensure compliance with relevant regulations to avoid legal and regulatory risk.

The transformative power of AI is undeniable, but operating it responsibly demands a collective effort to balance technological advancement with ethical responsibility. Only through proactive, robust defence and an industry-wide commitment to ethical AI practices can businesses and societies harness AI's full potential while safeguarding against its inherent risks.

Read more on how AI is impacting organizations in Unlocking trust in AI by Mark Brown, Navigating generative AI and compliance by Conor Hogan and The impact of AI and ML on cybersecurity by Alessandro Magnosi.

Visit BSI’s Experts Corner for more insights from industry experts. Subscribe to our Experts Corner-2-Go LinkedIn newsletters for a fortnightly roundup of the latest thought leadership content: digital trust, EHS, and supply chain.

You can read the original post here.


Authors

Terry Minford

Practice Manager specialising in digital trust, BSI Consulting