
Unchecked AI: Top Cyber Risks for Businesses

September 20, 2023

AI has taken the world by storm and transformed the way businesses operate, perhaps forever. While the tech world has been dabbling with AI-powered chatbots, virtual assistants, and digital avatars for some time, we’re now seeing the rise of generative tools that can rapidly create and refine content with just a few prompts. Businesses across every industry are seeing its potential to offload functions, streamline processes, and drastically improve their overall efficiency and productivity. In fact, a Salesforce survey of over 500 senior IT leaders in March of 2023 revealed that the majority (67%) were prioritizing generative AI for company use in the next 18 months.

But in the race to scale, many have hopped on the AI bandwagon without looking twice at the full scope of risk involved. While concerns like plagiarism, copyright infringement, and biased outputs dominate headlines, the limited control businesses have over these tools also creates a perfect storm of cyber risk that could undermine your security, your privacy and confidentiality obligations, and your regulatory compliance efforts. Read on for the top security concerns for your business and best practices for safe, ethical, and responsible AI use.

 

Disclaimer: Please note that the information provided herein offers guidelines only. It is not exhaustive and does not constitute legal or cybersecurity advice. For more guidance, please consult a lawyer, a licensed insurance representative, and/or a cybersecurity specialist.

What are the risks?

1. Threat Actors

 

AI hasn’t just been a game-changer for businesses; it’s making life easier for cybercriminals too. Reports have shown that threat actors are already using generative AI tools to ramp up their cybercrime efforts, polish hacking techniques, and speed up attacks. Examples include:

  • Social Engineering: Despite their growing sophistication, most phishing emails are still fairly easy to recognize, riddled with poor spelling, obvious grammar mistakes, and awkward phrasing. With AI tools, however, hackers can craft clear, concise emails, mimic the style of a business leader, and trick victims into divulging confidential data. Threat actors can even personalize their messages by building apps that scour the internet for information and compile detailed profiles of their targets.
  • Malicious Code: Large language models, like ChatGPT, can also be used to crunch numbers, write code, and automate repetitive tasks. And while these tools can be programmed to prevent users from generating malicious or harmful content, there are loopholes; cybercriminals can still use creative prompts to manipulate the system into producing hacking code. In the future, threat actors could even use AI models to create automated malware bots that infect networks and compromise data with minimal human interaction.
  • Deepfakes: In addition to phishing schemes, generative AI can use brief snippets of audio or footage to impersonate people and fabricate remarkably convincing phone calls or video. Unfortunately, instances of this unsettling trend have already emerged, most notably in the form of virtual kidnapping scams. In the business world, there’s no shortage of potential scenarios for AI deepfakes, from faked phone calls in an executive’s voice to fabricated video messages.

 

RELATED: The Human Factor: Tackling Insider Threats in Cybersecurity

2. Data Collection & Use

 

AI tools rely on large datasets for training and learn how to respond based on patterns and trends within that data. The greater the volume of input data, the more information is absorbed into its ever-expanding knowledge base. This raises concerns about privacy and data protection, especially as users become more comfortable sharing personal details with AI. Potential risks include:

 

1) Improper Training

 

Even if your business hasn’t formally adopted any AI technologies, employees might still be incorporating tools into their workflows without your knowledge or approval, whether to consolidate notes or draft communications. From marketing and writing to image, audio, and video creation, coding, data analysis, automation, 3D modelling, and more, the possibilities are endless. But that means unaware or improperly trained users could inadvertently pass on private client data, corporate information, or trade secrets by feeding them into AI tools.

According to a report by LayerX, 15% of employees regularly paste company data into ChatGPT, and over a quarter of that data is considered confidential, putting employers at risk of a breach and of violating privacy regulations. As a result, many organizations, like Samsung and JPMorgan, have restricted ChatGPT use after learning staff had been inputting data from sensitive documents.

 

2) Data Collection

 

AI models might even be collecting intel without you realizing it. Say your company implements an AI-powered transcription tool to record notes during meetings or phone calls. But what if the model begins collecting or processing data without user consent? Or collecting data outside of scheduled meeting times? Here’s an example: OpenAI, ChatGPT’s maker, is currently facing allegations of misappropriating Internet users’ personal data to train its tools.

 

3) Transparency

 

It’s not always clear exactly how AI systems are using the data we input. We know they’re using data to train their models, optimize responses, and enhance the overall user experience. But otherwise, where does the data go? How is it stored? How long do AI operators hold onto it? How is it disposed of? And who else has access? Once the data is in an operator’s cloud, there’s no guarantee of privacy. For instance, sensitive data or conversations might be logged for quality assurance purposes, giving maintenance teams and other personnel access to confidential information. This is particularly unsettling when client data is involved, since it could easily be misused by the service provider.

 

RELATED: Mitigating AI Risks: Tips for Tech Firms in a Rapidly Changing Landscape

3. Data Leaks

 

Not all AI models are created equal. While industry giants like Google and Microsoft have their own proprietary tools, most freely available AI tools come from lesser-known sources. And with the current AI boom, every software company is itching to strike while the iron is hot and put out the next big thing. Unfortunately, that means they might be releasing products without the controls, precision, or care that would normally go into software design and development.

In a recent blog post from Zapier, Adrian Volenik, founder of aigear.io, said: “It’s incredibly easy to disguise an AI app as a genuine product or service when in reality, it’s been put together in one afternoon with little or no oversight or care about the user’s privacy, security, or even anonymity.”

AI platforms launched practically overnight may harbour unresolved software bugs that open avenues for cybercriminals to exploit, or suffer technology failures that jeopardize stored data. Even AI leader OpenAI has come under fire recently for accidentally leaking the titles of users’ conversation histories. Although the bug was patched promptly and no sensitive information was revealed, the incident highlights the need for constant vigilance when engaging with AI technologies.

 

RELATED: Bug Bounty Hunters: Where does your liability end?

4. Regulatory Compliance

 

The growing use of AI in businesses also has implications for legal and regulatory compliance, as governments across the world consider regulations to govern its use and mitigate potential harms. Much as it did with the GDPR, the EU has once again set global standards with the Artificial Intelligence Act (AIA), which outlines key requirements for AI systems, including data governance, cybersecurity, transparency, documentation, monitoring, and human oversight. The AIA is expected to come into force in late 2023 or early 2024.

In Canada, there’s currently no specific legal framework for AI; the industry is regulated by a “piecemeal combination” of existing human rights, privacy, tort, and intellectual property law. However, the proposed federal Bill C-27 could potentially introduce Canada’s first AI legislation, the Artificial Intelligence and Data Act (AIDA). The AIDA would establish nationwide mandates governing the design, development, and deployment of artificial intelligence systems in international or interprovincial trade and commerce, applying to all developers and operators of AI systems. This legislation would also create three new criminal offences under the Criminal Code of Canada, targeting AI-related activities that intentionally cause or create a risk of harm. The penalties for these offences remain undetermined and will be set at a later time.

The AIDA won’t be enacted until 2025 at the earliest, and its terms are subject to change as the bill moves through Parliament. But one thing’s for certain: the AI landscape is set to change dramatically in the next few years, and organizations must begin preparing now, especially as technologies continue to evolve. And keep in mind: until the AIDA comes into force, your business is still responsible for protecting clients’ data at all times under the Personal Information Protection and Electronic Documents Act (PIPEDA). Sharing client data with AI systems, whether intentional or not, could still constitute a regulatory violation, which could lead to fines, legal action, and lasting reputational harm. Learn more here.

 

RELATED: All About PIPEDA: How do privacy laws affect my business?

5. Insurance

 

In the event of a privacy breach, Cyber Insurance can help you protect your business and offset expenses, like legal fees, remediation costs, forensic investigations, and more. However, the very use of AI could affect your ability to get insurance. Here’s why: say you integrate AI to expand your product offerings or enhance your services. Your insurance company might see that as a substantial shift in your operations and associated risk profile, especially if AI becomes part of client-facing activities.

Given the uncertainties of AI, insurance companies might view your business as higher risk, which could in turn lead to higher premiums or limited coverage. If you’ve already implemented these tools without notifying your insurer, they could even void your existing coverage. That means using AI won’t just increase your cyber risks; it could also leave you without financial protection when you need it most.

 

RELATED: Why is it so hard to get Cyber Insurance?

What can you do?

 

Now we’re not saying you should steer clear of generative AI tools. In fact, these days they’re practically impossible to avoid. New systems are popping up every day with one prevailing message: “if you’re not getting on board the AI train, you’re falling behind.”

Here’s the good news: you can have the best of both worlds, as long as you exercise some caution. When harnessed correctly, AI offers tremendous opportunities for innovation, efficiency, and growth. But as tools become cheaper and more accessible, the threats will only increase. To address these risks, it’s critical for all organizations that use AI technologies to improve their security posture and adopt a proactive approach to cyber risk management.

Whether you’re launching a new platform or just taking advantage of some free tools to reduce workload, be vigilant. Research the AI operators’ background and privacy policies to understand how data is protected. Educate all employees on how to use AI models properly and set clear guidelines around what kind of information should be shared. Ensure all AI models and business activities are compliant with relevant laws, regulations, and industry standards. Seek expert guidance where needed, particularly for any legal, cybersecurity, IT, or insurance concerns. With a robust data protection strategy, you can safeguard confidential information, ensure regulatory compliance, and confidently integrate AI into your business processes.
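To make that last point about sharing information more concrete, here’s a minimal, hypothetical sketch (in Python) of the kind of pre-submission check a business might run on text before an employee pastes it into a third-party AI tool. The patterns, names, and sample text are illustrative assumptions only; they’re not a recommendation of any specific product and no substitute for a dedicated data loss prevention solution or professional advice.

```python
import re

# Hypothetical patterns for illustration only; a real deployment would rely on
# a dedicated data loss prevention (DLP) tool tuned to the organization's data.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone number": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "confidential marker": re.compile(
        r"\b(?:confidential|internal use only|do not distribute)\b", re.IGNORECASE
    ),
}


def review_prompt(text: str) -> tuple[str, list[str]]:
    """Flag and redact likely sensitive details before text is sent to an
    external AI tool; return the redacted text plus a list of findings."""
    findings = []
    redacted = text
    for label, pattern in SENSITIVE_PATTERNS.items():
        matches = pattern.findall(redacted)
        if matches:
            findings.append(f"{label}: {len(matches)} match(es)")
            redacted = pattern.sub(f"[REDACTED {label.upper()}]", redacted)
    return redacted, findings


if __name__ == "__main__":
    draft = (
        "Summarize this CONFIDENTIAL renewal note for client Jane Doe "
        "(jane.doe@example.com, 416-555-0199) before tomorrow's call."
    )
    safe_text, issues = review_prompt(draft)
    if issues:
        print("Flagged before sending:", "; ".join(issues))
    print(safe_text)
```

Even a rough check like this reinforces the habit of pausing to review what’s about to leave your environment, and the findings can feed into employee training on what should never be shared with external tools.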

How can we help?

 

For more guidance, connect with PROLINK. As a licensed broker with over 40 years of experience and a specialized knowledge of cyber threats, we’re ahead of industry trends. We’ll help you plan, protect, and keep up with new technologies. Our dedicated team of risk advisors will:

  • Identify potential losses based on your business operations and unique needs and recommend strategies to control your costs long-term;
  • Stay on top of any emerging threats, legislation, and innovations that could affect you and share what steps others in your industry are taking;
  • Review your existing Cyber Insurance policies to detect any coverage gaps and keep your insurer informed about any major changes.

If your insurer isn’t on board with your AI usage, we’ll advocate for your needs and align you with a specialized policy designed for your business goals and budget. For maximum protection, we’ll even regularly reassess your cyber risk management strategy so it keeps pace with the ever-evolving AI landscape.

To learn about your exposures—and how you can protect yourself—visit our Cyber Security & Privacy Breach Toolkit and connect with PROLINK today!


PROLINK’s blog posts are general in nature. They do not take into account your personal objectives or financial situation and are not a substitute for professional advice. The specific terms of your policy will always apply. We bear no responsibility for the accuracy, legality, or timeliness of any external content.

