
Mitigating AI Risks: Tips for Tech Firms in a Rapidly Changing Landscape

August 15, 2023

The business world is undergoing a rapid transformation as we enter the age of AI. Tech firms are leading the charge, embracing AI technologies to disrupt markets and enhance user experiences. But with great innovation comes great responsibility—and great risk. And as a trailblazer in the landscape, you won’t just be confronting these risks firsthand, but setting the stage for how they’re addressed by the industry as a whole.

Whether you’re designing large language models, incorporating AI into your products or services, or distributing it to clients, every step you take has legal implications, from biased outputs to intellectual property concerns, regulatory compliance, and more. How do you navigate these challenges while remaining competitive? How do you harness the power of AI while mitigating any potential liabilities? Keep reading to learn more about the specific risks for tech firms and how you can protect yourself.

 

Disclaimer: Please note the information provided herein offers guidelines only and is presented from a liability-based perspective to help you avoid insurance claims. It is not exhaustive and should not take the place of legal advice, nor will it apply to all businesses, settings, and circumstances. For specialized guidance, please consult a risk management professional, lawyer, or a licensed insurance representative.

What are the risks?

1. Negligence

 

Language models are trained on large datasets and learn how to respond based on patterns and examples within that data. Models can be tuned for accuracy, but the data might still contain errors that then seep into the outputs. Plus, AI algorithms are incredibly complex; even with set parameters, they might not behave as intended in every situation.

Ultimately, developers don’t always have control over the specific outputs produced by AI systems. That means failing to monitor system performance or address glitches right away could leave you vulnerable to accusations of negligence, breach of duty, or misrepresentation.

Say your firm designs an AI system that recommends products to users or integrates a third-party AI chatbot into its services. If a user relies on incorrect advice from your system and then experiences financial setbacks, they could sue you to recoup the funds. But where does the responsibility fall? On the person who licensed the AI system? Or your firm for deploying a system without double-checking for errors? Even if you’re not at fault, you’ll still need to hire lawyers, seek out expert witnesses, and investigate your systems to clear your name, draining your resources and diminishing goodwill.
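
One mitigation is to gate and log every output before it reaches users, so glitches surface quickly and there’s a record if something goes wrong. Here’s a minimal sketch in Python; the `ModelOutput` shape, confidence score, and threshold are hypothetical stand-ins for whatever your system actually reports:

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_monitor")

@dataclass
class ModelOutput:
    text: str
    confidence: float  # hypothetical score; substitute whatever your model reports

FALLBACK = "We couldn't generate a reliable recommendation; a specialist will follow up."

def serve_recommendation(output: ModelOutput, threshold: float = 0.8) -> str:
    """Log every output and route low-confidence ones to human review."""
    log.info("model output %r (confidence=%.2f)", output.text, output.confidence)
    if output.confidence < threshold:
        log.warning("low-confidence output held for human review")
        return FALLBACK
    return output.text
```

Even a simple gate like this gives you an audit trail to point to if you’re ever accused of failing to monitor your system.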

 

RELATED: How long can tech start-ups get away without insurance?

2. General Liability

 

Keep in mind: despite their capabilities, large language models can’t tell the difference between what’s real and what’s not. And when asked to verify whether something is true, they “frequently invent dates, facts, and figures.” While this stresses the importance of fact-checking on the end-user’s part, you could still face a lawsuit for defamation if any misleading information is published or shared with the public.

In fact, ChatGPT-creator OpenAI is already being sued for libel after the system made false accusations against a radio host in the United States, claiming that he had embezzled funds from a non-profit organization. It’s the first case of its kind against OpenAI, and it could test the legal viability of future AI-related defamation lawsuits. However, some legal experts believe the case may be difficult to sustain since there were no actual damages and OpenAI wasn’t notified about the claims or given the opportunity to remove them.

Beyond defamation, tech firms that deploy large language models in user support systems can also face general liability risks relating to physical harm. For example, if a model gives faulty instructions for installing a product, it could lead to damage or injury; affected parties could then sue for any subsequent harm, loss, or emotional distress.

 

RELATED: Your Commercial General Liability Coverages Explained

3. Intellectual Property

 

For generative AI, training data often includes copyrighted, proprietary, or otherwise protected works, typically without the express permission or consent of the original creators. Additionally, most language models don’t cite their sources when asked to produce content, which creates a minefield of intellectual property risks. Here are two examples:

 

1) Creative Content

 

With just a few prompts, AI models can swiftly replicate—and commercialize—artwork, music, and film that mimics artists’ style and technique. Text-to-image systems like StabilityAI’s Stable Diffusion and OpenAI’s DALL-E, which can recreate paintings, illustrations, book covers, and experimental films, come to mind here. This raises issues surrounding the ownership and use of creative content; since AI-generated materials can be sold by anyone, artists could very well be replaced by generative models that use their own work as a base, threatening their livelihood.

If your AI system operates similarly, your firm could face lawsuits from creative professionals who feel their work was infringed upon or misappropriated. In early 2023, three artists filed a suit against several AI platforms, arguing that their works were used to train AI in their styles and produce unauthorized derivatives. Similarly, Getty Images, an image licensing service, has brought a case against StabilityAI for misuse of its photos. More recently, Meta and OpenAI are being sued by comedian Sarah Silverman and two other authors, who allege that “the companies’ AI language models were trained on copyrighted materials from their books without knowledge or consent” and without credit or compensation.

 

2) Coding

 

Intellectual property also extends to coding, especially if your firm is using AI to write software code for products or materials. For instance, if your products include snippets of code from another company, the creators or owners of the original works could sue you for plagiarism, copyright infringement, or violating licensing restrictions, particularly for any open-source content. Even if you don’t use the exact same words, algorithms, or code, you could be found liable if your work is deemed derivative enough. On the flip side, if you input your original source code into an AI model, you may have exposed it to others; depending on the provider’s terms, it could be retained or used for training, allowing third parties to use or replicate it without your explicit consent.

4. Cyber

 

There are a number of cyber risks when it comes to AI technologies:

  • Faulty Code: AI-generated code could unintentionally introduce security vulnerabilities or software bugs, exposing the system it’s used in to cybercriminals. Additionally, the tech firm responsible for writing the code, deploying the AI system, or designing the model could be liable for providing software that made clients or third parties vulnerable to attack. Clients who rely on your systems for business operations and experience significant downtime could even sue you for negligence and lost revenue.
  • Privacy: AI is constantly “absorbing data,” including potentially sensitive information, which can put privacy and confidentiality into question. Here’s an example: your company deploys an AI-powered transcription tool that records meeting notes. But what if the model begins collecting or processing data without user consent? Or collecting data outside of meeting hours? Without rigorous testing, there’s a major risk of unauthorized access or misuse of sensitive data (a simple consent gate is sketched after this list). Case in point: OpenAI strikes again. The company is facing allegations of misappropriating Internet users’ personal data to train its tools.
  • Regulatory Compliance: As the AI landscape evolves, governments across the world are considering regulations to govern their use. Non-compliance with industry standards or specific guidelines, like encryption, access controls, or handling of personal data could result in legal consequences and regulatory penalties.
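
For the transcription example above, one simple containment measure is a consent-and-schedule gate that refuses to capture anything outside the approved window. This is an illustrative sketch only; the in-memory consent registry and fixed meeting hours are assumptions that would be replaced by your real user store and calendar:

```python
from datetime import datetime, time
from typing import Optional

# Hypothetical consent registry; in practice, back this with your user database.
CONSENTED_USERS: set = set()
MEETING_HOURS = (time(9, 0), time(17, 0))  # assumed recording window

def grant_consent(user_id: str) -> None:
    """Record an explicit opt-in from the user."""
    CONSENTED_USERS.add(user_id)

def may_record(user_id: str, now: Optional[datetime] = None) -> bool:
    """Capture audio only if the user opted in and a meeting is in session."""
    now = now or datetime.now()
    start, end = MEETING_HOURS
    return user_id in CONSENTED_USERS and start <= now.time() <= end
```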

 

RELATED: Tech Firms & Bug Bounty Hunters: Where does your liability end?

5. Ethical Concerns

 

Biased system outputs can also pose ethical concerns about your AI models. For example, if an AI screening tool excludes certain groups during the recruitment process, affected individuals could sue you for discrimination.

Biased outputs are caused by unconscious bias during the design phase: “human beings choose the data that algorithms use, and also decide how the results of those algorithms will be applied.” Without efforts to diversify the data—or the teams that collect, input, and code it—AI models will simply inherit the biases and continue to underrepresent groups and perpetuate harmful stereotypes. There are countless examples of this, from healthcare to mortgage algorithms to hiring, but facial recognition software is particularly notable; tools frequently misidentify faces due to inherent racial and gender bias.
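
If you’re auditing a screening tool, one widely cited benchmark is the US EEOC’s “four-fifths rule”: a group’s selection rate shouldn’t fall below 80% of the highest group’s rate. Here’s a rough sketch, assuming you log each applicant’s group label and outcome (the data format is an assumption for illustration):

```python
from collections import Counter

def selection_rates(decisions):
    """Share of applicants selected within each group.

    `decisions` is a list of (group_label, was_selected) pairs.
    """
    totals, selected = Counter(), Counter()
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def passes_four_fifths(decisions) -> bool:
    """Flag disparate impact when any group's rate is under 80% of the best rate."""
    rates = selection_rates(decisions)
    if not rates:
        return True
    best = max(rates.values())
    return all(rate >= 0.8 * best for rate in rates.values())
```

For example, `passes_four_fifths([("A", True), ("A", True), ("B", True), ("B", False)])` returns `False`, since group B’s 50% selection rate is well under four-fifths of group A’s 100%.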

 

RELATED: D&O Insurance: Sail Through Troubled Waters With Confidence

PRO Tips: What can you do? 

 

AI is the shiny new toy that everyone’s itching to get their hands on, and tech firms are no exception, especially if it means streamlining costs and blowing past the competition. But while AI technologies offer immense opportunity, they also introduce new risks that require constant vigilance. And if you’re not careful, diving in headfirst could have disastrous repercussions for your business down the line. Even worse? Your insurance coverage might not be there to protect you in the event of a lawsuit.

Here’s why: incorporating AI technologies into your professional services could constitute a material change in your operations—and your risk profile, especially if you’re using it for client-facing activities. Given the uncertainties associated with AI, insurance companies might view your firm as higher risk, leading to higher premiums, limited coverage, or even denied claims if you haven’t notified your provider about your foray into new territory. That means implementing AI won’t just make you more susceptible to lawsuits, it could also leave you without necessary financial protection when you need it most.

In the ever-evolving world of tech, it’s critical to stay ahead of the curve. With new threats popping up left, right, and centre, what works for your business today might not be enough tomorrow. Before you rush to replace your staff or rebrand your business, be proactive and establish a solid risk management strategy to identify, manage, and offload threats. That way, you can safely and seamlessly integrate AI into your operations and set your firm up for success. Here are some tips to get started.

1. Look before you leap. 

 

Whether you’re just dabbling with AI technologies or knee-deep in the trenches, do your due diligence. Be sure to:

  • Define your engagement level. Determine how extensively you want to leverage AI and set specific goals and objectives.
  • Collaborate with all relevant stakeholders in your organization, such as your internal tech team or any third-party IT vendors you employ, to uncover risks and develop strategic action plans for responsible AI development, deployment, and usage.
  • Carefully test any third-party AI tools before adopting them. Look into the provider’s expertise, reputation, and track record to get a sense of their background.
  • Seek expert guidance where needed, particularly for any legal, cybersecurity, or IT concerns.
  • Stay up-to-date on new trends, technologies, and industry best practices. This will enable you to adapt your AI strategy and mitigate risks accordingly.

 

Once you’ve identified your risks, you can implement containment strategies to manage them.

 

RELATED: Key Risk Indicators for Tech Firms

2. Prioritize safety. 

 

In the race to scale, it’s tempting to get carried away with the pursuit of bigger and better. But try to strike a balance between innovation and accountability. Design with ethics in mind and the rest will fall into place. By focusing on safe and responsible AI use, you can build a strong foundation that not only fosters trust among clients and stakeholders, but acts as a safeguard against legal complications. Key areas to home in on include:

Data Acquisition

 

Develop a strategy to acquire training data from reputable sources and provide accurate and contextually relevant outputs. Secure permissions or licenses for copyrighted content and set up guidelines for data usage and training processes.
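
One way to operationalize this is a provenance record for every training document, checked against an allow-list of acceptable licenses. A minimal sketch, where the field names and the allow-list itself are assumptions your legal team would need to define:

```python
from dataclasses import dataclass

APPROVED_LICENSES = {"CC0-1.0", "CC-BY-4.0", "MIT"}  # assumed allow-list

@dataclass
class TrainingDocument:
    source_url: str
    license: str            # SPDX identifier, e.g. "CC-BY-4.0"
    consent_obtained: bool  # explicit permission from the rights holder

def admissible(doc: TrainingDocument) -> bool:
    """Admit a document only with an allow-listed license or explicit consent."""
    return doc.license in APPROVED_LICENSES or doc.consent_obtained
```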

Inclusion

 

Prioritize inclusion all around. In addition to more diverse and representative data, seek diversity not just on the development team but company-wide to encourage different perspectives and mitigate biases throughout all stages of AI deployment.

Quality Control

 

Establish validation processes to promptly detect and rectify any errors or unintended biases, including regular auditing, review, and maintenance. Collect feedback from users—both internal and external—and stakeholders to improve the system or product’s reliability and overall performance.
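
A lightweight starting point is a golden-case regression harness that runs before every release, so known-good behaviour is re-verified automatically. The `model.answer()` interface and the sample case below are hypothetical placeholders for your real system:

```python
# Hypothetical golden-case harness; `model.answer()` stands in for your real API.
GOLDEN_CASES = [
    ("What is your refund window?", "30 days"),  # illustrative case only
]

def run_audit(model) -> list:
    """Return a list of failures so audits can gate each release."""
    failures = []
    for prompt, must_contain in GOLDEN_CASES:
        reply = model.answer(prompt)
        if must_contain not in reply:
            failures.append(f"{prompt!r}: expected {must_contain!r} in {reply!r}")
    return failures
```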

Legal Guidelines

 

Keep current with relevant data protection regulations, defamation laws, and other industry standards in the regions where your AI systems are deployed. If needed, establish a dedicated legal and compliance team to identify and monitor risks and ensure compliance.

Privacy & Data Protection

 

Incorporate privacy into all aspects of design right off the bat and build safeguards directly into the system architecture. Run regular security tests and encrypt data to ensure privacy.
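
For data at rest, symmetric encryption through a well-vetted library is a sensible baseline. Here’s a minimal sketch using the widely used Python cryptography package; key management is deliberately oversimplified, and in production the key would live in a secrets manager, never in code:

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()             # in production, load from a secrets manager
fernet = Fernet(key)

record = b"meeting transcript: ..."
token = fernet.encrypt(record)          # ciphertext is safe to persist
assert fernet.decrypt(token) == record  # only key holders can recover the data
```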

Transparency

 

To manage user expectations, provide clear disclaimers about the capabilities and limitations of your AI systems, as well as how much data is collected and how it’s used. Give people an opportunity to opt-in or opt-out and explain how the model works and makes recommendations.

Intellectual Property

 

When using generative AI tools to write code, Goodwin Procter LLP partner Stephen D. Carroll recommends the following (a rough sketch of the audit idea follows the list):

  • “Get a license or a representation and warranty from the provider of the generative AI tool ensuring that the source works on which the tool is trained are licensed — and that the license extends to you, the user.”
  • “Run a source code audit program to analyze any code you create using generative AI tools to determine whether it is similar to any other code, open source or otherwise. If it is, you can take steps to comply with the relevant open-source license or excise the code. Importantly, running a source code audit program can itself be evidence against a claim of willfulness in a copyright action.”
  • “Conduct due diligence on the provider of the generative AI tool to understand what source materials it uses. Some generative AI tools may give users a degree of choice in determining what training materials are included when they use the tool.”
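
To illustrate the second recommendation, here’s a rough shingle-hashing sketch that estimates how much AI-generated code overlaps with a local corpus of known open-source code. Dedicated audit tools go much further; the token window and file glob here are arbitrary assumptions:

```python
import hashlib
from pathlib import Path

WINDOW = 6  # assumed shingle size, in tokens

def shingles(code: str, window: int = WINDOW) -> set:
    """Hash each run of `window` tokens so snippets can be compared cheaply."""
    tokens = code.split()
    if len(tokens) < window:
        return {hashlib.sha256(" ".join(tokens).encode()).hexdigest()}
    return {
        hashlib.sha256(" ".join(tokens[i:i + window]).encode()).hexdigest()
        for i in range(len(tokens) - window + 1)
    }

def overlap(generated: str, reference_dir: str) -> float:
    """Fraction of generated shingles also found in a corpus of known code."""
    gen = shingles(generated)
    corpus = set()
    for path in Path(reference_dir).rglob("*.py"):  # adjust the glob per language
        corpus |= shingles(path.read_text(errors="ignore"))
    return len(gen & corpus) / len(gen) if gen else 0.0
```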

3. Review your insurance. 

 

Insurance is a critical, but often overlooked, part of the AI implementation process. Even if the claims are groundless, dealing with a lawsuit can tie up valuable resources that could be better spent on marketing, research, or otherwise growing your firm. And keep in mind: lawsuits aren’t just costly or inconvenient; they can also destroy your reputation and deter any potential investors and clients.

The right policy will help you avoid financial strain and ensure that legal action won’t jeopardize your company, your standing, or your financial well-being—but only if you maintain transparency with your insurance provider. Whether you’re simply adding a new component to your services or overhauling your operations, engage your insurer early on. This way, you can verify coverage for any AI-related risks and uncover limitations or exclusions well in advance.

Your provider can also advise you on any additional coverages to offer greater protection for any new risks that arise, including:

  • Professional Liability Insurance: Protects your business from accusations of errors, omissions, or negligence committed within the scope of your professional activities, such as errors or mistakes in the design, implementation, or use of AI systems. Learn more here.
  • Commercial General Liability Insurance (CGL): Protects your business from third-party claims of bodily injury, property damage, and reputational harm (including defamation) caused by your professional activities or company operations. Learn more here.
  • Data Security & Privacy Breach Insurance: Protects your business and offsets your losses in the event of a breach, like if your company’s information is stolen or exposed by a hacker, or accidentally released by an employee. This includes coverage for both first-party expenses (costs incurred by your business following a breach) and third-party events (costs incurred by a third party affected by the breach). Learn more here.
  • Directors & Officers (D&O) Insurance: Defends your business leaders and board members if they’re personally sued for any actual or alleged wrongful acts in managing the company, such as poor governance, failure to act, financial losses, misallocation of funds, operational failures, and more. This coverage is critical to protect both your corporate and personal assets if you’ve invested a lot of your own resources in your business. Learn more here.
  • Intellectual Property Insurance: Covers legal expenses and damages resulting from IP-related disputes.

4. Work with a risk advisor. 

 

Every AI model is different and the right risk management strategy for your needs will depend on a variety of factors, including your industry, operations, and the specific systems you’re working with.

That’s why it’s critical to work with a risk advisor that specializes in the technology sector. With 40 years of experience and over a decade of serving tech firms, a licensed broker like PROLINK can help you navigate the changing AI landscape and become resilient in the face of change. Our dedicated advisors will:

  • Keep you informed about emerging threats, legislation, and innovations that could affect you and share what steps other firms in your industry are taking;
  • Provide you with comprehensive insurance and risk management solutions that align with your business goals and budget;
  • Regularly reassess your exposures and readjust your strategy to scale with your leadership, people, and processes.

 

With greater insight into your risks and a dedicated partner by your side every step of the way, you can operate with confidence and stay ahead, no matter the delay, disruption, or hurdle. You can work to control your exposures—and your costs—long-term. You can focus on what’s most important: your business.

To learn more, connect with PROLINK today!


PROLINK’s blog posts are general in nature. They do not take into account your personal objectives or financial situation and are not a substitute for professional advice. The specific terms of your policy will always apply. We bear no responsibility for the accuracy, legality, or timeliness of any external content.

