Tech Firms & Bug Bounty Hunters: Where does your liability end?
May 27, 2022
The last few years have seen the rise of bug bounty hunters: security researchers who find software bugs for a fee, rather than exploiting them. Most bug bounty hunters search for system vulnerabilities independently—without a company’s permission, that is—report them to the organization, and request a reward for their efforts. While not every bug will earn a bounty, the reward itself can range from hundreds to thousands of dollars depending on the gravity of the security risk.
As a Technology Professional, you might already be familiar with bug bounty hunters. In fact, your firm might even deal with them regularly. But given the nature of your industry, there are additional exposures for tech firms that work with bug bounty hunters. After all, while many bug bounty hunters are ethical hackers, you can’t always be certain of someone’s motivations. And how and when you respond to them could have serious implications for your business—and your insurance coverage.
So what do you do if a bug bounty hunter reaches out to you? What are your obligations as a business? When do you tell your clients? And your insurance company? Keep reading to learn more.
How am I liable?
Bug bounty hunting is a growing moonlighting gig for IT professionals and security researchers. In fact, companies like Apple, Google, Microsoft, and more have even set up large-scale bug bounty initiatives as a way to crowdsource penetration testing and catch potentially catastrophic bugs that might otherwise lead to a security breach. Alternatively, some businesses may partner with third-party bug bounty platforms to supplement their existing security assessments instead of setting up their own in-house programs.
But while larger corporations have the resources to be proactive, smaller software vendors might have to sit and wait for cyberattacks to happen or for bug bounty hunters to come out of the woodwork before they can fix an underlying security issue. And unfortunately, it’s not always easy to tell who’s ethical (a white hat) and who’s malicious (a black hat).
1. Cyber Liability
Under PIPEDA, all organizations are required to report to the Privacy Commissioner of Canada any time a “breach of security safeguards” poses a real risk of significant harm to individuals. But not every vulnerability equates to a privacy breach. Maybe it’s a corrupted file that’s a threat to your server. Or maybe it’s a network security issue that can be patched within the hour. So how do you know when to report?
Here’s an example: say a bug bounty hunter reaches out to you about a weakness in your server. You run data forensics, but there’s no indication that your networks have been compromised or that any confidential information has been exposed. Because there’s no verifiable evidence of a breach, you elect not to tell anyone about it—not your clients, your insurance company, or even the Privacy Commissioner. You patch the vulnerability, pay the bounty hunter, and move on.
But what happens if they go rogue? What if it’s not in your budget to pay the reward? The bug bounty hunter could post the vulnerability online for the whole web to see or even sell it for a profit. Or what if, before reaching out to you, they copied some of your client data? They could come back in a few months and threaten to release it unless you pay a ransom.
If you reach out to your insurance company afterwards, your Cyber Insurance might not respond since you didn’t tell them about the initial vulnerability that led to the breach. Why? Most policies have specific clauses that require you to notify them as soon as you’re aware of a situation that MAY give rise to a cyber incident. And the longer you wait, the lower the chances are of your policy kicking in.
2. Professional Liability
Whether you design, develop, or distribute software, potentially hundreds of people could be using your systems to run their businesses. That means the trickle-down effect of a bug could be catastrophic. If a security vulnerability escalates into a privacy breach, it won’t just be your organization that’s affected. Your clients—and your clients’ clients—could all be impacted, much like the Log4j vulnerability.
Even worse? As the provider of tech services, you’re not just responsible for building software; you’re also responsible for maintaining uptime and protecting the data your clients store on your servers. That means you could also be liable for providing software that made your clients or a third party vulnerable to a cyberattack. And if clients who rely on your system for their business operations experienced significant downtime, they could sue you for negligence and lost revenue.
To be clear, the fact that your software contains a defect or that you’re working with a bug bounty hunter doesn’t automatically mean that you’re negligent—it’s more about how you handle the situation. In order for a lawsuit to be successful, a client would have to prove that your actions fell short of the standard of care expected of a tech firm (e.g. failing to test whether your products were free from defects, or neglecting to notify your clients once you knew about a security flaw, even if you patched the vulnerability right away).
What can I do about it?
Despite the risks, the reality is: you’ll never be able to avoid bug bounty hunters entirely, especially in your industry. After all, technology is never perfect. No matter how good your products are or how strong your team is, you’re going to have software bugs and you’re going to need patches.
So how do you control your liability? How can you keep your clients and your business safe without incurring a lawsuit or voiding your insurance coverage? The first step to staying ahead is being prepared. Here are three key steps all organizations should take when dealing with bug bounty hunters.
1. Implement response protocols.
Don’t just play it by ear; make sure you have established protocols in place for dealing with bug bounty hunters the same way you would for any other security incident, like vulnerability disclosure guidelines and an incident response plan.
What’s the difference? Vulnerability disclosure guidelines will help you establish parameters for the hunter to work in, like terms for confidentiality and payment. If they veer outside those parameters, you’ll have grounds to go after them. In contrast, an incident response plan outlines the steps you’ll need to take to get things back online following a breach.
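One lightweight way to publish your disclosure parameters is a `security.txt` file (RFC 9116), served at `/.well-known/security.txt` on your website, which tells researchers how to reach you and where your full policy lives. The addresses and URLs below are placeholders for illustration only:

```text
# Sample /.well-known/security.txt (RFC 9116) — all values are placeholders
Contact: mailto:security@example.com
Expires: 2023-05-27T00:00:00.000Z
Policy: https://example.com/vulnerability-disclosure-policy
Preferred-Languages: en, fr
Canonical: https://example.com/.well-known/security.txt
```

A hunter who finds this file knows exactly who to contact and under what terms—before they start poking around on their own.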
Remember, working with bug bounty hunters isn’t necessarily a bad thing, especially since many of them are white hats. Having response protocols just means that you’ll know exactly how to respond in different scenarios. In fact, according to Canadian CISO Jason Barr:
“A vulnerability disclosure program allows companies of all sizes to benefit from the incredible research community that lives on the internet. It’s sort of like having a neighbourhood watch program to patrol your streets and warn you if you forget to close your window.”
When drafting your disclosure program and response plans, just make sure you consider the following:
- Define roles and responsibilities for dealing with security flaws (i.e. who will respond to a bug bounty hunter, triage the issue, conduct data forensics, make the necessary fixes, manage the long-term impact, draft any communications, etc.).
- Outline bug severity (low, medium, high, and critical) and what rewards are acceptable for each.
- Engage representatives from different units in the company in addition to technical staff when crafting a plan. This will ensure all concerns are voiced and addressed.
- Set up a specific budget for bounty prices and remediation costs so you’re not caught off guard if someone reaches out.
- Consult with a lawyer to determine the best approach. While it might seem like an unnecessary expense, forking out a few thousand in legal counsel is better than losing potentially millions in a breach.
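To make the severity-and-reward step above concrete, here’s a minimal sketch of a triage helper that maps a CVSS base score to a severity tier and a bounty range. The tier boundaries follow the standard CVSS v3 qualitative bands, but the dollar amounts are purely hypothetical—every organization sets its own:

```python
# Illustrative sketch: map a CVSS base score to a severity tier and
# bounty range, as defined in an organization's disclosure guidelines.
# Tier boundaries follow CVSS v3 qualitative ratings; the dollar
# amounts are hypothetical examples, not industry standards.

SEVERITY_TIERS = [
    # (minimum CVSS score, tier name, (min bounty, max bounty) in dollars)
    (9.0, "critical", (5000, 20000)),
    (7.0, "high",     (1500, 5000)),
    (4.0, "medium",   (500, 1500)),
    (0.1, "low",      (100, 500)),
]

def triage(cvss_score: float) -> tuple[str, tuple[int, int]]:
    """Return the severity tier and bounty range for a reported bug."""
    for threshold, tier, bounty_range in SEVERITY_TIERS:
        if cvss_score >= threshold:
            return tier, bounty_range
    return "informational", (0, 0)  # no payout for non-issues

# Example: a reported injection flaw scored at CVSS 8.8
tier, (low, high) = triage(8.8)
print(f"Severity: {tier}, bounty range: ${low}-${high}")
```

Writing the tiers down this explicitly—whether in code, a spreadsheet, or your policy document—means the person fielding a report isn’t negotiating rewards from scratch under pressure.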
2. Verify the vulnerability.
You have an obligation to take direct action the moment you become aware of a security vulnerability. Whether you’re investigating in-house or partnering with another provider, data forensics is critical to get an understanding of what happened, why, and who’s at risk. Is the bug exposed to the internet? Is client data accessible? Could client networks or business operations be compromised? Once you know more, you can take the appropriate remediation steps.
For example, if you have reasonable grounds to assume there has been a breach, you can notify the Privacy Commissioner and your clients and outline what steps you’re taking to patch the bug and how long it’ll take to resolve. Alternatively, if there’s zero evidence any information has left your systems, you might not have to tell clients at all.
Keep in mind: different jurisdictions have very different requirements for notification. While some set no fixed deadline, others may require notification within as little as six hours, so you’ll have to act quickly.
3. Notify your insurance company.
Most clients have a fender-bender mentality when it comes to insurance claims; they don’t want to report small incidents or network security issues because they’re afraid their premiums will skyrocket. But failure to report only worsens the problem. Plus, there are several reasons you should loop in your insurance company right away, even if you don’t know the scope of the loss just yet.
Firstly, unless there’s concrete evidence of impact, reporting a security flaw alone doesn’t count as a claim in the eyes of your insurer, so it won’t affect your rates. Secondly, reporting early means you can get your insurance company’s approval on remediation steps and reduce the possibility of voiding your coverage down the line.
Lastly, your insurance company also has a wealth of valuable resources that you can tap into to navigate the situation. While your Cyber Insurance won’t cover bug bounties or ransom payments, most policies include access to a specialized breach coach that will advise you on regulatory compliance and incident response. They’ll help you come up with a game plan and properly communicate the issue to all relevant parties—they’ll let you know if you even have to involve the Privacy Commissioner or tell your clients. In the event of a ransomware attack, they might even be able to help you stand up to threat actors or negotiate ransom demands.
As a tech firm, you’ll always bear a degree of responsibility when it comes to software bugs. But that’s the nature of the job and the courts know that. That’s why security flaws and privacy breaches are assessed on a case-by-case basis—so the courts can determine whether or not you did everything in your power to prevent harm. And as long as you can prove that you dotted your i’s and crossed your t’s—that you exercised due diligence and due care—you won’t be considered negligent.
For more guidance on bug bounty hunters, connect with PROLINK. With 40 years of experience and over a decade of serving technology firms, a licensed broker like PROLINK can help you navigate industry trends, adopt a proactive approach to risk management, and become resilient in the face of software bugs. Our dedicated team of risk advisors will:
- Provide you with a panoramic view of your business landscape;
- Stay on top of emerging risks and unique threats that could affect your organization;
- Share what steps others in your industry are taking and advise you accordingly; and
- Align you with specialized risk management and insurance solutions that help you retake control and meet your strategic objectives.
Connect with PROLINK today to learn more!
PROLINK’s blog posts are general in nature. They do not take into account your personal objectives or financial situation and are not a substitute for professional advice. The specific terms of your policy will always apply. We bear no responsibility for the accuracy, legality, or timeliness of any external content.