More workers are using generative AI tools to carry out their daily tasks, but some may be keeping their usage secret from their bosses. Although AI may make some workers more productive, the technology also brings a variety of liability exposures. A generative AI risk management program can help keep those exposures under control.
Are Your Workers Using AI?
Microsoft says the use of generative AI nearly doubled in a span of just six months. A 2024 study found that 75% of global knowledge workers were using the technology and 46% had only begun using AI in the last six months. It’s not just young workers, either – 85% of Gen Zers and 78% of Millennials are using AI, but so are 73% of Baby Boomers and 76% of Gen Xers.
Although workers say AI helps them save time, focus on their most important tasks, be more creative, and enjoy their work more, Microsoft found that not everyone is willing to disclose their usage. In total, 52% of workers who use AI are reluctant to admit they use it for their most important tasks and 53% worry it makes them look replaceable.
Generative AI Usage Is Risky
For employers, the Microsoft study has some significant implications. Even if no one in your office is talking about AI, there’s a good chance many of your workers are already using it. Furthermore, a significant proportion may be new to using AI, meaning they may be particularly susceptible to blunders.
This is where AI usage may become a problem. Although generative AI makes many tasks easier, it’s a new technology and it’s far from perfect. Companies that use it – especially without proper safeguards in place – may expose themselves to the following:
- Copyright Infringement. The copyright issues surrounding generative AI are complicated, to say the least. Currently, there are multiple lawsuits stemming from the claim that generative AI is trained on copyrighted material and that the outputs may infringe on those copyrights. For workers using generative AI for marketing purposes, these issues are especially concerning.
- Data Breaches. To use generative AI programs, users need to put information into prompts, which may not always be secure. For example, BGR reports that a ChatGPT memory exploit would have exposed users’ private chat data if OpenAI hadn’t fixed it in time. Furthermore, How To Geek says there have been reports of ChatGPT leaking passwords in conversations. According to Forbes, Samsung allegedly banned workers from using AI chatbots after a leak of sensitive code.
- Factual Errors. According to CNET, Google’s search engine AI apparently borrowed information from parody sites and online trolls, resulting in suggestions for people to eat rocks and put glue on pizza. Generative AI programs have also been known to fabricate information – a phenomenon known as hallucination. According to Reuters, multiple lawyers have ended up in trouble for citing fake cases after using AI chatbots. If factual errors end up in your emails or reports, you could be facing problems of your own.
The Need for a Comprehensive Generative AI Policy
Although some employers may want to ignore generative AI entirely, the fact that many workers are using it without disclosure makes that approach untenable. A well-crafted generative AI policy will help businesses control these risks.
When crafting your policy, consider the following generative AI risk management tips:
- Usage Disclosure. It’s hard to control generative AI risks when you don’t know how employees are using the technology. Workplace generative AI policies can require employees to disclose their use of AI tools. Keep in mind that some workers worry about being replaced, so you may need to address these fears before they will come clean about their AI usage.
- Acceptable Programs. Since the release of ChatGPT, the number of generative AI programs has exploded. Your company may be fine with some of these but not others, which is something to include in your policy. If your workers want to use generative AI but you’re worried about security, it may make sense to invest in a program for your company and prohibit the use of all other programs.
- Allowable Uses. You may be fine with employees using AI for some tasks (such as summarizing a letter) but not want them to involve AI in other, more sensitive tasks. A generative AI policy that clearly lists acceptable and prohibited uses will help you avoid issues.
- Human Oversight. AI helps workers save time, but it is not reliable enough to use without human oversight. Employers may decide to require workers to review any AI-generated content before using it.
- Sensitive Information. In addition to outlining acceptable and unacceptable uses, you may want to clarify which information employees can and cannot include in generative AI prompts. For example, consider prohibiting workers from entering any information that is not publicly available, such as trade secrets.
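For companies that build internal tooling around an approved AI program, a sensitive-information rule like the one above can be partially enforced in software before a prompt ever leaves the building. The sketch below is a hypothetical, minimal pre-submission filter; the pattern list and the `prompt_is_allowed` function are illustrative assumptions, not part of any real product or policy:

```python
import re

# Hypothetical example: patterns a company might designate as sensitive.
# The terms and formats below are illustrative only.
SENSITIVE_PATTERNS = [
    re.compile(r"\btrade secrets?\b", re.IGNORECASE),
    re.compile(r"\bconfidential\b", re.IGNORECASE),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # U.S. Social Security number format
]

def prompt_is_allowed(prompt: str) -> bool:
    """Return False if the prompt matches any designated sensitive pattern."""
    return not any(pattern.search(prompt) for pattern in SENSITIVE_PATTERNS)
```

A simple check like this cannot catch everything (it only matches known patterns), so it would supplement written policy and human oversight rather than replace them.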
As the risk landscape evolves, it’s important to update your company policies as well as your insurance. Heffernan Insurance Brokers can help you review your insurance needs. Learn more.