When a technology as impactful as generative AI comes on the scene, it’s important to take a position on it as a company. Generative AI stands to change how your employees work and innovate, and it’s probably already widely used within your company. The list of benefits is long, but generative AI isn’t without its risks. By creating a company policy governing the use of generative AI, you can start using the technology to its full potential – while minimizing the potential drawbacks.

In this article, we’ll cover some of the key risks generative AI poses, how a policy can help reduce them, and what the process of creating a policy could actually look like in your company. It shouldn’t be seen as legal advice, but it is based on our own experiences with generative AI tools and will hopefully provide a foundation for further research.


download the generative AI policy template, and start creating your own

download here

why create a generative AI policy?

Your company should have a generative AI policy for the same reason it has all its other policies – to ensure compliance, to guide employees in their work, and to streamline your daily operations. 

Generative AI tools that can generate text, image, audio and even video content from a few simple prompts have compelling applications across your company. With the range of tools out there and the speed at which they're developing, your company risks losing ground to the competition if it doesn't start investigating how to take advantage of them.

But for employees to explore these tools confidently, and for the company to avoid the well-known risks of generative AI, you should have a policy in place. It shows the company has thought through the consequences of generative AI and gives employees clear rules to stick to when they experiment – allowing them to safely reap the benefits of the generative AI tools available.

Let’s look at some of the key risks that your generative AI policy should try to mitigate.

the risks of generative AI.

unfairness, bias and unethical behaviour in AI tools

Generative AI tools are only as objective as the data on which they are ‘trained’. If the datasets a tool uses to create content are biased, the tool’s output will be biased as well. For example, imagine a generative AI tool designed to generate job descriptions based on only a few basic details about the job. If that tool has been trained on thousands of job descriptions which contain non-inclusive language, the job descriptions it generates will do the same. In this way, it’s easy for AI tools to simply reproduce the biases of their creators or users. 

The major AI developers are making efforts to avoid bias in their tools, but it’s a serious challenge to eradicate it completely. Your generative AI policy should recognize this limitation and specify measures to overcome it – for example, by mandating that all AI-generated content is thoroughly checked by a human for bias and fairness before being published or distributed.


intellectual property (IP) breaches

This risk also stems from the way generative AI models are built. Because they draw on the millions of pieces of image, music, video and text data on which they are trained, these models can produce entirely new creations – but there's a risk that significant elements of the training data make it into the generated content, creating potential intellectual property issues.

The debate on whether AI-generated content infringes the rights of the original creators is ongoing, with some creators’ organizations demanding that creators must consent before their works can be used to train AI models. Some software companies are focusing on ‘safe for commercial use’ generative AI tools trained only on fully licensed content. Whether AI-generated content can itself be copyrighted varies by jurisdiction, but the answer matters for companies considering using this kind of content in their communications.

The legal decisions your generative AI policy reflects will vary, but the policy should aim to reduce the risk of IP issues – potentially by mandating that AI-generated content cannot be used externally in the company’s communications.

data privacy

The privacy of data shared with generative AI tools has been a topic of discussion since they first appeared on the market. Part of the reason many of these models improve so quickly is that they use feedback and input from users to fine-tune their responses and expand their knowledge.

However, imagine a case where an HR specialist hands over sensitive career data about a list of job applicants to an AI chatbot and asks it to group them into categories based on their level of experience. If the chatbot makes use of user input to improve the accuracy of its responses, some security experts warn there’s a risk that this sensitive data could be exposed to other users who know how to ask the right questions.

This is an extreme case, but it illustrates some of the unanswered questions about generative AI and data privacy. Many AI tools state clearly that they don’t expose any user inputs to other users – but this isn’t a universal policy.

Privacy should be a key part of your generative AI policy. Your company has to decide the details, but specifying that no data related to individuals or business activities should be handed to generative AI tools via prompts could be a good start.
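For illustration, a rule like this can be backed up by a simple technical control: a pre-submission screen that flags obviously personal data before a prompt leaves the company. The sketch below is a minimal example with invented patterns of our own – it is not a substitute for proper data loss prevention tooling, and real PII detection needs far more than a few regular expressions.

```python
import re

# Example patterns only – a real policy control would need a far richer
# set of detectors (names, addresses, internal identifiers, etc.).
PII_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone number": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
    "9-digit id number": re.compile(r"\b\d{3}-?\d{2}-?\d{4}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the PII categories detected in a prompt, if any."""
    return [label for label, pattern in PII_PATTERNS.items()
            if pattern.search(prompt)]

# A prompt containing applicant contact details would be flagged before
# it is ever sent to a third-party tool.
findings = screen_prompt("Candidate Jane Doe, jane.doe@example.com, 10 years in HR")
if findings:
    print("Blocked: prompt appears to contain", ", ".join(findings))
```

A check like this won't catch everything, which is why the policy itself – and employee awareness of it – remains the primary safeguard.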

On top of these risks, your policy should also consider the general security and compliance risks that come with using any third-party tool for business purposes – as well as the reputational damage of accidental IP infringements, data leaks or biased content.


creating your company generative AI policy.

assemble your experts

Alongside legal and privacy expertise, an effective generative AI policy also requires input from the people who use generative AI day to day – whether they’re in marketing, engineering, HR or sales. They’ll have the best insight into how generative AI is used within the company and the risks that can arise in their areas of use.

spread the word

Creating an AI policy can be difficult, but ensuring employees recognize and accept it is an even greater challenge. Fortunately, most studies and reports show that workers globally are enthusiastic about AI’s potential in the workplace – so implementing the policy through effective internal communication, training and discussion might be easier than you think. As with any policy, enforcing it through your usual channels is important – but it’s better to prevent breaches in the first place by engaging employees around the policy and generative AI in general.

make sure it allows room for innovation

Your policy is a key tool for protecting you from the potential risks of generative AI. But it shouldn’t be so strict that it makes using generative AI unnecessarily difficult. Generative AI has great potential across your company, so making employees reluctant to use it with an overly restrictive policy would be the wrong way to go. Instead, focus the policy on preventing the most serious risks, and use training and communication to show employees how important responsible AI use is.

how long does it take to create a generative AI policy?

Your generative AI policy doesn’t need to be especially long to be effective, but the time it will take to create depends entirely on the specifics of your organization. The number of people involved in the process, the sensitivity of your generative AI use cases and the size of your organization will all have an effect.

You can speed up the process, however, by starting with a generative AI policy template. Download ours for a suggested basic structure, along with tips and guidance on what to consider during the policy creation process. It’s a great starting point if your company wants to start taking generative AI seriously.

about the author

martin woodward

director global legal - tech & procurement

Martin is Randstad's Director Global Legal - Tech & Procurement and member of the Global Legal Leadership Team. In close collaboration with the company’s Global IT and Global Procurement functions, he and his team are responsible for all global commercial and IT contracting, providing legal support to large-scale IT projects, and issuing guidance to the business on technology-related legal challenges, in particular in relation to artificial intelligence.
