
With the rise of chatbots such as ChatGPT (OpenAI), Bard (Google), and Claude (Anthropic), and other generative artificial intelligence (AI) tools developing at a rapid pace, employers need to consider whether, and to what extent, employees should be permitted to use them in the workplace. On the one hand, there are confidentiality and privacy issues, bias and fairness concerns, legal compliance headaches, and other potential liability pitfalls. On the other hand, generative AI tools promise revolutionary insights, creative content creation, conversational interfacing, and efficiencies that may outweigh those risks. Should employers embrace or restrict the use of generative AI in their workplaces? Either way, employers must be proactive in considering, developing, and implementing best practices and policies governing their employees’ use of AI.

How Employees May Be Using Generative AI

ChatGPT stands for “Chat Generative Pre-trained Transformer” and is an AI language model, or “chatbot,” developed by OpenAI and trained to interact conversationally and perform a variety of tasks. ChatGPT can answer questions; compose emails, letters, essays, presentations, or code; fact-check; generate lists and other content; and even correct your grammar. ChatGPT can mimic human dialogue and decision-making, using reinforcement learning to decipher what the user is asking, determine how to compile data, and then create a response to the user’s question. AI tools like ChatGPT rely on training on vast amounts of data to function, learning not only from publicly available text but also from user interactions.

Understandably, the release of ChatGPT and other generative AI tools has prompted excitement, as well as concern and questions, across all industries. Employers should be prepared either to address how their employees may use AI technology or, in the alternative, to thoughtfully explain why employees are restricted from doing so.

Risks Associated with Employee Use of Generative AI at Work

Although generative AI tools can introduce efficiencies in workplace processes, they may also present potential drawbacks and legal risks for employers. The list of concerns includes:

  • Privacy and Confidentiality. There is the possibility that employees will share proprietary, confidential, private, or trade secret information of the employer or its customers or clients when having “conversations” with chatbots. Employers need to be mindful of any contractual or other obligations to keep that information confidential, and of the specific data usage policies and security and privacy settings of the AI tools being used.
  • Accuracy and Quality Control. Generative AI is far from perfect (yet), and it is only as good as the information learned in the training phase. Expect mistakes. This could be a deal breaker if there is no quality control, or if employees reviewing the output cannot adeptly identify and correct errors or otherwise spot and fix incomplete or inaccurate content.
  • Consumer Protection Risk. If clients or customers are not aware that they are interacting with generative AI, or if they receive work product from a company that was generated by AI without a clear disclosure, they could potentially raise a claim of an unfair or deceptive practice under federal, state, or local law. On another level, they may be disgruntled to learn that content they paid for was produced by generative AI if they were not warned ahead of time.
  • Bias. AI generation is dependent on the information upon which it is trained and, accordingly, on what information the trainer decides to input. This could affect the types of information the chatbot offers in response to the questions presented. Employers need to be aware that some state and local laws require notice and/or audits if AI tools are to be used in certain employment decisions.

Key Considerations To Reduce Risk: Implement Workplace Guidelines and Training Governing AI and Update Confidentiality Policies and Agreements

Evaluating the benefits of generative AI tools against their potential harms can be challenging for many employers when drafting guidelines or implementing policies to address workplace use. While an outright ban on generative AI tools may be effective in the short term to reduce risk and evaluate options, for many employers it may not be a viable long-term solution as AI tools continue to proliferate and be refined. A blanket ban could also drive employee use underground, without the benefit of a well-thought-out policy or process. Employers should weigh the benefits of employees using generative AI to perform such tasks as writing routine letters and emails, generating reports, and creating presentations against the potential loss of developmental opportunities for employees who would otherwise perform those tasks themselves. AI should enhance, not supplant, the employee’s own knowledge, skill, and creativity.

At a minimum, employers should consider their unique circumstances and formulate a policy that identifies: (1) uses that are prohibited; (2) uses that are permitted with authorization from a designated authority; and (3) uses that are generally permitted without any prior authorization. Employees should be trained on how to use generative AI responsibly, with emphasis on protecting confidential information, avoiding bias, and performing quality assurance and independent verification.

Employers should determine to what extent confidentiality policies and individual agreements may need to be updated to address concerns relating to the use of generative AI in the workplace. Employees should be cautioned and trained to avoid sharing confidential, private, or personally identifiable information if permitted to use chatbots or other generative AI tools for work-related tasks. Employers need to have a strong data output verification system in place and a knowledgeable point of contact to troubleshoot and resolve issues as they arise. Employers should also develop a record-keeping system to identify and log when content was created using generative AI tools and the prompt that was used to generate it.
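By way of illustration only, the sketch below shows one simple way such a log might be kept, assuming a shared CSV file and hypothetical field names; any actual record-keeping system should be tailored to the employer’s own tools and workflows and reviewed with counsel and IT.

    # Illustrative sketch only; the file name and fields are hypothetical, not a prescribed format.
    import csv
    from datetime import datetime, timezone
    from pathlib import Path

    LOG_FILE = Path("genai_usage_log.csv")
    FIELDS = ["timestamp_utc", "employee", "tool", "prompt", "output_reference", "reviewer"]

    def log_genai_use(employee: str, tool: str, prompt: str,
                      output_reference: str, reviewer: str) -> None:
        """Append one record of generative AI use to a shared CSV log."""
        is_new = not LOG_FILE.exists()
        with LOG_FILE.open("a", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            if is_new:
                writer.writeheader()  # write column headers on first use
            writer.writerow({
                "timestamp_utc": datetime.now(timezone.utc).isoformat(),
                "employee": employee,
                "tool": tool,
                "prompt": prompt,
                "output_reference": output_reference,  # e.g., document ID or file path
                "reviewer": reviewer,                  # person who verified the output
            })

    # Example entry
    log_genai_use("J. Smith", "ChatGPT", "Draft a routine status update email",
                  "drafts/status_update.docx", "A. Jones")

Even a basic log along these lines makes it easier to audit what was generated, by whom, with what prompt, and who verified the output.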

Legal Horizons for AI Tools

The AI landscape is growing at an unprecedented pace. Employers should make every effort to stay current on federal, state, and local laws and guidelines affecting the use of these tools in the workplace. Government regulators and litigants are looking for opportunities to bring claims against companies that they believe are adopting or using AI in a reckless manner. Comprehensive AI policies may help reduce potential liability while ensuring these tools are used consistently and responsibly, with thoughtful consideration and oversight.

Chatbots and other generative AI tools are likely here to stay, and new and improved versions are on the horizon. Employers will ultimately be forced to address their use in the workplace as the technology continues to improve. For all the risks generative AI can present, employers can also leverage its benefits. But the discussion has only just begun. There is sure to be much more to “chat-a-bot” in the near future. For more information about, or help evaluating, the use of generative AI tools in the workplace, consult your Akerman Labor & Employment attorney.
