AI-Powered Meeting Tools: Tips, Benefits, & Risks

Introduction

AI-powered meeting tools like Zoom AI Companion, Fireflies, Nyota, Otter, and Microsoft 365 Copilot have become valuable in the construction industry and beyond. By attending online meetings, either alongside participants or in their place, these tools enhance employee productivity and provide a reliable record of discussions. They can record video, transcribe audio, summarize notes and action items, provide analytics, and even coach speakers on more effective communication.

But do the benefits outweigh the associated security and privacy risks?

Yes, if the tools are thoroughly vetted and managed appropriately. The answer depends on what that vetting reveals and on the options available to the company to minimize risk for its intended meeting uses.

When it comes to choosing an AI recording tool, avoid the following:

  • Platforms that use the entirety of uploaded recordings or transcriptions for training;
  • Platforms that retain information uploaded even after deletion (sometimes deletion is not truly deletion);
  • Platforms that do not take reasonable steps to protect the privacy and confidentiality of your company’s communications and data.

Proper vetting is not as simple as it should be, especially given data privacy law requirements. Instead of straightforward policies embodied in a single document, the current review requires a dance through privacy policies, terms of use, definitions, and miscellaneous protection policies that either clarify or, at times, contradict what seems straightforward elsewhere.

How Do AI Meeting Tools Impact Users?

Trustworthy Conversations

Conversations are forever changed. Employees who speak candidly about co-workers, managers, the company, or its customers or investors might find themselves disciplined based on the tool’s transcript, which can easily be taken out of context. Like any other recording device, AI tools may inhibit meeting participation: a participant who knows the conversation is being tracked by an AI recording/notating tool may be reluctant to speak completely candidly. This chills the free flow of information and limits the productivity of the conversation, and the fear of recordings being misused could also stifle innovation and transparency.

The possibility of litigation should also be considered for all transcribed or memorialized meetings because all such records become discoverable. Even if a record is protected by the attorney-client privilege or the work product doctrine, inputting that information into an AI large language model may expose it for use in training the model for subsequent outputs.

Consent

Employees, clients, or vendors might feel obliged to consent against their will because a senior colleague wants to use an AI tool. This is important to keep in mind because consent requirements vary by state. Most AI tools include a clear and conspicuous recording consent mechanism to comply with laws like the California Invasion of Privacy Act, which makes it a crime to record a person’s voice without their knowledge or consent. However, legal requirements vary: 11 US states, including California, have “all-party” consent laws, requiring all participants to consent to being recorded, while the remainder have “one-party” consent laws, under which only one participant needs to consent.

Overreliance

Reliance on the accuracy of transcriptions, which may contain mistakes, can lead to these errors becoming part of the official record. Large language models (LLMs) can sometimes “hallucinate,” meaning they generate plausible-sounding but incorrect or nonsensical information. Examples of hallucinations include:

  • Fabricated facts: the model invents details that were not in the original data, such as generating a fictional biography for a real person or making up statistics;
  • Inconsistent information: the model provides contradictory information in different parts of a conversation because it generates each response independently, without a consistent internal state;
  • Misleading associations: the model combines unrelated facts in a way that seems plausible but is incorrect, such as attributing an event to the wrong country or historical period.

Privacy & Confidentiality

AI meeting tools pose significant privacy and security risks to corporate information and those being recorded. The potential for misuse is a pressing concern many organizations have yet to address adequately. As this technology spreads faster than awareness of its risks, immediate action is necessary.

Companies waive attorney-client privilege when a third party is present at a meeting or when information is shared with a third party. What does that mean when an AI tool is “listening” in on a meeting? Unless the tool was created and deployed by the company directly, it is a third-party service provider, and privilege is eroded. More often than not, service providers work with other parties to develop platforms and deploy AI tools. Further, what happens if the transcription is stored or viewed by the service provider, or used to train the service provider’s models?

Online meetings frequently involve discussions of personal data, intellectual property, business strategy, unreleased information, or security vulnerabilities. Leaks of such information can cause legal, financial, and reputational damage. Existing tools to stop leaks, such as data loss prevention systems, may not prevent the data from leaving the organization’s control.

The wheels of justice continue to move in this arena; however, state by state, 20th-century rules are being applied and revised to meet 21st-century situations like this one. The trend has been the continual adaptation of established laws to modern circumstances. In this scenario, the AI tool is a third party intercepting privileged communications, rendering the conversation non-privileged and discoverable. All recordings, transcriptions, and notes become fair game to be turned over to opposing parties in litigation, or to unauthorized third parties in breach of contractual confidentiality obligations.

Unauthorized Access

There is significant potential for unauthorized access to or misuse of recorded conversations. While enterprise solutions may offer some control through administrative safeguards, third-party applications often lack such protections. It may be unclear how or where a provider will store data, for how long, who will have access to it, or how the service provider might use it.

Bounds of Use

Some AI meeting tools and AI platforms generally may allow the provider to ingest and use the data for other purposes, such as training the algorithm. In 2023, users of virtual meeting provider Zoom raised concerns when an update to Zoom’s terms of service suggested Zoom would use customer data to train the company’s AI algorithm. Zoom had to update its terms and clarify how and when it would use customer data for product improvement.

Public-facing generative AI models, like ChatGPT’s free version (formerly 3.5, now 4.0), pose a tangible threat to confidential information: the models could repeat information from one user’s query to the next user asking about something similar. OpenAI has released an enterprise solution that provides additional protections; however, careful setup is critical to ensure those protections are actually in place, and periodic auditing and penetration testing are needed to confirm, among other critical items, that the confidentiality and data protections required by law have not been abandoned.

Companies must also consider the security posture of the AI tool provider itself. What would happen if confidential meeting transcripts became the subject of a cyberattack ransom demand or leaked into the public domain? Depending on the state, the AI company may not be obligated to notify you under state breach notification laws, which means you might not even receive notice of a potential data breach, let alone have visibility into the provider’s security protocols.

In addition, once the AI tool is tied into your system, do you know the bounds of its access or whether there are options to limit such access? Many do not. If the AI tool gains access to unintended portions of a user’s calendar, email, customer management system, document management system, and more, it could also obtain and access other information contained in the system, such as appointments, contacts, financials, and more. In the event of a data breach, that information might become public.

Record Retention

Does the tool provide a means to establish how generated notes, recordings, and transcriptions are shared, stored, and destroyed? If the process is manual, does the AI tool allow manual intervention to comply with data retention obligations, whether contractual, regulatory, or statutory?

Mitigating Risks & Managing Use

As AI tools become more integrated into professional spheres, it is crucial for companies to address privacy and security concerns urgently. Companies should assemble dedicated teams to assess emerging technologies, document policies, and socialize them across the organization.

A comprehensive AI-approved use policy should include the following, at minimum:

  • Authorized use of AI meeting tools
  • Consent requirements
  • Data management and protection protocols
  • Clear consequences for violations

Companies must also disclose such use to employees and clients via an AI Disclosure Policy. Companies currently use these policies as their form of notice prior to obtaining consent, linking them in engagement agreements, contracts, websites, and elsewhere.

Continuous updates to these policies are essential as technology evolves. Additionally, educating employees about potential risks and encouraging a culture of vigilance are critical. Analyzing the terms, conditions, and policies of every tool in use is also imperative, both to ensure the company’s policies align with each tool and to allow stakeholders to make informed decisions about which programs to adopt.

Conclusion

By taking proactive steps, companies can harness the benefits of AI tools while safeguarding sensitive information and maintaining trust with employees and clients. Preventing incidents before they occur, and ensuring that the integration of AI in meetings enhances productivity without compromising privacy and security, can genuinely improve team collaboration. The same tips and analysis apply to top AI tools for construction project management, including OpenSpace, Procore, ALICE Technologies, Doxel, and Buildots, which support efficient project management through data-driven decision-making and real-time progress tracking.

If you have any questions about what you should consider when implementing AI tools in your business, reach out to the HBS Data Privacy & Cybersecurity team.

Disclaimer

This material is provided for informational purposes only. It is not intended to constitute legal advice nor does it create a client-lawyer relationship between Hall Booth Smith, P.C. and any recipient. Recipients should consult with counsel before taking any actions based on the information contained within this material. This material may be considered attorney advertising in some jurisdictions. Prior results do not guarantee a similar outcome.

About the Author

Jade Davis

Partner | Tampa Office

T: 813.329.3890
E: jdavis@hallboothsmith.com

Jade Davis focuses her practice on data privacy, cyber security, and construction matters. Jade provides strategic privacy and cyber-preparedness compliance advice and defends, counsels, and represents companies on privacy, global data security compliance, data breaches, and investigations.