
Transparency and Compliance: Best Practices When Deploying AI



Artificial intelligence (AI) is a rapidly growing field, unlocking new and innovative capabilities seemingly every day. While some AI technology has existed for years, recent developments in generative AI have propelled the field forward and made it today's "must-have" technology. This rapid pace of advancement presents challenges for both users and business leaders.

Organizations and end users alike need to better understand what data AI-powered solutions are trained on and how their own data is being used, so they can identify the privacy and ethical issues that could arise. At the same time that these new technologies are developing, new regulations and guidelines around AI are emerging all over the world. Navigating this patchwork of laws will require more than just deploying new technologies; IT, security, and privacy leaders will also need to address ethical and regulatory concerns.

Maintaining trust and security will be paramount, and it requires transparency, governance, and compliance. With that in mind, this article explores best practices for deploying AI both inside your organization and for your customers.

Transparency: Are You Clear About How You’re Using AI?

For both customers and end users in your organization, there must be clarity about how you're using AI. AI is opening up new frontiers, but given its complexity, how it works may not be well understood. That makes transparency exceedingly important for maintaining your customers' and end users' trust in your product.

AI transparency means helping all stakeholders and customers understand how the AI solutions work, including:

  • Where and how the AI is used
  • How the AI processes data
  • How the AI generates output and how such outputs are used
  • Whether the AI makes decisions and where human review is implemented
  • What data it’s trained on and where that data comes from
  • How user data is secured
  • Customer choice in using AI features

Transparency also means being clear about what AI-powered features can and can't do, and about any risks they may present. It's also important to let users know when they're engaging with AI: in a customer service environment, for example, it should be clear when a customer is interacting with a bot rather than a human.

Without a reasonable degree of transparency, some organizations will hesitate to invest in AI solutions or may take on unknown risks, especially if they don't have a good understanding of how the AI works, or if the expectations set by vendors are overly optimistic. Security is another area where transparency matters. For example, end users need assurance that none of their sensitive personal information will be saved, used as training data, or shared with third-party AI providers for purposes other than providing the feature. With AI still so new, this assurance is crucial to paving the road for adoption.

Compliance: Are You Monitoring the Latest Regulations?

As AI technologies continue to develop, so too do regulations around them. Organizations need to make sure that they're keeping up with the latest laws and regulations to ensure ethical AI usage and proper security and privacy protections. The regulations, as well as standards like the NIST AI Risk Management Framework, can help establish the foundation for policies, standards, and best practices around transparency, data management, risk assessment, and monitoring of AI-powered solutions.

It is important to recognize that existing data protection laws, such as GDPR and CPRA, already contain provisions that apply to AI in the context of personal data processing. While more is on the way nationally and internationally, we must take existing requirements and emerging standards into account when we develop or use AI. 

This applies to contact centers: generative AI used for automated self-service will need to follow regulations on employees' privacy and on gathering, saving, and using customer conversation data. As another example, in the healthcare vertical, a prime market for AI across the entire spectrum of patient care, hospitals and other covered entities will have to comply with HIPAA and follow regulatory frameworks for AI in healthcare to address risks and requirements when they use AI algorithms for medical diagnoses and treatments.

Regardless of the business you are in, deploying AI-powered products or services requires an understanding of the relevant regulations and ensuring the products and services are compliant. Failure to do so can result in fines, penalties, and a loss of customer trust. Organizations should build specific AI controls into their compliance processes, from product reviews to third-party risk management.

The Challenges Ahead

Several challenges lie ahead as both AI technology and related regulations continue to advance. In fact, the speed at which both are developing is one of the biggest challenges overall. Organizations need to innovate continually and be ready to test and roll out new updates quickly.

At the same time, organizations need to stay aware of the shifting regulatory landscape and ensure they continue to meet its requirements. Because different countries and states will have different regulations, organizations will need guardrails that direct innovation without stifling it and without creating new risks. This requires strong cross-functional collaboration not only between privacy and security teams, but also with product and engineering.

Businesses also need to ensure they’re using AI ethically. The news is filled with reports of AI misuse, such as using licensed works for training generative AI without permission or compensation, or bias in AI systems resulting in discriminatory hiring practices. Oversight and ethical standards will be important for navigating the AI landscape now and in the future.

Organizations utilizing AI must take the time to assess its risks and opportunities: understanding the benefits of the technology, what different AI-driven features and solutions can do for them, and what problems those solutions will help them solve. To sustain that success, however, they will also need to develop best practices and build transparency and compliance into their processes.

To learn more, read our recently published paper, Navigating AI With RingCentral.

Originally published Feb 08, 2024, updated Mar 22, 2024
