Optimizing generative AI applications through validation and verification

Refining content to help ensure alignment with business goals and objectives

September 13, 2023

Key takeaways

Generative AI can gather insight and guide decision making, but the technology comes with risks

Companies should strongly consider developing an independent validation and verification program 

Review solutions scoped to specific microservices can establish stronger rules, governance and security


Companies of all sizes and in all industries continue integrating generative artificial intelligence (AI) into their current business strategies and plans. But while this technology can be instrumental in gathering insight and guiding ongoing decision making, the platform has limits, and organizations should implement an independent validation and verification strategy to offset potential risks.

Companies commonly describe AI and generative AI tools and solutions interchangeably, but the two are very different. AI and machine learning decision making are no different from any other coded procedure, with “if this, then that” functions based on specific inputs and desired outputs. Generative AI, on the other hand, brings in language interpretation, context and sentiment. With this added complexity, generative AI applications that support critical business decisions should be subject to additional scrutiny and testing for validity, much as human judgment would be.
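To make that distinction concrete, the short sketch below (in Python, purely illustrative) contrasts a traditional “if this, then that” rule with a generative AI call whose output is only accepted after an independent validation step. The generate_draft and validate_draft functions are hypothetical placeholders for whatever model client and review process an organization actually uses.

```python
# Illustrative sketch only: contrasts deterministic coded logic with a
# generative AI output that must pass independent validation before use.
# generate_draft and validate_draft are hypothetical placeholders.

def rule_based_decision(credit_score: int, income: float) -> str:
    """Traditional "if this, then that" logic: same inputs, same output."""
    if credit_score >= 700 and income >= 50_000:
        return "approve"
    return "refer for manual review"

def validated_genai_output(prompt: str, generate_draft, validate_draft) -> str:
    """Generative output is treated as research input, not a source of truth."""
    draft = generate_draft(prompt)      # language- and context-dependent output
    issues = validate_draft(draft)      # independent validation and verification
    if issues:
        return "Rejected pending review: " + "; ".join(issues)
    return draft

# Example usage with stubbed-in placeholders
result = validated_genai_output(
    "Summarize third-quarter pipeline trends",
    generate_draft=lambda p: "Draft summary of " + p,
    validate_draft=lambda d: [],        # an empty list means no issues found
)
```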

Generative AI models generate content using neural networks that process data in a way similar to how the human brain processes information. While these networks and models constantly evolve and advance, they all pull data from the same materials on the internet and other widely available sources. Because they lack critical thinking ability, generative AI applications may not always produce the correct answer for every business, and the underlying data may include bias.

“There is some level of subjectivity to data or an additional level of context necessary in many cases,” said Dave Mahoney, RSM US LLP director. “Instead of taking a left turn at a decision point, you might need to make a right turn depending on real-world or business context. Nothing has a holistic enough viewpoint to control that within model development and encompass all the necessary variables.”

Providing the right context

Companies must adopt generative AI applications to some extent, as the potential efficiency and insight gains are undeniable. But to make these systems genuinely effective, companies need to address the concern that training data and language models can skew or manipulate the context of the content they produce.

Mahoney provided insight into the importance of taking a deeper look at generative AI content. “Without some sort of independent validation and verification of how companies will reach their intent, how will they ever give constituents complete confidence?” he asked.

Companies should view generative AI not necessarily as an ultimate decision maker but as a powerful research tool. You can arm yourself with information from generative AI tools in a contextual manner, refine your research and develop an expedited process for presenting ideas to the organization that align with all business requirements.

“The fact that you can do that kind of context-based research on a global database of business leaders, bloggers and YouTubers is an incredibly useful resource if utilized properly,” said Mahoney. 

The risks don’t outweigh the opportunities, but companies need a disclaimer to ensure nothing is taken as a source of truth from generative AI systems without independent validation and verification.
Dave Mahoney, Director, RSM US LLP

Back to school

The validation and verification process is similar to how students develop high school or college work. Someone who wrote a textbook may have a specific bias or may not have done enough research. Newspapers present facts, but they may draw subjective conclusions from those facts. When working through projects, students are expected to validate references and resources, and that expectation should not change when leveraging generative AI to inform critical business decisions.

Companies can implement verification and validation processes in multiple ways, depending on how much they rely on generative AI solutions. Review solutions typically take one of two forms: a catch-all approach covering generative AI-generated data across the entire organization, or a narrower scope focused on specific microservices. The latter is generally the more effective strategy.

Tailoring your approach

Combining verification and validation into a single enterprise-wide process will likely result in many overlapping data and decision trees that vary based on how your business sells and delivers services, and even how internal personnel are trained. However, with an enterprise view of specific microservices such as sales, marketing and the customer experience, your company can control insights at the lowest level and establish clear rules, governance and security. The smaller scale makes processes more manageable.

In addition, with a microservices-based verification and validation approach, you can implement only the integrations you need, tied to specific use cases. Your organization can go through a responsible adoption and development process to consider factors such as why the integration is necessary, what data should be involved in the integration, what the anticipated outcomes may look like, and what rules and processes are necessary for models to maintain the desired level of control when services become connected.
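As a hypothetical illustration of that responsible adoption process, the short Python sketch below records the answers to those questions for a single microservice integration and gates the connection on all of them being answered. The names and example values are assumptions for illustration, not part of any specific product or RSM methodology.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GenAIIntegrationRecord:
    """Answers to the responsible-adoption questions for one microservice integration."""
    microservice: str                # which service is being connected
    purpose: str                     # why the integration is necessary
    data_in_scope: List[str]         # what data is involved in the integration
    anticipated_outcomes: List[str]  # what the anticipated outcomes look like
    governance_rules: List[str]      # rules and processes that maintain control

def ready_to_connect(record: GenAIIntegrationRecord) -> bool:
    """A minimal gate: every adoption question needs an answer before services connect."""
    return all([
        record.purpose.strip(),
        record.data_in_scope,
        record.anticipated_outcomes,
        record.governance_rules,
    ])

# Example with hypothetical values for a marketing microservice
marketing = GenAIIntegrationRecord(
    microservice="marketing-content",
    purpose="Draft first-pass campaign copy for human review",
    data_in_scope=["public product descriptions"],
    anticipated_outcomes=["faster drafts", "consistent brand tone"],
    governance_rules=["human review before publication", "no customer data in prompts"],
)
assert ready_to_connect(marketing)
```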

As organizations continue to develop generative AI strategies and determine how the technology aligns with business goals, potential risks and controls must be part of the equation. With an independent verification and validation strategy, you can feel more confident that you are governing generative AI effectively and that company data is not exposed more than necessary.

“When running a business, you need to have control and be accountable to what you’re putting out in the world to clients, people and regulators,” said Mahoney. “Building generative AI microservices that you can govern easily and effectively, develop quickly, iterate and fix fast, and get into a secure development operations procedure that is agile can help you meet your goals and do the things that you need it to do safely.”  
