Companies that don’t master speed and efficiency cannot grow. Many organizations are turning to AI to help them get faster—73% of organizations have increased their funding for AI initiatives in 2024, according to a recent Gartner survey. But in the flood of AI technologies, it’s difficult to know which tools deliver on their promises and which are just hype. It’s critical for organizations to evaluate each tool carefully; a poorly built AI tool can cause more harm than good.
Cue the rise of the AI committee, a panel of internal stakeholders that evaluates the AI needs of the business. They determine what acceptable risk looks like when implementing AI and the outcomes they’re looking for from different tools. So when marketing inevitably asks for the latest AI content generator, they’re the ones who have to determine whether it aligns with organizational goals.
To make that call, there are a few things AI committees need to verify. They should understand which model or models the AI tool uses and how those models are trained, what quality control measures the provider has in place, and how the provider secures customer data.
Here are some questions AI committees can use to start a conversation with a provider.
Training Data
1. What model or models does your tool use, and what kinds of data are those models trained on?
Why it matters: It’s important to know what models and datasets the tool is built on. Do the models draw on real-time information and historical records, or are they trained on static data? Understanding how a model is trained helps AI committees understand how a tool “thinks,” which can help them set up the right guardrails for the organization’s use case.
Example: An AI tool for retail may use a model trained on both historical sales data and current market trends, helping the model predict future sales by spotting emerging trends. Without real-time updates, though, the model’s “thinking” can quickly become outdated, worsening customer experience.
2. Do you use customer data in any of your training models? How are you making sure there’s no cross-contamination of data?
Why it matters: Vendors may collect customer data to improve their models and keep them up to date, but AI committees should have a solid understanding of how and when their data might be used. They should know who will have access to their data and if/how that data is being obfuscated to assess whether the model may “accidentally” leak privileged information.
Example: A pharmaceutical manufacturer uses an AI tool to help test and refine its drug formulas. The vendor collects the manufacturer’s formula data to retrain the underlying model. A future version of that model becomes cross-contaminated and exposes the proprietary formula to a competitor using the same model.
3. Can the model you’re using access real-time data (internal or external), and what controls do you have in place to protect privileged data?
Why it matters: Access to real-time data can help AI models stay relevant and accurate, but AI committees need to know how and when those models access that data. For example, are there controls to align data retrieval with user permissions? If not, the model may produce outputs using data a user is not supposed to see. Committees should also ask how the model authenticates user permissions—is it using an administrative account or acting on behalf of the user?
Example: A health care AI tool leverages databases filled with sensitive patient details. Without proper access control, the model can be exploited to harvest confidential patient information, which can harm the health care organization’s reputation or even endanger the patient.
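To make the permissions question concrete, here is a minimal sketch of what permission-aligned retrieval can look like. The names (Document, retrieve_for_user) and the naive keyword match are illustrative assumptions, not any vendor’s actual API; the point is that the permission filter runs before any text reaches the model.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    content: str
    allowed_roles: set = field(default_factory=set)

def retrieve_for_user(query: str, user_roles: set,
                      index: list[Document]) -> list[Document]:
    """Return only documents the requesting user is entitled to see.

    The permission check runs *before* relevance ranking, so the model
    never receives text the user was not authorized to read.
    """
    visible = [d for d in index if d.allowed_roles & user_roles]
    # Naive keyword match stands in for a real vector-search ranking.
    return [d for d in visible if query.lower() in d.content.lower()]
```

A tool that instead queries everything through a single administrative account has to filter outputs after the fact, which is exactly where leaks happen.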
Feedback and Quality Control
4. Do you regularly audit AI-generated content or evaluate the model’s decision-making? If so, what steps are involved in this review process, and is it possible to opt out?
Why it matters: Human review, whether from data science teams, focus groups, or internal testing, ensures AI content and decisions are accurate and safe. However, AI committees from organizations that handle sensitive data, like the health care example above, may want to opt out of human review.
Example: An energy company uses an AI system to oversee and enhance their operations, but the company works with data that requires security clearance. They may want to decide when human review is appropriate or opt out altogether.
5. How do you ensure accurate outputs and mitigate hallucination?
Why it matters: AI models can hallucinate for many reasons, like poor-quality training data, lack of access to real-time data, poorly engineered prompts, or a combination of these. AI committees should ask about the quality of the training data, how the model connects to internal sources, and whether the vendor has the expertise to guarantee high-quality prompting.
Example: A medical AI assistant cross-references symptoms described by a patient with a verified medical database. A poorly engineered prompt causes the AI assistant to use the wrong data in its output, increasing the risk of incorrect advice.
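Grounding is one mitigation committees can ask vendors about: the model is constrained to verified reference passages and instructed to refuse when they do not cover the question. Below is a minimal sketch of such a prompt; the wording is a hypothetical example, and grounding reduces rather than eliminates hallucination risk.

```python
def build_grounded_prompt(question: str, passages: list[str]) -> str:
    """Constrain the model to answer only from verified reference passages."""
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using ONLY the numbered passages below. "
        "Cite the passage number for every claim. "
        "If the passages do not contain the answer, reply 'insufficient data'.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
```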
6. How can we verify the reasoning behind the tool’s outputs? Are the underlying mechanisms visible?
Why it matters: AI committees should make sure that every step of an AI’s process—including data retrieval, required inputs, executed actions, and observations—is clearly displayed so the organization can identify missteps.
Example: A credit-scoring AI tool might transparently show how it weighs factors like payment history, credit utilization, and credit history length. This transparency shows the model’s work and helps users understand decisions and provide feedback on any potential misjudgments.
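As a rough illustration, a transparent tool might expose something like the step-level trace below. The field names are assumptions made for this sketch; what matters is that every retrieval, input, action, and observation is recorded somewhere reviewers can replay it.

```python
import json
import time

class StepTrace:
    """Append-only log of each step an AI tool takes."""

    def __init__(self):
        self.steps = []

    def record(self, kind: str, detail: dict):
        # kind is one of: "retrieval", "input", "action", "observation"
        self.steps.append({"ts": time.time(), "kind": kind, **detail})

    def export(self) -> str:
        return json.dumps(self.steps, indent=2)

trace = StepTrace()
trace.record("retrieval", {"source": "payment_history", "rows": 24})
trace.record("action", {"decision": "score=712"})
print(trace.export())
```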
7. Can the AI model’s decision-making process be explained in understandable terms? Do you provide tools or documentation to help users interpret the AI’s outputs?
Why it matters: The processes of an AI model should be not only visible but also easy to understand. Organizations need to be able to justify AI-driven decisions: Besides fostering trust among users and stakeholders, explainability is sometimes a compliance requirement.
Example: The health care organization in the example above may need to explain to a regulatory body how and why their model gave a patient incorrect advice. Clear and easy-to-understand records of AI processes can help the organization with its reporting.
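For instance, if a vendor supplies per-factor contributions (as attribution methods such as SHAP can produce), even a small formatting layer can turn them into plain language. The sketch below assumes hypothetical contribution values; it illustrates explainable output, not a scoring method.

```python
def explain_decision(contributions: dict[str, float]) -> str:
    """Translate per-factor contributions into plain-language lines,
    ordered by how strongly each factor moved the result."""
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = []
    for factor, points in ranked:
        direction = "raised" if points > 0 else "lowered"
        lines.append(f"- {factor} {direction} the score by {abs(points):.0f} points")
    return "\n".join(lines)

print(explain_decision({
    "payment history": 40.0,       # hypothetical contribution values
    "credit utilization": -25.0,
    "credit history length": 10.0,
}))
```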
Ownership and Data Security
8. Who owns the data processed by the AI tool and any derivatives of that data? Are there any restrictions on how we can use the outputs?
Why it matters: Clarifying data ownership and usage rights prevents future disputes and ensures that the organization can use the AI’s outputs as needed.
Example: A marketing team needs to know if they can freely use content generated by an AI tool for advertising campaigns.
9. Describe the data security and privacy controls in your AI tools and models.
Why it matters: AI committees should look for robust encryption methods, access controls, and compliance with regulations like GDPR or HIPAA to protect data from unauthorized access and breaches.
Example: An AI system in health care must comply with HIPAA regulations by encrypting patient data in transit and at rest and by restricting data access to authorized users.
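Encryption at rest is only one control on that checklist, but it is easy to picture. This sketch uses the Fernet recipe from the widely used Python cryptography package; in production, the key would live in a managed key store rather than next to the data it protects.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# In production the key comes from a KMS or secrets manager,
# never generated and stored alongside the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b'{"patient_id": "12345", "diagnosis": "..."}'
encrypted = cipher.encrypt(record)          # what gets written to disk
assert cipher.decrypt(encrypted) == record  # only key holders can read it
```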
10. How do you identify and correct bias or harmful behavior in your models?
Why it matters: AI models produce outputs based on the data they’re trained on, and if that training data has flaws, the AI can inadvertently perpetuate those flaws. Providers need to show AI committees that they have mechanisms to identify, monitor, and correct these biases.
Example: A hiring AI system is adjusted to avoid gender or racial biases by removing biased training data and incorporating fairness constraints during the training process.
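One simple check a vendor might run (among many fairness metrics) is demographic parity: comparing selection rates across groups. Here is a minimal sketch with made-up data; real audits use multiple metrics and far larger samples.

```python
from collections import defaultdict

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Selection rate per group from (group, was_selected) pairs."""
    totals, chosen = defaultdict(int), defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        chosen[group] += was_selected
    return {g: chosen[g] / totals[g] for g in totals}

def parity_gap(rates: dict[str, float]) -> float:
    """Spread between best- and worst-treated groups; near 0 suggests parity."""
    return max(rates.values()) - min(rates.values())

# Made-up outcomes purely for illustration.
rates = selection_rates([("A", True), ("A", True), ("A", False),
                         ("B", True), ("B", False), ("B", False)])
print(rates, parity_gap(rates))
```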
11. How do you keep copyrighted materials from appearing in your models or their outputs?
Why it matters: If an AI model uses copyrighted materials, it could lead to legal issues. Providers should have mechanisms to detect and prevent the use of copyrighted material in training data and generated outputs.
Example: A music composition AI tool has checks in place to ensure that any generated melodies or lyrics do not closely resemble existing copyrighted works, thereby avoiding potential copyright infringement.
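One detection technique a provider might describe is shingle (n-gram) overlap against a reference corpus: long word-for-word overlaps flag near-verbatim copying. The sketch below is a simplified assumption of how such a check works; production systems pair it with fingerprinting and licensed-corpus audits.

```python
def ngram_overlap(candidate: str, reference: str, n: int = 8) -> float:
    """Fraction of the candidate's n-word shingles that also appear
    verbatim in the reference text. High overlap on long shingles
    suggests near-verbatim copying and warrants review."""
    def shingles(text: str) -> set:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    cand = shingles(candidate)
    if not cand:
        return 0.0
    return len(cand & shingles(reference)) / len(cand)
```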
12. How do you ensure minimal downtime and data loss in case of failures? What are your disaster recovery and business continuity plans?
Why it matters: AI committees should confirm that the vendor has a tested plan for outages and disasters, including recovery time and recovery point objectives, backup frequency, and failover procedures.
Example: A manufacturing company uses an AI tool for supply-chain management. The vendor that provides the tool is hit by a hurricane that causes outages, resulting in halted production. A vendor with a tested failover plan could restore service from another region with minimal data loss.
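Continuity planning is mostly the vendor’s job, but committees can also ask what the client side sees during an outage. The sketch below assumes hypothetical primary and failover endpoints; it shows the kind of graceful degradation a vendor with a multi-region plan makes possible.

```python
import time
import urllib.request

# Hypothetical endpoints; a vendor with a real continuity plan would
# document its failover regions and recovery objectives.
ENDPOINTS = [
    "https://primary.vendor.example/api/forecast",
    "https://failover.vendor.example/api/forecast",
]

def fetch_forecast(payload: bytes, retries: int = 2) -> bytes:
    """Try the primary endpoint, then the failover region, with backoff."""
    last_error = None
    for url in ENDPOINTS:
        for attempt in range(retries):
            try:
                req = urllib.request.Request(url, data=payload, method="POST")
                with urllib.request.urlopen(req, timeout=5) as resp:
                    return resp.read()
            except OSError as err:  # covers timeouts and URLError
                last_error = err
                time.sleep(2 ** attempt)
    raise RuntimeError("all endpoints unavailable") from last_error
```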
Conclusion
AI committees face a difficult task—they have to balance security with speed while facing pressure from business stakeholders eager to put AI to work. This involves more than just a quick look at AI tools; it requires a thorough assessment of each one against specific risks and business goals. Through careful selection and implementation, AI committees not only protect their organizations from potential issues but also unlock AI’s ability to boost speed and efficiency.