Artificial intelligence (AI) has become a strategic priority for many organizations looking to drive digital transformation. However, successfully adopting AI requires careful planning and execution. This is where an AI Center of Excellence (COE) can provide tremendous value. AI COEs are cross-functional teams that provide leadership, governance, and best practices for an organization's AI initiatives.
In this blog post, we will explore key considerations for building an effective AI COE using the example of a fictional gig economy company in the rides and deliveries space.
Here is the TL;DR version of the post. The flowchart below gives a high-level overview of what such an endeavor entails and looks like. While this is not an 'etched in stone' plan, nor is it exhaustive, it presents considerations you are well advised to follow in setting up a productive COE experience for yourself and your teams. In the ensuing sections we will expand upon each of these areas, starting with the AI Center of Excellence.
Figure 1: COE Overview Flowchart
Defining the Role of the AI COE
The first step is clearly defining the role and scope of the AI COE within the organization. Some questions that may help guide the thinking include: What are the COE's responsibilities? Who are the stakeholders? What does success look like?
For our fictional company, the AI COE could take on 5 primary responsibilities:
Developing an AI strategy aligned with business goals
Providing AI expertise to enable teams to build capabilities
Facilitating data management, model development, and ethical AI practices, along with everything else needed to take a platform approach to AI application development
Monitoring AI risks and measuring ROI of AI investments
Mentoring, advising, and building technical thought leadership across teams, as well as amplifying that thought leadership through published papers and participation in industry events
The COE would collaborate closely with internal AI teams, platform engineers, product managers, and support staff. Key stakeholders range from business leaders setting the AI vision to data scientists building the models.
Next, define what success looks like. For our fictional company, success for the COE may include metrics like:
Number of AI projects delivered
Improved predictive accuracy of models
Faster development cycles for AI applications
Increased revenue and cost savings from AI
Papers published, talks given, panels participated in
With a clear mandate established, the COE can start executing on its mission. Here it will be important to focus on a few key areas and iterate quickly. Do not aspire to build everything perfectly on day one; rather, build as needed and as criticality demands.
Assembling the Right Team
Staffing an effective AI COE requires a diverse mix of skill sets. Key roles could include:
AI researchers to produce original innovations
Data scientists to apply state-of-the-art techniques
Data engineers to build data pipelines
Machine learning engineers to develop models
Subject matter experts in domains like operations, marketing, finance, product, and support, to identify use cases
Project managers to coordinate cross-functional collaboration
Cloud architects to implement robust AI infrastructure
Ethics advisors to ensure responsible AI practices
Don’t be alarmed by the exhaustive list above! Our fictional company may want to start small with 5-10 core members and grow as the COE practice matures. Also, not all of the functions and domains will be needed from the get-go. Start small and iterate is the mantra here!
Defining Responsibilities and Goals
The responsibilities of the AI COE can be grouped into a few key areas:
Strategy - Developing an enterprise AI strategy aligned to business goals and guiding principles. Activities include assessing the competitive landscape, identifying high-impact use cases, and creating AI adoption roadmaps.
Expertise - Serving as a center for AI thought leadership. The COE builds internal capabilities through training programs and recruiting top talent. It also provides consulting services to teams on AI best practices.
Execution - Assisting AI application development lifecycles. This includes facilitating access to data, helping develop models, establishing best practices for monitoring systems post-deployment, and driving continuous improvement.
Governance - Establishing guardrails for ethical, transparent, and responsible AI. The COE creates standards for factors like data quality, model explainability, and bias testing, in addition to ensuring that engineering and security concerns such as data security and PII redaction are taken into consideration.
Value - Quantifying the ROI of AI investments and benchmarking performance. The COE ties AI initiatives to business KPIs and helps justify budgets.
For our fictional gig economy firm, sample goals could be:
Reduce customer support costs by 10% through an AI chatbot
Improve demand forecasting accuracy for gig economy supply by 15% using ML predictions
Cut fraud losses by 30% using real-time anomaly detection
Enable or augment self-driving capabilities to reduce driver costs
The COE’s priorities should align directly with the company's goals, AI needs, and opportunities.
Defining Use Cases
To deliver tangible value, the AI COE needs to identify high-potential AI applications and execute targeted projects. Relevant use cases for our fictional company could include:
Customer Service - Build an AI chatbot to handle common support queries and route complex issues to human agents. This can reduce call volume and operational costs while also improving customer experience.
Core Delivery Product - Apply ML to demand forecasting to optimize routing and staffing for delivery orders. More accurate predictions improve service levels and operational excellence.
Core Ride Hailing Product - Leverage AI & ML for dynamic pricing and dispatch of rides. Factoring real-time supply and demand can maximize revenue.
Fraud Detection - Implement real-time anomaly detection models to identify fraudulent transactions and abuse by users. This minimizes leakage and also frees up time to tend to real customer issues. A minimal sketch of what such a detector might look like follows this list.
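To make the fraud detection use case a bit more concrete, below is a minimal sketch of what a real-time anomaly scorer might look like, using an unsupervised Isolation Forest from scikit-learn. The transaction features, schema, and score threshold are hypothetical illustrations, not a prescribed design.

```python
# Minimal sketch of real-time fraud scoring with an unsupervised anomaly detector.
# The feature names, transaction schema, and score threshold are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Train offline on recent transactions assumed to be mostly legitimate.
# Each row: [order_amount, rides_last_24h, promo_credits_used, account_age_days]
historical = np.array([
    [18.50, 2, 0.0, 410],
    [42.00, 1, 5.0, 230],
    [9.75, 3, 0.0, 980],
    # ...many more rows in practice
])
model = IsolationForest(contamination=0.01, random_state=42).fit(historical)

def looks_fraudulent(txn: dict) -> bool:
    """Score a single incoming transaction; True means flag for review."""
    features = np.array([[
        txn["order_amount"],
        txn["rides_last_24h"],
        txn["promo_credits_used"],
        txn["account_age_days"],
    ]])
    # decision_function returns lower scores for more anomalous points.
    return model.decision_function(features)[0] < -0.1  # threshold tuned offline

# Example: a day-old account burning a large promo credit on a big order.
print(looks_fraudulent({"order_amount": 250.0, "rides_last_24h": 14,
                        "promo_credits_used": 200.0, "account_age_days": 1}))
```

In production this would sit behind a low-latency scoring service, with the threshold tuned against labeled fraud outcomes rather than picked by hand.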
The COE should work closely with business stakeholders and technical teams to translate use cases into executed AI projects. Starting with quick wins is advisable to demonstrate value.
Establishing Best Practices and Supporting Teams
To maximize its impact, the AI COE needs to codify best practices and standards to be adopted across the organization. Examples include:
Data management - Defining protocols for data collection, storage, security, access control, and governance. This provides high-quality datasets for model development.
Model development - Establishing frameworks and tools to accelerate building, evaluating, deploying, and monitoring models. This increases productivity.
MLOps - Creating repeatable pipelines for managing ML models post-deployment. This ensures continuous model improvement (see the sketch after this list).
Responsible AI - Formulating guidelines for transparent, fair, and accountable AI. This builds trust with stakeholders.
Vendor strategy - Creating evaluation criteria and an approved list of AI product and service providers. This streamlines procurement.
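As one illustration of the kind of standard the MLOps practice above might codify, here is a minimal sketch of a scheduled post-deployment drift check that could trigger retraining. The monitored feature, the threshold, and the retraining hook are assumptions made for illustration, not a definitive pipeline.

```python
# Minimal sketch of a post-deployment data drift check an MLOps pipeline might run nightly.
# The monitored feature, the p-value threshold, and the retraining hook are hypothetical.
import numpy as np
from scipy.stats import ks_2samp

DRIFT_P_VALUE = 0.01  # below this, treat the live distribution as drifted

def has_drifted(training_values: np.ndarray, live_values: np.ndarray) -> bool:
    """Two-sample Kolmogorov-Smirnov test of live traffic against training data."""
    result = ks_2samp(training_values, live_values)
    return result.pvalue < DRIFT_P_VALUE

def nightly_drift_job(training_values: np.ndarray, live_values: np.ndarray) -> None:
    if has_drifted(training_values, live_values):
        # In a real pipeline this would page the owning team or kick off a retraining run.
        print("Drift detected on delivery_eta_minutes: scheduling retraining")
    else:
        print("No significant drift detected")

# Example with synthetic data: live ETAs have shifted upward relative to training data.
rng = np.random.default_rng(0)
nightly_drift_job(rng.normal(30, 5, 10_000), rng.normal(35, 5, 10_000))
```

The same pattern extends to monitoring prediction quality and feature pipelines, which is where repeatable, COE-defined standards pay off.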
The COE should draw from both internal experience and external benchmarks to develop best practices. Regularly revisiting them is key as methods evolve, especially in this fast-changing field.
Measuring and Communicating Success
To sustain support for the AI COE over time, it is critical to measure progress and demonstrate ROI across initiatives. Quantifiable metrics to track include:
Key business KPI improvements from AI adoption. This is critical for leadership to embrace AI and, more importantly, to see value in the COE. Across cost savings, customer experience improvements, and increases in share of wallet, our fictitious gig economy company should look at how AI can be leveraged to move these metrics in a quantifiable way.
Number of use cases identified and projects completed. This is more a measure of productivity. A COE may often be big on plans but poor on execution; with the right checks and balances, a productive outcome can be achieved.
Cycle time reductions in developing AI applications. From an engineering perspective, efficiency is paramount, not only to reduce costs but also to increase the pace of iterative innovation. Streamlining and standardizing AI application development will smooth out friction points.
Improved model accuracy, explainability, and fairness. A direct offshoot of the COE's efforts, and almost an expectation, is being able to rely on it for guidance and best practices in key areas such as model accuracy, reduction of hallucinations, explainability, and the absence of bias.
The COE needs to transparently communicate results to stakeholders through reports and sessions. This highlights the tangible benefits being delivered to the organization. At our fictitious enterprise, an AI steering committee has been set up where the COE presents once a quarter.
The Road Ahead
Building an effective AI COE takes thoughtful planning, resources, and 12-18 months for full establishment. The COE must align to business needs, assemble experts, identify use cases, codify best practices, and measure ROI. With a solid foundation, an AI COE can transform an organization into an AI-First enterprise delivering sustainable value. The journey requires collaboration across teams but the destination is well worth the effort!
~10xManager
There’s a lot to like about this COE approach for AI strategy and execution. The way a structure like this could create a context for cross-functional collaboration and opportunities for strategic alignment are two of the strongest benefits I see.
I would suggest that in the structural diagram near the beginning the AI strategy and vision might be better represented beneath the AI COE to better reflect the idea that the COE is best positioned to leverage data and learnings to influence strategic direction and outcomes. Perhaps splitting the vision to come from leadership and the strategy to be generated collaboratively within the COE? Would that be a helpful adjustment?