Boards Taking AI More Seriously
Discussions center on where AI lives within the organization, with the CIO seen as having an oversight role while line-of-business managers pursue AI services within a framework
By John P. Desmond, Editor, AI in Business

Boards of directors are beginning to take AI seriously.
The board of a Fortune 200 financial services company recently had a discussion on the future of AI in their company. They discussed generative AI; they discussed AI-related risk, such as confidential material being loaded into generative AI tools; and they were concerned that AI is moving too fast for them to keep up. They concluded with an agreement to have an AI status update at every future board meeting, according to a recent account in The Harvard Business Review.
“All boards must start discussing AI in a similar, serious manner. But they will inevitably need to get much more deeply engaged,” stated the authors of the account, David Edelman and Vivek Sharma. “As the custodians for enterprise risk management, it is time for boards to be prepared as AI steers them into potential danger.” Edelman is a senior lecturer at Harvard Business School; Sharma teaches data science at USC and serves on the boards of JetBlue and Kaiser Permanente.
Questions the board may not have asked for a while should now be back on the table, such as: Who are the emerging competitors we will face? How and where will we compete? What ecosystem do we need to secure? What is our business plan for AI, and what partnership connections may we need to build?
“These are critical questions for which investors understandably expect clear and reassuring perspectives,” the authors state.
Their observation is that boards are focused on these five principles in this new era of AI:
They recognize data as a fundamental competitive asset;
They own AI strategy, implementation and risk at the board level;
They are proactive about their workforce strategy and talent needs;
They aim to shape the industry ecosystem;
They push for transformative and measurable impact.
Some takeaways:
“Because all AI models are based on data, your proprietary first-party data, especially about customers and their behaviors, is pure gold.”
One board decided to have an AI strategy discussion at every board meeting but delegated specific areas to committee oversight: the audit committee for an enterprise risk matrix and a technology committee for tools and products, for example. It expects the CEO to oversee how the C-suite handles AI, and it added AI implementation to the CEO’s goals to hold him or her accountable if those goals are not achieved.
AI development platform foundations will be diverse and need to be defined. “Many of the focused applications that sit on that foundation, such as specialized content generation, chatbot functionality, and optimization models, will come from a mix of off-the-shelf applications, custom developers, and in-house teams,” the authors stated.
One company testing offerings from OpenAI, Microsoft, Google and Anthropic is finding they offer an openly trained foundation as a base, on which the company can further teach and optimize the system. “They give users a partitioned instance of the model, so data that the company uploads from its own systems is protected in a more secure way. The board is quickly realizing that these are just a few of the many conditions to consider,” the authors stated.
Finally, “The job of a board is to protect shareholders’ interests. But because AI is so fundamentally disruptive (strategically, operationally, and competitively), the board has an obligation to its shareholders to drive and oversee the change.”
Where AI Responsibility Lies
Where responsibility for AI lies in the organization is being answered company by company, depending on how IT is structured and where AI makes inroads. While CIOs have traditionally been the executives most likely to lead technology initiatives, the shift to cloud computing in recent years has enabled line-of-business managers to take on more sourcing and procuring of IT systems and services, suggests a recent account in ZDNet.
When a chief data officer (CDO) reports to the CIO, accountability for AI will rest with the CIO. "The CIO is the one executive who has the helicopter view of the different needs of the organization and, of course, AI has the power to impact every single facet of the business," stated Lily Haake, head of technology and digital executive search at recruiter Harvey Nash. “So, we're tending to see the CIO in charge of AI. They want to be the person to control this area and have accountability for it."
AI responsibility is best spread throughout the organization, suggests Gartner VP and analyst Avivah Litan. “AI crosses business units. So, if you're talking about the opportunities or the risks, it's across the lines of business – it's compliance, it's privacy, it's marketing, it's customer service," she stated.
Organizations with some experience using AI for business will identify a number of stages for addressing the integrity of the AI system. Carter Cousineau, VP of data and model governance with Thomson Reuters, emphasizes responsible AI. "When we look at responsible AI, we look at it for all of our use cases," she stated. "So, whether it's in a testing phase or we're looking to actually create a true model and move it into production, there's governance and ethics stages that we put in place."
CIOs generally have strong budget control as well, which helps to keep AI projects moving. And with their oversight of business operations and understanding of technology and data, CIOs are a good choice for leading the AI charge, in the view of Cathrine Levandowski, global head of operations at lifestyle management company Quintessentially.
"Personally, I do think it should be the CIO. And I think that's because I feel like they have an overlapping view of, not only data, but also operations," she stated. "I think it's key that whatever AI decisions you make are operational because they should be positive for the business. We would want to use AI to enhance our operations and efficiency."
Cost of AI Difficult to Project
The move to AI is coming at a higher cost, and one that is difficult to project given the lack of experience with the pricing models and the effort required to deliver on AI in business. (See When the Bill Comes Due, GenAI is Pricey, AI in Business, Sept. 22, 2023)
In an account breaking down the cost challenges of scaling AI for the enterprise, published on Spiceworks, Yonatan Geifman, cofounder and CEO of deep learning company Deci, cited the following:
In general, the cost of generative AI is higher than that of “traditional AI models;”
The cost per inference is not fixed;
The bigger the model, the higher the cost;
Controlling inference cost is the key to profitability.
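The per-inference cost point can be made concrete with a rough back-of-the-envelope estimate: cost scales with token volume and the model's per-token price, so larger models compound quickly at production call volumes. The prices below are hypothetical placeholders for illustration, not any vendor's published rates:

```python
# Minimal sketch of per-inference cost estimation.
# All per-token prices are hypothetical assumptions, not real vendor rates.

def inference_cost(prompt_tokens, completion_tokens,
                   price_in_per_1k, price_out_per_1k):
    """Estimate the dollar cost of a single model call."""
    return ((prompt_tokens / 1000) * price_in_per_1k
            + (completion_tokens / 1000) * price_out_per_1k)

# A larger model (higher per-token price) vs. a smaller one,
# at 500 prompt tokens and 200 completion tokens per call.
large = inference_cost(500, 200, price_in_per_1k=0.03, price_out_per_1k=0.06)
small = inference_cost(500, 200, price_in_per_1k=0.0005, price_out_per_1k=0.0015)

# At 1 million calls per month, the gap between models compounds:
monthly_large = large * 1_000_000
monthly_small = small * 1_000_000
```

Even a toy model like this shows why the cost per inference is not fixed: it moves with prompt length, output length, and model choice on every call, which is what makes budgets hard to project in advance.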
Beyond the financial costs, “The carbon footprint of running large generative AI models is also substantial,” Geifman stated, citing research showing that the IT sector will account for some two percent of global CO2 emissions, with the infrastructure needed to train and deploy machine learning models a significant factor. “The larger these models become, which has become the trend for generative AI, the greater their carbon emissions will grow,” he stated.
Read the source articles and information in The Harvard Business Review, in ZDNet and on Spiceworks.