When Procuring AI, an AI Advisor May Be The Way to Go
Someone with a track record of implementing AI projects, familiar with risk assessment, savvy about data infrastructure and able to assess tradeoffs in contracting for “AI as a service”
By John P. Desmond, Editor, AI in Business

Business executives in a position to shop for AI to help the company could use an AI advisor.
That’s a person with some track record of success in implementing AI at their own company or helping another do it. This important relationship might best be cultivated through a network of professionals and business leaders tuned into the value of AI.
“If you’re a business leader, grow your professional network and create meaningful personal and professional connections with peers who are finding success in their AI adoption journey,” advises Mattia Ciollaro, an assistant professor at Carnegie Mellon University and a machine learning engineer at PayPal, in a recent account in Information Week. “Then, leverage your network to source high-quality AI advisors for your own business.”
An effective AI advisor would have a track record in building or managing AI projects, and a relevant educational background. Many developers with a machine learning background may know how to make a model work; that may not be enough. “You want to find an experienced technician who has delivered projects successfully,” recommends Ryan Ries, data and machine learning (ML) practice lead with managed and professional services firm Mission Cloud. “You need to dig in with the advisor to make sure they’re not just repeating buzzwords.”
It’s likely to be easier to find an advisor with experience launching an AI-based product than one expected to work on AI infrastructure and models, which could require a PhD scientist who has delivered multiple projects, according to Ries.
Ciollaro suggested the search for an AI advisor should cover these three points:
Can the advisor help you determine if your data infrastructure is ready for integration with modern AI systems, or whether additional development is required to reach that stage;
Can the advisor help you evaluate the tradeoffs between buying “AI as a service” and putting together an in-house AI team;
Can the advisor help establish and track meaningful metrics to measure the value AI is, or is not, generating for the business? (A rough sketch of this kind of tracking follows the list.)
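To make that last point concrete, here is a minimal sketch in Python of the kind of before-and-after tracking an advisor might help put in place. The use case, metric names and figures are illustrative assumptions, not drawn from Ciollaro’s account.

```python
from dataclasses import dataclass

@dataclass
class PeriodMetrics:
    """Hypothetical business metrics for one reporting period (illustrative only)."""
    tickets_handled: int
    total_handling_cost: float   # dollars
    avg_resolution_hours: float

def cost_per_ticket(m: PeriodMetrics) -> float:
    return m.total_handling_cost / m.tickets_handled

def relative_change(before: float, after: float) -> float:
    """Negative values mean improvement for cost- and time-style metrics."""
    return (after - before) / before

# Made-up numbers for a support-desk use case, before and after an AI assistant.
baseline = PeriodMetrics(tickets_handled=4000, total_handling_cost=120000.0, avg_resolution_hours=9.5)
with_ai = PeriodMetrics(tickets_handled=4200, total_handling_cost=105000.0, avg_resolution_hours=6.8)

print(f"Cost per ticket: {cost_per_ticket(baseline):.2f} -> {cost_per_ticket(with_ai):.2f} "
      f"({relative_change(cost_per_ticket(baseline), cost_per_ticket(with_ai)):+.1%})")
print(f"Avg resolution hours: {baseline.avg_resolution_hours} -> {with_ai.avg_resolution_hours} "
      f"({relative_change(baseline.avg_resolution_hours, with_ai.avg_resolution_hours):+.1%})")
```

The arithmetic is trivial; the discipline is the point. An advisor worth engaging will insist that baseline measurements like these exist before the AI system ships, so the business can tell whether it is getting value.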
World Economic Forum Guidelines Emphasize Ethics of AI
The World Economic Forum recently released guidelines for procuring AI solutions, in a report written by Mudit Kumar, VP of consulting at GEP, a company offering supply chain and procurement software, and Cathy Li, head of AI and data and a member of the executive committee of the World Economic Forum.
“While nearly all C-suite executives view AI as critical, most acknowledge the struggle to navigate the procurement and deployment of AI technologies,” the authors state.
The key factors the authors suggest that enterprises consider include:
Business strategy alignment;
Technology and data integration;
Ethics alignment;
Risk assessment;
Agile and collaborative AI system integration.
The authors put ethics at the center of the AI procurement process. They suggest, “The foundation of responsible AI acquisition lies in a holistic framework with ethics and sustainability at the core, driving business goals, commercial objectives and data strategy, all strongly supported by an ongoing governance, compliance and risk strategy.”
Developers need to be very conscious of the risk of the system being corrupted by biased data. “Designers could unknowingly introduce bias into the model, or biases may enter the system through a training data set or during training interactions with end users,” the authors stated, adding, “Hence, eliminating bias should be one of the top priorities during the selection and deployment of an AI system.”
A savvy advisor will plan for remediation when things go wrong. “An ideal partner will have a mitigation plan in case the AI solution starts producing biased outcomes,” the authors advise. “Conversely, these biased outcomes can be pre-empted if the supplier prioritizes diversity and inclusivity within the team from the initial stages.”
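One concrete form such a mitigation plan can take is a recurring disparity check on the system’s decisions. The short Python sketch below is a simplified illustration, not anything prescribed in the WEF report: it computes per-group selection rates from hypothetical model decisions and a disparate-impact ratio, where values well below the commonly cited four-fifths (0.8) rule of thumb would trigger the kind of remediation steps the authors describe.

```python
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group_label, predicted_positive) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, positive in records:
        counts[group][0] += int(positive)
        counts[group][1] += 1
    return {group: pos / total for group, (pos, total) in counts.items()}

def disparate_impact_ratio(rates):
    """Lowest group selection rate divided by the highest; 1.0 means parity."""
    return min(rates.values()) / max(rates.values())

# Made-up model decisions for two hypothetical groups -- not real data.
decisions = ([("group_a", True)] * 60 + [("group_a", False)] * 40
             + [("group_b", True)] * 35 + [("group_b", False)] * 65)

rates = selection_rates(decisions)
print(rates)                                    # {'group_a': 0.6, 'group_b': 0.35}
print(f"{disparate_impact_ratio(rates):.2f}")   # 0.58 -- well under the 0.8 rule of thumb
```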
Organizations trying to roll out AI systems on a wide scale are blazing trails, with few guideposts. “The guidance on industry-standard practices and ways to minimize organizational risks while adopting AI technologies is very limited, and there’s a pressing need for a responsible AI procurement toolkit,” the authors state.
Parker Poe Lawyers Point to Best Practices
Lawyers will suggest that a company update its risk management framework to reflect procurement best practices when building and rolling out AI systems.
“Failing to do so can lead companies to adopt what seems like an AI panacea, but is actually a Pandora’s box of regulatory enforcement and litigation risks,” state the authors of a recent account from the blog of Bloomberg Law. The account was written by three lawyers with Parker Poe: Sarah Hutchins, partner, focused on cybersecurity and data privacy; Debbie Edney, counsel, focused on commercial litigation; and Robert Botkin, an associate, focused on data privacy and security issues.
The authors flagged the need to manage cybersecurity risk by understanding how the AI tools are integrated and whether they may create a new back door to company systems, particularly AI tools that “crawl” through systems looking for efficiencies. “It is essential to understand the vendor’s cybersecurity defenses, including what proactive steps it takes to detect attacks and how its incident response plan would minimize the effects of a breach,” the authors suggest.
Among best practices, the authors suggest businesses include appropriate clauses in contracts with AI vendors to address the use of data, data retention and destruction, and intellectual property rights. Additionally, businesses should consider requiring a statement of compliance for master service agreements, such as a certification from the Responsible AI Institute.
“AI is rapidly changing how the world does business. To maximize the promise of AI while minimizing its risks, companies should be diligent in the proactive assessment of AI tools and protect themselves through each contract,” the authors suggest.
Read the source articles and information in Information Week, the World Economic Forum and Bloomberg Law.