Secure AI More Within Reach From Path to Digital Transformation
Security expert Patricia Titus suggests firms yet to embark on a digital transformation journey are likely behind the curve; new security threats posed by AI include “fuzzing”
By John P. Desmond, Editor, AI in Business
Companies embarked on implementing AI systems are savvy about laying the groundwork to secure the new AI systems with appropriate governance.
Much of this groundwork is a continuation of digital transformation efforts that had been underway, and now need to respond to a changing environment brought on especially by generative AI.
For example, in general automation efforts, a chief digital officer would have been exploring ways to increase productivity using Robotic Process Automation (RPA), and as AI has come on the scene, using it with machine learning.
“However, those industries that have yet to embark on a digital transformation journey are likely well behind the curve and could be severely impacted financially by the power of AI implemented by their competition,” stated Patricia Titus, a longtime security executive recently named chief information security officer for Booking Holdings of Norwalk Connecticut, a leader in online travel. Titus, formerly CISO at Markel Insurance, Freddie Mac, Symantec and Unisys, was quoted in a recent account in CIO.
Ideally, the CISO was involved in the new tech investments to ensure that sensitive personal and business data was properly projected along the way. “A good rule of thumb for implementing a new capability like AI is to set guidelines in collaboration with IT, legal, and the CISO organization,” Tate stated.
‘Shadow IT’ Can Slow AI Cybersecurity Governance
A company with a culture of shadow IT that tries new tech outside an overall governance structure might be vulnerable to a vendor of a platform capability you employ announcing that the product now incorporates AI. The shadow IT group might be reluctant to mention it to the CISO office, knowing they will want to conduct a review. “This happens more frequently than we like to admit,” Tate stated.
The security exec sought to dispel the notion that going slow on AI, taking time to explore and test use cases, will cause the organization to become extinct. “I’m not sure I agree with this,” Tate said. “Because there are plenty of companies not embracing digital transformation and data analytics properly, so they may just take a hit to their bottom line, but extinction is a fear tactic. I know that companies are approaching this cautiously for many reasons, especially relative to the ethical use of AI.”
Care must be taken with the data used to feed AI models, that it is of high quality and well-scrubbed, “or your outputs will be worthless,” Tate stated. “There could be a lot of work to do here.”
For example, if the privacy policy in use clearly states how the data will be used, and AI introduces some differences, the company needs to decide whether consent needs to be reacquired from the user or customer. And when the new models are running, they need to be monitored to detect whether they are behaving properly or possibly in an unethical way. “Most companies will be very cautious just letting AI run models that replace humans if there’s a risk of AI running amok,” Tate stated.
Rick Grinnell, author of the CIO account and founder and managing partner of Glasswing Ventures, which invests in AI-enabled security and enterprise infrastructure firms, offered suggestions for CISOs encountering AI implementation in their companies:
Introduce AI training aligned with company policies and processes;
Test publicly available large language models in a sandbox environment, to shield AI tests from disrupting operations;
Monitor all AI traffic in the organization to look for anomalies and potential threats; allow for prompt intervention; and
Provide firewall capabilities to safeguard against AI-related vulnerabilities.
“There are obstacles to implementing AI in any organization, but there are also common-sense strategies that can work,” Grinell stated. “The bottom line is to move swiftly but carefully, maintain focus, and implement a well-thought-out plan.”
AI Poses New Types of Security Threats
Being a fairly new technology to be implemented widely, AI makes possible new types of security threats. According to Steven Orrin, federal CTO and senior principal engineer with Intel, writing recently in FedTech, these include:
Poisoning: Introduces false data into model training to trick the AI solution into producing inaccurate results;
Fuzzing: Malicious fuzzing presents an AI system with random “fuzz” of both valid and invalid data to reveal weaknesses;
Spoofing: Presents the AI solution with false or misleading information to trick it into making incorrect predictions or decisions; and
Prompt injection: Attackers input malicious queries with the intention of generating outputs that contain sensitive information.
Referring to President Joe Biden’s Executive Order on AI issued in October, and the Department of Defense outlining ethical AI principles, Orrin described them as well-meaning but needing some grounding in AI governance principles.
“While these efforts are well-intentioned and necessary, they risk emphasizing general guidelines for responsible AI over more immediate and practical solutions for safeguarding AI against error and misuse,” he stated. “AI governance and assurance are necessary for the technology to become truly trustworthy, democratized and ubiquitous.”
AI governance covers the security and privacy of data used to train AI models, and AI assurance involves technical processes for testing the behavior of algorithms, so that users can trust the outputs. “Together, these efforts help ensure that AI models and outputs from agencies are secure, accurate and trustworthy,” Orrin stated.
Securing AI Presents Opportunity for Software Startups
Naturally, whenever disruption occurs in technology, it presents an opportunity for new companies to step into the breach with solutions. For example, DynamoFL is offering tools to enable private, secure and regulation-compliant generative AI models. The company was recently selected from hundreds of applicants for the Enterprise Ai Accelerator program being run by Comcast NBCUniversal LIFT Labs, according to a client news account posted on the site of Procopio, an attorney network.
DynamoFL, based in San Francisco, was founded by Christian Lau and Vaikkunth Mugunthan, machine learning and privacy experts with PhDs from MIT.
Comcast NBCUniversal LIFT Labs supports startup-enterprise partnerships by making connections to Comcast’s network of business leaders and corporate partners. The accelerator welcomes startups from pre-seed to growth stage, and does not require equity to be issued in exchange for participation. This program is LIFT Labs’ second accelerator in 2023, following a Generative AI-themed program earlier this spring that resulted in all participants securing partnerships with Comcast organizations.
Read the source articles and information in CIO, in FedTech and in an account about startup DynamoFL on the site of Procopio.