Generative AI in the Enterprise: Developing Skills, Managing Risk
Prompts are seen as necessary to focus generative AI results; lawyers suggest risk-mitigation strategies such as acceptable-use policies, disclosure and audits
By John P. Desmond, Editor, AI in Business
Rolling out ChatGPT and other large language models in the enterprise requires striking a balance between helping workers develop the needed skill sets and managing risk around privacy and ethics.
On the skill set side, many generative AI users are concentrating on learning how to use prompts, which guide the output and influence its tone, style and quality.
“The nature of a prompt that goes into a generative AI engine can have an incredible impact on the results that come out,” stated Manish Goyal, senior partner and global leader for the AI and Analytics practice at IBM Consulting, in a recent account in CIODive. “A well-crafted prompt helps the AI generate more relevant, accurate and useful information.”
Goyal recommended that IT teams become familiar with tools to make generative AI productive in the enterprise. “The best way to become a good prompter is through practice and an understanding of the AI system you are working with,” he stated. “Try different prompt styles and approaches to discover what works best for the task you are trying to accomplish – question answering, summarizing and translating.”
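As a rough illustration of that kind of experimentation, the sketch below runs the same text through three prompt styles using the OpenAI Python client; the model name, prompts and sample text are illustrative assumptions, not examples from Goyal or IBM.

```python
# A minimal sketch of trying different prompt styles against one model.
# Assumes the OpenAI Python client (pip install openai) with an API key
# in the OPENAI_API_KEY environment variable; the model name and prompt
# wording are placeholders for illustration.
from openai import OpenAI

client = OpenAI()

TEXT = "Generative AI adoption requires balancing skills development with risk management."

prompt_styles = {
    "question answering": f"Based on this text, what must enterprises balance?\n\n{TEXT}",
    "summarizing": f"Summarize the following in one sentence for a CIO audience:\n\n{TEXT}",
    "translating": f"Translate the following into French:\n\n{TEXT}",
}

for style, prompt in prompt_styles.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {style} ---")
    print(response.choices[0].message.content)
```

Running one input through each style side by side makes it easy to compare tone, fidelity and usefulness across approaches.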
He suggested that most good prompts have these six things in common:
Clarity: clear and easy to understand;
Specificity: specific about the information or output sought;
Context: includes relevant background, the purpose of the response and the target audience;
Understanding: reflects an understanding of the AI system being used;
Concision: to the point;
Structure: uses a question format or a clear framework.
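Pulling those six qualities together, a hypothetical prompt for a summarization task might look like the annotated sketch below; the task and wording are our own illustration, not a template from Goyal or IBM.

```python
# A hypothetical prompt annotated against the six qualities above;
# the task and wording are illustrative assumptions, not an IBM template.
prompt = (
    "You are summarizing for a CIO audience. "          # context: purpose and target audience
    "Summarize the incident report below "              # clarity: a plain, unambiguous instruction
    "in exactly three bullet points, "                  # specificity: the exact output sought
    "covering root cause, impact and next steps.\n\n"   # structure: a clear framework
    "Report: ..."
)
# Keeping the whole instruction to two short sentences covers concision,
# and bounding the task to summarization plays to what these models do
# reliably well, reflecting an understanding of the system.
```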
Prompts can also help reduce risk, suggests Alex Toh, local principal in the IP and technology practice of Baker McKenzie Wong & Leow, and a certified AI Ethics and Governance Professional with the Singapore Computer Society, in a recent account from ZDNet.
The lawyer recommends that businesses carry out a risk analysis to identify potential risks and assess whether they can be managed.
Singapore has issued frameworks to guide trustworthy AI development. Its Ministry of Communications and Information has rolled out supporting tools, including a test toolkit known as AI Verify, alongside Singapore’s Model AI Governance Framework. Organizations including DBS Bank, Microsoft, HSBC and Visa have adopted the governance framework.
Lawyers Suggest Creating Acceptable Use Policies
To mitigate risk associated with the use of ChatGPT in the enterprise, attorneys Myriah Jaworski and Chirag H. Patel, writing recently in JD Supra, made some suggestions:
Create a policy around acceptable AI internal use: Conduct a risk analysis as to the various ways ChatGPT could be employed within the organization, and form an acceptable use policy based on the organization’s risk tolerance.
Track internal use: Require employees to disclose the use of ChatGPT in any work product, and maintain an inventory of work products that relied on it, for easy identification (see the sketch after this list).
Disclosure, consent, and contract compliance: Determine where it is necessary to disclose the use of ChatGPT to consumers and contract parties to avoid claims of unauthorized disclosure or breach of contract. Where necessary, obtain appropriate consent. The type and level of consent will depend on the context; changes in law may impose specific consent requirements.
Audits, training, and monitoring: Continue to monitor the use of ChatGPT against the acceptable use policy, for unanticipated and new uses, and for changes in legal requirements. The organization should also conduct regular training to keep employees informed of the acceptable uses of ChatGPT and the risks that unauthorized uses pose to the organization.
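To make the tracking suggestion concrete, one minimal way to keep such an inventory is a structured record per work product. The fields below are our assumptions for illustration, not a format specified by the attorneys.

```python
# A minimal sketch of a work-product inventory for tracking internal
# ChatGPT use; field names are illustrative assumptions. Requires
# Python 3.10+ for the "str | None" syntax.
from dataclasses import dataclass
from datetime import date


@dataclass
class AIUseRecord:
    work_product: str           # e.g., "Q3 customer FAQ draft"
    employee: str               # who used the tool
    tool: str                   # e.g., "ChatGPT"
    date_used: date
    purpose: str                # summarizing, drafting, translating, ...
    disclosed_to_client: bool   # supports the disclosure/consent step
    reviewed_by: str | None = None  # human review before release


inventory: list[AIUseRecord] = []
inventory.append(AIUseRecord(
    work_product="Q3 customer FAQ draft",
    employee="j.doe",
    tool="ChatGPT",
    date_used=date(2023, 6, 1),
    purpose="drafting",
    disclosed_to_client=True,
))

# An audit can then filter the inventory, for example for undisclosed uses:
undisclosed = [r for r in inventory if not r.disclosed_to_client]
```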
Similar suggestions for managing risk associated with generative AI are offered on the blog of Fairly AI, a company offering an on-demand AI audit platform. For example:
Ensure responsible design and training: When designing and training generative AI systems, it is important to use diverse and representative data sets to reduce the risk of bias, and to carefully evaluate the quality and accuracy of the generated output.
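As a toy illustration of the diverse-and-representative-data point, a training-data audit might start by flagging underrepresented groups; the group labels and the 10 percent threshold below are arbitrary assumptions, not Fairly AI's method.

```python
# A toy sketch of a representativeness check on training data;
# the group labels and 10% threshold are arbitrary assumptions.
from collections import Counter

training_examples = [
    {"text": "...", "region": "APAC"},
    {"text": "...", "region": "EMEA"},
    {"text": "...", "region": "EMEA"},
    {"text": "...", "region": "Americas"},
]

counts = Counter(ex["region"] for ex in training_examples)
total = sum(counts.values())

for region, n in counts.items():
    share = n / total
    if share < 0.10:  # arbitrary floor for "underrepresented"
        print(f"Warning: {region} is only {share:.0%} of training data")
```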
The authors, who are not identified, state, “The speed of AI innovation is outpacing risk and compliance. By understanding and mitigating risk early on, generative AI can become a powerful force of transformation. That transformation has already begun.”
Schneier Sees New Generation of Risk
Writing recently in Wired, cybersecurity expert Bruce Schneier, who also lectures at Harvard University, warns that generative AI poses a new generation of risks. Email phishing scams, for example, are likely to ensnare far more gullible targets.
“LLMs’ ability to confidently roll with the punches, no matter what a user throws at them, will prove useful to scammers as they navigate hostile, bemused, and gullible scam targets by the billions,” Schneier stated.
The volume of scams is potentially overwhelming. “This is a change in both scope and scale,” Schneier stated. “LLMs will change the scam pipeline, making them more profitable than ever. We don’t know how to live in a world with a billion, or 10 billion, scammers that never sleep.”
LLMs will also enable more sophisticated attacks that exploit the surveillance-capitalism business model of the internet, which has ballooned the volume of data on individuals available for purchase from data brokers.
“Combine the digital dossiers that data brokers have on all of us with LLMs, and you have a tool tailor-made for personalized scams,” Schneier stated, forebodingly.
While we need to worry about scammers abusing the latest generative AI innovations for selfish purposes, it seems one audience has less to worry about.
In a talk at the recent MIT MIMO symposium, held to focus on how AI is being used in manufacturing, Christopher Nguyen, CEO and cofounder of Aitomatic, queried the audience about its use of ChatGPT. He asked for a show of hands from those who had been experimenting with it; most indicated they had. “Folks who have not used ChatGPT, please leave the room,” he quipped.
Aitomatic focuses on translating domain knowledge from natural language into machine learning models for industrial companies. When Nguyen asked within his own company who was using ChatGPT, he said, “I’ve been frustrated that it’s the opposite of what I would have expected. The marketing team is all over it, and engineering is using it the least. There’s something weird going on here.”
Maybe the liberal arts majors are taking over from the computer science majors in this new era of AI.
Read the source materials and information in CIODive, from ZDNet, in Singapore’s Model AI Governance Framework, in JD Supra, on the blog of Fairly AI and in Wired.
(Write to the editor here; tell him what topics you would like to read about in AI in Business.)