
Workers May Be Hiding Use of ChatGPT; Policies Encouraged
Transparency encouraged in policies around the use of AI tools; Wired magazine set a policy and published it; employers encouraged to provide a framework for workers
By John P. Desmond, Editor, AI in Business

Some 70 percent of workers using ChatGPT on the job are not telling their bosses, and overall use had climbed to 43 percent by early 2023, according to survey results published on the blog of Fishbowl, a social network where professionals discuss career topics anonymously.
The survey was conducted from January 26 to 30, 2023, and drew responses from 11,793 professionals on the Fishbowl app. Respondents included professionals from companies such as Amazon, Bank of America, Edelman, Google, IBM, JP Morgan, McKinsey, Meta, Nike and Twitter, according to the published results.
A previous Fishbowl survey, conducted earlier in January and answered by 4,500 professionals, found that 27 percent were using generative AI for work purposes; the rate of use had risen dramatically within a month. Popular topics of discussion included using the tool for resumes, cover letters, copywriting, coding and drafting sales and marketing emails.
The stage is set for workplace policies around the use of generative AI to be put in place. Here is what is happening on that score:
According to a report from Reuters, a growing number of businesses around the world have set up guardrails on the use of generative AI at work, including Samsung, Amazon, Deutsche Bank and Google.
Publishers Experimenting; Wired Issues an AI Policy
Many publishers are experimenting with how to use generative AI in appropriate ways, and at least one has decided to specify to readers how AI tools will be used.
The publication Wired, for example, has published an AI policy that tells readers how it will use the technology, according to a recent account from NiemanLab.
The effort followed early experience with AI-generated content at CNET that resulted in plagiarism and factual errors. The Wired policy includes descriptions of when generative AI will not be used. “It may seem counterintuitive for a publication like Wired to have a policy of mostly not using AI,” stated global editorial director Gideon Lichfield. “But I think people appreciate both the transparency and the attempt to define clear standards that emphasize what quality journalism is about.”
Among the rules: “Wired does not publish stories with text generated by AI,” Lichfield wrote in the policy. The door is open to using AI to help write marketing copy, but Wired will notify readers if it does so.
Writers are required to tell editors when they are using AI tools to help them produce content. “If a writer uses it to create text for publication without a disclosure, we’ll treat that as tantamount to plagiarism,” the policy states.
Wired will use AI tools to suggest headlines, social media posts and story ideas. It will not use them to generate images or video, citing potential legal issues as well as concern for workers, and it will avoid using AI-generated images in place of stock photography.
“Selling images to stock archives is how many working photographers make ends meet,” Lichfield states in the policy. “At least until generative AI companies develop a way to compensate the creators their tools rely on, we won’t use their images this way.”
Businesses Urged to Set Policies Around Use of AI Tools
Executives in many industries are seeing the need to develop and communicate policies around use of generative AI at work.
“As long as you have team members using these public models, you have to give them a framework for usage right away,” stated Maya Mikhailov, founder of SAVVI AI, a firm that helps companies employ machine learning, in a recent account in CIODive.
Banning the use of ChatGPT may work temporarily, Mikhailov advised, but with Microsoft, Google and other companies embedding generative AI in software already in use, bans are unlikely to be sustainable. Instead, organizations “need to think about an information security policy yesterday, because the convenience that these tools offer is so tremendous,” Mikhailov stated.
CIOs and tech leaders are advised to engage important stakeholders and set expectations realistically for how to use generative AI. For example, all content generated by an AI model should be treated as a first draft that is reviewed by appropriate experts before it is released, suggests Avivah Litan, an analyst at Gartner.
“You should have domain experts review the quality and accuracy of the information before it’s sent to anyone, whether it’s customers, partners or even other employees,” stated Litan.
The language used to define policies around AI tool use needs to be clear, advised Bill Wong, principal research director at Info-Tech Research Group. For example:
ChatGPT should augment, not replace, research.
If you use ChatGPT, assess the accuracy of its responses, check for bias and judge its relevance.
Be transparent about how ChatGPT is being used.
“Historically, when you tell people you cannot use it in any shape or form, people find workarounds,” stated Wong. “One way to govern it is through education and saying, ‘Listen, I know it’s productive here, but do you really want our competitors to understand our supply chain algorithms?’”
Key Considerations From a Legal Perspective
An account on the legal intelligence site JDSupra offers key considerations for companies looking to set policy on generative AI use. They include:
Transparency: Companies should understand when and how employees are using AI tools. “Depending on the specific use case, companies also may need to tell customers and business partners whether and how these tools are being used,” the authors state. A policy that requires employees to document when they use generative AI can help.
Avoiding bias: “Companies will want to ensure that antidiscrimination and fair treatment policies apply to the use of AI tools, and that they are adequately monitoring for unfair and unlawful outputs and impacts.”
Human review: “Generative AI tools are useful, but are not generally appropriate to be the final word in answering questions. Companies should put accountability mechanisms and procedures in place to ensure adequate oversight and use of generative AI outputs, including checks for accuracy and relevance.”
Limiting use cases: Be especially careful when using AI tools in customer relationships. The authors advise that “companies should consider identifying any particularly risky or sensitive use cases that are off-limits for their employees to use.”
Conversely, companies can specify use cases that would be allowed, as Wired did. Companies may also want to consider requiring up-front approval of use cases for generative AI tools.
From a broader perspective, setting policy around managing generative AI tools should be part of how a company manages overall AI risk. Frameworks such as NIST’s AI Risk Management Framework can be helpful.
Read the source articles and information on the blog of Fishbowl, from Reuters, from NiemanLab, in CIODive and from JDSupra.
(Write to the editor here; let him know what you would like to read about in AI in Business.)