Dragonfly Editorial’s President Samantha Enslen announced an updated policy on the company’s use of artificial intelligence (AI) tools.
“As we spend more time with AI tools, our policy has become more nuanced — though just as protective of accuracy, authenticity and proprietary data,” Enslen said.
Dragonfly’s first Policy on Ethical AI Use was written in May 2023.
“When I revisited that policy recently, it felt a bit like rereading something I’d written at the start of my career: it was a solid piece for the time, but now, with the benefit of more experience and new information, I’d say a few things differently,” she said.
“As we move on from judging the tool’s performance against some (very lofty) initial promises and become more familiar with its actual strengths and weaknesses, we’re finding it no longer serves us to simply stress AI’s propensity for error and dismiss its potential. (Neither, by the way, do we think it’s useful to accept the potential and ignore the pitfalls — though we were never really in that camp to begin with).”
Enslen said that in learning how to work with AI tools, the company has found ways it and its clients can benefit from AI’s capabilities: saving time on mundane tasks, freeing up time and focus for more expert work, and suggesting new avenues of research.
“We’ve also added backstops into our workflows to guard against hallucinations, cliches and opportunities for plagiarism. A big part of that lies in always having an experienced, skeptical — and human — editor involved in an AI-assisted (rather than AI-driven) process,” she said.
Read the online version of Dragonfly’s red-pen edits to its original policy here.
Enslen said the company will never use AI to:
- Replace staff or contract Dragonflies
- Produce copy completely for customers
- Create entire images, designs or videos for customers
- Edit content for customers without careful review of all suggested changes
- Replace credible sources of information, such as original research articles, personal interviews or trusted news outlets
- Analyze content that is internal, confidential or proprietary to clients
- Analyze any client-provided content without their permission
The company may use AI to:
- Generate story ideas
- Generate an outline for a story
- Suggest social media posts
- Suggest headlines
- Do a pre-edit check for punctuation, spelling and simple grammar so our editors can focus on readability, clarity and conciseness
- Summarize long reports
- Provide SEO recommendations
- Modify images, designs or videos for customers
- Provide basic information about a new topic, knowing that the information may be untrustworthy and that we will need to vet it in the same way we would vet information from Wikipedia
- Suggest ways to clarify confusing sentences
- Suggest possible interview questions we may have missed
- Transcribe interviews (but not without the client’s specific permission to upload the original audio file)
“The AI tools we use are private to Dragonfly, and the data is not used to train global AI models. Any suggestions provided to us by AI will be carefully evaluated by our human creatives.
“As you see throughout this list, something else that’s foremost in our minds — along with continuing to ensure accuracy and authenticity in our work — is safeguarding client confidentiality in all of our processes. We will continue to communicate openly with our clients about their comfort level with our use of AI in their projects, and if you have questions, always feel free to reach out and ask,” Enslen said.
In keeping with the theme of open communication, she said Dragonfly is planning a panel-discussion-style webinar for its customers in September. Look for more details and an invite to “An AI Conversation with Dragonfly,” coming soon.