From Steen Erik Larsen, Maersk Ombuds Function
The IOA Board asked the Research and Assessment Committee to take the lead in helping members understand how they can effectively use artificial intelligence (AI), including considerations and pitfalls to pay attention to when using it. Ideally, the Independent Voice Blog, CommUnity, Good Day IOA Videos, conferences, webinars, and informal events (e.g., Community Connections) are platforms that could be leveraged for discussion and dissemination.
By now, we expect that most ombuds have heard of Artificial Intelligence ("AI"). Stories of Large Language Models (LLMs, such as ChatGPT) that can replicate human speech, writing, and code with shocking speed and considerable accuracy seem to be everywhere. Generative AI programs such as Midjourney and DALL-E can create extraordinary photorealistic images from simple text prompts. Meanwhile, "deep learning" programs such as AlphaFold have already solved scientific problems, like protein structure prediction, previously thought nearly impossible. We are in a new world, one that holds the potential for extraordinary benefits and risks to ombuds practice. It is certain that AI will affect the work of Organizational Ombuds in fundamental ways. Already, many ombuds are using AI, whether by drafting documents and articles with Microsoft Copilot or simply by using AI-integrated search engines. Others may be exploring ways to use LLMs to enhance productivity, for instance using ChatGPT to draft emails, refine and edit documents, or create blog posts. These uses only scratch the surface of what AI has to offer our field. Some other possible uses include:
- Data analysis. Programs such as Microsoft Power BI are beginning to incorporate powerful AI to assist with data analysis, pattern recognition, and visualization. Microsoft Copilot also integrates with Excel and the rest of the Microsoft 365 suite, any of which may offer ombuds previously unreachable opportunities for analyzing data.
- Training. Various companies have begun to explore using AI to create scenarios and feedback for training in customer-oriented fields such as nursing. Simply directing an LLM such as ChatGPT to "act as a visitor bringing a problem to an ombuds" yields a viable tool for practicing visitor interactions, solution exploration, and conflict coaching; a minimal sketch follows this list.
- Workflow efficiencies. Many are becoming aware of the immense value AI can add by automating administrative tasks. Even now, LLMs can be directed to act as "executive assistants," quickly and efficiently producing draft work products such as emails, meeting agendas, communications copy, and even blog content. Spending less time on these tasks frees up more time for casework, visitor interaction, and office promotion.
- Visitor interface. AI may enable even small ombuds offices to dramatically increase visitor interactions through sophisticated chat software that can respond to many questions quickly and efficiently.
- Conflict resolution. AI has the potential to assist in generating creative options and solutions for the conflicts and opportunities that visitors bring to us.
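As a minimal sketch of the training idea above, the following Python snippet uses the OpenAI API to cast an LLM as a practice visitor. The model name, prompt wording, and loop structure are illustrative assumptions only, not a recommendation of any particular tool or vendor:

```python
# Minimal sketch: directing an LLM to role-play an ombuds visitor for practice.
# Assumes the `openai` Python package is installed and the OPENAI_API_KEY
# environment variable is set; model name and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# System prompt casting the model as a visitor bringing a concern to an ombuds.
messages = [
    {
        "role": "system",
        "content": (
            "Act as a visitor bringing a workplace conflict to an "
            "organizational ombuds. Describe your concern realistically, "
            "stay in character, and let the ombuds guide the conversation."
        ),
    }
]

print("Type your responses as the ombuds; press Ctrl+C to stop.")
while True:
    # Ask the model for the visitor's next turn in the conversation.
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    text = reply.choices[0].message.content
    print(f"\nVisitor: {text}\n")
    # Record both sides of the exchange so the role-play keeps its context.
    messages.append({"role": "assistant", "content": text})
    messages.append({"role": "user", "content": input("Ombuds: ")})
```

Run in a terminal, this produces a simple turn-by-turn role-play for practicing visitor interactions. A real training setup would add more safeguards, and no confidential case details should ever be entered into such a tool (see the confidentiality risk below).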
However, in addition to these exciting use cases, AI presents significant risks that the ombuds community will need to manage in order to use AI effectively and ethically. Some of these include:
- Confidentiality. Information entered into an LLM's chat interface may be retained and used to train the model, which means confidential details could later surface in the model's responses to other users' searches and inquiries.
- Veracity. In 2023, an attorney's court filing in litigation contained citations to nonexistent legal authority. The attorney later admitted to using ChatGPT to draft the filing, leading to professional sanctions and reputational harm. LLMs are well known to "hallucinate" false facts in ways that appear convincingly real. While ombuds usually do not file papers in court, we still must review and verify our citations to research or other authority in our writing, especially when that writing is assisted by AI.
- Perceived redundancy. Precisely because AI makes it so easy for users to get answers, guidance, and feedback on their own, it poses a risk to the viability of ombuds offices unless organizational leaders understand the necessity of the "human" element of ombudship. Our perspective is that AI can assist and support ombuds work, but it cannot take the place of human ombuds.
- Copyright. There are ongoing legal cases against OpenAI alleging that it used copyrighted articles from major newspapers without consent or approval. As a user, you may not know which sources an AI drew on to generate its response, and could thereby inadvertently infringe a copyright. This applies to all data sources; for example, the ombuds case stories in Charles L. Howard's A Practical Guide to Organizational Ombuds (ISBN 978-1-63905-053-6) are under the copyright of the American Bar Association.
- Hidden bias. We pledge to be impartial, so how do we ensure that the data used or produced by our AI is also impartial? There are examples of Microsoft Copilot producing biased output that can reinforce existing inequities or disparities and thereby undermine ombuds' standards.
These opportunities and risks raise questions that our community will need to work through together:
- What are the ethical implications of AI in ombuds work?
- What AI tools can best address ongoing challenges in our profession?
- How can ombuds limit the threat of perceived redundancy?
- Are there skills and capacities that AI can help ombuds develop?
Whether we like it or not, AI is changing the world in ways that we are only beginning to understand. Following the advice we give to others, ombuds can embrace the opportunities of this tectonic change while taking steps to reduce the risks. Our AI Working Group looks forward to supporting our community as it collaborates on ways to navigate those risks and to use AI in ways that increase ombuds' relevance and capabilities. By thoughtfully embracing AI in our work, ombuds can also learn to better assist our organizations as they navigate this brave new world.