CEOs Can Provide Needed Perspective and Principles on AI


May 16, 2023 | Report

What should CEOs say to their workforce about generative artificial intelligence (AI) tools like ChatGPT?

Staying silent is probably not a realistic option. Quite apart from whether generative AI becomes a $100 billion+ industry, or significantly increases global productivity and GDP, the technology is already in the hands of employees, creating risks to corporate data, and raising questions about the future business models and operations of any enterprise that relies on creativity, knowledge, or communication—in short, pretty much every company.

At the same time, getting the message substantively right on AI can be a challenge. The technology is fast developing, and it will take time for boards and executives to think through the implications of generative AI for their business. Tone can also be hard. While investors might see opportunities for operational efficiency, employees may worry about being replaced, and CEOs need to deliver a consistent message that will withstand scrutiny by all audiences.

At this stage, perhaps the best thing CEOs can do is to offer perspective and principles.

Putting Generative AI in Perspective

CEOs can begin by putting the technology itself in perspective. It may be reassuring to note that most companies have been using some form of AI for decades, and it is part of the daily life of anyone who uses a smartphone or engages in online commerce. In addition, it is useful to distinguish between the public versions of generative AI and licensed versions of the technology under development that companies will be able to use more confidently in the future. Perhaps most importantly, CEOs can remind the organization of the current limitations of generative AI: it produces some interesting, but often unreliable, results.

CEOs will also need to put generative AI in the context of the company’s business. While it may be too early to give definitive answers, CEOs can outline in general terms how it might affect the products and services the company offers in the marketplace, as well as how it operates in the workplace.

Finally, CEOs will want to be sensitive to the perspectives that others will bring to this technology. No matter the audience, generative AI promises three things—change, speed, and uncertainty—that naturally create anxiety. Moreover, generative AI has made its broader debut at a time of increased economic uncertainty and geopolitical instability. Candidly acknowledging this broader background is the empathetic thing for corporate leaders to do.

Providing Guiding Principles

While each company will want to craft messages that reflect its business and culture, here are five basic principles that CEOs can embrace for their organizations.

  • Trust. Companies have worked very hard in our digital era to earn and maintain the trust of their customers, employees, business partners, regulators, and communities. Generative AI threatens to undercut that trust, by replacing a human voice with an automatically generated one, and by gathering massive amounts of data and using it as large-language-model software developers see fit. Maintaining trust needs to be a company’s North Star. Any use of generative AI must be carefully reviewed for accuracy and reliability.
  • Inclusivity. Generative AI inherently includes biases—as a result of the algorithms themselves and the data they draw upon. Companies need to review generative AI tools and the information they generate for inclusivity and fairness.
  • Data protection. It is not enough for companies to focus on data security. They need to be equally mindful of safeguarding privacy and protecting their intellectual property, both of which are also threatened by AI. CEOs should underscore this “trifecta” of data protection, reminding employees of existing policies, while delivering a simple message: if in doubt, don’t download.
  • Transparency. Companies should be transparent about their use of AI, acknowledging in plain language how they are using it. They should seek that same level of transparency from any third parties who provide content to them.
  • Accountability. There is no escaping human responsibility for the use of generative AI. Companies must ensure that they are using generative (and other forms of) AI carefully, ethically, and in accordance with all applicable laws and regulations. In the event of misuse or error, companies should take corrective action and have protocols for appropriately reporting internally and externally on breaches.

General principles will, of course, soon lead to questions about specific, practical applications. CEOs may therefore want to designate a cross-functional team of leaders in the organization who can provide ongoing, authoritative guidance as the technology, regulation, and the market uses of generative AI develop. At a minimum, strategy, legal, human resources, and technology should be involved.

Acting with Humanity in the Face of New Technology

In a report issued in 2021, The Conference Board noted that in this current era of multistakeholder capitalism, “execution, agility, and trust are all important” for C-suite leaders, “but trust is the most important.”

The report recommended several steps that C-suite leaders could take to reinforce that trust, but three stand out in the current moment. The first is “to listen more” to stakeholders—not just customers, employees, and investors, but to others who can help corporate leaders have a complete picture of the implications of their company’s actions. The second is to communicate with candor and authenticity. The third, and perhaps most important, is “to act with humanity, humility, and integrity—building trust that will enable them to credibly explain the inevitable difficult trade-offs among stakeholders.”

In an era of rapid technological change, economic anxiety, and political uncertainty—and amid a cacophony of artificially generated messages—that kind of human voice at the top is essential.

 

Defining AI

What exactly is AI? Here is a basic definition, refined by The Conference Board:

AI is technology that mimics human thinking by making assumptions, learning, reasoning, problem solving, or predicting with a high degree of autonomy.

That’s a definition that AI experts would subscribe to. But in the layperson’s world, the term AI is often applied more loosely to mean the use of computer systems or agents to perform any task that, up until now, humans had to do. The disconnect between the expert’s view and the popular one causes some confusion in the business world. Think of AI as an umbrella term that covers a variety of leading-edge capabilities. Under today’s AI umbrella, we see machine learning, deep learning, natural language processing, text analytics, voice recognition, speech recognition, and computer vision.

Source: Mary Young et al., Artificial Intelligence for HR: Separating the Potential from the Hype, The Conference Board, December 2019.

 

AUTHOR

Paul Washington

President and CEO
Society for Corporate Governance
Fellow
The Conference Board ESG Center
