Futurist Jim Carroll is running a series that began November 27, 2023, and will end on January 1, 2024 - '24 Strategies for 2024.' Rather than running a trend series for the upcoming year as he has previously, this series examines a number of his personal beliefs on how to best align yourself with the future. There will be a new post each weekday, excluding holidays, until the series runs its course. You will find it on his blog or on his website.
With AI, we are on the edge of an era of mass misinformation at scale.
Will you do the right thing?
I'm presuming that you've already got a firm ethical grounding in place, so that's not in question. But given what's coming, our commitment to ethical behavior might be challenged like never before. The world we are going into with AI is going to be wild.
Look, we all know that we can already generate images that bear no resemblance to reality - you've seen me doing this with the various face-swap images I've been posting of myself in various settings. Here's me in the greaseball stage of my life.
I'm doing this mostly for fun, and simply to point out that we are witnessing fascinating technological advancements at a furious speed.
And yet it's also pretty obvious that we are only moments away from being able to do the same type of thing with full-motion video and audio - we're headed to a world in which anyone will have the capability to generate completely realistic clips of anyone saying anything in their own voice. Have you seen the video of Leonardo DiCaprio addressing the UN in multiple different voices, all AI-generated?
This clip is already 9 months old, which is a lifetime in AI years. It's already actually kind of trivial to do this type of thing - but what is coming at us is the ability to generate completely fake images and video that purport to show anything. The 2024 election in the US? It's going to be wild. Did I mention that already?
Through all this, will you commit to always doing the right ethical thing - in essence, recommitting to the ethical grounding that has guided you so far? That's strategy #7 in my 24 Strategies for 2024, and it's an important one.
One of the things I'm talking about at my corporate AI leadership sessions - based on my AI for Leadership Teams topic track - is the importance of what we are calling "AI Governance." Essentially, it's a phrase that describes the framework that an organization should have in place concerning issues such as bias mitigation in an AI model, as well as around security and privacy issues involving AI (such as making sure that corporate information isn't injected into a large language model by someone using an AI tool). But it also has to do with the ethics, morality, and integrity of the various actions that should guide the use of AI by the organization.
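To make that data-leakage concern concrete: one small piece of an AI governance framework might be a filter that scrubs sensitive material before a prompt ever leaves the organization for an external AI tool. Here's a minimal, purely illustrative Python sketch - the patterns, function name, and redaction labels are my own inventions for the example, not any real product's behavior:

```python
import re

# Illustrative patterns an organization might flag before text reaches an AI tool.
SENSITIVE_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED ID NUMBER]"),
    (re.compile(r"(?i)confidential[^.\n]*"), "[REDACTED CONFIDENTIAL TEXT]"),
]

def redact_prompt(prompt: str) -> str:
    """Replace sensitive fragments before the prompt leaves the organization."""
    for pattern, replacement in SENSITIVE_PATTERNS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

print(redact_prompt("Contact jane@acme.com about Q3."))
# Contact [REDACTED EMAIL] about Q3.
```

A real governance framework would go far beyond regex scrubbing - policy, training, audit trails, approved tools - but even a toy filter like this shows that "doing the right thing with AI" can be partly operationalized, not just preached.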
In other words, always doing the right thing when it comes to AI - you know, issues involving stuff like corporate ethics, responsibility, and integrity.
You probably need a personal AI governance framework as well - because we are headed for absolutely wild times! (Did I mention that?)
Think about where we are already. There has certainly been a fair share of recent egregious examples of organizations skirting what should be clearly defined corporate bounds. You might have seen two new challenges that have emerged just recently - one of which has to do with AI, and the other of which does not, but is worth sharing because it is simply such an unbelievable situation.
First, Sports Illustrated got caught publishing AI-generated articles under entirely fictitious bylines. The folks at Futurism easily confirmed it: suspecting that some articles were bogus, they ran a simple reverse image search on one of the purported writers and found that the headshot was for sale on a stock image site. It's a crazy story, and an example of what happens when AI governance issues are not properly respected.
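The reverse image search Futurism ran isn't reproducible in a few lines, but the underlying idea - comparing an image's fingerprint against known images - can be sketched. Here's a toy Python version of a perceptual "average hash," with tiny pixel grids standing in for real photos; everything here is illustrative, not a real detection tool:

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests near-duplicate images."""
    return sum(a != b for a, b in zip(h1, h2))

# Two tiny grayscale "images": a suspect headshot and a known stock photo.
suspect = [[200, 210], [30, 40]]
stock = [[198, 212], [28, 45]]
unrelated = [[10, 220], [230, 15]]

print(hamming_distance(average_hash(suspect), average_hash(stock)))      # 0 - near-duplicate
print(hamming_distance(average_hash(suspect), average_hash(unrelated)))  # 2 - different image
```

Real reverse image search engines use far more sophisticated fingerprints at web scale, but the principle is the same: a "unique" headshot that hashes close to a known stock photo is a red flag.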
The second story involves a tech conference that featured an impressive number of female speakers on the agenda - until someone discovered that they were entirely fake, falsified, and simply made up. You can read the story here - actually, just search and you'll find hundreds of articles about the bizarre situation, a pretty extreme example of a major ethical lapse.
Marsh McLennan, a risk management firm, has an excellent overview of corporate AI governance issues:
To mitigate risks and realize the potential of artificial intelligence (AI), businesses need to have a governance framework that is based on intent, fairness, transparency, safety, and accountability.
The explosion in AI usage by businesses over the past few years has driven an unmistakable inflection in innovation, efficiency, and profitability. However, as the technology grows in sophistication and ubiquity, it becomes increasingly difficult both to monitor and understand how the algorithms derive outputs and to anticipate downstream ramifications for a firm’s business processes as well as society at large.
This opacity can expose businesses—and those individuals and communities dependent on them—to undesirable consequences in the absence of appropriate risk management mechanisms. A poorly deployed AI solution may result in suboptimal decisions based upon flawed outputs and diminished returns on technology investments. Enduring reputational damage may arise if businesses sell or otherwise capitalize on sensitive data and analytical or behavioural insights obtained in inappropriate ways.
This type of AI governance has to do with the integrity of the algorithm we have in place - there should not be bias built into the algorithm. But responsible AI use goes far beyond that, to ensuring that we are always doing the right thing.
The same type of thinking should guide our use of AI and other technologies. We always need to be thinking - particularly in the era of social media and fast-moving information - about issues involving our own reputational risk, personal integrity, and more.
I mentioned months ago that way back in 2001, after the Enron scandals and the dot-com collapse, I started writing a book with the working title Integrity: How to Get It. How to Keep It. I abandoned it, several chapters in, because I realized it wasn't my topic to chase - but it was reflective of my mindset at the time; I was in shock not only at Enron but at the ethical collapses that occurred during those infamous 'dot-com years.'
The "How to Get It" phrase was a play on words, specifically challenging the reader to rethink the issue of ethics - how to 'get it' again, in light of the Enron scandal and so much more that was going on at the time.
I think with the arrival of AI and other issues, it's time that we try to 'get it' again. Consider what I wrote way back in 2002:
A recent survey by the Pew Center for the People and the Press found that 73% of those polled believe that people aren't as "moral and honest" as they used to be. The Washington Post, in a study undertaken in 1995 with Harvard University and the Kaiser Family Foundation, found that two out of three Americans believe that generally, "people can't be trusted."
Not only that, the survey showed that most people believe that if someone had the chance, they would cheat someone else. In the face of such stunning general mistrust, it is not surprising that another conclusion of the study was that most people believe that everyone out there is only looking out for themselves.
In an environment such as this, the public trust that has driven our society has been shredded - and the problem of ethics existed long before the era of recent corporate scandal.
Here we are, 20 years later, and AI is on our doorstep. Whoah!
I believe that with the arrival of AI and other issues, it's time that we try to 'get it' again - integrity, that is - and so the strategy for 2024 is to recommit to our moral grounding and our ethical compass. I'm not trying to be preachy, just observant, because as I have said - our future is going to be wild.
Futurist Jim Carroll has always tried to do the right thing. He thinks he has mostly succeeded.
Thank you for reading Jim Carroll's Daily Inspiration. This post is public so feel free to share it.