(Editor note: This Monday, May 1, at 10 AM EST, Futurist Jim Carroll will be interviewed by an AI about the impact of AI. You won't want to miss it! It will go out LIVE to his LinkedIn channel, YouTube, and personal and corporate Facebook pages. He'll post more details in his Monday morning Daily Inspiration. If he can make it work, it will also be streamed live on his blog.)
Most organizations are going to need a Senior Vice-President of Artificial Intelligence Risk Management within a year.
The position, reporting to the Board of Directors, will be responsible for monitoring, assessing, and interpreting an ever-increasing flood of new AI-based risks and preparing scenarios for management to counter them; quarterbacking the deep ethical and moral issues that come with the increasing use of AI within the organization; developing, and working with HR to monitor, a company-wide "AI Code of Conduct" applicable to all staff; working with the Chief Legal Officer on the never-ending stream of new AI-related copyright, trademark, and intellectual property issues hitting the organization; working with the corporate IT department on policies, strategies, and structures to firewall corporate information sources so that they don't become fodder for large language models (and hence, part of the answers that AI chatbots offer up); ensuring that any new products or services developed using AI are built on appropriate quality, reliability, and ethical frameworks; and working with HR to ensure that workforce skills and capabilities are enhanced and developed so that the organization has a proper skills base for the artificial intelligence era. Not only that, but the individual will have to continually modify and adjust these responsibilities in the face of new AI risks not yet identified and yet to come.
The individual will be required to report on this initiative, their responsibilities, and their findings to the Board on a quarterly basis, to the public annually or semi-annually, and to the CEO weekly - if not daily. They must have effective media and PR skills, as they will have to quickly become the public face of the organization's response to the forthcoming flood of new corporate PR disasters that will threaten to do significant damage to the organization.
All of this is going to be necessary sooner than you think.
Think about it - many companies did such a lousy job of preparing for cyber risk and computer security issues that we live in a world of constant security attacks, IT penetrations, relentless cyberattacks, privacy breaches, and more. Even though we'd been telling them for years that IT security should be a Board-level issue, funded with proper resources, companies ignored this risk.
They will do an equally lousy, if not worse, job dealing with AI.
Isn't this all far-fetched?
Not at all - as the latest 'shiny new object,' every organization is rushing into this new world with wild enthusiasm but few strategic, moral, or ethical guardrails. AI-based deepfake technology already allows us to create entirely false images and videos. The Senior Vice-President of Artificial Intelligence Risk Management had better make sure that the public relations department isn't using such technology to create disparaging comments about a competitor, or that the marketing department isn't using it to make fanciful product claims!
After all, our fake future is already here - one in which ethical boundaries have disappeared and any moral sense of right and wrong is gone:
The judge overseeing a wrongful death lawsuit involving Tesla's Autopilot system rejected Tesla's claim that videos of CEO Elon Musk's public statements might be deepfakes.
Tesla's deepfake claim "is deeply troubling to the Court," Santa Clara County Superior Court Judge Evette Pennypacker wrote in a tentative ruling this week. "Their position is that because Mr. Musk is famous and might be more of a target for deep fakes, his public statements are immune. In other words, Mr. Musk, and others in his position, can simply say whatever they like in the public domain, then hide behind the potential for their recorded statements being a deep fake to avoid taking ownership of what they did actually say and do. The Court is unwilling to set such a precedent by condoning Tesla's approach here."
It's in this context that I'm thrilled that my friends over at the Washington Speakers Bureau just published my article on the future of legal risk and the increasing complexity of the law and new risk issues as a result of accelerating AI. For more than 40 years, the Washington Speakers Bureau has been the world's largest talent agency specializing in corporate speaking events, and they have been booking me into events for more than 20. It's always fun (and remarkable) to know that I share a speaking roster with some of the best keynote speakers and high-profile thought leaders in the world, including individuals like George W. Bush, Alex Rodriguez, Terry Bradshaw, and Tony Blair.
You need to be on top of these issues - NOW. I want you to hit their website right now to read it.
Here's a tease from the article.
What are we going to do the first time someone wants to have an AI chatbot offer evidence in a court trial? Is the evidence reliable? Will it be trustworthy, or might it include invalid or plainly incorrect information? Will lawyers on both the plaintiff and defendant sides be prepared to deal with these thorny, complex new legal issues? This is not a mythical issue – indeed, we will probably see this challenge arrive before we know it.
What are association executives going to do as ChatGPT and other technologies begin to chip away at the knowledge base within their profession? What will they do to deal with the absolute explosion of new knowledge that is already emerging – and how will they prepare their members for that? What do they need to do with professional education so that careers don’t disappear – but evolve at the speed of AI? Not only that – do they have a concise view of the new opportunities and challenges all this fast-moving AI technology is presenting to their industry?
What will corporate risk managers and legal staff do to deal with an absolute flood of new legal risk issues – complex issues that did not previously exist? How will they manage fast-emerging trademark, copyright, intellectual property, and other new issues? What will they do as artificially generated ‘misinformation-at-scale’ floods their world, posing complex new defamation, libel, and other challenges?
What will regulators and politicians do as AI comes to challenge the very foundation of so many existing laws and regulations, at the same time that it poses vast new economic, geopolitical, and societal challenges and opportunities? How do we speed up an already slow-moving government process to deal with blazing-fast AI technologies?
These are not theoretical questions – these are new realities that we have to begin thinking about right now, at this very moment – because the future of AI is happening faster than we think.
Most of us already know that – the last few months have been a whirlwind of new AI technologies.
The big question is – what do we do about it?
Read the full thing here. It's long - and it will make you think!
Bottom line - you can try and ignore AI-related risk, but AI-related risk won't ignore you.
In fact, it's going to make your life pretty miserable at times!
Futurist Jim Carroll has long predicted that we live in an era that is seeing the rapid emergence of new careers based on skills that didn't previously exist. He guarantees that within a year, organizations will be scrambling to find someone to fill the newly created position of Senior Vice-President of Artificial Intelligence Risk Management. You heard it here first - because it is the job of a Futurist to tell you what you need to know before you know that you need to know it!