I'm busy putting together some slide decks for a number of upcoming keynotes, particularly around - what else - AI.
It's a complex topic to take on - the art of sharing insight involves developing a narrative that gets the key points across, emphasizes what's important, and avoids the trivial. It's a lot of work to build a good deck that tells a great story, and I always end up putting a substantial amount of time into pulling one together.
Taking on the topic of AI risk is particularly challenging - we all know there are big problems, major risks, and huge misinformation issues. And yet I need to share not just up-to-the-moment risks - new ones are emerging every day - but also a pathway and structure for dealing with them.
That's why I hit upon the idea of using the slide above to open up the whole section on "AI, Ethics, and the Algorithm." Even when taking on a serious topic, you need to be able to have a little bit of fun with the audience. I'm still using my Jetsons theme a bit within the deck, so why not take on this particular issue by noting that in the original Jetsons TV series, when people weren't looking, the robotic vacuum cleaner cheated by sweeping the dirt under the rug? (I've told the story on stage, although I got it wrong, suggesting that it was Rosie herself who was at fault!)
In other words, the robot cheated; the algorithm was bad; the ethics were lacking.
The issue of AI ethics and responsibility is a complex one. To guide me, I'm spending quite a bit of time going through the Artificial Intelligence Index Report 2023 from the Stanford Institute for Human-Centered Artificial Intelligence. You can find it here; it has some pretty fascinating information on everything having to do with AI. Starting on page 296, the report dedicates quite a bit of coverage to the issue of AI bias, which boils down to this:
AI systems are increasingly deployed in the real world. However, there often exists a disparity between the individuals who develop AI and those who use AI. North American AI researchers and practitioners in both industry and academia are predominantly white and male. This lack of diversity can lead to harms, among them the reinforcement of existing societal inequalities and bias.
A key point that emerges is that those developing, enhancing, and delivering AI technologies - working on 'the algorithm' - must do a better job of eliminating bias, reducing the risk in assumptions, and steering the systems toward neutrality. This is easier said than done - we are, after all, dealing with incredibly complex technologies. Even so, that's no reason to avoid the effort.
But it's not just built-in bias that we need to watch out for; we are programming machines to undertake tasks, whether in a factory, a warehouse, or a home. As we do so, we need to make sure we 'get it right' - not only in terms of safety and security but also in terms of the ethical boundaries within which we are prepared to let the machines operate.
As we do this, we need to make sure we don't repeat the mistakes we made when building 'smart home' technology - much of which was riddled with security and privacy problems, as well as unforeseen opportunities for malicious use. I'd often take this issue on from the stage with a simple picture - simplicity over complexity - which told the story of when the creators of South Park realized they could have their characters shout out "Hey Siri" and "Alexa" - and generally cause havoc with these in-home technologies.
In essence, let's make sure that we don't allow the robot to cheat by sweeping the dirt under the rug - or mess with our smart devices!
And that's how you take a complex issue and boil it down into simplicity!
Since Futurist Jim Carroll talks about the future - an incredibly complex topic - he's taught himself how to deal with complexity through simplicity.