AI is being applied to a multitude of things today, ranging from the frivolous to the world-changing to the downright scary. Autonomous vehicles and deepfakes are getting a lot of public attention, and AI is predicted to revolutionize medicine. The raw potential of vast amounts of data coupled with machine learning and AI is more than game-changing: it represents nothing less than the ability for humans to challenge and overcome problems that have, until now, been intractable.
That said, there are still many, many issues that need to be worked out with AI, spanning the technical, the sociological, policy, ethics, and plain old bias. Fully autonomous vehicles, for example, will require breakthrough technologies that have yet to be invented. These kinds of technological advances are interesting and exciting, but other, I would argue more important, advances need to come first. Namely, there is what appears to me an all too pervasive misconception about two critical concerns related to AI: bias and ethics.
I’ve been participating in some pretty interesting conversations of late on these two topics (bias and ethics in AI), and it’s become frighteningly clear that some otherwise super smart people don’t fully grasp the nuance and pervasiveness of bias across the entire lifecycle of AI technology, from the mechanisms by which source data is collected all the way through to learning algorithm architectures and training models. To be fair, there are also some super smart people who understand these issues very well and are quite vocal about them.
Google’s recent announcement of the creation of an external ethics board (which looks to me like a poorly masked PR stunt) underscores how poorly the intersection of ethical behavior, accountability, and AI is regarded in the current milieu. Kudos to Alessandro Acquisti for declining to participate in this facade. The very idea of an “ethics board,” or, as Microsoft put it, a “checklist,” tells me that these technocrats not only don’t get it, they are not even in the right conversation.
AI, more than any technology that has come before, needs input and involvement from and across disciplines outside of STEM, especially the social sciences and related fields. The raw potential for impact on humans that AI brings, and the possibility that AI can (and indeed must) generate solutions no human could conceive, must be thought through starting now. While the use of AI can, and will, have as-yet-unimagined positive benefits for humans and the long-term future of our planet, so too is its outsized potential for intentional misuse and, sadly, unconscious harm. The very potential of AI embodies the greatest risks of using it unconsciously (or unconscionably).
I am a huge fan of machine learning, AI, and data-driven stuff in general. I have built and sold multiple companies based on the premise of making data intelligent, including using machine learning algorithms and models in otherwise mundane enterprise software as far back as 1997. Once upon a time, I too was misguided about the nature of bias and what ethical AI really means.
This is why I invite anyone involved in AI research and/or commercialization to take some time to understand bias and ethics as they relate to AI. We need to build AI with the understanding that bias is pervasive across the entire lifecycle and needs to be actively weeded out; it’s not something one can simply leave out of the input or easily program out of the model. We must also practice AI with the conviction that ethical AI demands accountability, not just a checklist or a set of principles.
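To make concrete why bias can’t “simply be left out of the input,” here is a minimal sketch using entirely hypothetical synthetic data. A model never sees the protected attribute, yet still reproduces a group disparity because another feature (a made-up “neighborhood score”) acts as a proxy for it — a pattern often called the failure of fairness through unawareness.

```python
# Sketch with fabricated synthetic data: dropping a protected attribute
# does not remove bias when another feature is correlated with it.
import random

random.seed(0)

def make_applicant():
    group = random.choice(["A", "B"])  # protected attribute
    # Hypothetical proxy feature (e.g., a neighborhood score) that is
    # strongly correlated with group membership, reflecting bias in how
    # the source data was collected in the first place.
    proxy = random.gauss(0.7 if group == "A" else 0.3, 0.1)
    return group, proxy

applicants = [make_applicant() for _ in range(10_000)]

# A "bias-free" decision rule: the protected attribute is never an input;
# the decision depends only on the proxy feature.
def approve(proxy):
    return proxy > 0.5

rates = {}
for g in ("A", "B"):
    decisions = [approve(p) for grp, p in applicants if grp == g]
    rates[g] = sum(decisions) / len(decisions)

# Group A is approved far more often than group B, even though the
# model was never shown the group label.
print(rates)
```

The point of the sketch is not the specific numbers but the mechanism: bias enters upstream, at data collection, and survives the removal of the obvious column — which is why it has to be actively measured and weeded out across the whole lifecycle.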