If you knew that AI would destroy mankind in 150 years, would you still support it?
This question was posed to a few people who lingered after the “What is AI and How Can We Keep It From Harming Humanity?” event, recently hosted by Tech2025 in Brooklyn, NY.
This innovative startup is building a community “eager to learn what [. . .] disruptive technology is, how it will change their businesses, and what they can do to prepare for the future.” Several AI companies estimate their technology will be ready for implementation in 8-10 years, or around the year 2025.
After a few seconds of contemplating the end of mankind, I responded with an adamant “Yes!” Since Homo sapiens has not always inhabited Earth and will not inhabit it forever, destruction by AI isn’t that surprising a conclusion. Perhaps we deserve this fate, since we are already destroying our natural habitat through climate change.
Another attendee chimed in with a tentative yes, confessing to the selfish logic that, since he will already be dead in 150 years, who cares? YOLO!
Pros and Cons
The potential advantages of AI have been eloquently discussed elsewhere, particularly the advancement of mankind through technological and scientific/healthcare achievements. Reversing climate change, assisting archaeological discovery, and historical preservation and analysis are just a few potential benefits of AI.
The imaginations of filmmakers, screenwriters, and authors have immortalized the disadvantages of AI: namely, the glorious doom of all humanity.
There is an innate force that drives every species on Earth to go forth and multiply; it is what keeps a species alive. If this is true, it explains part of why we humans are terrified of being decimated by robots.
The idea of AI robots mercilessly destroying our grandkids and great-grandkids en masse is quite disturbing. Yet it seems too abstract a concept to inspire the kind of fear that breeds action.
After all, this same concept doesn’t prompt us to do everything we could to halt climate change (e.g., eating a fully plant-based diet, giving up all non-recyclable plastics, recycling our gasoline cars for parts and buying bikes or Teslas).
1. If self-preservation of a species is an inherent drive of its members, then isn’t this why we are developing AI in the first place? To help us be bigger, faster, smarter and generally better humans?
2. Is this self-preservation ‘instinct’ a feature of consciousness? That is, if (or when) AI becomes conscious, will it become aware of its superior intelligence? Will it perceive that our threatened, fragile egos (#notmypresident) pose a threat to its existence?
3. Will AI software destroy us in an instinctual attempt to remain in existence?
This is not a pointless game of logic; the reasons why do matter.
Our attempt at self-preservation through AI could be the first step on the path to our destruction, but that fate can be avoided: we could learn to coexist with a species that we cannot exploit. Based on the world today and the last 150 years, though, it’s not looking good.
What’s the takeaway?
One, if the robots kill us off, we will probably deserve it.
Two, although this event is likely a long way off, it’s crucial to be mindful of and intentional about the AI algorithms currently in development.
This means we need to create algorithms that are free of racial, gender, sexual, and other biases.
This means we need diversity in IT. We need diversity at AI startups, from engineers to the C-suite. We need to call out each other’s implicit bias and privilege.
Three, we need to keep the conversation going.
Blind fear of an unknown future is ignorance. We don’t have to be software engineers to opt in to guiding the technology that will determine our fate.