Artificial intelligence (AI) is shaping up to be one of the most consequential technologies of the coming decades. Every country and organization is preparing for the AI revolution in its own way. But when humans build the foundations of AI, can AI truly be objective?
Tech leaders have publicly voiced concerns about AI's potential to create a dystopian society, or worse, bring about humanity's end. But esteemed industry experts see a more pressing matter at hand: the bias being built into AI today.
And if it’s not taken care of, those other problems could become a reality.
Start With the People
Among the challenges facing the tech industry today are a lack of diversity and unequal pay and opportunity for women and minorities. These problems have already manifested in other ways, like the disparate wealth distribution by gender in San Francisco. So is it farfetched to expect them to carry over into the technology itself?
Kriti Sharma is the Vice President of Bots and AI at UK tech titan Sage Group, and she thinks it's already happening. Amazon's Alexa, Apple's Siri, and Microsoft's Cortana all have female voices. Whether inadvertent or not, subtle design choices like these could reinforce the gender stereotype that women are better suited for support or service roles. Meanwhile, IBM's Watson and Salesforce's Einstein are touted as intelligent machines capable of tackling more complex problems.
Sharma counters this stereotype with Sage's genderless AI assistant, Pegg. But she believes the best long-term fix is to diversify the pool of people working on AI in terms of gender, expertise, and educational background. Currently, Sharma thinks "AI development is a Ph.D.'s game," and that it's time for the San Francisco developer community and other tech hubs to expand their horizons.
To make AI robust and viable to serve society as a whole, it must include society as a whole in its creation.
Making AI a Machine of the World
It's standard protocol for engineers to run a gamut of tests on new products to ensure they're ready for the world. Occasionally, a design flaw or security vulnerability slips through. But without this process, far more problems would make it into the devices debuting in the real world.
Sharma believes it's time to introduce a new type of testing into AI development. Dubbing it "bias testing," she thinks it could help mitigate the more nebulous harms (social, ethical, and emotional) that products may bring to market. Testing of this kind matters more for AI than for other technologies because AI keeps changing and evolving even after it leaves the lab.
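The article doesn't spell out what a bias test would look like in practice, but one common approach is to compare a model's outcomes across demographic groups. The sketch below is purely illustrative (the data, group names, and 10% threshold are hypothetical, not Sharma's or Sage's method): it checks whether the gap in positive-prediction rates between groups stays within a chosen tolerance, a simplified form of the demographic-parity criterion.

```python
# Illustrative bias-test sketch: flag a model if its positive-prediction
# rates differ too much across demographic groups (demographic parity).
# All data, group labels, and the max_gap threshold are hypothetical.

def positive_rate(predictions):
    """Fraction of predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def bias_test(preds_by_group, max_gap=0.1):
    """Return (passed, gap). Fails if the largest difference in
    positive rates between any two groups exceeds max_gap."""
    rates = {group: positive_rate(p) for group, p in preds_by_group.items()}
    gap = max(rates.values()) - min(rates.values())
    return gap <= max_gap, gap

# Hypothetical model outputs for two groups (1 = approved, 0 = denied).
preds = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 = 75% positive
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # 3/8 = 37.5% positive
}

passed, gap = bias_test(preds)
print(f"gap={gap:.3f}, passed={passed}")  # gap=0.375, passed=False
```

A real bias-testing regime would go well beyond a single parity metric, and, as the article notes, would need to re-run continuously since a deployed AI system keeps evolving on new data.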
As AI becomes more integrated into society over the next few years, it's essential that we rise to the challenge of addressing these hard problems. Like any tool, AI could be put to bad ends or good ones. By identifying and fixing these issues now, we can help ensure it's the latter.