By Kriti Sharma, VP of AI at Sage Group and creator of Pegg, AI accounting assistant
Humans develop biases over time; we aren’t born with them. Yet examples of gender, economic, occupational and racial bias exist in communities, industries and social contexts around the world. And while there are people leading initiatives to fundamentally change these phenomena in the physical world, bias persists and manifests in new ways in the digital world.
In the tech world, bias permeates everything from startup culture to investment pitches during funding rounds to the technology itself. Innovations with world-changing potential go underfunded, or are overlooked entirely, because of the demographic makeup or gender of their founders. People whose non-traditional and extracurricular experience qualifies them for coding jobs are screened out of recruitment processes because of their varied backgrounds.
Now, I fear we’re headed down a similar path with Artificial Intelligence. AI technologies on the market are beginning to display intentional and unintentional biases – from talent search technology that groups candidate resumes by demographics or background to insensitive auto-fill search algorithms. The problem extends beyond the business world as well – from a social platform inferring ethnicity based on assumptions about someone’s likes and interests, to AI assistants being branded as female with gender-specific names and voices. The truth is that bias in AI will happen unless it’s built with inclusion in mind. The most critical step in creating inclusive AI is to recognize how bias infects the technology’s output and how it can make the ‘intelligence’ generated less objective.
We are at a crossroads.
The good news: it’s not too late to build an AI platform that conquers these biases with a balanced data set from which AI can learn, and to develop virtual assistants that reflect the diversity of their users. This requires engineers to responsibly connect AI to diverse and trusted data sources to provide relevant answers, make decisions they can be accountable for, and reward AI based on delivering the desired result.
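To make the idea of a balanced data set concrete, here is a minimal sketch of one common approach: oversampling under-represented groups so each appears equally often in the training data. The function name, the record shape (dicts with a demographic attribute), and the toy data are all illustrative assumptions, not anything from the article or a specific product.

```python
import random
from collections import defaultdict

def balance_by_group(records, group_key, seed=0):
    """Oversample smaller groups so every group appears equally often.

    `records` is a list of dicts; `group_key` names the demographic
    attribute to balance on. Returns a new, shuffled list.
    (Illustrative sketch only; real pipelines would also audit labels
    and features for encoded bias, not just group counts.)
    """
    rng = random.Random(seed)
    groups = defaultdict(list)
    for r in records:
        groups[r[group_key]].append(r)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # top up smaller groups by sampling with replacement
        balanced.extend(rng.choice(members) for _ in range(target - len(members)))
    rng.shuffle(balanced)
    return balanced

# Example: a toy resume data set skewed 3:1 by gender.
data = [{"gender": "f", "label": 1}] * 3 + [{"gender": "m", "label": 0}]
balanced = balance_by_group(data, "gender")
counts = {g: sum(1 for r in balanced if r["gender"] == g) for g in ("f", "m")}
print(counts)  # each group now has 3 records
```

Resampling only equalizes group counts; it does not remove bias already baked into labels or features, which is why the article's call for diverse, trusted sources and accountable decision-making still applies.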
Broadly speaking, attaching gendered personas to technology perpetuates stereotypical representations of gender roles. Today, we see female-presenting assistants (Amazon’s Alexa, Microsoft’s Cortana, Apple’s Siri) used chiefly for administrative work, shopping and household tasks. Meanwhile, male-presenting assistants (IBM’s Watson, Salesforce’s Einstein, Samsung’s Bixby) are used for grander business strategy and complex, vertical-specific work.
Read the source article at mashable.com.
Source: AI Trends