What’s in a voice? Just ask the creators of Alexa or Siri and they will confirm how important their dulcet tones are. There’s a lot in a tone, then. Imagine the strain of having Amitabh Bachchan rasp out the weather report in the morning and answer sundry other queries throughout the day. For those still sceptical about the significance of a soothing timbre, there are the findings of a recent Unesco report that examines the implications of the charming feminine voice with which almost all virtual assistants have been blessed. The report noted that technology companies justify the use of obliging female voices by citing surveys that show that this is what consumers of both sexes prefer. What is seldom mentioned is that the same surveys show that people like the sound of a male voice when authoritative statements are being made and a female voice when help is being offered. As a result, the image of the woman being disseminated in a world threatened with takeover by intelligent machines is that of a servile entity under all circumstances. Little wonder then that these virtual assistants ensure that they never lose their “submissive”, “flirtatious” tone, while “deflecting” or giving “lackluster or apologetic” responses to insults and gendered expletives hurled at them.
These docile, virtual creatures can have a very real impact — linking women with the virtuous idea of pliant assistance, thereby leading to real women being penalized for being assertive. But man — literally — is the father of machines, and artificial intelligence is a mere reflection of deeper gender disparities in the world of technology. Women make up just 12 per cent of AI researchers and just 6 per cent of software developers. Ironically, AI is increasingly being used by human resources departments to recruit a more diverse pool of employees. Given the preconceived notions coded into AI by its creators — mostly men — this has, unfortunately, perpetuated the vicious cycle of discrimination. Amazon’s experimental hiring tool penalized resumes that included the word “women”. The reason was that the data set used to train the tool came from a constituency of overwhelmingly male employees, leading the machine to infer that men were more desirable as candidates. Gender is hardly the only kind of prejudice that has infected these machines. Cars powered by AI often fail to recognize people of colour; facial recognition software used by the police in the United States of America has failed to identify faces that are not white. The contingent of black employees in tech companies is even smaller than that of women.
One solution could be to use AI to reveal biases instead of feeding it large data sets predicated upon prejudice. For instance, it can be taught to identify preconceptions by being supplied with cases where discrimination is explicit. Data curated using human intelligence — as opposed to the unfiltered data used now — may make for more discerning machines. But it must also be asked whether those curating the data are sensitized enough to recognize such divisions. This is unlikely, given the unrepresentative nature of the workforce. Attempts to weed out discrimination are unlikely to succeed without the presence of people who have first-hand experience of bias.