AI programs exhibit racist and sexist biases, research reveals

16:31, 14 April 2017 | Source: The Guardian

Machine learning algorithms are picking up deeply ingrained race and gender prejudices concealed within the patterns of language use, scientists say.

A robotic hand tapping a keyboard. AI has the potential to reinforce existing biases because, unlike humans, algorithms are unequipped to consciously counteract learned biases, researchers warn. © Getty Images/Science Photo Library RF

An artificial intelligence tool that has revolutionised the ability of computers to interpret everyday language has been shown to exhibit striking gender and racial biases.

The findings raise the spectre of existing social inequalities and prejudices being reinforced in new and unpredictable ways as an increasing number of decisions affecting our everyday lives are ceded to automatons. 

In the past few years, the ability of programs such as Google Translate to interpret language has improved dramatically. These gains have been thanks to new machine learning techniques and the availability of vast amounts of online text data, on which the algorithms can be trained.

However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals.

Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: “A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.”

But Bryson warned that AI has the potential to reinforce existing biases because, unlike humans, algorithms may be unequipped to consciously counteract learned biases. “A danger would be if you had an AI system that didn’t have an explicit part that was driven by moral ideas, that would be bad,” she said.

The research, published in the journal Science, focuses on a machine learning tool known as “word embedding”, which is already transforming the way computers interpret speech and text. Some argue that the natural next step for the technology may involve machines developing human-like abilities such as common sense and logic.

“A major reason we chose to study word embeddings is that they have been spectacularly successful in the last few years in helping computers make sense of language,” said Arvind Narayanan, a computer scientist at Princeton University and the paper’s senior author.

The approach, which is already used in web search and machine translation, works by building up a mathematical representation of language, in which the meaning of a word is distilled into a series of numbers (known as a word vector) based on which other words most frequently appear alongside it. Perhaps surprisingly, this purely statistical approach appears to capture the rich cultural and social context of what a word means in the way that a dictionary definition would be incapable of.
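
To make the idea concrete, here is a minimal sketch of a word vector and the cosine-similarity comparison such systems rely on. The three-dimensional vectors and the words chosen are invented purely for illustration; real embeddings have hundreds of dimensions, learned from co-occurrence statistics rather than written by hand.

```python
# Minimal sketch of word vectors and cosine similarity.
# The 3-dimensional vectors below are invented for illustration;
# real embeddings have hundreds of dimensions, learned from
# co-occurrence statistics in billions of words of text.
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two word vectors: close to 1.0
    when the words appear in similar contexts, lower when they don't."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

vectors = {
    "coffee":  np.array([0.9, 0.8, 0.1]),
    "tea":     np.array([0.8, 0.9, 0.2]),
    "algebra": np.array([0.1, 0.2, 0.9]),
}

print(cosine_similarity(vectors["coffee"], vectors["tea"]))      # high, ~0.99
print(cosine_similarity(vectors["algebra"], vectors["tea"]))     # low, ~0.39
```

In a real embedding this geometry is learned, not chosen, which is precisely how unintended associations in the training text end up encoded in the vectors.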

For instance, in the mathematical “language space”, words for flowers are clustered closer to words linked to pleasantness, while words for insects are closer to words linked to unpleasantness, reflecting common views on the relative merits of insects versus flowers.

The latest paper shows that some more troubling implicit biases seen in human psychology experiments are also readily acquired by algorithms. The words “female” and “woman” were more closely associated with arts and humanities occupations and with the home, while “male” and “man” were closer to maths and engineering professions.

And the AI system was more likely to associate European American names with pleasant words such as “gift” or “happy”, while African American names were more commonly associated with unpleasant words.

The findings suggest that algorithms have acquired the same biases that lead people (in the UK and US, at least) to match pleasant words and white faces in implicit association tests.
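
The researchers' measure, the Word-Embedding Association Test (WEAT), builds an effect size from differences of exactly this kind. The sketch below shows the core quantity, with abbreviated word lists and a random toy embedding standing in for the real vectors; with the embeddings tested in the paper, a systematic gap between the scores of the two name groups is the reported bias.

```python
# Sketch of a WEAT-style association score: how much closer a word
# sits to one attribute set (pleasant) than the other (unpleasant).
# The random "embedding" is a placeholder; in the study the vectors
# come from embeddings trained on real web text.
import numpy as np

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word, pleasant, unpleasant, emb):
    """Mean cosine similarity to the pleasant words minus mean
    similarity to the unpleasant words; positive means the word
    sits closer to 'pleasant' in the vector space."""
    s_p = np.mean([cos(emb[word], emb[p]) for p in pleasant])
    s_u = np.mean([cos(emb[word], emb[u]) for u in unpleasant])
    return float(s_p - s_u)

pleasant = ["gift", "happy", "love"]
unpleasant = ["hatred", "failure", "ugly"]

rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50)
       for w in ["Emily", "Jamal"] + pleasant + unpleasant}

# With this random embedding both scores hover near zero; with the
# embeddings in the study, scores for European American and African
# American names diverge systematically.
print(association("Emily", pleasant, unpleasant, emb))
print(association("Jamal", pleasant, unpleasant, emb))
```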

These biases can have a profound impact on human behaviour. One previous study showed that an identical CV is 50% more likely to result in an interview invitation if the candidate’s name is European American than if it is African American. The latest results suggest that algorithms, unless explicitly programmed to address this, will be riddled with the same social prejudices.

“If you didn’t believe that there was racism associated with people’s names, this shows it’s there,” said Bryson.

The machine learning tool used in the study was trained on a dataset known as the Common Crawl corpus – some 840bn words of text taken as they appear in material published online. Similar results were found when the same tools were trained on data from Google News.
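
Readers who want to probe such vectors themselves can load a pretrained model. A hedged sketch using the gensim library's downloader follows; the model name is a gensim-data convention, not part of the study's code, and the multi-gigabyte download is a one-off.

```python
# Hedged sketch: load pretrained Google News word vectors via
# gensim's downloader and compare word pairs.
import gensim.downloader as api

model = api.load("word2vec-google-news-300")  # returns KeyedVectors

# Cosine similarity between word pairs; higher values mean the
# words occupy nearby regions of the embedding space.
print(model.similarity("flower", "pleasant"))
print(model.similarity("insect", "pleasant"))
```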

Sandra Wachter, a researcher in data ethics and algorithms at the University of Oxford, said: “The world is biased, the historical data is biased, hence it is not surprising that we receive biased results.”

Rather than algorithms representing a threat, they could present an opportunity to address bias and counteract it where appropriate, she added.

“At least with algorithms, we can potentially know when the algorithm is biased,” she said. “Humans, for example, could lie about the reasons they did not hire someone. In contrast, we do not expect algorithms to lie or deceive us.”

However, Wachter said the question of how to eliminate inappropriate bias from algorithms designed to understand language, without stripping away their powers of interpretation, would be challenging.
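
One concrete illustration of the trade-off Wachter describes is "hard debiasing" (Bolukbasi et al., 2016), a technique from separate research rather than from this paper: it deletes the component of each word vector that lies along a learned bias direction, at the risk of discarding legitimate meaning along with the bias. A minimal sketch, assuming a hypothetical `emb` word-to-vector mapping:

```python
# Sketch of projection-based "hard debiasing" (Bolukbasi et al.,
# 2016): remove the component of a vector lying along a bias
# direction, e.g. the he-she axis. `emb` is a hypothetical
# word-to-vector dictionary, not an API from the study.
import numpy as np

def debias(vec, bias_direction):
    """Subtract the projection of `vec` onto the bias direction,
    leaving the rest of the vector untouched."""
    b = bias_direction / np.linalg.norm(bias_direction)
    return vec - np.dot(vec, b) * b

# Hypothetical usage:
# gender_axis = emb["he"] - emb["she"]
# emb["engineer"] = debias(emb["engineer"], gender_axis)
```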

“We can, in principle, build systems that detect biased decision-making, and then act on it,” said Wachter, who along with others has called for an AI watchdog to be established. “This is a very complicated task, but it is a responsibility that we as society should not shy away from.”
