
Oh dear! Amazon's facial recognition is racist and sexist – and there's a JLaw deepfake that will make you want to tear out your eyes



Roundup Here's a quick rundown of this week's other AI news. In short: experts continue to pick holes in Amazon's facial recognition service, and there's a new deepfake that will make you stare in horror.

China and the United States are miles ahead: A study by WIPO, the UN's World Intellectual Property Organization, found that China and the United States dominate the AI industry, with both countries leading the way in patents and academic research.

That's no surprise – experts have been saying as much for years. The specifics in the report are still interesting, though. Here are some of the key figures:

  • Almost 340,000 AI-related patents have been filed since the term was coined in the 1950s.
  • Of these applications, more than half were submitted since 2013 when the boom kicked in.
  • The recent rise of AI is down to the resurgence of machine learning, so it is no surprise that 49 per cent of patent applications are data related – the most successful area.
  • IBM has filed the most AI patents of any company or university, with 8,290 inventions so far. Microsoft is second with 5,930, and Japan's Toshiba is third with 5,223. No Chinese company made the top three, but the report shows the number of applications filed there grew 70 per cent annually from 2013 to 2016, so China is rising fast.
  • 434 AI companies have been acquired since 1998; over half of those acquisitions – 53 per cent – have taken place since 2016.
  • Alphabet has acquired the most AI startups. Other US giants investing heavily include Apple and Microsoft. Half of the top 20 organisations publishing AI papers are Chinese companies and research institutions.

You can read the entire report here.

Amazon's Rekognition PR disaster continues: Amazon's facial recognition technology, Rekognition, has made headlines again.

In July, a study by the American Civil Liberties Union (ACLU) showed that Rekognition was being sold to law enforcement and could be inaccurate – especially when trying to identify people with darker skin tones.

Now, a research paper published by the Massachusetts Institute of Technology Media Lab provides additional evidence. Rekognition performed worse when analysing images of women – they were misidentified as men 19 per cent of the time – and the results were even poorer for darker-skinned women.
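To make those numbers concrete, here is a minimal, purely illustrative sketch of the per-group error-rate calculation such an audit boils down to – the field names and sample data below are ours, not the MIT Media Lab's:

```python
# Illustrative sketch only: compute a gender-misclassification rate per
# demographic group from labelled predictions. Schema is hypothetical.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of dicts with 'group', 'true_gender', 'predicted_gender'."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["predicted_gender"] != r["true_gender"]:
            errors[r["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit set, just to show the shape of the calculation
sample = [
    {"group": "darker-skinned women", "true_gender": "female", "predicted_gender": "male"},
    {"group": "lighter-skinned men", "true_gender": "male", "predicted_gender": "male"},
]
print(error_rates_by_group(sample))  # {'darker-skinned women': 1.0, 'lighter-skinned men': 0.0}
```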

Matt Wood, general manager of artificial intelligence at Amazon Web Services, pushed back, saying the researchers were studying an outdated version of Rekognition and that Amazon is constantly working to improve the product.

He also pointed out that when the service is used by police, Amazon recommends a 99 per cent confidence threshold. The percentage describes how confident the system is in a given result.
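For the curious, here is a rough sketch of how a client could apply that recommendation using the Rekognition CompareFaces API via boto3 – the file names and surrounding logic are illustrative, not Amazon's official guidance:

```python
# Minimal sketch, assuming AWS credentials are configured: compare two images
# and only accept matches at or above the 99 per cent threshold.
import boto3

def match_faces(source_path, target_path, threshold=99.0):
    client = boto3.client("rekognition")
    with open(source_path, "rb") as src, open(target_path, "rb") as tgt:
        response = client.compare_faces(
            SourceImage={"Bytes": src.read()},
            TargetImage={"Bytes": tgt.read()},
            SimilarityThreshold=threshold,  # matches below this score are dropped
        )
    # Each returned match carries its own similarity score, which a cautious
    # caller can double-check rather than trusting the filter alone.
    return [m["Similarity"] for m in response["FaceMatches"]]

# matches = match_faces("suspect.jpg", "crowd.jpg")  # file names are hypothetical
```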

Unfortunately, it looks like police departments don't care. The Washington County Sheriff's Office in Oregon, one of Amazon's few publicly identified customers, told Gizmodo that it neither sets nor uses a confidence threshold at all.

Oh dear. Since there are currently no government regulations around facial recognition, the police can, technically, use it however they see fit.

Fighting facial recognition bias: While we are on the subject of facial recognition, IBM has released a data set that promises to be more diverse, in an effort to combat bias.

The Diversity in Faces data set contains a million annotated images of human faces scraped from Creative Commons-licensed photos on Yahoo's Flickr. Developers annotated the images along a number of dimensions, including head length, nose length, forehead height, facial symmetry, age, gender, and so on.
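As a rough illustration only – the exact schema is IBM's to publish, not ours – an annotation record along those lines might look something like this:

```python
# Hypothetical sketch of a single face-annotation record; field names and
# units are assumptions, not IBM's published format.
from dataclasses import dataclass

@dataclass
class FaceAnnotation:
    image_url: str            # Creative Commons source image
    head_length_mm: float     # craniofacial measurement
    nose_length_mm: float
    forehead_height_mm: float
    facial_symmetry: float    # 0.0 (asymmetric) to 1.0 (symmetric)
    estimated_age: int
    estimated_gender: str

example = FaceAnnotation(
    image_url="https://example.org/face.jpg",
    head_length_mm=190.0,
    nose_length_mm=52.0,
    forehead_height_mm=60.0,
    facial_symmetry=0.93,
    estimated_age=34,
    estimated_gender="female",
)
print(example)
```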

"The challenge in training AI is manifested in a very obvious and in-depth way with face recognition technology," IBM says.

"Today, there may be difficulties in creating face recognition systems that meet justice expectations. The problem is not with the AI ​​technology itself, but in itself, but with how the AI-powered face recognition systems are trained. In order for the face recognition systems to be performed as desired – and the results become increasingly accurate – the training data must be different and offer a range of coverage. "

The data set is not publicly available, and you must apply for access if you want to tinker with it.

New viral deepfake alert: Hey, have you ever wondered what Jennifer Lawrence would look like if she had Steve Buscemi's face? Well, today's your lucky day.

The terrifying mashup was created by VillanGuy, a computer analyst living in Washington.

It's not that bad, in fact: the skin tones match and the facial expressions aren't completely out of place. The footage is taken from a clip of Jennifer Lawrence giving a speech at this year's Golden Globe Awards.

®


