Microsoft is betting heavily on artificial intelligence technology. But it is warning investors that the technology could go wrong and damage the company's reputation in the process.
So it cautioned in its latest quarterly report, in a passage first spotted by Quartz's Dave Gershgorn.
"Questions in the use of AI in our offerings can cause damage to reputation or liability," Microsoft wrote in the filing.
"AI algorithms may be wrong. Datasets may be insufficient or contain preset information. Inaccurate or controversial data practices from Microsoft or others may impair acceptance of AI solutions … Some AI scenarios present ethical issues," it adds. . You can read the entire application below.
And the company is not wrong.
For all the big talk from technology companies like Microsoft about the virtues and capabilities of AI, the truth is that the technology isn't all that smart yet.
Today, AI is largely based on machine learning, in which a computer draws conclusions from data. It needs many examples before it can "understand" something, and if the initial dataset is biased or incorrect, its output will be too.
That is changing, though. Intel and startups like Habana Labs are working on chips that can help computers handle the complex task of inference. Inference is the basis of learning, and of people's (and machines') ability to reason.
But we're not there yet. And Microsoft has already had some high-profile snafus with its AI tech. In 2016, it shut down its Tay chatbot after users quickly taught it to post offensive tweets.
More recently, and more seriously, research by Joy Buolamwini at the MIT Media Lab, reported a year ago by The New York Times, found that three leading facial recognition systems – created by Microsoft, IBM, and China's Megvii – did a poor job of identifying non-white faces. Microsoft's error rate for darker-skinned women was 21%, which was still better than the roughly 35% rates of the other two.
Microsoft says it has listened to that criticism and improved its facial recognition technology.
And unlike Amazon, which sells its Rekognition face-detection service, Microsoft has begun calling for regulation of facial recognition technology.
Microsoft CEO Satya Nadella told reporters last month: "Take this notion of facial recognition, right now it's just terrible. It's just a race to the bottom, all in the name of competition. Whoever wins a deal can do anything."
Whether such regulation arrives, and what it will look like, remains to be seen.
Last summer, after the American Civil Liberties Union published a report claiming that Amazon's technology misidentified a number of members of Congress, several of them sent Amazon letters demanding information. Amazon, for its part, pushed back in a blog post questioning the validity of the ACLU's tests.
Meanwhile, Microsoft's facial recognition technology, chatbots, and other AI technologies are already out in the world.
For example, in an official company podcast on Monday, a Microsoft executive talked about how the company builds AI for humanitarian needs. The company uses facial recognition technology to reunite refugee children with their parents in refugee camps, said Justin Spelhaug, general manager of Technology for Social Impact at Microsoft.
"We use facial recognition to drive a really positive outcome where, through machine vision and image matching, we can measure the symmetry of a child's face and match it to a database of potential parents," Spelhaug said.
Microsoft also created a chatbot for the Norwegian Refugee Council to help connect refugees with humanitarian services.
Overall, Microsoft is investing so heavily in AI that it named the technology as one of the main reasons operating costs for its Intelligent Cloud unit rose 23%, or $1.1 billion, in the last six months of 2018, the company said. Other reasons included investments in cloud engineering, investments in sales teams, and the acquisition of GitHub.
But, as Microsoft said, AI is still a risky business – after all, things can still go very wrong.
Here is the full investor warning:
"Questions in the use of AI in our offerings can lead to damage to reputation or responsibility. We build AI in many of our offers and we expect this element of our business to grow We see a future in which AI operating in our devices, applications and cloud helps our customers become more productive in their work and personal lives, as with many disruptive innovations, AI presents risks and challenges that can affect the decision, and therefore ours
"Al algorithms may be wrong. Datasets may be insufficient or contain predetermined information. Inappropriate or controversial data practices from Microsoft or others may impair acceptance of AI solutions. These deficiencies can undermine decisions, predictions or analyzes that AI applications produce, expose us to competitive damage, legal liability, and brand or reputational damage.
"Some AI scenarios present ethical issues. If we activate or offer AI solutions that are controversial because of their impact on human rights, privacy, employment or other social issues, we may experience brand or reputational harm."