The agency wants to figure out how much you trust artificial intelligence

Himabindu Lakkaraju, an assistant professor at Harvard University, studies the role trust plays in human decision-making in professional settings. She is working with nearly 200 doctors at hospitals in Massachusetts to understand how trust in artificial intelligence can change the way doctors diagnose patients.

For common illnesses such as influenza, artificial intelligence is not very helpful, because human professionals can identify them easily. But Lakkaraju found that AI can help doctors diagnose hard-to-identify conditions such as autoimmune diseases. In her latest work, Lakkaraju and colleagues gave doctors the records of roughly 2,000 patients along with an AI system's predictions, then asked them to forecast whether each patient would have a stroke within six months. They varied the information provided about the AI system, including its accuracy, its confidence interval, and an explanation of how the system works. They found that doctors' predictions were most accurate when they were given the most information about the AI system.
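A minimal sketch of the kind of analysis such an experiment involves, comparing clinicians' prediction accuracy across conditions that reveal different amounts of information about the AI system. The condition names and records below are illustrative assumptions, not data from the actual study.

```python
from collections import defaultdict

# Each record: (information condition shown to the doctor,
#               doctor's prediction, true outcome), where 1 = stroke within
# six months and 0 = no stroke. Hypothetical values for illustration only.
records = [
    ("prediction_only",          1, 0),
    ("prediction_plus_accuracy", 1, 1),
    ("full_information",         0, 0),
    ("full_information",         1, 1),
    # ... roughly 2,000 patient records in the real experiment
]

def accuracy_by_condition(rows):
    """Return the fraction of correct stroke predictions per information condition."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for condition, predicted, actual in rows:
        total[condition] += 1
        correct[condition] += int(predicted == actual)
    return {c: correct[c] / total[c] for c in total}

print(accuracy_by_condition(records))
```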

Lakkaraju said she is happy to see NIST trying to quantify trust, but she said the agency should consider the role that explanations can play in human trust of AI systems. In the experiment, when doctors were given an explanation without data to inform the decision, their accuracy in predicting strokes went down, implying that an explanation alone can lead people to trust artificial intelligence too much.

“Explanation can bring an unusually high degree of trust even when it is not warranted, which is a recipe for problems,” she said. “But once you start putting numbers on the quality of the explanation, people’s trust gradually calibrates.”

Other countries are also grappling with the question of trust in artificial intelligence. The United States is one of 40 countries that signed the Principles of Artificial Intelligence, an agreement that emphasizes trustworthiness. A document signed by more than a dozen European countries states that trustworthiness and innovation go hand in hand and can be considered “two sides of the same coin.”

NIST and the OECD, a group of 38 advanced economies, are both developing tools to designate AI systems as high or low risk. The Canadian government created an Algorithmic Impact Assessment process for businesses and government agencies in 2019. It sorts AI into four categories, from having no impact on people’s lives or the rights of communities to posing very high risk and causing lasting harm to individuals and communities. Evaluating an algorithm takes about 30 minutes. The Canadian approach requires developers to notify users for all but the lowest-risk systems.
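A minimal sketch of the tiering logic described above. This is not the official questionnaire: the real Algorithmic Impact Assessment asks many questions and has its own scoring rules, and the thresholds and function names here are assumptions for illustration.

```python
def impact_level(score: float) -> int:
    """Map a normalized impact score (0.0-1.0) to one of four levels,
    from Level 1 (little or no impact) to Level 4 (very high impact).
    Thresholds are illustrative assumptions."""
    if score < 0.25:
        return 1
    if score < 0.50:
        return 2
    if score < 0.75:
        return 3
    return 4

def must_notify_users(level: int) -> bool:
    """Per the article, developers must notify users for all but the
    least risky systems (assumed here to mean Level 1)."""
    return level > 1

for s in (0.1, 0.4, 0.8):
    lvl = impact_level(s)
    print(f"score={s:.1f} -> Level {lvl}, notify users: {must_notify_users(lvl)}")
```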

EU lawmakers are considering artificial intelligence regulations that could help define which kinds of AI are considered low or high risk and set global standards for how the technology is regulated. Like Europe’s landmark GDPR privacy law, the EU’s AI strategy could lead the world’s largest companies that deploy artificial intelligence to change their practices globally.

The regulation would require a public registry of high-risk forms of artificial intelligence in use, kept in a database managed by the European Commission. Examples of AI deemed high risk in the document include AI used for education, employment, or as a safety component of utilities such as electricity, gas, or water. The proposal may be revised before it passes, but the draft calls for a ban on AI for government social scoring and on real-time facial recognition of citizens.

The EU proposal also encourages companies and researchers to experiment in areas known as “sandboxes,” designed to ensure the legal framework is “innovation-friendly, future-proof, and resilient to disruption.” Earlier this month, the Biden administration introduced the National Artificial Intelligence Research Resource Task Force, which aims to share government data for research on issues such as health care or autonomous driving. The final plan would require approval from Congress.

For now, AI user trust scores are being developed for AI practitioners. Over time, though, such scores could empower individuals to avoid untrustworthy AI and nudge the marketplace toward deploying robust, tested, and trusted systems. That is, of course, provided people know AI is being used at all.

