
Microsoft is asking lawmakers to pass new guidelines for responsible AI

On Thursday, Microsoft presented a proposal for regulating artificial intelligence that calls for building on existing structures to govern the technology.

Microsoft’s proposal is the latest in a series of industry ideas to regulate a technology that has garnered public attention, attracted billions of dollars in investment, and led several of its key architects to argue that AI urgently needs regulation before it inflicts far-reaching harm on society.

Speaking to an audience in Washington, D.C., on Thursday, Microsoft President Brad Smith proposed a five-point plan to govern AI: implement and build on existing frameworks, require effective brakes on the use of AI, develop a broader legal and regulatory framework, promote transparency, and pursue new public-private partnerships.

“We need to have a clear view and take responsibility as we develop this technology,” Smith said.

“It will send a signal to the market that this is the future that we all need to embrace,” Smith told an audience that included congressmen, government officials, labor leaders and civil society groups.

Smith’s comments come amid growing interest in Washington in regulating the fast-growing AI industry. At two Senate hearings last week, lawmakers pressed tech company executives and researchers on how to regulate the technology and address the many concerns AI raises, including its potential to accelerate harms such as cyberattacks, consumer fraud, discrimination, and bias.

Earlier this week, the Biden administration released an updated framework to promote responsible AI use, including a roadmap for investment in AI research and design priorities. The White House also called for input from the public to mitigate AI risks. The government has previously raised concerns about bias and equity issues with the technology.

Microsoft’s recommendations align with OpenAI CEO Sam Altman’s testimony before Congress last week. Both Altman and Smith called for a licensing system for AI companies, overseen by a new independent agency. Smith added that he would like to see AI specialists within regulatory agencies evaluating products.

In his remarks on Thursday, Smith pointed to NIST’s Artificial Intelligence Risk Management Framework as an example of a framework regulators could build on, and said he would like an executive order requiring the federal government to acquire AI services only from companies that adhere to responsible-use principles.

Microsoft has played a pivotal role in OpenAI’s recent advances, funding the company with billions of dollars in investments and cloud computing credits that the startup has used to train its GPT models, which are widely regarded as industry leaders. Microsoft has begun incorporating OpenAI’s technology into its products, including its Bing search engine, and the partnership between the two companies is a key driver of recent AI advances.

The companies’ critics reacted skeptically to their regulation proposals, saying a licensing system could potentially harm other start-ups. Critics have also pointed to similar calls from companies like Meta, which asked for regulation after coming under congressional scrutiny in the wake of the Cambridge Analytica scandal. OpenAI has already spoken out against stricter regulation in the European Union and has threatened to pull out of that market if regulators continue on their current course.

Asked by Rep. Ritchie Torres, D-N.Y., how lawmakers can balance the need to slow down and regulate the technology with maintaining a strategic competitive advantage over China, Smith said part of the solution is to build strong partnerships with other nations on a global framework for responsible AI. He also urged Congress not to move so slowly that it falls behind US allies, and said Microsoft hopes Congress will pass federal privacy legislation this year.

Noting the importance of addressing the national security concerns raised by deepfakes and their potential to support foreign intelligence operations, Smith called for more transparency about when AI is used to generate content. Smith said Microsoft aims to produce an annual transparency report for its AI products.

Smith acknowledged lawmakers’ numerous concerns while citing positive examples of AI use, including using the technology in real time to map 3,000 schools in Ukraine damaged by Russian forces and then sending that information to the United Nations as part of war crimes investigations.

Corrected on May 25, 2023: This story has been corrected to indicate that Brad Smith called for the creation of a new independent agency to regulate AI companies.

