
"The regulation of artificial intelligence by the U.S. government"

The U.S. government is contemplating legislation to help society adjust to the rise of artificial intelligence.


Early adopters of this technology are already experiencing increased productivity. For instance, Klarna, a provider of buy now, pay later financial services, anticipates that its AI assistant tool will boost its profits by $40 million by the end of 2024.
Klarna CEO Sebastian Siemiatkowski stated in an interview with 24 News that the AI assistant tool essentially performs the work of 700 full-time agents and handles two-thirds of all incoming chat tasks.
Klarna's AI assistant tool is based on OpenAI's systems, which also power ChatGPT and Sora — two products that have gained attention from the public and Congress.
In 2023, members of Congress engaged in discussions, private meetings, and educational sessions with prominent tech executives, including Sam Altman, CEO of OpenAI. The White House subsequently sought commitments from 15 private industry leaders to help lawmakers understand how to identify risks and utilize new technologies. The list includes major tech industry players as well as newcomers like Anthropic and OpenAI.

The Senate Task Force on AI, established in 2019, has helped enact at least 15 bills focused on research and risk assessment. However, compared with the measures the European Union passed in 2024, the U.S. regulatory environment appears relatively lenient.
Erik Brynjolfsson, a senior fellow at the Stanford Institute for Human-Centered AI, said in an interview with 24 News that Brussels' bureaucratic rules hamper innovation for companies, in contrast to the more entrepreneurial environment in the United States.
Economists have long been concerned that artificial intelligence could diminish job opportunities for white-collar workers, similar to how globalization has affected blue-collar workers.
