Singapore government: currently does not intend to regulate AI, and is developing AI testing tools with enterprises
Source: The Paper
Reporter: Shao Wen
"We're not currently thinking about regulating AI. At this stage, it's clear we want to be able to learn from industry. Understanding how AI is being used before we decide whether we need to do more from a regulatory perspective."
The Singapore government has called on companies to cooperate in developing AI Verify, which it bills as "the world's first AI testing toolkit." Google, Microsoft and IBM have all joined the AI Verify Foundation.
With many countries exploring the regulation of generative artificial intelligence (AI), the Singapore government has stated that it is in no rush to formulate artificial intelligence regulations.
On June 19, local time, Lee Wan Sie, director of trusted AI and data at Singapore's Infocomm Media Development Authority (IMDA), said in an interview with CNBC: "We are not currently considering regulating artificial intelligence."
However, the Singapore government is also working to promote the responsible use of artificial intelligence, calling on companies to collaborate on AI Verify, the "world's first AI testing toolkit." AI Verify comprises an AI governance testing framework and a software toolkit that lets users technically test AI models and document process checks. It was launched as a pilot project in 2022, with tech giants IBM and Singapore Airlines already participating.
Government and industry establish cooperation
Concerns about the risks of generative AI have grown in recent months as the chatbot ChatGPT has become all the rage. "At this stage, it's clear that we want to learn from the industry. We need to understand how AI is being used before we decide whether we need to do more from the regulatory side," Lee Wan Sie said, adding that regulation could still be introduced at a later stage.
"We recognize that as a small country, as a government, we may not be able to solve all the problems. So it is very important that we work closely with industry, research institutions and other governments," Lee said.
Google, Microsoft and IBM have joined the AI Verify Foundation, a global open-source community designed to discuss AI standards and best practices and to collaborate on AI governance. "Microsoft applauds the Singapore government for its leadership in this area," Microsoft President Brad Smith said in a statement. "By creating practical resources such as AI governance testing frameworks and toolkits, Singapore is helping organizations establish robust governance and testing processes."
"The industry is much more hands-on when it comes to AI. Sometimes, when it comes to regulations, you can see a gap between what policymakers think about AI and what businesses actually do," Haniyeh Mahmoudian, an adviser to the US National AI Advisory Committee, told CNBC. "So this type of collaboration, specifically creating these types of toolkits with input from the industry, is beneficial for both parties."
At the Asia Tech x Singapore summit in June, Singapore's Minister for Communications and Information Josephine Teo said that while the government recognizes the potential risks of AI, it cannot promote the ethical use of AI on its own: "The private sector, with its professional expertise, can meaningfully engage with us in achieving these goals."
While "there are very real fears and concerns about the development of AI," she said, AI needs to be actively steered toward beneficial uses and away from bad ones. "It's at the heart of how Singapore sees AI."
Meanwhile, some countries have taken steps to regulate AI. On June 14, the European Parliament passed the Artificial Intelligence Act (AI Act), imposing greater restrictions on generative AI tools such as ChatGPT; developers will be required to submit their systems for review before release. French President Emmanuel Macron also said last week that AI regulation is needed. The United Kingdom is setting up a Foundation Model Taskforce to study the safety risks posed by AI and is preparing to host a global AI safety summit, with a view to becoming the geographic center of global AI safety regulation.
"Father of ChatGPT" speaks in Singapore
Stella Cramer, Asia-Pacific head of the technology group at international law firm Clifford Chance, said Singapore could act as a "steward" in the region, allowing innovation to take place in a safe environment. Clifford Chance is working with regulators to develop a series of market guidelines and frameworks.
"What we've seen is a consistent approach around openness and collaboration. Singapore is seen as a safe jurisdiction to test and roll out your technology in a controlled environment with the support of regulators," Cramer said.
This idea seems to coincide with that of OpenAI CEO Sam Altman. On June 13, Altman attended a "fireside chat" on OpenAI's global tour at Singapore Management University, where he elaborated on how to manage AI risks. He believes the focus is on letting the public know about and experience new developments, which will ensure that any potential harm is detected and addressed before its impact becomes widespread.
"It's more efficient than developing and testing a technology behind closed doors and releasing it to the public assuming all possible risks have been identified and prevented," Altman said. Learn everything. No matter how much a product is tested to minimize harm, someone will find a way to exploit it in ways its creators never thought possible.” This is true of any new technology, he notes.
"We believe that iterative deployment is the only way to do this." Altman added that the gradual introduction of new versions would also allow society to adapt as AI evolves, while generating feedback on how it can be improved.
Singapore has already launched several pilot projects, such as the FinTech Regulatory Sandbox and the HealthTech Sandbox, which allow industry players to test their products in a live environment before public launch. "These structured frameworks and testing toolkits will help guide AI governance policies to enable enterprises to develop AI safely and securely," Cramer said.