"Before providing services to the public that use generative AI products a security assessment should be applied for through national internet regulatory departments," states a proposal by the Cyberspace Administration of China.
And AI-generated content must "reflect core socialist values and not contain content on subversion of state power."
The regulator said it is seeking public input on the new rules, which under Beijing's system are almost certain to become law.
The regulations come as a flurry of mainland companies rush to develop services that can mimic human speech, after San Francisco-based OpenAI launched ChatGPT in November.

But rapid advancements in AI have stoked global alarm over the technology's potential for disinformation and misuse, with deepfake images and videos showing people mouthing things they never said.
ChatGPT is not available in China, but the software is gaining a base of mainland users who use virtual private networks to get around the ban, deploying it to write essays and cram for exams.

In January, Beijing brought in rules that would require businesses offering deepfake services to obtain the real identities of their users. They also require deepfake content to be appropriately tagged to avoid "any confusion."
Beijing has also warned that deepfakes present a "danger to national security and social stability."

Beijing has announced ambitious plans to become a global leader in AI by 2030.
But domestic efforts to develop competing products have faltered, hamstrung by Beijing's strict censorship and a US squeeze on chip imports.

A US Commerce Department agency, meanwhile, is opening an inquiry into how companies and regulators can ensure that AI systems are trustworthy, legal and ethical.
The probe by the National Telecommunications and Information Administration will focus on the best methods for auditing AI systems and will eventually produce policy recommendations for the White House and Congress as the technology rapidly becomes mainstream.

"The goal of this process is to come up with recommendations, based on what we hear from experts in this field, about what the government can do to make sure we're promoting responsible AI innovation," said NTIA head Alan Davidson.
Chatbots - computer programs designed to convincingly simulate human conversation - are quickly becoming a central part of everyday life.

The rapid uptake of ChatGPT, currently the most popular chatbot, prompted thousands of AI experts and leaders to call for a pause on the technology before proper guardrails are in place. Google and Microsoft have introduced their own chatbots.
Critics say AI systems can perpetuate real-world biases, confuse and deceive consumers, spread misinformation and even violate preexisting laws if there is not proper government and industry oversight.

AGENCE FRANCE-PRESSE, BLOOMBERG