China and EU regulate AI, US speculates

While the European Union is playing the long game in drafting regulations for AI, China has surprised many by moving quickly to enact far-reaching rules. A report compares the situation in the two jurisdictions. On a different track, the US government is looking at ways to improve its AI research infrastructure.
The European Union has a long-standing reputation for regulating many aspects of life, and its GDPR has been influential worldwide. With its upcoming AI Act, it aims to protect human rights and society in general. China has also introduced AI regulation, with its regulators jostling over who controls it.
A CNBC report compares the two approaches and asks whether one could become the dominant model of regulation, noting that the EU project is much larger (and slower) than China’s.
The report examines China’s requirement that companies notify users when an algorithm is used to deliver certain information to them, and give them the choice to opt out, asking whether the rule serves the public interest or the government’s. Or perhaps it is simply a large-scale experiment the rest of the world can learn from.
In some ways, the technical goals of the EU and China are similar, and the West should pay attention to China’s actions, according to one commentator. A notable difference is China’s willingness to test new approaches directly on the public.
Some commentators foresee a divide in approaches to AI development, and in particular to policing it. Companies may need to adapt their products to comply with local regulations, something they are already practiced at, a commentator told CNBC.
First steps toward coordinating federal AI research in the United States
The United States National AI Initiative Act of 2020 became law in 2021 and coordinates AI research across the federal government to accelerate progress toward economic and security gains. As part of the National AI Initiative, the act established a team to create a roadmap for shared research infrastructure: the National Artificial Intelligence Research Resource (NAIRR) Task Force, formed in June 2021.
The task force has released its first assessment, based on public meetings and expert consultations. ‘Considering a National Artificial Intelligence Research Resource (NAIRR): Preliminary Findings and Recommendations’ describes a landscape in which access to AI development resources is largely limited to large corporations and wealthy universities.
“The strategic goal of establishing a NAIRR is to strengthen and democratize the U.S. AI innovation ecosystem in a way that protects privacy, civil rights, and civil liberties,” says the report. It also calls for the day-to-day operations of the NAIRR to be independent of government, for it to set standards for responsible AI research, and for its resources, including testbeds, to be accessible in a user-friendly way.
Benchmarks are where the NAIRR intersects with biometrics, as examples of how research could be conducted in specific areas. The report cites NIST and its facial recognition testing as an example of biometrics benchmarking.
To secure the NAIRR, the report calls for a zero-trust architecture with strong identity and access controls, in line with the broader mandate for all federal agencies to adopt the approach.
However, some believe the US has already fallen behind China, and other groups, such as AI Now, are also calling for the democratization of AI research.
Article topics
AI | biometrics | China | EU | European | facial recognition | privacy | regulation | research and development | United States