A committee of Massachusetts Institute of Technology leaders and scholars has released a series of policy briefs on the governance of artificial intelligence. The main policy paper, titled "A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector," suggests that existing U.S. government agencies could be extended to oversee AI tools, and stresses the importance of defining the purpose of each AI tool in order to develop appropriate regulatory provisions.
“We already have regulation and governance in this country for a lot of relatively high-risk things,” said Dan Huttenlocher, dean of the MIT Schwarzman College of Computing. “We’re not saying that’s enough, but let’s start with where we already have regulation, and that’s a practical approach.”
The policy paper emphasizes the importance of AI providers defining the purpose and intent of their applications in advance, so that the regulatory system can determine which existing regulations and regulators apply to a specific AI tool. The document also discusses situations in which AI systems are built in multiple layers, what technologists call a "stack," highlighting the complexity of assigning responsibility and oversight.
Beyond existing institutions, the policy paper also proposes new regulatory capacities. It calls for audits of new AI tools, which could be initiated by the government, driven by users, or arise from legal liability lawsuits. It also recommends developing public auditing standards, which could be established by a nonprofit entity similar to the Public Company Accounting Oversight Board (PCAOB) or by a federal entity similar to the National Institute of Standards and Technology (NIST).
In addition, the document considers the possibility of creating a new, government-approved "self-regulatory organization" (SRO) that would function much like the Financial Industry Regulatory Authority (FINRA). An SRO focused on AI could accumulate domain-specific knowledge, keeping it flexible and responsive in dealing with a rapidly changing AI industry.
The policy paper notes that specific legal questions will need to be addressed in the AI field, such as copyright issues related to intellectual property. The committee also recognizes that "human plus" legal issues, that is, situations in which AI has capabilities beyond those of humans, such as mass-surveillance tools, may require special legal consideration.
The series of policy briefs analyzes AI regulation from multiple disciplinary perspectives, reflecting the committee's aim of influencing policymaking from a broad vantage point rather than focusing on technical issues alone. The committee emphasizes the expertise of academic institutions at the intersection of science, technology, and society, and argues that policymakers need to think about the relationship between social systems and technology.
By advocating that technological advancement be accompanied by appropriate regulation, the committee hopes to bridge the gap between those enthusiastic about AI and those worried about it, and to promote the healthy development of the AI industry.