In June 2023, 700 members from 80 countries passed a resolution on the effective regulation of Artificial Intelligence, based on a draft Charter and the background paper “Artificial General Intelligence – Regulating to Promote Human Control” (updated 24 May 2023).

By Christine Elwell, B.A., LL.B., LL.M., CEO of University-Rosedale Green Party of Canada (GPC).

The resolution sets out these elements:

  • Monitor and Regulate the development of AI/AGI systems 
  • Human Oversight and Control 
  • Transparency 
  • Serves Democracy and Fundamental Rights
  • That AI/AGI serves to bring benefits to human life both local and global
  • That there be regional and international cooperation 
  • On Environmental Protection 


At the GPC’s 2022 Virtual Meeting, a presentation was made by Wyatt Tessari on the importance of AGI, and an informal committee was struck to develop a Green AGI Charter. A comparative policy paper was produced that provides in more detail the elements of an AGI Charter recommended here: Christine Elwell, “Artificial General Intelligence – Regulating to Promote Human Control” (the “backgrounder”). For GPC values in developing policy, consistent with the Global Greens Charter, see backgrounder p. 4. 


AGI is software that can match human beings at all tasks. Its further improvement is driven mainly by the system’s own actions rather than by work performed on it by others. This could lead to levels of intelligence far beyond ours, sometimes known as superintelligence. For definitions, see the 2021 EU draft Reg. on AI and the 2019 OECD Principles on AI, backgrounder p. 7-8 and 17. The control problem recognizes that machine superintelligence could assert itself against the project that brought it into existence and against the rest of the world, backgrounder p. 3-4. 


Summary of common AI/AGI policy 


1.) Monitor and Regulate the development of AI/AGI systems 


For specified high-risk AI systems, the EU draft Reg. sets out enforceable obligations before a system is placed on the market, see backgrounder p. 8-10: 


  • Adequate risk assessment and mitigation systems; 
  • High quality of the datasets feeding the system to minimise risks and discriminatory outcomes; 
  • Logging of activity to ensure traceability of results; 
  • Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance (including when the system is substantially modified); 
  • Clear and adequate information to the user (including to address “deep fakes”);
  • Appropriate human oversight measures (including the ability to intervene in the AI operation through a “stop” button that cannot be overridden by the system itself) to minimise risk; 
  • High level of robustness, security and accuracy. 
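The “stop button” obligation above can be illustrated with a minimal sketch (illustrative only; the function and names are hypothetical, not drawn from the draft Reg.): the decisive property is that the kill switch lives in a supervisor process outside the monitored system, so the system itself has no way to override it.

```python
import multiprocessing as mp
import time

def untrusted_ai_task():
    # Hypothetical stand-in for an AI workload: it may do anything
    # internally, but it holds no handle on the supervisor's stop switch.
    while True:
        time.sleep(0.01)

def run_with_stop_button(max_runtime_s=0.2):
    """Run the task in its own process; the kill switch stays with the supervisor."""
    proc = mp.Process(target=untrusted_ai_task, daemon=True)
    proc.start()
    time.sleep(max_runtime_s)  # stand-in for a human deciding to press "stop"
    proc.terminate()           # the override the system itself cannot refuse
    proc.join(timeout=2.0)
    return not proc.is_alive() # True once the system has actually stopped

if __name__ == "__main__":
    print("stopped:", run_with_stop_button())
```

The design point is separation of privilege: oversight is enforced at a layer the monitored software cannot reach, rather than by asking the system to police itself.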

The 2019 US National Institute of Standards and Technology AI risk management framework summarizes the characteristics of trustworthy AI technologies: accuracy, reliability, resiliency, objectivity, security, explainability, safety, and accountability. Testing methodologies are needed to validate and evaluate AI technologies’ performance, especially to prescribe protocols and procedures. Applications include testing for conformance, interoperability and security, and comparing AI systems to human performance. These are also part of ongoing technology and system assessments, and apply whenever existing AI systems and their outputs are repurposed for a task outside their original intent, backgrounder p. 12. 


The OECD sets out similar principles for responsible stewardship of trustworthy AI: i) inclusive growth, sustainable development and well-being; ii) human-centred values and fairness; iii) transparency and explainability; iv) robustness, security and safety; and v) accountability, backgrounder p. 17-18. 


2.) Human Oversight and Control 


Under the EU’s draft Reg., authorities provide market surveillance, users ensure oversight and monitoring, and providers conduct post-market monitoring and report serious incidents, malfunctioning, and corrective action taken. 


Regulation should include capacity control methods, e.g. boxing methods; limiting internal capacity by stunting (e.g. limiting memory); and mechanisms to automatically detect and react to various kinds of containment failure or attempted transgression, e.g. tripwires: a mechanism that performs diagnostic tests on the system (possibly without its knowledge) and effects a shutdown if it detects signs of dangerous activity, backgrounder p. 3. 
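The tripwire idea above can be sketched in a few lines (a hypothetical illustration, assuming a supervisor that sees the system's action log and resource use; the action names and memory cap are invented for the example): diagnostics run outside the monitored system, and any sign of transgression triggers shutdown.

```python
# Actions the boxed system is never permitted to take (hypothetical examples).
FORBIDDEN_ACTIONS = {"open_network_socket", "modify_own_weights"}

def tripwire_check(action_log, memory_used_mb, memory_cap_mb=512):
    """Diagnostic test: flag containment failure or attempted transgression."""
    if memory_used_mb > memory_cap_mb:  # stunting limit (e.g. memory) exceeded
        return "memory cap exceeded"
    attempted = FORBIDDEN_ACTIONS.intersection(action_log)
    if attempted:
        return "forbidden action: " + sorted(attempted)[0]
    return None

def supervise(action_log, memory_used_mb):
    """Effect a shutdown if the tripwire detects signs of dangerous activity."""
    reason = tripwire_check(action_log, memory_used_mb)
    if reason is not None:
        return ("SHUTDOWN", reason)  # in a real deployment: cut the process/power
    return ("RUNNING", None)
```

For example, `supervise(["read_input", "open_network_socket"], 100)` returns a `SHUTDOWN` verdict, while `supervise(["read_input"], 100)` lets the system keep running. The diagnostics can run without the system's knowledge precisely because they sit outside it.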


The EU’s draft Reg. includes human veto power, stop buttons and sandboxes to promote responsible AI/AGI, see backgrounder p. 8-10. The OECD Principles also support an enabling policy environment for experimentation with AI, backgrounder p. 19. 


Canada’s Bill C-27 provides authority to require that a person responsible for a high-impact system cease using it or making it available for use if there are reasonable grounds to believe that the use of the system gives rise to a serious risk of imminent harm, backgrounder p. 17.


California’s 2017 Asilomar AI Principles: highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation. Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact. And on Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures, backgrounder p. 14. 


The OECD’s Principle on Human-centred values and fairness also sets out that AI actors should implement mechanisms and safeguards such as the capacity for human determination. AI systems should be robust, secure, safe and traceable throughout their entire lifecycle so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately and do not pose unreasonable safety risks, backgrounder p. 18.


Promote professional Codes of Conduct built around the idea of provably safe AI programs.  


3.) Transparency 


The EU’s 2022 Digital Services Act requires service providers to explain in a transparent and comprehensible way the use of AI and algorithmic decision systems, backgrounder p. 7. 


4.) Serves Democracy and Fundamental Rights 


Key features of regulation: checking for discriminatory bias, complying with the duty to give reasons, and providing a right to file objections and to request that government professionals overrule AI decisions. 


The Asilomar AI Principles feature Judicial Transparency: any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority. 


The OECD Transparency and Explainability Principles also provide for the right to challenge AI decisions, backgrounder p. 18. 


Under the EU draft Reg., providers of high-risk AI systems shall immediately report any serious incident or any malfunctioning of those systems that constitutes a breach of obligations to protect fundamental rights (human dignity, freedom, equality, democracy and the rule of law, including the rights to non-discrimination, data protection and privacy, and the rights of the child) to the market authorities where the incident or breach occurred, backgrounder p. 10. 


The 2018 California Consumer Privacy Act establishes consumer rights over personal information, including rights to disclosure of personal information, deletion of personal information, and stopping a business from selling personal information, backgrounder p. 12. 


Google’s 2018 AI Principles include a list of AI applications the company will not pursue, including weapons, surveillance technologies that violate international norms, or any technologies that contravene international law and human rights, backgrounder p. 14. 


5.) That AI/AGI serves to bring benefits to human life both local and global 


The Asilomar AI Principles: AI technologies should benefit and empower as many people as possible. Shared Prosperity: the economic prosperity created by AI should be shared broadly, to benefit all of humanity. Common Good: superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization, backgrounder p. 14. 


OpenAI, a San Francisco-based AI research and deployment company released the 2018 OpenAI Charter: “We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power”, backgrounder p. 15.  


6.) That there be regional and international cooperation  


The OECD Principles recommend that Members and non-members adopt national policies to: i) invest in AI research and development; ii) foster a digital ecosystem for AI; iii) shape an enabling policy environment for AI; iv) build human capacity and prepare for labour market transformation; and v) engage in international co-operation for trustworthy AI, backgrounder p. 18-19.  


The EU’s draft Reg. provides a specific example of regional cooperation. Where the operator of an AI system does not take adequate corrective action to comply with the EU draft Reg., the market surveillance authority shall take all appropriate provisional measures to prohibit or restrict the AI system’s being made available on its national market, to withdraw the product from that market or to recall it, and shall inform the Commission and the other Member States, backgrounder p. 11. 


OpenAI Charter: “We will actively cooperate with other research and policy institutions; we seek to create a global community working together to address AGI’s global challenges”, backgrounder p. 15. 


Indeed, international cooperation with like-minded partners to safeguard fundamental rights and transparency, and to minimise current and emerging technological threats, is an urgent political priority. A body modelled on the Intergovernmental Panel on Climate Change should be considered as a global mechanism to monitor and regulate the development of AGI systems, backgrounder p. 20. 


7.) On Environmental Protection 


European Greens insist that regulation address the environmental risks of AI/AGI, given its increasing resource demands: energy consumption, carbon footprint, and the exploitation of metals and minerals, backgrounder p. 6-7. 


Under the EU’s draft Reg., non-high-risk AI systems are merely encouraged to conform with voluntary Codes of Conduct to foster environmental sustainability; given the serious environmental risks, voluntary codes alone are insufficient, backgrounder p. 11. 


The OECD’s Principle on Inclusive growth, sustainable development and well-being, however, includes protecting natural environments, backgrounder p. 18.

