The Information Technology Industry Council (ITI), an industry group representing major tech companies including Apple, Google, Microsoft, Amazon, and Facebook, this week released Artificial Intelligence Policy Principles [PDF] covering responsible and ethical artificial intelligence development.
"We recognize our responsibility to integrate principles into the design of AI technologies, beyond compliance with existing laws," reads the document. AI researchers and stakeholders should "spend a great deal of time" working to ensure the "responsible design and deployment of AI systems." Some of the specific policies addressed are outlined below:
Government: The ITI supports government investment in fields related to AI and encourages governments to evaluate existing tools and use caution before adopting new laws, regulations, and taxes that could impede the responsible development and use of AI. ITI also discourages governments from requiring tech companies to provide access to technology, source code, algorithms, and encryption keys.
Public-Private Partnerships: Public-Private Partnerships should be utilized to speed up AI research and development, democratize access, prioritize diversity and inclusion, and prepare the workforce for the implications of artificial intelligence.
Responsible Design and Deployment: Highly autonomous AI systems must be designed in a manner consistent with international conventions that preserve human dignity, rights, and freedoms. It is the industry's responsibility to recognize the potential for misuse and commit to ethics by design.
Safety and Controllability: Autonomous agents must treat the safety of users and third parties as a paramount concern, and AI technologies should aim to reduce risks to humans. AI systems must include safeguards that ensure humans can maintain control over them.
Robust and Representative Data: AI systems should leverage large, representative datasets to avoid potentially harmful bias.
The ITI goes on to encourage robust support for AI research, a flexible regulatory approach, and strong cybersecurity and privacy provisions.
ITI President Dean Garfield told Axios that the guidelines have been released as a way for the industry to get involved in the discussion about AI. In the past, the group has learned "painful lessons" about staying on the sidelines of debates about emerging technology.
"Sometimes our instinct is to just put our head down and do our work, to develop, design, and innovate," he said. "But there's a recognition that our ability to innovate is going to be affected by how society perceives it."
Top Rated Comments
We can all see what the permissive regulatory framework applied to internet businesses has wrought on society: erosion of personal sovereignty online and easily manipulated media that has eroded trust in essential institutions.
Imagine what this lax framework will curse us with in the AI era. At a minimum, personal identity needs to be treated legally like personal property, and a restrictive legal framework for AI needs to be imposed. The burden should be to prove utility and safety *beforehand*, not after the fact.