Nick King · June 21, 2023 · 6 min read

Taking Accountability for Advanced Applied AI Governance: A Proposal for an Open AI Impact Assessment Scale (AIIAS)

Introducing the AI Impact Assessment Scale: The Importance of Governance and the Power of Self-Regulation



As artificial intelligence (AI) continues to revolutionize various aspects of our lives, we must not only embrace its potential but also recognize the need for proper governance and oversight. The development and implementation of AI technologies present novel challenges, making it crucial for stakeholders to proactively address potential risks and ethical considerations.

As the world adapts to the addition of large language models (LLMs) and their many applications, and as cultures, communities, and governments work out how to respond to the impact they can have, there is a clear gap in how these technologies are pragmatically adopted.

In this context, working with customers, partners, and industry specialists, we developed a proposal for an AI Impact Assessment Scale (AIIAS): a tool to bring consistency and transparency to the AI industry.

The Importance of Governance

AI governance is essential in ensuring that AI applications are designed and deployed responsibly, fairly, and securely. It involves establishing guidelines, principles, and frameworks that promote accountability, transparency, and ethical considerations. Proper governance helps to prevent potential harm, mitigate unintended consequences, and build trust in AI systems among users and stakeholders.

The Power of Self-Regulation

While government entities play a crucial role in shaping AI policies and regulations, self-regulation within the AI industry is equally important. By taking proactive steps to develop and adopt ethical standards, AI developers and operators can demonstrate their commitment to responsible AI practices. This not only fosters trust but also allows the industry to adapt quickly to emerging issues and concerns.

Self-regulation can be more agile and flexible than waiting for government entities to weigh in, which often involves lengthy legislative and bureaucratic processes. By actively addressing ethical considerations and potential risks, the AI industry can set the stage for a more efficient and responsive regulatory environment.


Why We Need a Standardized AI Assessment and Classification System: Introducing the AI Impact Assessment Scale



The AIIAS is a practical and effective tool for self-regulation in the AI industry. By providing a standardized way to assess and classify AI applications based on their impact, exposure, and potential for miscalculation, the AIIAS encourages developers and operators to carefully consider the implications of their technologies. This empowers the AI industry to take responsibility for its creations and work proactively to ensure responsible AI development and deployment.

To safeguard the future of AI and harness its potential for the greater good, it's essential that we embrace the importance of governance and the power of self-regulation. By implementing tools like the AI Impact Assessment Scale and adhering to ethical principles, we can work together to create a responsible AI ecosystem that benefits everyone.

Our proposal is that organizations openly communicate where their applications sit on the AIIAS and that all users are clearly informed of how these technologies are assessed.


The AI Impact Assessment Scale: Breaking Down the Levels


The AIIAS consists of five levels, each taking into account factors that determine an AI application's potential influence on society and individual lives. A brief code sketch of the scale follows the list.

1. AI-Level 1 (Low Impact): These AI applications are designed for broad usage and have minimal potential for negative consequences. Examples include chatbots, weather prediction apps, and recommendation systems for online shopping or streaming platforms.

2. AI-Level 2 (Moderate Impact): AI-Level 2 applications may have limited potential for negative consequences but warrant some caution. These applications could include AI-driven content moderation systems, which may occasionally flag or remove legitimate content, or educational AI tools that may not always produce accurate results.

3. AI-Level 3 (Significant Impact): AI applications in this category are more likely to have significant societal impacts or require careful oversight. Examples might include AI-driven surveillance systems, facial recognition technologies, or AI-powered credit scoring systems that could lead to discriminatory outcomes.

4. AI-Level 4 (High Impact): These AI applications have a high potential for negative consequences and should be used only under strict supervision. Examples include autonomous weapon systems, AI-driven manipulation or identity-assuming technologies (e.g., deepfakes), or AI systems that could exploit personal data for malicious purposes.

5. AI-Level 5 (Extreme Impact): AI-Level 5 applications are considered extremely high-risk and should be restricted from public use. Examples could include AI technologies with the potential to cause catastrophic damage, such as AI-driven cyber-warfare systems or force-directed weapons.
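
To make the scale easier to adopt in tooling, here is a minimal sketch, in Python, of how the five levels might be encoded for self-reporting. The names (AIIASLevel, EXAMPLES) are illustrative assumptions on my part, not part of the proposal itself.

```python
# A minimal sketch of the AIIAS levels, assuming an organization wants to
# encode them for self-reporting. Descriptions restate the proposal above;
# nothing here is a normative implementation.
from enum import IntEnum


class AIIASLevel(IntEnum):
    """The five AIIAS levels, ordered by potential impact."""

    LOW = 1          # Broad usage, minimal potential for negative consequences
    MODERATE = 2     # Limited potential for harm; some caution warranted
    SIGNIFICANT = 3  # Significant societal impact; careful oversight required
    HIGH = 4         # High potential for harm; strict supervision
    EXTREME = 5      # Extremely high risk; restricted from public use


# Illustrative examples drawn from the level descriptions above.
EXAMPLES = {
    AIIASLevel.LOW: ["chatbots", "weather prediction", "recommendations"],
    AIIASLevel.MODERATE: ["content moderation", "educational AI tools"],
    AIIASLevel.SIGNIFICANT: ["surveillance", "facial recognition",
                             "credit scoring"],
    AIIASLevel.HIGH: ["autonomous weapons", "deepfakes"],
    AIIASLevel.EXTREME: ["cyber-warfare systems"],
}

# Because IntEnum keeps the levels ordered, tooling can compare assessments,
# e.g., flag anything at AI-Level 4 or above for mandatory review.
level = AIIASLevel.HIGH
print(f"AI-Level {level.value} ({level.name}): review required = {level >= AIIASLevel.HIGH}")
```

Using an ordered type rather than free-text labels means a disclosure registry could sort, filter, and escalate applications consistently across organizations.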


Non-Negotiable Foundation for All AI Applications

To ensure that all AI applications adhere to ethical and responsible principles, the following non-negotiable requirements must be met (a sketch of how they might be tracked together follows the list):

1. Transparency: Clear communication of the AI system's purpose, capabilities, and limitations is essential for building trust and managing user expectations. For example, an AI-driven medical diagnostic tool should provide users with information about its accuracy rate, the data it uses, and any potential limitations it may have in certain cases. By being transparent, users and stakeholders can make informed decisions about the tool's suitability for their needs and understand any potential risks associated with its use.

2. Accountability: Holding AI operators responsible for the system's performance, KPIs, and unintended consequences is crucial for maintaining ethical standards. For instance, if an AI-powered hiring tool inadvertently discriminates against a specific demographic group, the developers should be held accountable and take corrective measures to address the issue within the company's resolution policy. This may involve refining the algorithm, retraining the system, or adding human-in-the-loop processes.

3. Privacy & Security: Protecting user data and ensuring the secure storage and transmission of sensitive information are essential components of responsible AI development. For example, an AI-based financial management app should encrypt user data, implement robust access controls, and regularly monitor for potential security breaches. By prioritizing privacy and security, AI applications can build user trust and protect individuals from identity theft, fraud, or other potential harms.

4. Fairness & Non-discrimination: Designing AI systems that minimize biases and prevent unfair treatment of individuals or groups is vital for ethical AI development. For instance, an AI-driven loan approval system should be carefully designed and tested to ensure it does not discriminate based on factors such as race, gender, or age. This may involve using diverse training data, applying fairness metrics, and conducting regular audits to ensure the system remains unbiased and equitable.

5. Human-centric Design: Prioritizing the well-being of users and respecting human values and autonomy is crucial for responsible AI development. For example, an AI-powered health assistant should be designed with empathy, cultural sensitivity, and a clear understanding of user needs. It should also allow users to override the AI's recommendations if they believe it is not in their best interest. By prioritizing human-centric design, AI applications can enhance user experiences, empower individuals, and foster positive societal outcomes.
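
As an illustration of how the foundation could travel with an assessment, here is a hedged sketch of a single disclosure record that pairs an AIIAS level with the five requirements. The schema and field names (AIIASDisclosure, is_complete) are assumptions for the sake of example; the proposal does not prescribe a format.

```python
# A sketch of a self-assessment record combining an AIIAS level with the
# five non-negotiable foundation requirements. Field names are illustrative
# assumptions, not a schema defined by the proposal.
from dataclasses import dataclass, field


@dataclass
class AIIASDisclosure:
    application: str
    level: int  # AIIAS level, 1 (low impact) through 5 (extreme impact)
    # Maps each foundation requirement to a short statement of how the
    # application meets it.
    foundation: dict[str, str] = field(default_factory=dict)

    REQUIRED = (
        "transparency",
        "accountability",
        "privacy_security",
        "fairness",
        "human_centric_design",
    )

    def is_complete(self) -> bool:
        """A disclosure is complete only when every requirement is addressed."""
        return all(self.foundation.get(req) for req in self.REQUIRED)


disclosure = AIIASDisclosure(
    application="loan approval assistant",
    level=3,
    foundation={"transparency": "model card published to users"},
)
print(disclosure.is_complete())  # False: four requirements still unaddressed
```

The point of the sketch is that the foundation is non-negotiable at every level: a record that declares an AIIAS level but leaves any requirement blank is simply not a valid disclosure.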

A Call to Action for AI Stakeholders: Adopting the AI Impact Assessment Scale

The responsible development and deployment of AI require proper governance and self-regulation. AI engineers, business leaders, and influencers play a crucial role in shaping the future of AI, and the AI Impact Assessment Scale (AIIAS) is a key tool in this process.

The AIIAS provides a standardized method for assessing and classifying AI applications, promoting responsible AI practices and fostering trust. Adopting and promoting the AIIAS enables the AI industry to adapt quickly to emerging challenges.

As stakeholders, it's essential to prioritize transparency, accountability, privacy, security, fairness, and human-centric design. By focusing on these principles, we can mitigate potential risks and unintended consequences.

Now is the time for AI stakeholders to unite and lead the charge for a responsible AI ecosystem. By embracing governance, leveraging self-regulation, and implementing tools like the AIIAS, we can create a better future with AI that benefits everyone.

I welcome feedback and discussion on this complex topic. However, relying on government alone is not the way to accelerate AI innovation and adoption.
