Identify governing practices for responsible AI

  • Now that you have reviewed this module, you should be able to:

    1. Determine the elements of an AI governance system.
    2. Choose an AI governance model that fits your organization’s needs.
  • Actions your organization can take - To help you consider how to leverage governance and external engagements in your own organization, we developed the recommendations below.

  • Establishing an AI governance system

    1. Choose a governance structure that best fits your organization’s AI maturity, unique characteristics, culture, and business objectives.
    2. Encourage your governance system to develop a set of guiding ethical principles based on your organization’s foundational values.
    3. Outline the specific role of your governance system within your organization. Consider having them develop and implement policies, standards, and best practices, build a culture of integrity, provide advice, educate employees, help mitigate risks associated with AI systems, and respond to violations in a timely and consistent manner.
    4. Provide your governance system with the financial and human resources they need to effect real change within your organization.
    5. Adapt your governance system(s) as your AI maturity and business objectives change and industry best practices improve.
  • Governing AI engagement

    1. Create a handbook or manual to govern the use of AI in your organization to help ensure your employees follow the policies your governance system establishes.
    2. Train employees on the policies, standards, and best practices that your governance system establishes.

  • Third-party AI systems

    1. Research the third party’s stance on responsible design before purchasing out-of-the-box AI solutions to ensure they were designed in a manner consistent with your principles, policies, and standards.
    2. Include your principles in your request for proposal, so the solution can be designed with your principles in mind.
    3. Create guidelines on how to safely operate and monitor the system, and train your employees on these guidelines before deploying the system.
    4. Rigorously test the system to ensure it operates as intended and in a manner consistent with your principles, policies, and standards.

  • First-party AI systems

    1. Consider having your governance system review or provide advice before the release of any new AI system, especially for sensitive use cases.
    2. Create processes for employees to analyze an AI system’s purpose, technical capabilities, reliability, and use case prior to its release.
    3. Provide clear guidelines to ensure your ethical principles are reflected in an AI system if you are developing AI systems in-house. Support your developers by using industry-established guidelines or developing your own, especially for AI systems that raise complex ethical or human rights considerations.
    4. Consider integrating internal guidelines into project management processes, such as a checklist aligned to the phases of a data science project.
    5. Leverage tools and resources to make it easier for developers to spot and design against potentially harmful issues like biases, safety and privacy gaps, and exclusionary practices, as illustrated in the sketch after this list.
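
To illustrate item 5, the sketch below shows one way a team might fold an automated bias spot-check into a pre-release checklist. It is a minimal sketch, assuming the open-source Fairlearn and scikit-learn libraries; the `fairness_report` helper, the column names, and the 0.1 threshold are hypothetical choices that your governance system would set for itself.

```python
# Minimal sketch of a pre-release bias spot-check, assuming the open-source
# Fairlearn and scikit-learn libraries. Column names and the threshold below
# are illustrative only; they are not prescribed by this module.
from fairlearn.metrics import MetricFrame, demographic_parity_difference, selection_rate
from sklearn.metrics import accuracy_score


def fairness_report(y_true, y_pred, sensitive_features):
    """Return per-group accuracy/selection rates and an overall parity gap."""
    per_group = MetricFrame(
        metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
        y_true=y_true,
        y_pred=y_pred,
        sensitive_features=sensitive_features,
    ).by_group
    gap = demographic_parity_difference(
        y_true, y_pred, sensitive_features=sensitive_features
    )
    return per_group, gap


# Hypothetical usage with a scored validation set (a pandas DataFrame `val`):
# per_group, gap = fairness_report(val["label"], val["prediction"], val["age_band"])
# if gap > 0.1:  # release threshold agreed with your governance system
#     raise RuntimeError("Demographic parity gap exceeds the agreed threshold.")
```

A check like this is only one input to the review described in items 1 and 2; the governance system still decides which metrics, groups, and thresholds matter for a given use case.
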
  • Participating in external engagements

    1. Engage in public and private partnerships to advance responsible use of AI. Collaboration between enterprises, public organizations, government, and non-profits is crucial as we address the concerns and challenges of AI, while maximizing its potential to deliver broad benefits.
    2. Join coalitions with organizations to foster a technologically savvy workforce and help ensure workers are prepared for the changing economy.
    3. Share your AI perspective with the greater community, such as governments, businesses, and standards organizations, to help guide responsible policies and legislation.
    4. Apply your AI expertise and technologies to benefit your community and improve the lives of people around the world.

To learn more

  • https://blogs.microsoft.com/uploads/2018/02/The-Future-Computed_2.8.18.pdf
  • Download the PDF of "Responsible AI: Establishing a governance and external engagement model" to share with others: https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4DIvg
  • Download the PDF of "AI Maturity and organizations: Understanding AI maturity": https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4DIvg

Quiz

  1. A company uses a decentralized organizational model when it comes to AI implementation. What might be a disadvantage of their decentralized model when compared to a centralized model?
  • They can’t utilize the cross-sharing of knowledge.
  • Their ability to rapidly innovate is extremely inhibited.
  • They lack the ability to create standards at the grassroots level.
  • The nature of decentralization can constrain innovation.
  2. Why is it important to clearly define an AI framework when adopting a responsible AI approach?
  • A clearly defined AI framework provides a reference point for AI systems, helping them adjust their own behavior to remain in compliance and eliminate bias.
  • Having a clearly defined AI framework doesn’t matter when adopting a responsible AI approach because AI systems won’t always be compliant with the framework.
  • A clearly defined AI framework establishes consistent guidelines across the organization, building trust in AI systems.
  • A clearly defined AI framework establishes consistent guidelines across the organization to ensure all teams are evaluating AI systems consistently and compliance is maintained.
  3. You want to implement a governance approach to AI within your organization that will incorporate a diverse array of outside experts and senior leaders from within the company. What will be part of your governance approach?
  • First-party AI system.
  • Third-party AI system.
  • Ethics committee.
  • Ethics office.
  4. What role do humans play in the creation and use of AI?
  • Humans play a role in creating the rules and policies that govern AI.
  • Humans play a role in the entire lifecycle of an AI model, from design and development to continuous evaluation and improvement.
  • Humans play a role in assessing AI performance and should run occasional quality assurance tests.
  • AI systems don’t require humans; they’re designed to operate without human involvement.