Join getAbstract to access the summary.

Responsibility: Responsible AI in Action

Microsoft AI Business School Podcast Episode 3

Microsoft

5 min read
3 take-aways
Audio & Text

What's it about?

As AI becomes crucial to business, companies look for ways to eliminate machine-learning biases.


getAbstract Editorial Rating: 8

  • Analytical
  • Eloquent
  • Hot Topic

Recommendation

With AI technology quickly becoming essential to business, it is vital to handle these technologies in a way that puts people first and makes the responsible use of AI a cornerstone of innovation. In this Microsoft AI Business School podcast, David Carmona, general manager for AI and innovation at Microsoft, discusses with other experts how to put responsible AI into action.

Take-Aways

  • Considering the pace with which AI is advancing, it’s critical to retain society’s trust.
  • Responsible AI begins with organizations defining their values through a set of principles.
  • Putting Responsible AI into action requires principles, practices, governance, and tools.

Summary

Considering the pace with which AI is advancing, it’s critical to retain society’s trust.

Natasha Crampton, Microsoft’s chief responsible AI officer, thinks companies must treat AI like security and privacy, as a “core element of trust.”

“We know that people don’t use technology that they don’t trust, and so making sure that we are baking in responsible AI considerations when we’re building the technology, also when we’re deploying the technology, is really just an essential part of unlocking the value of these promising new AI technologies.” (Natasha Crampton)

Sarah Bird, Microsoft’s former leader of responsible AI for Azure Machine Learning, believes that internal communications should begin with discussions of ethics, technology and society. Crampton adds that open communication is central, and that organizations must be humble enough to admit they don’t always have the necessary answers.

Responsible AI begins with organizations defining their values through a set of principles.

Microsoft’s 2018 book The Future Computed established responsible AI principles, delineating the company’s approach to the technology’s challenges. Carmona thinks every company must “take a stand” on how it will address these challenges. Microsoft formed a committee known as Aether, which stands for AI, Ethics, and Effects in Engineering and Research. Aether serves as a think tank that promotes discussion among people with a variety of backgrounds. Its working groups provide guidance for initiatives such as interpretability algorithms.

“This work resulted in six principles about our shared responsibility in AI in Microsoft – fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.” (David Carmona)

Carmona says new obstacles, such as those relating to deepfakes and facial recognition, have emerged since Microsoft established those principles. Nick McQuire, chief of enterprise research for CCS Insight, says his company has recently recognized the importance of investing in interpretability, security and, especially, privacy. He points out that this shift in priorities emerged in the past year, indicating an increased awareness of responsible AI.

McQuire thinks that when companies see AI technology becoming crucial to their aims, they’ll integrate it into key operational processes and begin to recognize the hazards and responsibilities that dependence on machine learning and AI might bring.

“Responsible AI requires a culture transformation.” (David Carmona)

AI must be inclusive and fair, excluding no one because of race, gender or where they live. AI systems can learn bias from the data used to train their algorithms. Understanding how to audit a system’s models and ensure accountability is a “people” job. A diverse team that embraces open attitudes and listens to community groups helps develop unbiased principles and tools.

Putting Responsible AI into action requires principles, practices, governance, and tools.

Companies should establish a governing body that develops guidelines and best practices for implementing AI across the development lifecycle.

The Office of Responsible AI (ORA) puts governance models into practice at Microsoft. Any such framework within an organization should make responsibilities and roles clear, while offering support and advice.

One of McQuire’s bank clients installed a machine-learning process that builds bias checks and diversity into the data fed into its AI systems. Carmona’s framework at Microsoft ensures that no single person oversees responsible AI. Various teams share the responsibility, with individuals understanding their specific roles and how those roles affect overall governance. Building in metrics helps measure how well a team is meeting its goals.

“There’s no tool that helps you with this part of the process. This is really just thought work, it’s really about reframing the way that you think about the technology that you’re building. And for me, it’s just really heartening to see that more holistic perspective coming out.” (Natasha Crampton)

Microsoft’s ORA implemented its Responsible AI Champs Program to scale AI practices and guidelines across the company and create awareness. This reinforces cultural change through leadership. AI champs use “impact assessment” to understand how AI technology affects society and individuals.

“What we’re starting to see is…best practices emerg[ing] from some of the more mature organizations with respect to machine learning [which] are heavily pivoting to responsible approaches that fundamentally help them increase agility and bypass challenges that will slow them down, down the road.” (Nick McQuire)

Microsoft is developing tools that focus on protecting data, understanding and assessing bias in AI models, and governing and auditing AI processes.

Responsible AI tools include Fairlearn, Azure Machine Learning and InterpretML. These tools are created as open source, so experts can verify and implement best practices while moving the technology forward.
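To make this concrete, here is a minimal sketch of the kind of fairness check Fairlearn supports: comparing a trained model’s behavior across groups of a sensitive attribute. The synthetic data, the hypothetical “A”/“B” group labels and the logistic-regression model are illustrative assumptions, not details from the podcast.

```python
# Minimal sketch (assumed setup): pip install fairlearn scikit-learn
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

# Synthetic, illustrative data: 500 samples and two hypothetical groups.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
group = rng.choice(["A", "B"], size=500)
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# Slice the accuracy metric by the sensitive feature to see per-group performance.
frame = MetricFrame(
    metrics=accuracy_score,
    y_true=y,
    y_pred=pred,
    sensitive_features=group,
)
print(frame.by_group)      # accuracy for each group
print(frame.difference())  # largest accuracy gap between groups

# Demographic parity difference: gap in selection rates between groups.
print(demographic_parity_difference(y, pred, sensitive_features=group))
```

In practice, a large gap in either metric would flag the model for review. Fairlearn also ships mitigation algorithms, and InterpretML offers complementary glass-box models and explainers for understanding why a model behaves the way it does.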

Carmona offers three recommendations for developing responsible AI:

  1. Establish responsible AI principles that define and address problems within a specific organization.
  2. Create a process for governance that incorporates responsible AI throughout the organization.
  3. Thoughtfully analyze technology and tools that help create robust responsible AI.

Carmona recommends humility for people working on responsible AI: No one possesses all the answers, so people must learn from other organizations and from diverse perspectives.

About the Podcast

Host David Carmona is general manager for AI and innovation at Microsoft. In this Microsoft AI Business School podcast, he interviews guests Sarah Bird, former leader of responsible AI for Azure Machine Learning; Natasha Crampton, Microsoft’s chief responsible AI officer; and Nick McQuire, who leads CCS Insight’s enterprise and artificial intelligence research.

