{"id":911151,"date":"2023-01-12T09:00:00","date_gmt":"2023-01-12T17:00:00","guid":{"rendered":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/?p=911151"},"modified":"2023-01-13T12:11:17","modified_gmt":"2023-01-13T20:11:17","slug":"advancing-human-centered-ai-updates-on-responsible-ai-research","status":"publish","type":"post","link":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/blog\/advancing-human-centered-ai-updates-on-responsible-ai-research\/","title":{"rendered":"Advancing human-centered AI: Updates on responsible AI research"},"content":{"rendered":"\n<p class=\"has-text-align-center\"><strong>Editor\u2019s note: <\/strong><em>All papers referenced here represent collaborations throughout Microsoft and across academia and industry that include authors who contribute to Aether, the Microsoft internal advisory body for AI Ethics and Effects in Engineering and Research.<\/em><\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1400\" height=\"788\" src=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/RAI_blog-2023Jan_hero_1400x788.jpg\" alt=\"illustration of a lightbulb shape with different icons surrounding it on a purple background\" class=\"wp-image-911742\" srcset=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/RAI_blog-2023Jan_hero_1400x788.jpg 1400w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/RAI_blog-2023Jan_hero_1400x788-300x169.jpg 300w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/RAI_blog-2023Jan_hero_1400x788-1024x576.jpg 1024w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/RAI_blog-2023Jan_hero_1400x788-768x432.jpg 768w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/RAI_blog-2023Jan_hero_1400x788-1066x600.jpg 1066w, 
https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/RAI_blog-2023Jan_hero_1400x788-655x368.jpg 655w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/RAI_blog-2023Jan_hero_1400x788-343x193.jpg 343w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/RAI_blog-2023Jan_hero_1400x788-240x135.jpg 240w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/RAI_blog-2023Jan_hero_1400x788-640x360.jpg 640w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/RAI_blog-2023Jan_hero_1400x788-960x540.jpg 960w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/RAI_blog-2023Jan_hero_1400x788-1280x720.jpg 1280w\" sizes=\"auto, (max-width: 1400px) 100vw, 1400px\" \/><\/figure>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"margin-callout\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 annotations__list--left\">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t\t<a href=\"https:\/\/youtu.be\/tpRO1Hpyb4U\" target=\"_self\" aria-label=\"A human-centered approach to AI\" data-bi-type=\"annotated-link\" data-bi-cN=\"A human-centered approach to AI\" class=\"annotations__list-thumbnail\" >\n\t\t\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"172\" height=\"96\" src=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/human-centered-philosophy-240x135.png\" class=\"mb-2\" alt=\"Video frame with an image of hands reaching upward and overlaid with the question \u201cWhat do you mean by human-centered?\u201d and a brief explanation.\" srcset=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/human-centered-philosophy-240x135.png 240w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/human-centered-philosophy-300x169.png 300w, 
https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/human-centered-philosophy-1024x577.png 1024w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/human-centered-philosophy-768x432.png 768w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/human-centered-philosophy-1066x600.png 1066w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/human-centered-philosophy-655x368.png 655w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/human-centered-philosophy-343x193.png 343w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/human-centered-philosophy-640x360.png 640w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/human-centered-philosophy-960x540.png 960w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/human-centered-philosophy.png 1275w\" sizes=\"auto, (max-width: 172px) 100vw, 172px\" \/>\t\t\t\t<\/a>\n\t\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Video<\/span>\n\t\t\t<a href=\"https:\/\/youtu.be\/tpRO1Hpyb4U\" data-bi-cN=\"A human-centered approach to AI\" data-external-link=\"false\" data-bi-aN=\"margin-callout\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>A human-centered approach to AI<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t\t\t<p class=\"annotations__caption text-neutral-400 mt-2\">Learn how considering potential benefits and harms to people and society helps create better AI in the keynote \u201cChallenges and opportunities in responsible AI\u201d (2022 ACM SIGIR Conference on Human Information Interaction and Retrieval).<\/p>\n\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<p>Artificial intelligence, 
like all tools we build, is an expression of human creativity. As with all creative expression, AI manifests the perspectives and values of its creators. A stance that encourages reflexivity among AI practitioners is a step toward ensuring that AI systems are <em>human-centered<\/em>, developed and deployed with the interests and well-being of individuals and society front and center. This is the focus of research scientists and engineers affiliated with <a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/ai\/our-approach?activetab=pivot1:primaryr5#coreui-banner-srtl0v6\" target=\"_blank\" rel=\"noreferrer noopener\">Aether<\/a>, the advisory body for Microsoft leadership on AI ethics and effects. Central to Aether\u2019s work is the question of who we\u2019re creating AI for\u2014and whether we\u2019re creating AI to solve real problems with responsible solutions. With AI capabilities accelerating, our researchers work to understand the sociotechnical implications and find ways to help on-the-ground practitioners envision and realize these capabilities in line with <a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/ai\/our-approach?activetab=pivot1%3aprimaryr5\" target=\"_blank\" rel=\"noreferrer noopener\">Microsoft AI principles<\/a>.<\/p>\n\n\n\n<p>The following is a glimpse into the past year\u2019s research for advancing responsible AI with authors from Aether. Throughout this work are repeated calls for reflexivity in AI practitioners\u2019 processes\u2014that is, self-reflection to help us achieve clarity about who we\u2019re developing AI systems for, who benefits, and who may potentially be harmed\u2014and for tools that help practitioners with the hard work of uncovering assumptions that may hinder the potential of human-centered AI. 
The research discussed here also explores critical components of responsible AI, such as being transparent about technology limitations, honoring the values of the people using the technology, enabling human agency for optimal human-AI teamwork, improving effective interaction with AI, and developing appropriate evaluation and risk-mitigation techniques for multimodal machine learning (ML) models.<\/p>\n\n\n\n<h2 id=\"considering-who-ai-systems-are-for\">Considering who AI systems are for<\/h2>\n\n\n\n<p>The need to cultivate broader perspectives and, for society\u2019s benefit, reflect on why and for whom we\u2019re creating AI is not only the responsibility of AI development teams but also of the AI research community. In the paper \u201c<a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/publication\/real-ml-recognizing-exploring-and-articulating-limitations-in-machine-learning-research\/\" target=\"_blank\" rel=\"noreferrer noopener\">REAL ML: Recognizing, Exploring, and Articulating Limitations of Machine Learning Research<\/a><em>,\u201d <\/em>the authors point out that machine learning publishing often exhibits a bias toward emphasizing exciting progress, which tends to propagate misleading expectations about AI. They urge reflexivity on the limitations of ML research to promote transparency about findings\u2019 generalizability and potential impact on society\u2014ultimately, an exercise in reflecting on who we\u2019re creating AI for. The paper offers a set of <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/github.com\/jesmith14\/REAL-ML\" target=\"_blank\" rel=\"noopener noreferrer\">guided activities designed to help articulate research limitations<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, encouraging the machine learning research community toward a standard practice of transparency about the scope and impact of their work. 
<\/p>\n\n\n\n<div class=\"wp-block-media-text has-vertical-margin-small  has-vertical-padding-none  is-stacked-on-mobile has-gray-color has-text-color is-style-border\"><figure class=\"wp-block-media-text__media\"><a href=\"https:\/\/github.com\/jesmith14\/REAL-ML\" target=\"_blank\" rel=\"noreferrer noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"577\" src=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Real_ML-1024x577.jpg\" alt=\"Graphic incorporating photos of a researcher sitting with a laptop and using the REAL ML tool, reflecting on research limitations to foster scientific progress, and a bird\u2019s eye view of a cityscape at night.\" class=\"wp-image-911250 size-full\" srcset=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Real_ML-1024x577.jpg 1024w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Real_ML-300x169.jpg 300w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Real_ML-768x432.jpg 768w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Real_ML-1536x865.jpg 1536w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Real_ML-2048x1153.jpg 2048w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Real_ML-1066x600.jpg 1066w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Real_ML-655x368.jpg 655w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Real_ML-343x193.jpg 343w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Real_ML-240x135.jpg 240w, 
https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Real_ML-scaled-640x360.jpg 640w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Real_ML-scaled-960x540.jpg 960w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Real_ML-1280x720.jpg 1280w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Real_ML-1920x1080.jpg 1920w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a><\/figure><div class=\"wp-block-media-text__content\">\n<p>Walk through REAL ML\u2019s instructional guide and worksheet that help researchers with defining the limitations of their research and identifying societal implications these limitations may have in the practical use of their work.<\/p>\n\n\n\n<div class=\"wp-block-buttons is-layout-flex wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button is-style-fill-github\"><a data-bi-type=\"button\" class=\"wp-block-button__link\" href=\"https:\/\/github.com\/jesmith14\/REAL-ML\" target=\"_blank\" rel=\"noreferrer noopener\">Explore REAL ML<\/a><\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n<p>Despite many organizations formulating principles to guide the responsible development and deployment of AI, a recent survey highlights that there\u2019s a <a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/publication\/how-different-groups-prioritize-ethical-values-for-responsible-ai\/\" target=\"_blank\" rel=\"noreferrer noopener\">gap between the values prioritized by AI practitioners and those of the general public<\/a>. The survey, which included a representative sample of the US population, found AI practitioners often gave less weight than the general public to values associated with responsible AI. 
This raises the question of whose values should inform AI systems and shifts attention toward considering the values of the people we\u2019re designing for, aiming for AI systems that are better aligned with people\u2019s needs.<\/p>\n\n\n\n<h4 id=\"related-papers\">Related papers<\/h4>\n\n\n\n<ul class=\"wp-block-list\"><li><a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/publication\/real-ml-recognizing-exploring-and-articulating-limitations-in-machine-learning-research\/\" target=\"_blank\" rel=\"noreferrer noopener\">REAL ML: Recognizing, Exploring, and Articulating Limitations of Machine Learning Research<\/a><\/li><li><a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/publication\/how-different-groups-prioritize-ethical-values-for-responsible-ai\/\" target=\"_blank\" rel=\"noreferrer noopener\">How Different Groups Prioritize Ethical Values for Responsible AI<\/a><\/li><\/ul>\n\n\n\n<h2 id=\"creating-ai-that-empowers-human-agency\">Creating AI that empowers human agency<\/h2>\n\n\n\n<p>Supporting human agency and emphasizing transparency in AI systems are proven approaches to building <a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/blog\/advancing-ai-trustworthiness-updates-on-responsible-ai-research\/\" target=\"_blank\" rel=\"noreferrer noopener\">appropriate trust<\/a> with the people systems are designed to help. In human-AI teamwork, interactive visualization tools can enable people to capitalize on their own domain expertise and let them easily edit state-of-the-art models. 
For example, physicians <a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/publication\/interpretability-then-what-editing-machine-learning-models-to-reflect-human-knowledge-and-values\/\" target=\"_blank\" rel=\"noreferrer noopener\">using GAM Changer can edit risk prediction models<\/a> for pneumonia and sepsis to incorporate their own clinical knowledge and make better treatment decisions for patients.<\/p>\n\n\n\n<p>A study examining <a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/publication\/a-new-workflow-for-human-ai-collaboration-in-citizen-science\/\" target=\"_blank\" rel=\"noreferrer noopener\">how AI can improve the value of rapidly growing citizen-science contributions<\/a> found that emphasizing human agency and transparency increased productivity in an online workflow where volunteers provide valuable information to help AI classify galaxies. When participants chose to opt in to the new workflow and received messages stressing that human assistance was necessary for difficult classification tasks, they were more productive without sacrificing the quality of their input, and they returned to volunteer more often.<\/p>\n\n\n\n<p>Failures are inevitable in AI because no model that interacts with the ever-changing physical world can be complete. Human input and feedback are essential to reducing risks. Investigating reliability and safety mitigations for systems such as robotic box pushing and autonomous driving, researchers <a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/publication\/avoiding-negative-side-effects-of-autonomous-systems-in-the-open-world\/\" target=\"_blank\" rel=\"noreferrer noopener\">formalize the problem of negative side effects<\/a> (NSEs), the undesirable behavior of these systems. 
The researchers experimented with a framework in which the AI system uses immediate human assistance in the form of feedback\u2014either about the user\u2019s tolerance for an NSE occurrence or their decision to modify the environment. Results demonstrate that AI systems can adapt to successfully mitigate NSEs from feedback, but among future considerations, there remains the challenge of developing techniques for collecting accurate feedback from individuals using the system.<\/p>\n\n\n\n<p>The goal of optimizing human-AI complementarity highlights the importance of engaging human agency. In a <a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/publication\/investigations-of-performance-and-bias-in-human-ai-teamwork-in-hiring\/\" target=\"_blank\" rel=\"noreferrer noopener\">large-scale study examining how bias in models influences humans\u2019 decisions<\/a> in a job recruiting task, researchers made a surprising discovery: when working with a black-box deep neural network (DNN) recommender system, people made significantly fewer gender-biased decisions than when working with a bag-of-words (BOW) model, which is perceived as more interpretable. This suggests that people tend to reflect and rely on their own judgment before accepting a recommendation from a system for which they can\u2019t comfortably form a mental model of how its outputs are derived. Researchers call for exploring techniques to better engage human reflexivity when working with advanced algorithms, which can be a means for improving hybrid human-AI decision-making and mitigating bias.&nbsp;<\/p>\n\n\n\n<p>How we design human-AI interaction is key to complementarity and empowering human agency. We need to carefully plan how people will interact with AI systems that are stochastic in nature and present inherently different challenges than deterministic systems. 
Designing and testing human interaction with AI systems as early as possible in the development process, even before teams invest in engineering, can help avoid costly failures and redesign. Toward this goal, researchers propose <a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/publication\/assessing-human-ai-interaction-early-through-factorial-surveys-a-study-on-the-guidelines-for-human-ai-interaction\/\" target=\"_blank\" rel=\"noreferrer noopener\">early testing of human-AI interaction through factorial surveys<\/a>, a method from the social sciences that uses short narratives for deriving insights about people\u2019s perceptions.<\/p>\n\n\n\n<p>But testing for optimal user experience before teams invest in engineering can be challenging for AI-based features that change over time. The ongoing nature of a person adapting to a constantly updating AI feature makes it difficult to observe user behavior patterns that can inform design improvements before deploying a system. However, experiments demonstrate the potential of <a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/publication\/hint-integration-testing-for-ai-based-features-with-humans-in-the-loop\/\" target=\"_blank\" rel=\"noreferrer noopener\">HINT (Human-AI INtegration Testing)<\/a>, a framework for uncovering over-time patterns in user behavior during pre-deployment testing. 
Using HINT, practitioners can design test setup, collect data via a crowdsourced workflow, and generate reports of user-centered and offline metrics.<\/p>\n\n\n\n<div class=\"wp-block-media-text has-vertical-margin-small  has-vertical-padding-none  is-stacked-on-mobile has-gray-color has-text-color is-style-border\"><figure class=\"wp-block-media-text__media\"><a href=\"https:\/\/aclanthology.org\/2022.hcinlp-1.pdf\" target=\"_blank\" rel=\"noreferrer noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"576\" src=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_HCI_NLP-1024x576.jpg\" alt=\"Graphic of bridging HCI and NLP for empowering human agency with images of people using chatbots.\" class=\"wp-image-911244 size-full\" srcset=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_HCI_NLP-1024x576.jpg 1024w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_HCI_NLP-300x169.jpg 300w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_HCI_NLP-768x432.jpg 768w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_HCI_NLP-1536x865.jpg 1536w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_HCI_NLP-2048x1153.jpg 2048w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_HCI_NLP-1066x600.jpg 1066w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_HCI_NLP-655x368.jpg 655w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_HCI_NLP-343x193.jpg 343w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_HCI_NLP-240x135.jpg 240w, 
https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_HCI_NLP-640x360.jpg 640w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_HCI_NLP-960x540.jpg 960w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_HCI_NLP-1280x720.jpg 1280w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_HCI_NLP-1920x1080.jpg 1920w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a><\/figure><div class=\"wp-block-media-text__content\">\n<p>Check out the 2022 anthology of this annual workshop that brings human-computer interaction (HCI) and natural language processing (NLP) research together for improving how people can benefit from NLP apps they use daily.<\/p>\n\n\n\n<div class=\"wp-block-buttons is-layout-flex wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button\"><a data-bi-type=\"button\" class=\"wp-block-button__link\" href=\"https:\/\/aclanthology.org\/2022.hcinlp-1.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">HCI + NLP 2022 anthology<\/a><\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n<h4 id=\"related-papers\">Related papers<\/h4>\n\n\n\n<ul class=\"wp-block-list\"><li><a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/publication\/assessing-human-ai-interaction-early-through-factorial-surveys-a-study-on-the-guidelines-for-human-ai-interaction\/\" target=\"_blank\" rel=\"noreferrer noopener\">Assessing Human-AI Interaction Early through Factorial Surveys: A Study on the Guidelines for Human-AI Interaction<\/a><\/li><li><a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/publication\/avoiding-negative-side-effects-of-autonomous-systems-in-the-open-world\/\" target=\"_blank\" rel=\"noreferrer noopener\">Avoiding Negative Side Effects of Autonomous Systems in the Open World<\/a><\/li><li><a 
href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/publication\/interpretability-then-what-editing-machine-learning-models-to-reflect-human-knowledge-and-values\/\" target=\"_blank\" rel=\"noreferrer noopener\">Interpretability, Then What? Editing Machine Learning Models to Reflect Human Knowledge and Values<\/a><\/li><li><a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/publication\/investigations-of-performance-and-bias-in-human-ai-teamwork-in-hiring\/\" target=\"_blank\" rel=\"noreferrer noopener\">Investigations of Performance and Bias in Human-AI Teamwork in Hiring<\/a><\/li><li><a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/publication\/a-new-workflow-for-human-ai-collaboration-in-citizen-science\/\" target=\"_blank\" rel=\"noreferrer noopener\">A New Workflow for Human-AI Collaboration in Citizen Science<\/a><\/li><li><a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/publication\/hint-integration-testing-for-ai-based-features-with-humans-in-the-loop\/\" target=\"_blank\" rel=\"noreferrer noopener\">HINT: Integration Testing for AI-based Features with Humans in the Loop<\/a><\/li><\/ul>\n\n\n\n<h2 id=\"building-responsible-ai-tools-for-foundation-models\">Building responsible AI tools for foundation models<\/h2>\n\n\n\n<p>Although we\u2019re still in the early stages of understanding how to responsibly harness the potential of large language and multimodal models that can be used as foundations for building a variety of AI-based systems, researchers are developing promising tools and evaluation techniques to help on-the-ground practitioners deliver responsible AI. The reflexivity and resources required for deploying these new capabilities with a human-centered approach are fundamentally compatible with business goals of robust services and products.<\/p>\n\n\n\n<p>Natural language generation with open-ended vocabulary has sparked a lot of imagination in product teams. 
Challenges persist, however, among them improving toxic language detection: content moderation tools often over-flag content that mentions minority groups, regardless of context, while missing implicit toxicity. To help address this, a <a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/publication\/toxigen-a-large-scale-machine-generated-dataset-for-adversarial-and-implicit-hate-speech-detection\/\" target=\"_blank\" rel=\"noreferrer noopener\">new large-scale machine-generated dataset, ToxiGen<\/a>, enables practitioners to fine-tune pretrained hate classifiers for improving detection of implicit toxicity for 13 minority groups in both human- and machine-generated text.<\/p>\n\n\n\n<div class=\"wp-block-media-text has-vertical-margin-small  has-vertical-padding-none  is-stacked-on-mobile has-gray-color has-text-color is-style-border\"><figure class=\"wp-block-media-text__media\"><a href=\"https:\/\/github.com\/microsoft\/TOXIGEN\/\" target=\"_blank\" rel=\"noreferrer noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"576\" src=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Toxigen-1024x576.jpg\" alt=\"Graphic for ToxiGen dataset for improving toxic language detection with images of diverse demographic groups of people in discussion and on smartphone.\" class=\"wp-image-911253 size-full\" srcset=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Toxigen-1024x576.jpg 1024w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Toxigen-300x169.jpg 300w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Toxigen-768x432.jpg 768w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Toxigen-1536x865.jpg 1536w, 
https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Toxigen-2048x1153.jpg 2048w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Toxigen-1066x600.jpg 1066w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Toxigen-655x368.jpg 655w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Toxigen-343x193.jpg 343w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Toxigen-240x135.jpg 240w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Toxigen-640x360.jpg 640w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Toxigen-960x540.jpg 960w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Toxigen-1280x720.jpg 1280w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Toxigen-1920x1080.jpg 1920w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a><\/figure><div class=\"wp-block-media-text__content\">\n<p>Download the large-scale machine-generated ToxiGen dataset and install source code for fine-tuning toxic language detection systems for adversarial and implicit hate speech for 13 demographic minority groups. 
Intended for research purposes.<\/p>\n\n\n\n<div class=\"wp-block-buttons is-layout-flex wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button is-style-fill-github\"><a data-bi-type=\"button\" class=\"wp-block-button__link\" href=\"https:\/\/github.com\/microsoft\/TOXIGEN\/\" target=\"_blank\" rel=\"noreferrer noopener\">ToxiGen dataset<\/a><\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n<p>Multimodal models are proliferating, such as those that combine natural language generation with computer vision for services like image captioning. These complex systems can surface harmful societal biases in their output and are challenging to evaluate for mitigating harms. Using a state-of-the-art image captioning service with two popular image-captioning datasets, researchers isolate where in the system fairness-related harms originate and <a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/publication\/measuring-representational-harms-in-image-captioning\/\" target=\"_blank\" rel=\"noreferrer noopener\">present multiple measurement techniques for five specific types of representational harm<\/a>: denying people the opportunity to self-identify, reifying social groups, stereotyping, erasing, and demeaning.<\/p>\n\n\n\n<p>The commercial advent of AI-powered code generators has introduced novice developers alongside professionals to large language model (LLM)-assisted programming. <a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/publication\/what-is-it-like-to-program-with-artificial-intelligence\/\" target=\"_blank\" rel=\"noreferrer noopener\">An overview of the LLM-assisted programming experience<\/a> reveals unique considerations. Programming with LLMs invites comparison to related ways of programming, such as search, compilation, and pair programming. While there are indeed similarities, the empirical reports suggest it is a distinct way of programming with its own unique blend of behaviors. 
For example, additional effort is required to craft prompts that generate the desired code, and programmers must check the suggested code for correctness, reliability, safety, and security. Still, <a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/publication\/aligning-offline-metrics-and-human-judgments-of-value-of-ai-pair-programmers\/\" target=\"_blank\" rel=\"noreferrer noopener\">a user study examining what programmers value in AI code generation<\/a> shows that programmers <em>do <\/em>find value in suggested code because it\u2019s easy to <em>edit<\/em>, increasing productivity. Researchers propose a hybrid metric that combines functional correctness and similarity-based metrics to best capture what programmers value in LLM-assisted programming, because human judgment should determine how a technology can best serve us.<\/p>\n\n\n\n<h4 id=\"related-papers\">Related papers<\/h4>\n\n\n\n<ul class=\"wp-block-list\"><li><a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/publication\/what-is-it-like-to-program-with-artificial-intelligence\/\" target=\"_blank\" rel=\"noreferrer noopener\">What is it like to program with artificial intelligence?<\/a><\/li><li><a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/publication\/toxigen-a-large-scale-machine-generated-dataset-for-adversarial-and-implicit-hate-speech-detection\/\" target=\"_blank\" rel=\"noreferrer noopener\">ToxiGen: A Large-Scale Machine-Generated Dataset for Adversarial and Implicit Hate Speech Detection<\/a><\/li><li><a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/publication\/aligning-offline-metrics-and-human-judgments-of-value-of-ai-pair-programmers\/\" target=\"_blank\" rel=\"noreferrer noopener\">Aligning Offline Metrics and Human Judgments of Value of AI-Pair Programmers<\/a><\/li><li><a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/publication\/measuring-representational-harms-in-image-captioning\/\" target=\"_blank\" rel=\"noreferrer 
Measuring Representational Harms">
noopener\">Measuring Representational Harms in Image Captioning<\/a><\/li><\/ul>\n\n\n\n<h2 id=\"understanding-and-supporting-ai-practitioners\">Understanding and supporting AI practitioners<\/h2>\n\n\n\n<p>Organizational culture and business goals can often be at odds with what practitioners need for mitigating fairness and other responsible AI issues when their systems are deployed at scale. Responsible, human-centered AI requires a thoughtful approach: just because a technology is technically feasible does not mean it <em>should <\/em>be created.<\/p>\n\n\n\n<p>Similarly, just because a dataset is available doesn\u2019t mean it\u2019s appropriate to use. Knowing why and how a dataset was created is crucial for helping AI practitioners decide whether it <em>should <\/em>be used for their purposes and what its implications are for fairness, reliability, safety, and privacy. A study focusing on how AI practitioners approach datasets and documentation reveals that current practices are informal and inconsistent. It points to the <a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/publication\/understanding-machine-learning-practitioners-data-documentation-perceptions-needs-challenges-and-desiderata\/\" target=\"_blank\" rel=\"noreferrer noopener\">need for data documentation frameworks designed to fit within practitioners\u2019 existing workflows<\/a> and that make clear the responsible AI implications of using a dataset. 
Based on these findings, researchers iterated on <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/query.prod.cms.rt.microsoft.com\/cms\/api\/am\/binary\/RE4t8QB\" target=\"_blank\" rel=\"noopener noreferrer\">Datasheets for Datasets<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> and proposed the revised <a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/07\/aether-datadoc-082522.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Aether Data Documentation Template<\/a>.<\/p>\n\n\n\n<div class=\"wp-block-media-text has-vertical-margin-small  has-vertical-padding-none  is-stacked-on-mobile has-gray-color has-text-color is-style-border\"><figure class=\"wp-block-media-text__media\"><a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/07\/aether-datadoc-082522.pdf\" target=\"_blank\" rel=\"noreferrer noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"576\" src=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Aether_data-1024x576.jpg\" alt=\"Graphic for the Aether Data Documentation Template for promoting reflexivity and transparency with bird\u2019s eye view of pedestrians at busy crosswalks and a close-up of hands typing on a computer keyboard.\" class=\"wp-image-911238 size-full\" srcset=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Aether_data-1024x576.jpg 1024w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Aether_data-300x169.jpg 300w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Aether_data-768x432.jpg 768w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Aether_data-1536x865.jpg 1536w, 
https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Aether_data-2048x1153.jpg 2048w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Aether_data-1066x600.jpg 1066w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Aether_data-655x368.jpg 655w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Aether_data-343x193.jpg 343w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Aether_data-240x135.jpg 240w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Aether_data-scaled-640x360.jpg 640w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Aether_data-scaled-960x540.jpg 960w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Aether_data-1280x720.jpg 1280w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/1400x788_RAI_Sketch_Aether_data-1920x1080.jpg 1920w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a><\/figure><div class=\"wp-block-media-text__content\">\n<p>Use this flexible template to reflect and help document underlying assumptions, potential risks, and implications of using your dataset.<\/p>\n\n\n\n<div class=\"wp-block-buttons is-layout-flex wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button\"><a data-bi-type=\"button\" class=\"wp-block-button__link\" href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/07\/aether-datadoc-082522.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Document your data<\/a><\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n<p>AI practitioners find themselves balancing the pressures of delivering to meet business goals and the time requirements necessary for the 
responsible development and evaluation of AI systems. Examining these tensions across three technology companies, researchers conducted interviews and workshops to learn <a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/publication\/assessing-the-fairness-of-ai-systems-ai-practitioners-processes-challenges-and-needs-for-support\/\" target=\"_blank\" rel=\"noreferrer noopener\">what practitioners need for measuring and mitigating AI fairness issues amid time pressure to release AI-infused products to wider geographic markets<\/a> and for more diverse groups of people. Participants disclosed challenges in collecting appropriate datasets and finding the right metrics for evaluating how fairly their system will perform when they can\u2019t identify direct stakeholders and demographic groups who will be affected by the AI system in rapidly broadening markets. For example, hate speech detection may not be adequate across cultures or languages. A look at <a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/publication\/deconstructing-nlg-evaluation-evaluation-practices-assumptions-and-their-implications\/\" target=\"_blank\" rel=\"noreferrer noopener\">what goes into AI practitioners\u2019 decisions around what, when, and how to evaluate AI systems that use natural language generation<\/a> (NLG) further emphasizes that when practitioners don\u2019t have clarity about deployment settings, they\u2019re limited in projecting failures that could cause individual or societal harm. Beyond concerns for detecting toxic speech, other issues of fairness and inclusiveness\u2014for example, erasure of minority groups\u2019 distinctive linguistic expression\u2014are rarely a consideration in practitioners\u2019 evaluations.<\/p>\n\n\n\n<p>Coping with time constraints and competing business objectives is a reality for teams deploying AI systems. 
There are many opportunities for developing integrated tools that can prompt AI practitioners to think through potential risks and mitigations for sociotechnical systems.<\/p>\n\n\n\n<h4 id=\"related-papers\">Related papers<\/h4>\n\n\n\n<ul class=\"wp-block-list\"><li><a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/publication\/deconstructing-nlg-evaluation-evaluation-practices-assumptions-and-their-implications\/\" target=\"_blank\" rel=\"noreferrer noopener\">Deconstructing NLG Evaluation: Evaluation Practices, Assumptions, and Their Implications<\/a><\/li><li><a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/publication\/understanding-machine-learning-practitioners-data-documentation-perceptions-needs-challenges-and-desiderata\/\" target=\"_blank\" rel=\"noreferrer noopener\">Understanding Machine Learning Practitioners\u2019 Data Documentation Perceptions, Needs, Challenges, and Desiderata<\/a><\/li><li><a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/publication\/assessing-the-fairness-of-ai-systems-ai-practitioners-processes-challenges-and-needs-for-support\/\" target=\"_blank\" rel=\"noreferrer noopener\">Assessing the Fairness of AI Systems: AI Practitioners\u2019 Processes, Challenges, and Needs for Support<\/a><\/li><\/ul>\n\n\n\n<h2 id=\"thinking-about-it-reflexivity-as-an-essential-for-society-and-industry-goals\">Thinking about it: Reflexivity as an essential for society and industry goals<\/h2>\n\n\n\n<p>As we continue to envision all that is possible with AI, one thing is clear: developing AI designed with the needs of people in mind requires reflexivity. We have been thinking about human-centered AI as being focused on users and stakeholders. Understanding who we are designing for, empowering human agency, improving human-AI interaction, and developing harm mitigation tools and techniques are as important as ever. But we also need to turn a mirror toward ourselves as AI creators. 
What values and assumptions do we bring to the table? Whose values get to be included and whose are left out? How do these values and assumptions influence what we build, how we build, and for whom? How can we navigate complex and demanding organizational pressures as we endeavor to create responsible AI? With technologies as powerful as AI, we can\u2019t afford to be focused solely on progress for its own sake. While we work to evolve AI technologies at a fast pace, we need to pause and reflect on what it is that we are advancing\u2014and for whom.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Artificial intelligence, like all tools we build, is an expression of human creativity. As with all creative expression, AI manifests the perspectives and values of its creators. A stance that encourages reflexivity among AI practitioners is a step toward ensuring that AI systems are human-centered, developed, and deployed with the interests and well-being of individuals and society front and center. This is the focus of research scientists and engineers affiliated with Aether, the advisory council for Microsoft leadership on AI ethics and effects. Central to Aether\u2019s work is the question of who we\u2019re creating AI for\u2014and whether we\u2019re creating AI to solve real problems with responsible solutions. 
With AI capabilities accelerating, our researchers work to understand the sociotechnical implications and find ways to help on-the-ground practitioners envision and realize these capabilities in line with Microsoft AI principles.<\/p>\n","protected":false},"author":42183,"featured_media":911742,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-author-ordering":[{"type":"user_nicename","value":"Mihaela Vorvoreanu","user_id":"36804"},{"type":"user_nicename","value":"Kathy Walker","user_id":"41386"}],"msr_hide_image_in_river":0,"footnotes":""},"categories":[1],"tags":[],"research-area":[13556,13562,13545,13554],"msr-region":[],"msr-event-type":[],"msr-locale":[268875],"msr-post-option":[243984],"msr-impact-theme":[],"msr-promo-type":[],"msr-podcast-series":[],"class_list":["post-911151","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-research-blog","msr-research-area-artificial-intelligence","msr-research-area-computer-vision","msr-research-area-human-language-technologies","msr-research-area-human-computer-interaction","msr-locale-en_us","msr-post-option-blog-homepage-featured"],"msr_event_details":{"start":"","end":"","location":""},"podcast_url":"","podcast_episode":"","msr_research_lab":[],"msr_impact_theme":[],"related-publications":[],"related-downloads":[],"related-videos":[],"related-academic-programs":[],"related-groups":[],"related-projects":[],"related-events":[],"related-researchers":[{"type":"user_nicename","value":"Mihaela Vorvoreanu","user_id":36804,"display_name":"Mihaela Vorvoreanu","author_link":"<a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/people\/mivorvor\/\" aria-label=\"Visit the profile page for Mihaela Vorvoreanu\">Mihaela 
Vorvoreanu<\/a>","is_active":false,"last_first":"Vorvoreanu, Mihaela","people_section":0,"alias":"mivorvor"}],"msr_type":"Post","featured_image_thumbnail":"<img width=\"960\" height=\"540\" src=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/RAI_blog-2023Jan_hero_1400x788-960x540.jpg\" class=\"img-object-cover\" alt=\"illustration of a lightbulb shape with different icons surrounding it on a purple background\" decoding=\"async\" loading=\"lazy\" srcset=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/RAI_blog-2023Jan_hero_1400x788-960x540.jpg 960w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/RAI_blog-2023Jan_hero_1400x788-300x169.jpg 300w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/RAI_blog-2023Jan_hero_1400x788-1024x576.jpg 1024w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/RAI_blog-2023Jan_hero_1400x788-768x432.jpg 768w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/RAI_blog-2023Jan_hero_1400x788-1066x600.jpg 1066w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/RAI_blog-2023Jan_hero_1400x788-655x368.jpg 655w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/RAI_blog-2023Jan_hero_1400x788-343x193.jpg 343w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/RAI_blog-2023Jan_hero_1400x788-240x135.jpg 240w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/RAI_blog-2023Jan_hero_1400x788-640x360.jpg 640w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/RAI_blog-2023Jan_hero_1400x788-1280x720.jpg 1280w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2023\/01\/RAI_blog-2023Jan_hero_1400x788.jpg 1400w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/>","byline":"<a 
href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/people\/mivorvor\/\" title=\"Go to researcher profile for Mihaela Vorvoreanu\" aria-label=\"Go to researcher profile for Mihaela Vorvoreanu\" data-bi-type=\"byline author\" data-bi-cN=\"Mihaela Vorvoreanu\">Mihaela Vorvoreanu<\/a> and Kathy Walker","formattedDate":"January 12, 2023","formattedExcerpt":"Artificial intelligence, like all tools we build, is an expression of human creativity. As with all creative expression, AI manifests the perspectives and values of its creators. A stance that encourages reflexivity among AI practitioners is a step toward ensuring that AI systems are human-centered,&hellip;","locale":{"slug":"en_us","name":"English","native":"","english":"English"},"_links":{"self":[{"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/posts\/911151","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/users\/42183"}],"replies":[{"embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/comments?post=911151"}],"version-history":[{"count":66,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/posts\/911151\/revisions"}],"predecessor-version":[{"id":912864,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/posts\/911151\/revisions\/912864"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/media\/911742"}],"wp:attachment":[{"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/media?parent=911151"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/categories?post=911
151"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/tags?post=911151"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=911151"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=911151"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=911151"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=911151"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=911151"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=911151"},{"taxonomy":"msr-promo-type","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/msr-promo-type?post=911151"},{"taxonomy":"msr-podcast-series","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/msr-podcast-series?post=911151"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}