{"id":823765,"date":"2022-03-14T01:00:00","date_gmt":"2022-03-14T08:00:00","guid":{"rendered":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/?p=823765"},"modified":"2022-10-19T07:58:56","modified_gmt":"2022-10-19T14:58:56","slug":"peoplelens-using-ai-to-support-social-interaction-between-children-who-are-blind-and-their-peers","status":"publish","type":"post","link":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/blog\/peoplelens-using-ai-to-support-social-interaction-between-children-who-are-blind-and-their-peers\/","title":{"rendered":"PeopleLens: Using AI to support social interaction between children who are blind and their peers"},"content":{"rendered":"\n<figure class=\"wp-block-image alignwide size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"2560\" height=\"1441\" src=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Project_Tokyo_hero_image_still-scaled.jpg\" alt=\"A young boy wearing the PeopleLens sits on the floor of a playroom holding a blind tennis ball in his hands. His attention is directed toward a woman sitting on the floor in front of him holding her hands out. The PeopleLens looks like small goggles that sit on the forehead. 
The image is marked with visual annotations to indicate what the PeopleLens is seeing and what sounds are being heard.\" class=\"wp-image-825280\" srcset=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Project_Tokyo_hero_image_still-scaled.jpg 2560w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Project_Tokyo_hero_image_still-300x169.jpg 300w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Project_Tokyo_hero_image_still-1024x576.jpg 1024w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Project_Tokyo_hero_image_still-768x432.jpg 768w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Project_Tokyo_hero_image_still-1536x864.jpg 1536w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Project_Tokyo_hero_image_still-2048x1152.jpg 2048w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Project_Tokyo_hero_image_still-1066x600.jpg 1066w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Project_Tokyo_hero_image_still-655x368.jpg 655w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Project_Tokyo_hero_image_still-343x193.jpg 343w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Project_Tokyo_hero_image_still-240x135.jpg 240w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Project_Tokyo_hero_image_still-640x360.jpg 640w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Project_Tokyo_hero_image_still-scaled-960x540.jpg 960w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Project_Tokyo_hero_image_still-1280x720.jpg 1280w, 
https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Project_Tokyo_hero_image_still-1920x1080.jpg 1920w\" sizes=\"auto, (max-width: 2560px) 100vw, 2560px\" \/><figcaption><center>The PeopleLens is a new research technology designed to help people who are blind or have low vision better understand their immediate social environments by locating and identifying people in the space. Coupled with a scheme of work based on research and practices from psychology and speech and language therapy, the system can help children and young people who are blind more easily forge social connections with their peers.<\/center><\/figcaption><\/figure>\n\n\n\n<p>For children born blind, social interaction can be particularly challenging. A child may have difficulty aiming their voice at the person they\u2019re talking to and put their head on their desk instead. Linguistically advanced young people may struggle with maintaining a topic of conversation, talking only about something of interest to them. Most noticeably, many children and young people who are blind struggle with engaging and befriending those in their age group despite a strong desire to do so. 
This is often deeply frustrating for the child or young person and can be equally so for their support network of family members and teachers who want to help them forge these important connections.<\/p>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"margin-callout\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 annotations__list--left\">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">PUBLICATION<\/span>\n\t\t\t<a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/publication\/peoplelens\/\" data-bi-cN=\"PeopleLens\" data-external-link=\"false\" data-bi-aN=\"margin-callout\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>PeopleLens<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t\t\t<p class=\"annotations__caption text-neutral-400 mt-2\">The PeopleLens is an open-ended AI system that offers people who are blind or who have low vision further resources to make sense of and engage with their immediate social surroundings.<\/p>\n\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<p>The <a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/publication\/peoplelens\/\" target=\"_blank\" rel=\"noreferrer noopener\">PeopleLens<\/a> is a new research technology that we\u2019ve created to help young people who are blind (referred to as <em>learners<\/em> in our work) and their peers interact more easily. A head-worn device, the PeopleLens reads aloud in spatialized audio the names of known individuals when the learner looks at them. That means the sound comes from the direction of the person, assisting the learner in understanding both the relative position and distance of their peers. 
The PeopleLens helps learners build a <em>People Map<\/em>, a mental map of those around them needed to effectively signal communicative intent. The technology, in turn, indicates to the learner\u2019s peers when the peers have been \u201cseen\u201d and can interact\u2014a replacement for the eye contact that usually initiates interaction between people.<\/p>\n\n\n\n<p>For children and young people who are blind, the PeopleLens is a way to find their friends; however, for teachers and parents, it\u2019s a way for these children and young people to develop competence and confidence in social interaction. An accompanying scheme of work aims to guide the development of spatial attention skills believed to underpin social interaction through a series of games that learners using the PeopleLens can play with peers. It also sets up situations in which learners can experience agency in social interaction. A child\u2019s realization that they can choose to initiate a conversation because they spot someone first or that they can stop a talkative brother from speaking by looking away is a powerful moment, motivating them to delve deeper into directing their own and others\u2019 attention.<\/p>\n\n\n\n<p>The PeopleLens is an advanced research prototype that works on Nreal Light augmented reality glasses tethered to a phone. While it\u2019s not available for purchase, we are recruiting learners in the United Kingdom aged 5 to 11 who have the support of a teacher to explore the technology as part of a multistage research study. 
For the study, led by the University of Bristol, learners will be asked to use the PeopleLens for a three-month period beginning in September 2022.<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"PeopleLens: Using AI to support social interaction between children who are blind and their peers\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube-nocookie.com\/embed\/astmNfJHT4A?feature=oembed&rel=0\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<h2 id=\"research-foundation\">Research foundation&nbsp;<\/h2>\n\n\n\n<p>The scheme of work, coauthored by collaborators <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.gold.ac.uk\/psychology\/staff\/pring\/\" target=\"_blank\" rel=\"noopener noreferrer\">Professor Linda Pring<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> and <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.linkedin.com\/in\/dr-vasiliki-kladouchou-a1b19661\/\">Dr. Vasiliki Kladouchou<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, draws on research and practice from psychology and speech and language therapy in providing activities to do with the technology. The PeopleLens builds on the hypothesis that many social interaction difficulties for children who are blind stem from differences in the ways children with and without vision acquire fundamental attentional processes as babies and young children. For example, growing up, children with vision learn to internalize a joint visual dialogue of attention. 
A young child points at something in the sky, and the parent says, \u201cBird.\u201d Through these dialogues, young children learn how to direct the attention of others. However, there isn\u2019t enough research to understand how joint attention manifests in children who are blind. A review of the literature suggests that most research doesn\u2019t account for a missing sense and that research specific to visual impairment doesn\u2019t provide a framework for joint attention beyond the age of 3. We\u2019re carrying out research to better understand how the development of joint attention can be improved in early education and augmented with technology.<\/p>\n\n\n\n<h2 id=\"how-does-the-peoplelens-work\">How does the PeopleLens work?&nbsp;<\/h2>\n\n\n\n<p>The PeopleLens is a sophisticated AI prototype system that is intended to provide people who are blind or have low vision with a better understanding of their immediate social environment. It uses a head-mounted augmented reality device in combination with four state-of-the-art computer vision algorithms to <em>continuously<\/em> locate, identify, track, and capture the gaze directions of people in the vicinity. It then presents this information to the wearer through spatialized audio\u2014sound that comes from the direction of the person. The real-time nature of the system gives a sense of immersion in the People Map.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full\"><a data-bi-bhvr=\"14\"  data-bi-cn=\"A graphic overview of the PeopleLens system describes its functionality and experience features with accompanying icons. 
\" href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/tokyo_studio_anon.png\"><img loading=\"lazy\" decoding=\"async\" width=\"1542\" height=\"1316\" src=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/tokyo_studio_anon.png\" alt=\"A graphic overview of the PeopleLens system describes its functionality and experience features with accompanying icons. \" class=\"wp-image-825295\" srcset=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/tokyo_studio_anon.png 1542w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/tokyo_studio_anon-300x256.png 300w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/tokyo_studio_anon-1024x874.png 1024w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/tokyo_studio_anon-768x655.png 768w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/tokyo_studio_anon-1536x1311.png 1536w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/tokyo_studio_anon-211x180.png 211w\" sizes=\"auto, (max-width: 1542px) 100vw, 1542px\" \/><\/a><figcaption>The PeopleLens helps the child wearing it build a mental map of those in their immediate social environment. Because the PeopleLens reads aloud the names of identified people in spatialized audio, the child is able to get a sense of the respective positions and distances of their peers. The system receives images and processes them with computer vision algorithms, as shown by the overlays on the top images in this screenshot of the PeopleLens development environment. The system then stiches together a world map that\u2019s used to drive the experiences, as shown at the bottom right.<\/figcaption><\/figure>\n\n\n\n<p>The PeopleLens is a ground-breaking technology that has also been designed to protect privacy. 
Among the algorithms underpinning the system is facial recognition of people who\u2019ve been registered in the system. A person registers by taking several photographs of themselves with the phone attached to the PeopleLens. Photographs aren\u2019t stored; instead, they\u2019re converted into a vector of numbers that represents the face. These vectors differ from any used in other systems, so recognition by the PeopleLens doesn\u2019t lead to recognition by any other system. No video or identifying information is captured by the system, ensuring that the images can\u2019t be maliciously used.<\/p>\n\n\n\n<p>The system employs a series of sounds to assist the wearer in placing people in the surrounding space: A percussive bump indicates when their gaze has crossed a person up to 10 meters away. The bump is followed by the person\u2019s name if the person is registered in the system, is within 4 meters of the wearer, and has both ears detected by the system. The sound of woodblocks guides the wearer in finding and centering the face of a person the system has seen for 1 second but hasn\u2019t identified, changing in pitch to help the wearer adjust their gaze accordingly. (People who are unregistered are acknowledged with a click sound.) 
Gaze notification can alert the wearer to when they\u2019re being looked at.&nbsp;<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full\"><a data-bi-bhvr=\"14\"  data-bi-cn=\"A graphic overview of the PeopleLens system describes its functionality and experience features with accompanying icons.\" href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/TokyoSystem_1400x788.png\"><img loading=\"lazy\" decoding=\"async\" width=\"1400\" height=\"788\" src=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/TokyoSystem_1400x788.png\" alt=\"A graphic overview of the PeopleLens system describes its functionality and experience features with accompanying icons.\" class=\"wp-image-825307\" srcset=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/TokyoSystem_1400x788.png 1400w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/TokyoSystem_1400x788-300x169.png 300w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/TokyoSystem_1400x788-1024x576.png 1024w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/TokyoSystem_1400x788-768x432.png 768w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/TokyoSystem_1400x788-1066x600.png 1066w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/TokyoSystem_1400x788-655x368.png 655w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/TokyoSystem_1400x788-343x193.png 343w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/TokyoSystem_1400x788-240x135.png 240w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/TokyoSystem_1400x788-640x360.png 640w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/TokyoSystem_1400x788-960x540.png 960w, 
https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/TokyoSystem_1400x788-1280x720.png 1280w\" sizes=\"auto, (max-width: 1400px) 100vw, 1400px\" \/><\/a><figcaption>The functionality of the PeopleLens system includes experience features such as recognizing a person in front of the wearer; attention notifications from the direction of those who look at the wearer; the ability to follow someone; and an orientation guide to help wearers find people and faces.<\/figcaption><\/figure>\n\n\n\n<h2 id=\"community-collaboration\">Community collaboration<\/h2>\n\n\n\n<p>The success of the PeopleLens, as well as systems like it, is dependent on a prototyping process that includes close collaboration with the people it is intended to serve. Our work with children who are blind and their support systems has put us on a path toward building a tool that can have practical value and empower those using it. We encourage those interested in the PeopleLens to reach out about participating in our study and help us further evolve the technology.&nbsp;<\/p>\n\n\n\n<p><em>To learn more about the PeopleLens and its development, check out the <\/em><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/news.microsoft.com\/innovation-stories\/project-tokyo\/\"><em>Innovation Stories blog<\/em><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><em> about the technology.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>For children born blind, social interaction can be particularly challenging. A child may have difficulty aiming their voice at the person they\u2019re talking to and put their head on their desk instead. Linguistically advanced young people may struggle with maintaining a topic of conversation, talking only about something of interest to them. 
Most noticeably, many [&hellip;]<\/p>\n","protected":false},"author":37583,"featured_media":825280,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-author-ordering":null,"msr_hide_image_in_river":0,"footnotes":""},"categories":[1],"tags":[],"research-area":[13562,13554,13559],"msr-region":[],"msr-event-type":[],"msr-locale":[268875],"msr-post-option":[243984],"msr-impact-theme":[],"msr-promo-type":[],"msr-podcast-series":[],"class_list":["post-823765","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-research-blog","msr-research-area-computer-vision","msr-research-area-human-computer-interaction","msr-research-area-social-sciences","msr-locale-en_us","msr-post-option-blog-homepage-featured"],"msr_event_details":{"start":"","end":"","location":""},"podcast_url":"","podcast_episode":"","msr_research_lab":[199561],"msr_impact_theme":[],"related-publications":[],"related-downloads":[],"related-videos":[],"related-academic-programs":[],"related-groups":[],"related-projects":[295553],"related-events":[],"related-researchers":[{"type":"user_nicename","value":"Cecily Morrison","user_id":31356,"display_name":"Cecily Morrison","author_link":"<a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/people\/cecilym\/?lang=ko-kr\" aria-label=\"Cecily Morrison \ub300\ud55c \ud504\ub85c\ud544 \ud398\uc774\uc9c0 \ubc29\ubb38\">Cecily Morrison<\/a>","is_active":false,"last_first":"Morrison, Cecily","people_section":0,"alias":"cecilym"},{"type":"guest","value":"katherine-jones","user_id":"825271","display_name":"Katherine Jones","author_link":"<a href=\"https:\/\/research-information.bris.ac.uk\/en\/persons\/katherine-jones\" aria-label=\"Katherine Jones \ub300\ud55c \ud504\ub85c\ud544 \ud398\uc774\uc9c0 \ubc29\ubb38\">Katherine 
Jones<\/a>","is_active":true,"last_first":"Jones, Katherine","people_section":0,"alias":"katherine-jones"},{"type":"user_nicename","value":"Martin Grayson","user_id":32893,"display_name":"Martin Grayson","author_link":"<a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/people\/mgrayson\/?lang=ko-kr\" aria-label=\"Martin Grayson \ub300\ud55c \ud504\ub85c\ud544 \ud398\uc774\uc9c0 \ubc29\ubb38\">Martin Grayson<\/a>","is_active":false,"last_first":"Grayson, Martin","people_section":0,"alias":"mgrayson"},{"type":"user_nicename","value":"Ed Cutrell","user_id":31490,"display_name":"Ed Cutrell","author_link":"<a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/people\/cutrell\/?lang=ko-kr\" aria-label=\"Ed Cutrell \ub300\ud55c \ud504\ub85c\ud544 \ud398\uc774\uc9c0 \ubc29\ubb38\">Ed Cutrell<\/a>","is_active":false,"last_first":"Cutrell, Ed","people_section":0,"alias":"cutrell"}],"msr_type":"Post","featured_image_thumbnail":"<img width=\"960\" height=\"540\" src=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Project_Tokyo_hero_image_still-scaled-960x540.jpg\" class=\"img-object-cover\" alt=\"A young boy wearing the PeopleLens sits on the floor of a playroom holding a blind tennis ball in his hands. His attention is directed toward a woman sitting on the floor in front of him holding her hands out. The PeopleLens looks like small goggles that sit on the forehead. 
The image is marked with visual annotations to indicate what the PeopleLens is seeing and what sounds are being heard.\" decoding=\"async\" loading=\"lazy\" srcset=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Project_Tokyo_hero_image_still-scaled-960x540.jpg 960w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Project_Tokyo_hero_image_still-300x169.jpg 300w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Project_Tokyo_hero_image_still-1024x576.jpg 1024w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Project_Tokyo_hero_image_still-768x432.jpg 768w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Project_Tokyo_hero_image_still-1536x864.jpg 1536w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Project_Tokyo_hero_image_still-2048x1152.jpg 2048w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Project_Tokyo_hero_image_still-1066x600.jpg 1066w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Project_Tokyo_hero_image_still-655x368.jpg 655w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Project_Tokyo_hero_image_still-343x193.jpg 343w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Project_Tokyo_hero_image_still-240x135.jpg 240w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Project_Tokyo_hero_image_still-640x360.jpg 640w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Project_Tokyo_hero_image_still-1280x720.jpg 1280w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2022\/03\/1400x788_Project_Tokyo_hero_image_still-1920x1080.jpg 1920w\" sizes=\"auto, 
(max-width: 960px) 100vw, 960px\" \/>","byline":"<a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/people\/cecilym\/\" title=\"Go to researcher profile for Cecily Morrison\" aria-label=\"Go to researcher profile for Cecily Morrison\" data-bi-type=\"byline author\" data-bi-cN=\"Cecily Morrison\">Cecily Morrison<\/a>, <a href=\"https:\/\/research-information.bris.ac.uk\/en\/persons\/katherine-jones\" title=\"Go to researcher profile for Katherine Jones\" aria-label=\"Go to researcher profile for Katherine Jones\" data-bi-type=\"byline author\" data-bi-cN=\"Katherine Jones\">Katherine Jones<\/a>, <a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/people\/mgrayson\/\" title=\"Go to researcher profile for Martin Grayson\" aria-label=\"Go to researcher profile for Martin Grayson\" data-bi-type=\"byline author\" data-bi-cN=\"Martin Grayson\">Martin Grayson<\/a>, and <a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/people\/cutrell\/\" title=\"Go to researcher profile for Ed Cutrell\" aria-label=\"Go to researcher profile for Ed Cutrell\" data-bi-type=\"byline author\" data-bi-cN=\"Ed Cutrell\">Ed Cutrell<\/a>","formattedDate":"March 14, 2022","formattedExcerpt":"For children born blind, social interaction can be particularly challenging. A child may have difficulty aiming their voice at the person they\u2019re talking to and put their head on their desk instead. 
Linguistically advanced young people may struggle with maintaining a topic of conversation, talking&hellip;","locale":{"slug":"en_us","name":"English","native":"","english":"English"},"_links":{"self":[{"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/posts\/823765","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/users\/37583"}],"replies":[{"embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/comments?post=823765"}],"version-history":[{"count":17,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/posts\/823765\/revisions"}],"predecessor-version":[{"id":888861,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/posts\/823765\/revisions\/888861"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/media\/825280"}],"wp:attachment":[{"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/media?parent=823765"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/categories?post=823765"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/tags?post=823765"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=823765"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=823765"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/msr-ev
ent-type?post=823765"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=823765"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=823765"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=823765"},{"taxonomy":"msr-promo-type","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/msr-promo-type?post=823765"},{"taxonomy":"msr-podcast-series","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/msr-podcast-series?post=823765"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}