{"id":632136,"date":"2020-01-22T11:00:49","date_gmt":"2020-01-22T19:00:49","guid":{"rendered":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/?p=632136"},"modified":"2020-01-28T10:27:18","modified_gmt":"2020-01-28T18:27:18","slug":"project-rocket-platform-designed-for-easy-customizable-live-video-analytics-is-open-source","status":"publish","type":"post","link":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/blog\/project-rocket-platform-designed-for-easy-customizable-live-video-analytics-is-open-source\/","title":{"rendered":"Project Rocket platform\u2014designed for easy, customizable live video analytics\u2014is open source"},"content":{"rendered":"<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-632529 size-full\" src=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/01\/MSR_20200121_ProjectRocket_GIF_1400x788-1_display.gif\" alt=\"code lines\" width=\"1400\" height=\"788\" \/><\/p>\n<p>Thanks to advances in computer vision and deep neural networks (DNNs) in what can arguably be described as the golden age of vision, AI, and machine learning, video analytics systems\u2014systems performing analytics on live camera streams\u2014are becoming more accurate. This accuracy offers opportunities to support individuals and society in exciting ways, like informing homeowners when a package has been delivered outside their door, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.wired.com\/story\/best-pet-cameras-2018\/\">allowing people to give their pets the attention they need when out for the day<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, and detecting high-traffic areas so cities can consider adding a stop light.<\/p>\n<p>While DNN advancements and DNN inference are enablers, they alone are not enough when it comes to extracting valuable insights from live videos. 
Live video analytics requires keeping up with video frame rates, which can be as fast as 60 frames per second, making it crucial to effectively filter frames and avoid the costly processing of each frame. <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"http:\/\/aka.ms\/rocket\">Project Rocket<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> provides a framework to do exactly that.<\/p>\n<h3>Rocket-powered<\/h3>\n<p>Rocket\u2014which we\u2019re glad to announce is now <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/github.com\/microsoft\/Microsoft-Rocket-Video-Analytics-Platform\">open source on GitHub<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>\u2014enables the easy construction of video pipelines for efficiently processing live video streams. You can build, for example, a video pipeline that includes a cascade of DNNs in which a decoded frame is first passed through a relatively inexpensive \u201clight\u201d DNN like <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"http:\/\/openaccess.thecvf.com\/content_cvpr_2016\/papers\/He_Deep_Residual_Learning_CVPR_2016_paper.pdf\">ResNet-18<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> or <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/pjreddie.com\/darknet\/yolo\/\">Tiny YOLO<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> and a \u201cheavy\u201d DNN such as <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" 
href=\"http:\/\/openaccess.thecvf.com\/content_cvpr_2016\/papers\/He_Deep_Residual_Learning_CVPR_2016_paper.pdf\">ResNet-152<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> or <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/pjreddie.com\/darknet\/yolo\/\">YOLOv3<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> is invoked only when required. With Rocket, you can plug in any <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.tensorflow.org\/\">TensorFlow<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> or <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/pjreddie.com\/darknet\/\">Darknet<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> DNN model. 
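The cascade described above can be sketched in a few lines. This is an illustrative sketch only, not Rocket's actual API: the stage names (`motion_filter`, `light_dnn`, `heavy_dnn`) and the confidence-threshold gating policy are assumptions made for the example.

```python
# Illustrative sketch of a cascaded video-analytics pipeline: cheap stages
# run first, and the expensive DNN is invoked only when they are unsure.
# All names and the gating policy are assumptions, not Rocket's real API.

def cascade(frame, motion_filter, light_dnn, heavy_dnn, conf_threshold=0.8):
    """Return a detection label for `frame`, or None if it was filtered out."""
    if not motion_filter(frame):      # stage 1: e.g. background subtraction
        return None                   # static frame: skip all DNN work
    label, conf = light_dnn(frame)    # stage 2: light DNN (e.g. Tiny YOLO)
    if conf >= conf_threshold:
        return label                  # light DNN is confident enough: stop
    label, _ = heavy_dnn(frame)       # stage 3: heavy DNN (e.g. YOLOv3)
    return label

# Stub stages standing in for real models, just to exercise the control flow:
moving = lambda f: f != "static"
light = lambda f: ("car", 0.95) if f == "easy" else ("car", 0.4)
heavy = lambda f: ("bicycle", 0.99)

print(cascade("static", moving, light, heavy))  # None: dropped by motion stage
print(cascade("easy", moving, light, heavy))    # car: light DNN sufficed
print(cascade("hard", moving, light, heavy))    # bicycle: heavy DNN invoked
```

The point of the structure is that per-frame cost is dominated by the cheapest stage that can make a confident decision, which is what lets a pipeline keep up with 30-60 fps streams.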
You can also augment the above pipeline with, let\u2019s say, a simpler motion filter based on <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/docs.opencv.org\/3.4\/d1\/dc5\/tutorial_background_subtraction.html\">OpenCV background subtraction<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, as shown in the figure below.<\/p>\n<div id=\"attachment_632547\" style=\"width: 1034px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-632547\" class=\"wp-image-632547 size-large\" src=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/01\/MSR_20200122_ProjectRocket_Diagram1_r4t4_final--1024x319.png\" alt=\"diagram\" width=\"1024\" height=\"319\" srcset=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/01\/MSR_20200122_ProjectRocket_Diagram1_r4t4_final--1024x319.png 1024w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/01\/MSR_20200122_ProjectRocket_Diagram1_r4t4_final--300x94.png 300w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/01\/MSR_20200122_ProjectRocket_Diagram1_r4t4_final--768x239.png 768w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/01\/MSR_20200122_ProjectRocket_Diagram1_r4t4_final-.png 1187w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><p id=\"caption-attachment-632547\" class=\"wp-caption-text\">The above figure represents one of several video pipelines that can be built for efficient, customizable live video analytics with the Project Rocket platform. In this pipeline, decoded video frames are filtered first using background subtraction detection and then low-resource DNN detection. 
Frames requiring further processing are passed through a heavy DNN detector.<\/p><\/div>\n<p>Cascaded pipelines, like the one above, allow for very efficient processing of live video streams by filtering out frames with limited relevant information and being judicious about invoking resource-intensive operations. Rocket also makes it easy to ship the outputs of the video analytics, such as the number of relevant objects in an object-counting application, to a database for after-the-fact review.<\/p>\n<div id=\"attachment_632346\" style=\"width: 1034px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-632346\" class=\"wp-image-632346 size-large\" src=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/01\/MSR_20200122_ProjectRocket_Diagram2_r4t4V2-1024x536.png\" alt=\"diagram\" width=\"1024\" height=\"536\" srcset=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/01\/MSR_20200122_ProjectRocket_Diagram2_r4t4V2-1024x536.png 1024w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/01\/MSR_20200122_ProjectRocket_Diagram2_r4t4V2-300x157.png 300w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/01\/MSR_20200122_ProjectRocket_Diagram2_r4t4V2-768x402.png 768w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/01\/MSR_20200122_ProjectRocket_Diagram2_r4t4V2.png 1200w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><p id=\"caption-attachment-632346\" class=\"wp-caption-text\">The Project Rocket video analytics platform (above) is self-contained and allows people to plug in TensorFlow and Darknet DNN models to create pipelines for object detection, object counting, and the like to drive higher-level applications such as traffic prediction analysis and smart homes.<\/p><\/div>\n<h3>Making streets safer<\/h3>\n<p>Project Rocket has been focusing on smart cities as its 
driving application. In <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.youtube.com\/watch?v=rTAOwvU6Yj8\">partnership with the city of Bellevue, Washington<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, we used the framework to help make the city\u2019s street system safer for drivers, riders, and pedestrians as part of its Vision Zero initiative to reduce traffic-related fatalities. With aggregate car and bicycle counts provided by a system built on the framework, for example, the city was able to <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/bellevuewa.gov\/sites\/default\/files\/media\/pdf_document\/2020\/Video Analytics Towards Vision Zero-Traffic Video Analytics-12262019.pdf\">assess the value of adding a bike lane<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> to its downtown area.<\/p>\n<div class=\"yt-consent-placeholder\" role=\"region\" aria-label=\"Video playback requires cookie consent\" data-video-id=\"boGqK7BDZ00\" data-poster=\"https:\/\/img.youtube.com\/vi\/boGqK7BDZ00\/maxresdefault.jpg\"><iframe aria-hidden=\"true\" tabindex=\"-1\" title=\"Smart Crosswalk Application with Project Rocket\" width=\"500\" height=\"281\" data-src=\"https:\/\/www.youtube-nocookie.com\/embed\/boGqK7BDZ00?feature=oembed&rel=0&enablejsapi=1\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<div class=\"yt-consent-placeholder__overlay\"><button class=\"yt-consent-placeholder__play\"><svg width=\"42\" height=\"42\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" aria-hidden=\"true\" focusable=\"false\"><g fill=\"none\" fill-rule=\"evenodd\"><circle fill=\"#000\" opacity=\".556\" 
cx=\"21\" cy=\"21\" r=\"21\"\/><path stroke=\"#FFF\" d=\"M27.5 22l-12 8.5v-17z\"\/><\/g><\/svg><span class=\"yt-consent-placeholder__label\">Video playback requires cookie consent<\/span><\/button><\/div>\n<\/div>\n<p>One exciting traffic safety\u2013related application we recently used Rocket for, separate from our work with Bellevue, is a smart crosswalk. Using a live camera feed, the <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.youtube.com\/watch?v=boGqK7BDZ00&feature=youtu.be\">smart crosswalk<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, which is in the prototype stage, is able to detect when a person in a wheelchair is in the middle of the crosswalk and extend the timer to allow the person to safely finish crossing.<\/p>\n<p>Throughout our research and as we continue to develop Rocket, we\u2019re devising privacy-protecting tools, including a \u201cprivacy protector\u201d technique in which only those elements relevant to an application\u2014for example, cars in a traffic-counting system\u2014will be made available; background elements and other details, such as people, homes, businesses, and license plate numbers in the traffic-counting example, will be blacked out. Additionally, Rocket leverages the edge, and for all its benefits in enabling efficiency, we also see it as a means for keeping data in a trusted space\u2014that is, on users\u2019 premises.<\/p>\n<h3>Get the code and get to work!<\/h3>\n<p>The Rocket platform is Linux friendly. The code is written in .NET Core, which runs on Windows as well as Linux. The Rocket repository also has simple instructions to create Docker containers, allowing for easy deployment using orchestration frameworks like Kubernetes. 
Docker containers are also readily compatible with appliances that bring computing to the edge, such as <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/azure.microsoft.com\/en-us\/services\/databox\/edge\/\">Microsoft Azure Stack Edge<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>. Additionally, Rocket has easy-to-use code for optionally invoking customized <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/azure.microsoft.com\/en-us\/services\/machine-learning\/\">Azure Machine Learning<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> models in the cloud.<\/p>\n<p>Check out the <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/github.com\/microsoft\/Microsoft-Rocket-Video-Analytics-Platform\">code<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> and give it a spin!<\/p>\n<p>For more information, including a tutorial on how to get started building your own video analytics applications atop the platform, check out our <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/note.microsoft.com\/MSR-Webinar-Microsoft-Rocket-Registration-On-Demand.html?wt.mc_id=blog_MSR-WBNR_link_v1\">Project Rocket webinar<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, available on demand now.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Thanks to advances in computer vision and deep neural networks (DNNs) in what can arguably be described as the golden age of vision, AI, and machine learning, video analytics systems\u2014systems performing analytics on live camera streams\u2014are becoming more accurate. 
This accuracy offers opportunities to support individuals and society in exciting ways, like informing homeowners when [&hellip;]<\/p>\n","protected":false},"author":38838,"featured_media":632541,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-author-ordering":null,"msr_hide_image_in_river":0,"footnotes":""},"categories":[194467,194455,194457],"tags":[],"research-area":[13556,13563],"msr-region":[],"msr-event-type":[],"msr-locale":[268875],"msr-post-option":[],"msr-impact-theme":[],"msr-promo-type":[],"msr-podcast-series":[],"class_list":["post-632136","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artifical-intelligence","category-machine-learning","category-open-source","msr-research-area-artificial-intelligence","msr-research-area-data-platform-analytics","msr-locale-en_us"],"msr_event_details":{"start":"","end":"","location":""},"podcast_url":"","podcast_episode":"","msr_research_lab":[],"msr_impact_theme":[],"related-publications":[],"related-downloads":[],"related-videos":[],"related-academic-programs":[],"related-groups":[],"related-projects":[382664],"related-events":[],"related-researchers":[{"type":"guest","value":"ganesh-ananthanarayanan","user_id":"632355","display_name":"Ganesh  Ananthanarayanan","author_link":"<a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/people\/ga\/\" aria-label=\"Visit the profile page for Ganesh  Ananthanarayanan\">Ganesh  Ananthanarayanan<\/a>","is_active":true,"last_first":"Ananthanarayanan, Ganesh ","people_section":0,"alias":"ganesh-ananthanarayanan"},{"type":"guest","value":"landon-cox","user_id":"632481","display_name":"Landon Cox","author_link":"<a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/people\/lacox\/\" aria-label=\"Visit the profile page 
for Landon Cox\">Landon Cox<\/a>","is_active":true,"last_first":"Cox, Landon","people_section":0,"alias":"landon-cox"},{"type":"user_nicename","value":"Victor Bahl","user_id":31167,"display_name":"Victor Bahl","author_link":"<a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/people\/bahl\/\" aria-label=\"Visit the profile page for Victor Bahl\">Victor Bahl<\/a>","is_active":false,"last_first":"Bahl, Victor","people_section":0,"alias":"bahl"}],"msr_type":"Post","featured_image_thumbnail":"<img width=\"960\" height=\"540\" src=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/01\/Rocket_Still2-960x540.jpg\" class=\"img-object-cover\" alt=\"\" decoding=\"async\" loading=\"lazy\" srcset=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/01\/Rocket_Still2-960x540.jpg 960w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/01\/Rocket_Still2-300x168.jpg 300w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/01\/Rocket_Still2-1024x575.jpg 1024w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/01\/Rocket_Still2-768x431.jpg 768w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/01\/Rocket_Still2-1066x600.jpg 1066w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/01\/Rocket_Still2-655x368.jpg 655w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/01\/Rocket_Still2-343x193.jpg 343w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/01\/Rocket_Still2-640x360.jpg 640w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/01\/Rocket_Still2.jpg 1161w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/>","byline":"<a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/people\/ga\/\" title=\"Go to researcher profile for Ganesh  Ananthanarayanan\" aria-label=\"Go to researcher profile for Ganesh  
Ananthanarayanan\" data-bi-type=\"byline author\" data-bi-cN=\"Ganesh  Ananthanarayanan\">Ganesh  Ananthanarayanan<\/a>, Yuanchao Shu, <a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/people\/lacox\/\" title=\"Go to researcher profile for Landon Cox\" aria-label=\"Go to researcher profile for Landon Cox\" data-bi-type=\"byline author\" data-bi-cN=\"Landon Cox\">Landon Cox<\/a>, and <a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/people\/bahl\/\" title=\"Go to researcher profile for Victor Bahl\" aria-label=\"Go to researcher profile for Victor Bahl\" data-bi-type=\"byline author\" data-bi-cN=\"Victor Bahl\">Victor Bahl<\/a>","formattedDate":"January 22, 2020","formattedExcerpt":"Thanks to advances in computer vision and deep neural networks (DNNs) in what can arguably be described as the golden age of vision, AI, and machine learning, video analytics systems\u2014systems performing analytics on live camera streams\u2014are becoming more accurate. This accuracy offers opportunities to 
support&hellip;","locale":{"slug":"en_us","name":"English","native":"","english":"English"},"_links":{"self":[{"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/posts\/632136","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/users\/38838"}],"replies":[{"embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/comments?post=632136"}],"version-history":[{"count":25,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/posts\/632136\/revisions"}],"predecessor-version":[{"id":632556,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/posts\/632136\/revisions\/632556"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/media\/632541"}],"wp:attachment":[{"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/media?parent=632136"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/categories?post=632136"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/tags?post=632136"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=632136"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=632136"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=632136"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/newed.an
y0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=632136"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=632136"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=632136"},{"taxonomy":"msr-promo-type","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/msr-promo-type?post=632136"},{"taxonomy":"msr-podcast-series","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/msr-podcast-series?post=632136"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}