{"id":694401,"date":"2020-10-01T09:39:47","date_gmt":"2020-10-01T16:39:47","guid":{"rendered":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/?p=694401"},"modified":"2020-10-06T16:28:44","modified_gmt":"2020-10-06T23:28:44","slug":"archai-can-design-your-neural-network-with-state-of-the-art-neural-architecture-search-nas","status":"publish","type":"post","link":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/blog\/archai-can-design-your-neural-network-with-state-of-the-art-neural-architecture-search-nas\/","title":{"rendered":"Archai can design your neural network with state-of-the-art neural architecture search (NAS)"},"content":{"rendered":"\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" src=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/10\/1400x788_Archai_NoLogo-2.gif\" alt=\"\"\/><\/figure>\n\n\n\n<p>The goal of <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/1808.05377\">neural architecture search (NAS)<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> is to have computers automatically search for the best-performing neural networks. Recent advances in NAS methods have made it possible to build problem-specific networks that are faster, more compact, and less power hungry than their handcrafted counterparts. Unfortunately, many NAS methods rely on an array of tricks that aren\u2019t always documented in a way that\u2019s easy to discover. While these tricks result in neural networks with greater accuracy, they often cloud the performance of the search algorithm themselves. Since different NAS methods use different enhancements and some none at all, NAS techniques have become difficult for researchers to compare. 
The use of a variety of enhancements has also made NAS methods <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/1902.07638\">difficult to reproduce<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>. Once-promising methods may disappoint when an attempt is made to transfer them to other datasets. Additionally, engineers trying to use NAS often find it challenging to understand the implications of advertised advances because of a deluge of research claims, an inability to fairly compare methods side by side, fragmented code bases in research repos, hyperparameters that aren\u2019t carefully managed, and a lack of plug-and-play for individual techniques.<\/p>\n\n\n\n<div class=\"annotations \" data-bi-aN=\"margin-callout\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 annotations__list--right\">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">SOURCE CODE<\/span>\n\t\t\t<a href=\"https:\/\/github.com\/microsoft\/archai\" data-bi-cN=\"GitHub: Archai\" data-external-link=\"false\" data-bi-aN=\"margin-callout\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>GitHub: Archai<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n\n\n\n<p>We\u2019ve sought to address many of these concerns with a goal of making state-of-the-art NAS research more widely usable. We\u2019ve asked, can we find the right abstractions to unify many of these methods? A unified NAS framework would help enable the adoption of NAS algorithms in industry and support reproducibility, as well as fair evaluation, in research. 
Such a framework would also accelerate algorithmic innovation by allowing the research community to pursue even higher ambitions in its application of NAS, as well as to conduct searches in novel spaces that might yield architectures we haven\u2019t yet imagined. With this goal in mind, we\u2019ve developed <a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/project\/archai-platform-for-neural-architecture-search\/\">Archai<\/a>, an open-source project now available on GitHub. <em>Archai<\/em>, short for <em>Architecture AI<\/em>, means \u201cfirst principles,\u201d which captures the spirit of the work we\u2019re doing.<\/p>\n\n\n\n<p>Archai enables execution of standard NAS algorithms with a single command line. Currently, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/1806.09055\">Differentiable Architecture Search (DARTS)<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/publication\/efficient-forward-architecture-search\/\">Petridish<\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/papers.nips.cc\/paper\/8374-data-differentiable-architecture-approximation\">Differentiable ArchiTecture Approximation (DATA)<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, and <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/papers.nips.cc\/paper\/8472-xnas-neural-architecture-search-with-expert-advice\">eXperts Neural Architecture Search (XNAS)<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> are implemented. Archai makes it easy to add new algorithms, experiment with many well-known datasets, and add new datasets through unified interfaces. 
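<\/p>\n\n\n\n<p>To make the idea of a unified interface concrete, here is a minimal sketch in plain Python. The names used here (<code>NasAlgorithm<\/code>, <code>GreedySearch<\/code>, <code>run_search<\/code>) are hypothetical illustrations of the pattern, not Archai\u2019s actual API: every algorithm implements the same contract, so the surrounding harness, configuration, and evaluation code can be shared across methods.<\/p>\n\n\n\n

```python
# Hypothetical sketch (not Archai's actual API): a shared contract for
# NAS algorithms so that one harness can drive any search method.
from abc import ABC, abstractmethod


class NasAlgorithm(ABC):
    """Common interface every search method implements."""

    @abstractmethod
    def search(self, config: dict) -> str:
        """Run the search and return a model description."""


class GreedySearch(NasAlgorithm):
    """Toy stand-in for a real method such as DARTS or Petridish."""

    def search(self, config: dict) -> str:
        ops, cells = config["ops"], config["cells"]
        # Deterministically cycle through candidate ops, one per cell.
        return "->".join(ops[i % len(ops)] for i in range(cells))


def run_search(algorithm: NasAlgorithm, config: dict) -> str:
    # The same driver works for every algorithm; only `algorithm` varies.
    return algorithm.search(config)


config = {"ops": ["conv3x3", "maxpool", "skip"], "cells": 4}
print(run_search(GreedySearch(), config))  # conv3x3->maxpool->skip->conv3x3
```

\n\n\n\n<p>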
Additionally, Archai enables the isolation of hyperparameters via a configuration system that makes assumptions and settings explicit. The behaviors of architecture search systems are sensitive to these hyperparameters. With unified hyperparameter configuration controls, different algorithms can be tested on the same playing field.<\/p>\n\n\n\n<p>At the core of Archai are several interfaces that provide abstractions for common components of NAS algorithms. This reduces the code duplication, making new algorithm development faster and easier. Archai also uses a common model description language based on YAML that is extensible and \u201ccompilable\u201d to a PyTorch model. Because all the algorithms share exactly the same components, including the ones for training and evaluation, they can be written more compactly. Having common components also sets the stage for fairer comparison and easier reproducibility.<\/p>\n\n\n\n<h3 id=\"key-features-of-archai\">Key features of Archai<\/h3>\n\n\n\n<ul class=\"wp-block-list\"><li><strong>Declarative approach and reproducibility:<\/strong> Many research works employ a variety of enhancements that, while seemingly small, could make a world of difference to neural network performance. For example, some works use only 600 epochs for final architecture training, while others use 1,500. 
Some may exploit <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/research.google\/pubs\/pub47890\/\">AutoAugment<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> for data augmentation during training, while others may only use <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/pdf\/1708.04552.pdf\">Cutout<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>. We pored over various research codebases to extract bags of tricks. With Archai, these tricks can now be switched on or off by simple configuration that applies to all algorithms. Extracting these tricks has also allowed us to make Archai a general-purpose framework to train manually designed neural networks efficiently. <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/1912.12522v3\">Recent work<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> has shown that judiciously using these training tricks is usually more important than small differences in architectures themselves.<\/li><li><strong>Search-space abstractions: <\/strong>A significant amount of current NAS research focuses on rather small search spaces made popular by a few early efforts. In Archai, we offer abstractions that can significantly expand the search spaces in a more generalized fashion and are available to all algorithms. It\u2019s our hope that the research community will find it useful to push the envelope with these expanded search spaces that haven\u2019t been fully explored yet.<\/li><li><strong>Mixing and matching of different techniques: <\/strong>There are several exciting questions we can explore through mixing and matching different techniques. 
What if we want to apply the growth method proposed by Petridish to DARTS? Can we apply L1 regularization over architecture weights to other algorithms with just a flip of a configuration switch? What if a researcher wanted to run the online-learning-motivated update rules as proposed in XNAS or <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/2004.07802\">Geometric NAS<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> in new search spaces or use them inside a new algorithm? Archai offers modularized components of different NAS algorithms so they can be easily mixed and matched.<\/li><li><strong>Generalized Pareto frontier search<\/strong>: A crucial use case in which NAS becomes a necessity is deploying neural networks on compute-constrained platforms such as smartphones or embedded devices. In these scenarios, one can expect budget constraints for power consumption, latency, memory usage, available FLOPs, and other factors. A model must work within this budget even if it means sacrificing some accuracy. It\u2019s difficult to manually design optimal networks with a wide range of specified constraints. Given the difficulty, current NAS algorithms will almost always outperform manual designs. Archai can generate a gallery of architectures with specified compute characteristics. Our NAS method, Petridish, was designed with this as its primary intention. Petridish is available through Archai, now with a higher-performing, distributed implementation. We plan to generalize the Pareto front generation for all algorithms so that almost any algorithm can leverage this technique to produce a similar gallery of models. 
<\/li><\/ul>\n\n\n\n<p>Archai offers several other desirable features, including logging, publication-ready experiment reports, mixed-precision training, distributed training, faster and more general implementation of algorithms such as bilevel optimization, training and evaluation code incorporating several best practices, support for NVIDIA Data Loading Library (DALI) and Apex, \u201cmini-mode\u201d for development on a laptop, support for <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/github.com\/microsoft\/tensorwatch\">TensorWatch<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, as well as <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.tensorflow.org\/tensorboard\">TensorBoard<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, architecture visualization, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/1902.09635\">NASBench-101<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>\/<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/2001.00326\">201<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> (with <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/2008.09777\">NASBench-301<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> coming soon), and cross-platform code that works for Linux and OS X, as well as Windows.<\/p>\n\n\n\n<p>For a full list of features, please visit our <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/github.com\/microsoft\/archai\/blob\/master\/docs\/features.md\">GitHub page<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.<\/p>\n\n\n\n<h3 id=\"join-the-archai-community\">Join the Archai community<\/h3>\n\n\n\n<p>We hope that researchers and engineers will find Archai useful and contribute to these efforts to accelerate NAS adoption, as well as future research. We formally invite the broader community to join us in this journey with contributions, pull requests, and algorithm implementations. Check out our <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/github.com\/microsoft\/archai\">Archai GitHub repository<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> for more information and join the <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.facebook.com\/groups\/1133660130366735\">Archai group<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> to stay up to date.<\/p>\n\n\n\n<p><em>Acknowledgments: We thank <\/em><a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/people\/jcl\/\"><em>Partner Research Manager John Langford<\/em><\/a><em>, <\/em><a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/people\/horvitz\/\"><em>Technical Fellow and Chief Scientific Officer Eric Horvitz<\/em><\/a><em>, <\/em><a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/people\/rcaruana\/\"><em>Senior Principal Researcher Rich Caruana<\/em><\/a><em>, and <\/em><a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/people\/alekha\/\"><em>Principal Research Manager Alekh Agarwal<\/em><\/a><em> for providing valuable guidance and rich discussions.<\/em><\/p>\n\n\n\n<div class=\"wp-block-group is-layout-flow wp-block-group-is-layout-flow\"><div 
class=\"wp-block-group__inner-container\"><\/div><\/div>\n\n\n\n<div class=\"wp-block-group is-layout-flow wp-block-group-is-layout-flow\"><div class=\"wp-block-group__inner-container\"><\/div><\/div>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The goal of neural architecture search (NAS) (opens in new tab) is to have computers automatically search for the best-performing neural networks. Recent advances in NAS methods have made it possible to build problem-specific networks that are faster, more compact, and less power hungry than their handcrafted counterparts. Unfortunately, many NAS methods rely on an [&hellip;]<\/p>\n","protected":false},"author":38838,"featured_media":695679,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-author-ordering":[{"type":"user_nicename","value":"Shital Shah","user_id":"35435"},{"type":"user_nicename","value":"Debadeepta Dey","user_id":"31594"}],"msr_hide_image_in_river":0,"footnotes":""},"categories":[1],"tags":[],"research-area":[13556],"msr-region":[],"msr-event-type":[],"msr-locale":[268875],"msr-post-option":[243984],"msr-impact-theme":[],"msr-promo-type":[],"msr-podcast-series":[],"class_list":["post-694401","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-research-blog","msr-research-area-artificial-intelligence","msr-locale-en_us","msr-post-option-blog-homepage-featured"],"msr_event_details":{"start":"","end":"","location":""},"podcast_url":"","podcast_episode":"","msr_research_lab":[],"msr_impact_theme":[],"related-publications":[],"related-downloads":[],"related-videos":[],"related-academic-programs":[],"related-groups":[],"related-projects":[804847,692739],"related-events":[],"related-researchers":[{"type":"user_nicename","value":"Shital 
Shah","user_id":35435,"display_name":"Shital Shah","author_link":"<a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/people\/shitals\/\" aria-label=\"Visit the profile page for Shital Shah\">Shital Shah<\/a>","is_active":false,"last_first":"Shah, Shital","people_section":0,"alias":"shitals"}],"msr_type":"Post","featured_image_thumbnail":"<img width=\"960\" height=\"540\" src=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/10\/1400x788_ArchaiStill_No_Logo-960x540.png\" class=\"img-object-cover\" alt=\"illustration of the neural architecture search platform Archai automatically identifying neural network architectures for a given dataset.\" decoding=\"async\" loading=\"lazy\" srcset=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/10\/1400x788_ArchaiStill_No_Logo-960x540.png 960w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/10\/1400x788_ArchaiStill_No_Logo-300x169.png 300w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/10\/1400x788_ArchaiStill_No_Logo-1024x576.png 1024w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/10\/1400x788_ArchaiStill_No_Logo-768x432.png 768w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/10\/1400x788_ArchaiStill_No_Logo-1536x865.png 1536w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/10\/1400x788_ArchaiStill_No_Logo-2048x1153.png 2048w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/10\/1400x788_ArchaiStill_No_Logo-1066x600.png 1066w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/10\/1400x788_ArchaiStill_No_Logo-655x368.png 655w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/10\/1400x788_ArchaiStill_No_Logo-343x193.png 343w, 
https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/10\/1400x788_ArchaiStill_No_Logo-640x360.png 640w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/10\/1400x788_ArchaiStill_No_Logo-1280x720.png 1280w, https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-content\/uploads\/2020\/10\/1400x788_ArchaiStill_No_Logo-1920x1080.png 1920w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/>","byline":"<a href=\"https:\/\/newed.any0.dpdns.org\/en-us\/research\/people\/shitals\/\" title=\"Go to researcher profile for Shital Shah\" aria-label=\"Go to researcher profile for Shital Shah\" data-bi-type=\"byline author\" data-bi-cN=\"Shital Shah\">Shital Shah<\/a> and Debadeepta Dey","formattedDate":"October 1, 2020","formattedExcerpt":"The goal of neural architecture search (NAS) (opens in new tab) is to have computers automatically search for the best-performing neural networks. Recent advances in NAS methods have made it possible to build problem-specific networks that are faster, more compact, and less power hungry 
than&hellip;","locale":{"slug":"en_us","name":"English","native":"","english":"English"},"_links":{"self":[{"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/posts\/694401","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/users\/38838"}],"replies":[{"embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/comments?post=694401"}],"version-history":[{"count":7,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/posts\/694401\/revisions"}],"predecessor-version":[{"id":696489,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/posts\/694401\/revisions\/696489"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/media\/695679"}],"wp:attachment":[{"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/media?parent=694401"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/categories?post=694401"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/tags?post=694401"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=694401"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=694401"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=694401"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/newed.any0.d
pdns.org\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=694401"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=694401"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=694401"},{"taxonomy":"msr-promo-type","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/msr-promo-type?post=694401"},{"taxonomy":"msr-podcast-series","embeddable":true,"href":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/wp-json\/wp\/v2\/msr-podcast-series?post=694401"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}