{"id":1071219,"date":"2023-03-16T01:27:00","date_gmt":"2023-03-16T08:27:00","guid":{"rendered":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/?post_type=msr-blog-post&#038;p=1071219"},"modified":"2024-09-25T03:32:14","modified_gmt":"2024-09-25T10:32:14","slug":"aigc-audio-image-data-generation-paper-list","status":"publish","type":"msr-blog-post","link":"https:\/\/newed.any0.dpdns.org\/en-us\/research\/articles\/aigc-audio-image-data-generation-paper-list\/","title":{"rendered":"\u4e86\u89e3AIGC\u97f3\u9891\/\u56fe\u50cf\u6570\u636e\u751f\u6210\uff0c\u8fd9\u51e0\u7bc7\u8bba\u6587\u7ed9\u4f60\u5212\u597d\u4e86\u91cd\u70b9\uff01"},"content":{"rendered":"\n<p><em>\u4f5c\u8005\uff1a\u8c2d\u65ed<\/em><\/p>\n\n\n\n<p>\u4f5c\u4e3a\u8fd1\u671f\u4eba\u5de5\u667a\u80fd\u9886\u57df\u5185\u7684\u9876\u6d41\u4e4b\u4e00\uff0cAIGC\uff08AI-Generated Content \u6216 Generative AI\uff09\u65e9\u5df2\u706b\u7206\u51fa\u5708\uff0c\u9891\u767b\u5404\u5927\u4e92\u8054\u7f51\u5e73\u53f0\u70ed\u641c\u3002\u57fa\u4e8e\u6df1\u5ea6\u5b66\u4e60\u7684\u5185\u5bb9\u751f\u6210\u5728\u56fe\u50cf\u3001\u89c6\u9891\u3001\u8bed\u97f3\u3001\u97f3\u4e50\u3001\u6587\u672c\u7b49\u751f\u6210\u9886\u57df\u90fd\u53d6\u5f97\u4e86\u4ee4\u4eba\u77a9\u76ee\u7684\u6210\u679c\u3002<\/p>\n\n\n\n<p>\u7531\u4e8e\u73b0\u5b9e\u4e16\u754c\u4e2d\u7684\u4fe1\u606f\u5728\u591a\u6570\u60c5\u51b5\u4e0b\u5448\u73b0\u6587\u672c\u3001\u56fe\u50cf\u548c\u8bed\u97f3\u7b49\u591a\u79cd\u6a21\u6001\uff0c\u4eba\u7c7b\u4f1a\u901a\u8fc7\u7efc\u5408\u8fd0\u7528\u591a\u79cd\u611f\u5b98\u6765\u611f\u77e5\u548c\u7406\u89e3\u73b0\u5b9e\u4e16\u754c\uff0c\u56e0\u6b64\uff0c\u5982\u4f55\u8d4b\u4e88\u8ba1\u7b97\u673a\u8fd9\u79cd\u7efc\u5408\u7406\u89e3\u591a\u79cd\u6a21\u6001\u7684\u80fd\u529b\u4e5f\u6210\u4e3a\u4e86\u5b66\u672f\u754c\u7684\u7814\u7a76\u70ed\u70b9\u3002<\/p>\n\n\n\n<p>\u4e0e\u6587\u672c\u751f\u6210\u66f4\u52a0\u5173\u6ce8\u62bd\u8c61\u8bed\u4e49\u4e0d\u540c\uff0c\u58f0\u97f3\u548c\u89c6\u89c9\u6a21\u6001\u8fd8\u9700\u8981\u751f\u6210\u66f4\u591a\u7684\u7ec6\u8282\u4fe1\u606f\u3002\u6240\u4ee5\uff0c\u58f0\u97f3\u548c\u89c6\u89c9\u5185\u5bb9\uff08\u8bed\u97f3\u3001\u97f3\u6548\u3001\u97f3\u4e50\u3001\u56fe\u50cf\u3001\u89c6\u9891\u7b49\uff09\u7684\u751f\u6210\u9762\u4e34\u7740\u4e00\u7cfb\u5217\u6311\u6218\uff1a\u5982\u4f55\u523b\u753b\u58f0\u97f3\u89c6\u89c9\u5185\u5bb9\u4e2d\u590d\u6742\u4e14\u9ad8\u9891\u7684\u6570\u636e\u5206\u5e03\uff1b\u5982\u4f55\u5efa\u6a21\u751f\u6210\u8fc7\u7a0b\u4e2d\u7684\u4e00\u5bf9\u591a\u6620\u5c04\u95ee\u9898\uff1b\u5982\u4f55\u5229\u7528\u5927\u89c4\u6a21\u65e0\u6807\u6ce8\u6570\u636e\u89e3\u51b3\u6570\u636e\u7a00\u758f\u6027\u95ee\u9898\uff1b\u5728\u57fa\u4e8e\u5176\u5b83\u6a21\u6001\u751f\u6210\u65f6\uff0c\u5982\u4f55\u89e3\u51b3\u8de8\u6a21\u6001\u5bf9\u9f50\u95ee\u9898\u7b49\u3002<\/p>\n\n\n\n<p>\u4eca\u5929\u9001\u4e0a\u4e00\u4e2a\u53ef\u4ee5\u51fb\u7834 AIGC \u6570\u636e\u751f\u6210\u4e2d\u8fd9\u4e9b\u96be\u9898\u7684\u8bba\u6587\u9526\u56ca\uff01\u5e0c\u671b\u5927\u5bb6\u53ef\u4ee5\u5728\u5165\u5751 AIGC \u9886\u57df\u4e4b\u521d\u80fd\u6709\u6240\u542f\u53d1\u3002<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"\u5b66\u4e60\u8303\u5f0f-learning-paradigm-\u9ad8\u5c4b\u5efa\u74f4\">\u5b66\u4e60\u8303\u5f0f\uff08Learning Paradigm\uff09\u2014\u2014 
A good learning paradigm guides researchers in designing methods and models when exploring complex deep-learning problems. For traditional data-understanding tasks, Representation Learning, championed by deep-learning pioneer Yoshua Bengio and others, is an essential reference: it guides deep models toward learning better data representations and thus a stronger understanding of the data.

[1] Bengio, Y., Courville, A., & Vincent, P. (2013). Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8), 1798-1828.
https://arxiv.org/abs/1206.5538

For AIGC data-generation tasks, Regeneration Learning, a learning paradigm proposed by researchers at Microsoft Research together with Yoshua Bengio, provides guidance for a wide range of generation tasks. It decomposes a complex conditional generation task X→Y into two stages, X→Y' and Y'→Y, where X is the conditioning information, Y is the target data, and Y' is an abstract representation of Y learned by self-supervised methods such as an autoencoder. (A minimal illustrative sketch follows Figure 1 below.)

Regeneration Learning has two main benefits: 1) the one-to-many and spurious mappings that plague X→Y are greatly reduced in X→Y'; 2) the Y'→Y mapping can be pre-trained with self-supervised learning on large-scale unlabeled data.

[2] Tan, X., Qin, T., Bian, J., Liu, T. Y., & Bengio, Y. (2023). Regeneration Learning: A Learning Paradigm for Data Generation. arXiv preprint arXiv:2301.08846.
https://arxiv.org/abs/2301.08846

Figure 1: Comparison of Regeneration Learning and Representation Learning
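To make the decomposition concrete, here is a minimal PyTorch sketch of the two stages, assuming toy MLP modules, illustrative dimensions, and simple MSE objectives of our own choosing; in the paper each stage can be any suitable model, so this shows only the data flow, not the method itself.

```python
# A hypothetical sketch of the Regeneration Learning decomposition X -> Y' -> Y.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Stage 1 (self-supervised): learn Y' as a compact representation of Y."""
    def __init__(self, y_dim=1024, z_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(y_dim, 256), nn.ReLU(), nn.Linear(256, z_dim))
        self.decoder = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, y_dim))

    def forward(self, y):
        y_prime = self.encoder(y)               # Y  -> Y'
        return self.decoder(y_prime), y_prime   # Y' -> Y (reconstruction)

class ConditionalGenerator(nn.Module):
    """Stage 2: map the condition X to the abstract representation Y'."""
    def __init__(self, x_dim=128, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, z_dim))

    def forward(self, x):
        return self.net(x)                      # X -> Y'

# Stage 1 needs only unlabeled Y; stage 2 trains X -> Y' on paired data.
# Inference chains the two: X -> Y' -> Y.
ae, gen = AutoEncoder(), ConditionalGenerator()
x, y = torch.randn(8, 128), torch.randn(8, 1024)
recon, y_prime = ae(y)
loss_stage1 = nn.functional.mse_loss(recon, y)                   # self-supervised
loss_stage2 = nn.functional.mse_loss(gen(x), y_prime.detach())   # paired
y_hat = ae.decoder(gen(x))                                       # X -> Y' -> Y
```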
## Codecs: Simplifying the Complex

Audio and visual content (speech, sound effects, music, images, video, and so on) often carries complex, high-frequency detail. Researchers therefore use codecs and related methods to convert such content into abstract, compact representations (discrete tokens or continuous vectors), lowering the difficulty of the downstream generation step. Relevant papers include codecs for images [3][4][5] and codecs for audio [6].

Paper [3] is an early work that converts continuous image and audio data into discrete tokens with VQ-VAE (Vector Quantized Variational Autoencoder); the follow-up paper [4] combines VQ-VAE with a GAN to further improve quality.

[3] Van Den Oord, A., & Vinyals, O. (2017). Neural discrete representation learning. Advances in Neural Information Processing Systems, 30.
https://arxiv.org/abs/1711.00937

Figure 2: VQ-VAE
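The heart of VQ-VAE is the quantization step: snap each continuous encoder output to its nearest codebook entry, and train through the non-differentiable lookup with a straight-through gradient plus codebook and commitment losses. A minimal PyTorch sketch follows; the module name and sizes are illustrative, not the paper's.

```python
# A minimal sketch of the VQ-VAE quantization step from [3].
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    def __init__(self, num_codes=512, dim=64, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.beta = beta

    def forward(self, z_e):  # z_e: (batch, dim) continuous encoder output
        # Nearest codebook entry per vector (Euclidean distance).
        dists = torch.cdist(z_e, self.codebook.weight)   # (batch, num_codes)
        tokens = dists.argmin(dim=-1)                    # discrete tokens
        z_q = self.codebook(tokens)                      # quantized vectors
        # Codebook loss pulls codes toward encodings; the commitment
        # loss keeps the encoder close to its assigned codes.
        loss = F.mse_loss(z_q, z_e.detach()) + self.beta * F.mse_loss(z_e, z_q.detach())
        # Straight-through estimator: copy gradients from z_q back to z_e.
        z_q = z_e + (z_q - z_e).detach()
        return z_q, tokens, loss

vq = VectorQuantizer()
z_q, tokens, vq_loss = vq(torch.randn(8, 64))
```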
[4] Esser, P., Rombach, R., & Ommer, B. (2021). Taming transformers for high-resolution image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 12873-12883).
https://arxiv.org/abs/2012.09841

Paper [5] is Stable Diffusion, the model that made text-to-image generation take off. Unlike VQ-VAE and VQ-GAN, it prefers to use a VAE to map images into abstract representations in the form of continuous vectors, and generation then happens in that latent space (sketched after Figure 3).

[5] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., & Ommer, B. (2022). High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 10684-10695).
https://arxiv.org/abs/2112.10752

Figure 3: Stable Diffusion
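To see why generating in a latent space is cheaper, here is a toy convolutional autoencoder in the spirit of [5]; the architecture and sizes are our own stand-ins, not Stable Diffusion's actual KL-regularized autoencoder. The diffusion model would train on the roughly 48x smaller latent z rather than on pixels.

```python
# A toy "compress first, generate later" autoencoder (illustrative only).
import torch
import torch.nn as nn

class ToyLatentAE(nn.Module):
    def __init__(self):
        super().__init__()
        # 8x spatial downsampling: (3, 256, 256) -> (4, 32, 32).
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 4, 4, stride=2, padding=1),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(4, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def encode(self, x): return self.enc(x)
    def decode(self, z): return self.dec(z)

ae = ToyLatentAE()
x = torch.randn(2, 3, 256, 256)
z = ae.encode(x)      # (2, 4, 32, 32): the diffusion model trains on this
x_hat = ae.decode(z)  # after sampling a latent, decode back to pixels
```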
Paper [6] uses VQ-VAE to turn speech waveforms into discrete tokens; to improve reconstruction quality, it adopts Residual Vector Quantizers, which quantize each speech frame into multiple residual tokens.

[6] Zeghidour, N., Luebs, A., Omran, A., Skoglund, J., & Tagliasacchi, M. (2021). SoundStream: An end-to-end neural audio codec. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 30, 495-507.
https://arxiv.org/abs/2107.03312

Figure 4: SoundStream
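A minimal sketch of the residual quantization idea, with our own toy sizes: each quantizer codes the residual left by the previous one, so one speech frame becomes several tokens and reconstruction precision grows with the number of codebooks.

```python
# A minimal sketch of residual vector quantization (RVQ) as in SoundStream [6].
import torch
import torch.nn as nn

class ResidualVQ(nn.Module):
    def __init__(self, num_quantizers=4, num_codes=256, dim=64):
        super().__init__()
        self.codebooks = nn.ModuleList(
            nn.Embedding(num_codes, dim) for _ in range(num_quantizers)
        )

    def forward(self, z):  # z: (batch, dim), one frame per row
        residual, z_q, tokens = z, torch.zeros_like(z), []
        for cb in self.codebooks:
            dists = torch.cdist(residual, cb.weight)
            idx = dists.argmin(dim=-1)       # token for this stage
            q = cb(idx)
            z_q = z_q + q                    # running reconstruction
            residual = residual - q          # what is left for the next stage
            tokens.append(idx)
        # z_q: (batch, dim); tokens: (batch, num_quantizers)
        return z_q, torch.stack(tokens, dim=-1)

rvq = ResidualVQ()
z_q, tokens = rvq(torch.randn(8, 64))        # 4 tokens per frame
```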
## Generative Models: Creating Something from Nothing

A powerful generative model can capture the complex distribution of the data finely and accurately, so that the model can sample well from the learned distribution and generate data from scratch.

Among today's popular generative models, the GPT series for text (GPT-1/2/3 and ChatGPT) uses Transformer-based autoregressive models. In image and audio generation, some systems use diffusion models (e.g., DALL-E 2, Imagen, Stable Diffusion, and DiffWave/WaveGrad/GradTTS), while others use autoregressive models (e.g., DALL-E, Parti, AudioLM). For a comparative analysis of the various generative models, see https://zhuanlan.zhihu.com/p/591881660.

The following papers cover the canonical generative models: variational autoencoders (VAE) [7], generative adversarial networks (GAN) [8], normalizing flows [9], diffusion models [10][11], and autoregressive (AR) models [12]. A minimal sketch of the diffusion training objective follows the reference list below.

Figure 5: Generative models

[7] Kingma, D. P., & Welling, M. (2013). Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114.
https://arxiv.org/abs/1312.6114

[8] Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., ... & Bengio, Y. (2014). Generative adversarial networks. Advances in Neural Information Processing Systems.
https://arxiv.org/abs/1406.2661

[9] Dinh, L., Krueger, D., & Bengio, Y. (2014). NICE: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516.
https://arxiv.org/abs/1410.8516

[10] Sohl-Dickstein, J., Weiss, E., Maheswaranathan, N., & Ganguli, S. (2015, June). Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning (pp. 2256-2265). PMLR.
https://arxiv.org/abs/1503.03585

[11] Ho, J., Jain, A., & Abbeel, P. (2020). Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33, 6840-6851.
https://arxiv.org/abs/2006.11239

[12] Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., ... & Amodei, D. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877-1901.
https://arxiv.org/abs/2005.14165
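As a concrete example from one of these families, here is a minimal sketch of the DDPM training objective from [11]: corrupt x0 with Gaussian noise at a random timestep using the closed-form forward process and train a network to predict that noise. A toy MLP stands in for the usual U-Net; the noise schedule matches the paper's linear schedule, but everything else is illustrative.

```python
# A minimal sketch of the DDPM noise-prediction objective [11].
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)   # cumulative signal rate

denoiser = nn.Sequential(nn.Linear(64 + 1, 128), nn.ReLU(), nn.Linear(128, 64))

def ddpm_loss(x0):
    t = torch.randint(0, T, (x0.size(0),))
    eps = torch.randn_like(x0)
    # Forward process in closed form: x_t = sqrt(abar_t) x0 + sqrt(1 - abar_t) eps
    a = alpha_bar[t].unsqueeze(1)
    xt = a.sqrt() * x0 + (1 - a).sqrt() * eps
    # Condition the denoiser on the (normalized) timestep and predict the noise.
    eps_hat = denoiser(torch.cat([xt, t.float().unsqueeze(1) / T], dim=1))
    return nn.functional.mse_loss(eps_hat, eps)

loss = ddpm_loss(torch.randn(16, 64))
loss.backward()
```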
## Cross-Modal Alignment: Building Bridges

When data is generated from conditioning information, the condition is often in a different modality from the generated data. A cross-modal alignment model is therefore needed to bridge the two modalities.

The text-to-image model DALL-E 2 [13] uses the text-image alignment model CLIP [14] to close the gap between images and text, and the text-to-music model MusicLM [15] uses the text-music alignment model MuLan [16] to close the gap between music and text.

Using an alignment model to map the input modality into a shared representation that conditions the generative model greatly reduces the cost of handling inputs from different modalities, letting the generative model focus on data generation and improving output quality. The papers below cover CLIP [14], the text-image alignment model behind DALL-E 2, and MuLan [16], the text-audio alignment model behind MusicLM; both are well worth a try!

[13] Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., & Chen, M. (2022). Hierarchical text-conditional image generation with CLIP latents. arXiv preprint arXiv:2204.06125.
https://arxiv.org/abs/2204.06125

[14] Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., ... & Sutskever, I. (2021, July). Learning transferable visual models from natural language supervision. In International Conference on Machine Learning (pp. 8748-8763). PMLR.
https://arxiv.org/abs/2103.00020

Figure 6: CLIP
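The core of CLIP-style alignment fits in a few lines: embed both modalities into a shared space, normalize, and apply a symmetric cross-entropy over the batch similarity matrix. Below is a minimal sketch with toy linear encoders standing in for the real Transformer and vision backbones; the temperature initialization mirrors CLIP's 1/0.07, but everything else is illustrative.

```python
# A minimal sketch of CLIP-style contrastive alignment [14].
import torch
import torch.nn as nn
import torch.nn.functional as F

text_enc = nn.Linear(512, 256)    # stand-in for a Transformer text encoder
image_enc = nn.Linear(1024, 256)  # stand-in for a ViT/ResNet image encoder
log_temp = nn.Parameter(torch.tensor(2.659))  # learnable; init ~ log(1/0.07)

def clip_loss(text_feats, image_feats):
    t = F.normalize(text_enc(text_feats), dim=-1)
    i = F.normalize(image_enc(image_feats), dim=-1)
    logits = t @ i.T * log_temp.exp()   # (batch, batch) similarity matrix
    labels = torch.arange(t.size(0))    # matched pairs lie on the diagonal
    # Symmetric loss: text->image and image->text retrieval.
    return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels)) / 2

loss = clip_loss(torch.randn(32, 512), torch.randn(32, 1024))
loss.backward()
```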
[15] Agostinelli, A., Denk, T. I., Borsos, Z., Engel, J., Verzetti, M., Caillon, A., ... & Frank, C. (2023). MusicLM: Generating music from text. arXiv preprint arXiv:2301.11325.
https://arxiv.org/abs/2301.11325

[16] Huang, Q., Jansen, A., Lee, J., Ganti, R., Li, J. Y., & Ellis, D. P. (2022). MuLan: A joint embedding of music audio and natural language. arXiv preprint arXiv:2208.12415.
https://arxiv.org/abs/2208.12415

Figure 7: MuLan