{"id":1812,"date":"2022-04-09T13:57:26","date_gmt":"2022-04-09T05:57:26","guid":{"rendered":"https:\/\/tongwing.woon.sg\/blog\/?p=1812"},"modified":"2022-04-09T13:57:26","modified_gmt":"2022-04-09T05:57:26","slug":"dall%c2%b7e-2","status":"publish","type":"post","link":"https:\/\/tongwing.woon.sg\/blog\/dall%c2%b7e-2\/","title":{"rendered":"DALL\u00b7E 2"},"content":{"rendered":"<p>Another ground-breaking work from OpenAI.<\/p>\n<p>We are all familiar with AI models that do image analysis and output text descriptions or labels. For instance,<img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-large wp-image-1813\" src=\"https:\/\/tongwing.woon.sg\/blog\/wp-content\/uploads\/2022\/04\/maxresdefault-1024x576-1.jpg\" alt=\"\" width=\"580\" height=\"326\" \/><\/p>\n<p>Dall-E and its successor, <a href=\"https:\/\/openai.com\/dall-e-2\/\">Dall-E 2<\/a>, do the reverse: they <em>produce<\/em> an image from a text description. There&#8217;s a degree of randomization, so the same prompt can produce different outputs.<\/p>\n<p>Here&#8217;s an example generated from &#8220;An astronaut riding a horse in the style of Andy Warhol&#8221;.<img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-1815\" src=\"https:\/\/tongwing.woon.sg\/blog\/wp-content\/uploads\/2022\/04\/0-1.jpg\" alt=\"\" width=\"1024\" height=\"1024\" \/><\/p>\n<p>Someone used Dall-E 2 to <a href=\"https:\/\/twitter.com\/nickcammarata\/status\/1511861061988892675\">generate pictures from Twitter bios<\/a>, and the results are jaw-dropping.<\/p>\n<p>&#8220;happy sisyphus&#8221;<br \/>\n<img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-medium wp-image-1816\" src=\"https:\/\/tongwing.woon.sg\/blog\/wp-content\/uploads\/2022\/04\/FPs2_7xXoAI-xCp-300x300-1.jpg\" alt=\"\" width=\"300\" height=\"300\" \/><\/p>\n<p>&#8220;
bookbear&#8221;<br \/>\n<img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-medium wp-image-1817\" src=\"https:\/\/tongwing.woon.sg\/blog\/wp-content\/uploads\/2022\/04\/FPs2RYsXsAUeQbw-300x300-1.jpg\" alt=\"\" width=\"300\" height=\"300\" \/><\/p>\n<p>&#8220;machine learning researchoor | technology brother | prolific Twitter shitposter&#8221;<br \/>\n<img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-medium wp-image-1818\" src=\"https:\/\/tongwing.woon.sg\/blog\/wp-content\/uploads\/2022\/04\/FPs8bywXIAs9DlU-300x300-1.jpg\" alt=\"\" width=\"300\" height=\"300\" \/><\/p>\n<p>It&#8217;s currently in private preview, but it likely won&#8217;t be long before OpenAI offers it commercially.<\/p>\n<blockquote><p>DALL\u00b7E 2 is a new AI system that can create realistic images and art from a description in natural language.<\/p><\/blockquote>\n<p>Source: <em><a href=\"https:\/\/openai.com\/dall-e-2\/\">DALL\u00b7E 2<\/a><\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Another ground-breaking work from OpenAI. We are all familiar with AI models that do image analysis and output text descriptions or labels. For instance, Dall-E and its successor, Dall-E 2, do the reverse: they produce an image from a text description. 
There&#8217;s a degree of randomization, so it can produce different outputs [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[34],"tags":[],"_links":{"self":[{"href":"https:\/\/tongwing.woon.sg\/blog\/wp-json\/wp\/v2\/posts\/1812"}],"collection":[{"href":"https:\/\/tongwing.woon.sg\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/tongwing.woon.sg\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/tongwing.woon.sg\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/tongwing.woon.sg\/blog\/wp-json\/wp\/v2\/comments?post=1812"}],"version-history":[{"count":2,"href":"https:\/\/tongwing.woon.sg\/blog\/wp-json\/wp\/v2\/posts\/1812\/revisions"}],"predecessor-version":[{"id":1825,"href":"https:\/\/tongwing.woon.sg\/blog\/wp-json\/wp\/v2\/posts\/1812\/revisions\/1825"}],"wp:attachment":[{"href":"https:\/\/tongwing.woon.sg\/blog\/wp-json\/wp\/v2\/media?parent=1812"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/tongwing.woon.sg\/blog\/wp-json\/wp\/v2\/categories?post=1812"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/tongwing.woon.sg\/blog\/wp-json\/wp\/v2\/tags?post=1812"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}