- DALL-E, OpenAI's AI image generator, is now free and available to everyone.
- Users currently generate more than 2 million images with the AI daily.
- Developers say they have improved their filters, using data and customer input, to reject sexual, violent, or political imagery.
- The Washington Post reports that the software can be used to fabricate protest photographs.
Artificial intelligence-created images are already prevalent in online art and image collections. Now that DALL-E, the AI image generator that arguably started the current artificial image craze, is free and available to everyone, expect to see even more creative images, as well as images of dubious origin.
OpenAI, the developer of DALL-E, stated in a blog post on Wednesday that the service already has 1.5 million users, who generate more than 2 million images daily. The company claims it has improved its filters, using data and customer input, to reject images with sexual, violent, or political content. DALL-E does not currently have an API, but the company is reportedly developing one.
Although a signup page is now live, the DALL-E section of the OpenAI website still directed users to a waitlist at the time of publication. In an emailed statement, OpenAI said it used an "iterative deployment approach" to scale DALL-E responsibly, which "has allowed us to find ways they may use it as a powerful creative tool."
New users receive 50 free credits toward image generation in their first month after signing up, followed by 15 free credits each subsequent month.
When OpenAI first announced its image generator in April, people rushed to sign up, with some waiting months for their turn. Though DALL-E (named after the artist Salvador Dalí and styled after Disney Pixar's WALL-E) was the first system to significantly advance AI image technology, other systems have since caught up, at least in terms of popularity. Midjourney has hundreds of thousands of members on its Discord-based platform, and Stability AI, the company behind the AI art generator Stable Diffusion, has reportedly been in discussions to raise millions of dollars on the strength of its more open-ended, controversial system.
Reactions Include Criticisms
Due to both the increasing popularity of AI art and the public backlash against it, OpenAI's announcement arrives at an awkward moment. The Washington Post spoke with several OpenAI product directors while demonstrating how the software could be used to fabricate protest photographs, which would violate the firm's guidelines against producing political imagery. The system also restricts user prompts by triggering content warnings on words like "preteen" and "teenager." The Post further noted that, while the system is supposed to block prompts involving famous people, it still lets users create images of figures like Mark Zuckerberg and Elon Musk.
And the vital question of ownership remains open. A tech executive made headlines after entering and winning the top prize in a local art competition with an AI-generated work. The U.S. Copyright Office has stated that it does not register works not created by human hands, so the question is still unsettled. Last week, an artist claimed she had received the first copyright for a work created using AI.
Of course, controversy has touched all of the best-known image generators. Stable Diffusion has been blamed for enabling the creation of child sexual abuse imagery, though Stability AI founder Emad Mostaque has said the company is developing tools to prevent it. Even the heads of Stability AI and OpenAI have argued over which of their systems is the least controversial.
Last week, OpenAI announced it was lifting the restrictions that had prevented users from uploading real human faces for the AI model to modify. It also claimed to have developed detection technology to keep people from abusing the platform to produce violent or pornographic content. Users are reportedly barred from uploading pictures of people's faces without their permission. OpenAI had previously granted access to its systems to researchers interested in generating artificial human faces.