While many developers are figuring out how generative AI might be used to produce complete 3D objects from scratch, Adobe is already using its Firefly AI model to streamline existing 3D workflows. At the Game Developers Conference on Monday, Adobe debuted two new integrations for its Substance 3D design software suite that allow 3D artists to quickly produce creative assets for their projects using text descriptions.
The first is a "Text to Texture" feature for Substance 3D Sampler that Adobe says can generate "photorealistic or stylized textures" from prompt descriptions, such as scaled skin or woven fabric. These textures can then be applied directly to 3D models, sparing designers from having to find appropriate reference materials.
The second feature is a new "Generative Background" tool for Substance 3D Stager, which lets designers use text prompts to generate background images for objects they're composing into 3D scenes. The clever thing here is that both of these features actually rely on 2D imaging technology, just like Adobe's earlier Firefly-powered tools in Photoshop and Illustrator. Firefly isn't generating 3D models or files; instead, Substance takes 2D images produced from text descriptions and applies them in ways that appear 3D.
The new Text to Texture and Generative Background features are available in the beta versions of Substance 3D Sampler 4.4 and Stager 3.0, respectively. Adobe's head of 3D and metaverse, Sébastien Deguy, told The Verge that both features are free during the beta and were trained on Adobe-owned assets, including reference materials produced by the company and licensed Adobe Stock.