Snap Research has launched a generative AI model that turns text into images on mobile devices in record time.
According to the company, SnapFusion cuts the runtime from text prompt to generated image on mobile to under two seconds.
"Snap Research achieved this breakthrough by optimizing the network architecture and denoising process, making it incredibly efficient while maintaining image quality," the company said in a statement.
Snap Research further notes that it aims to optimize the network architecture and streamline the denoising process while maintaining image quality.
The model takes a text prompt and returns crisp, clear images within seconds.
“Improved Step distillation and Network Architecture develoPment for the difFUSION is how we came up with the name SnapFusion,” Snapchat said.
The approach draws on two classes of models behind Stable Diffusion-style systems: diffusion models and latent diffusion models.
Diffusion models gradually transform samples from a simple noise distribution into realistic images.
Latent diffusion models reduce inference computation by performing the denoising process in a lower-dimensional latent space rather than in pixel space.
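To make that distinction concrete, the following is a minimal, illustrative Python sketch, not Snap's code: the toy_denoiser stand-in, the tensor shapes, and the eight-step count are assumptions chosen purely to show why iterating a denoising loop over a small latent tensor is far cheaper than iterating over a full-resolution image tensor.

```python
# Illustrative sketch only (not SnapFusion's implementation).
# A toy function stands in for the learned denoising network; shapes and
# step counts are hypothetical values chosen for the comparison.
import numpy as np

LATENT_SHAPE = (4, 64, 64)    # assumed compact latent tensor
PIXEL_SHAPE = (3, 512, 512)   # assumed full-resolution image tensor

def toy_denoiser(x, step, total_steps):
    """Placeholder for the denoising network: nudges x toward less noise each step."""
    return x * (1.0 - 1.0 / (total_steps - step + 1))

def denoise(shape, steps):
    """Iteratively refine pure noise of the given shape over a fixed number of steps."""
    x = np.random.randn(*shape)           # start from a simple noise distribution
    for step in range(steps):
        x = toy_denoiser(x, step, steps)  # each pass removes a little noise
    return x

# Pixel-space diffusion would run every step on the full image tensor;
# latent diffusion runs the loop on a much smaller tensor and decodes once at the end.
latent_sample = denoise(LATENT_SHAPE, steps=8)
print("latent elements processed per step:", int(np.prod(LATENT_SHAPE)))
print("pixel-space elements per step:     ", int(np.prod(PIXEL_SHAPE)))
```

Per the company, SnapFusion's contribution is making this kind of iterative loop fast enough to finish on a phone in under two seconds.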
“We collect an internal dataset with high-resolution images to fine-tune our model for more pleasing visual quality,” Snapchat added.
According to Snapchat, SnapFusion has the potential to supercharge high-quality generative AI experiences on mobile devices in the future.