Place: University in China (to be announced later)
Date: April 26, 2025 (tentative)
Generative Adversarial Networks (GANs)
This model is best for image duplication and synthetic data generation.
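A GAN pairs a generator network against a discriminator network trained with opposing objectives. The sketch below is a minimal NumPy illustration, assuming made-up linear "networks" with random, untrained weights; it only shows how the two adversarial losses are computed, not a full training loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy generator: maps random noise z to "fake" samples via one linear layer.
W_g = rng.normal(size=(8, 4))
def generator(z):
    return z @ W_g

# Toy discriminator: scores each sample with a probability that it is real.
W_d = rng.normal(size=(4, 1))
def discriminator(x):
    return sigmoid(x @ W_d)

real = rng.normal(size=(16, 4))              # stand-in for real data
fake = generator(rng.normal(size=(16, 8)))   # synthetic samples

# Discriminator loss: push real scores toward 1 and fake scores toward 0.
d_loss = -np.mean(np.log(discriminator(real) + 1e-8)
                  + np.log(1.0 - discriminator(fake) + 1e-8))

# Generator loss: try to fool the discriminator into scoring fakes as real.
g_loss = -np.mean(np.log(discriminator(fake) + 1e-8))

print(f"d_loss={d_loss:.3f}  g_loss={g_loss:.3f}")
```

In practice the two losses are minimized alternately with gradient descent, which is what drives the generator toward producing realistic synthetic data.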
Transformer-based Models
This model is best for text generation and content/code completion. Common subsets of transformer-based models include generative pre-trained transformer (GPT) and bidirectional encoder representations from transformers (BERT) models.
Diffusion Models
This model is best for image generation and video/image synthesis.
Variational Autoencoders (VAEs)
This model is best for image, audio, and video content creation, especially when synthetic data needs to be photorealistic; VAEs are built on an encoder-decoder architecture.
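The encoder-decoder structure mentioned above can be sketched in a few lines. This is a toy NumPy forward pass with random, untrained weights and assumed layer sizes; it shows the encoder producing a latent Gaussian, the reparameterization trick for sampling, and the decoder reconstructing the input.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 16))   # a single toy input vector

# Encoder: maps the input to the mean and log-variance of a latent Gaussian.
W_mu = rng.normal(size=(16, 4))
W_logvar = rng.normal(size=(16, 4))
mu, logvar = x @ W_mu, x @ W_logvar

# Reparameterization trick: z = mu + sigma * eps keeps the sampling step
# differentiable with respect to the encoder's outputs.
eps = rng.normal(size=mu.shape)
z = mu + np.exp(0.5 * logvar) * eps

# Decoder: maps the latent code back to a reconstruction of the input.
W_dec = rng.normal(size=(4, 16))
x_hat = z @ W_dec

# Training balances reconstruction error against a KL term that pulls the
# latent distribution toward a standard normal prior.
recon = np.mean((x - x_hat) ** 2)
kl = -0.5 * np.mean(1 + logvar - mu**2 - np.exp(logvar))
print(f"recon={recon:.3f}  kl={kl:.3f}")
```

The KL term is what gives the latent space its smooth, sampleable structure, which is why VAEs are useful for generating new content rather than only compressing it.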
Unimodal Models
This type of model is set up to accept only one data input format; most generative AI models today are unimodal.
Multimodal Models
This model is designed to accept multiple types of inputs and prompts when generating outputs; for example, GPT-4 can accept both text and images as inputs.
Large Language Models
Currently the most popular and well-known type of generative AI model, large language models (LLMs) are designed to generate and complete written content at scale.
Neural Radiance Fields (NeRFs)
This model is an emerging neural network technology that can be used to generate 3D imagery based on 2D image inputs.