The GenAI Competition 2025

Place: University in China (to be announced later)
Date: April 26, 2025 (tentative)


Generative Adversarial Networks (GANs)

This model is best for image duplication and synthetic data generation.

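GANs pair two networks trained against each other: a generator that turns random noise into synthetic samples and a discriminator that learns to tell real data from the generator's output. The sketch below is a minimal illustration of that adversarial training step, assuming PyTorch is installed; every layer size and name is a placeholder, not a reference implementation.

# Minimal GAN training step sketch (PyTorch assumed; sizes are illustrative only).
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. 28x28 images, flattened

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # real/fake logit
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    batch = real_batch.size(0)
    noise = torch.randn(batch, latent_dim)
    fake = generator(noise)

    # Discriminator: push real samples toward 1, generated samples toward 0.
    d_loss = bce(discriminator(real_batch), torch.ones(batch, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator predict 1 for generated samples.
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()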

Transformer-based Models

This model is best for text generation and content/code completion. Common subsets of transformer-based models include generative pre-trained transformer (GPT) and bidirectional encoder representations from transformers (BERT) models.

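The core operation shared by GPT-style and BERT-style models is self-attention, in which every token weighs every other token when building its representation. Below is a minimal sketch of scaled dot-product self-attention, assuming PyTorch is installed; dimensions are illustrative, and the causal flag distinguishes GPT-style (left-to-right) from BERT-style (bidirectional) attention.

# Scaled dot-product self-attention, the building block of GPT/BERT-style models.
# A minimal sketch assuming PyTorch; dimensions are illustrative only.
import math
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    def __init__(self, embed_dim=64):
        super().__init__()
        self.q = nn.Linear(embed_dim, embed_dim)
        self.k = nn.Linear(embed_dim, embed_dim)
        self.v = nn.Linear(embed_dim, embed_dim)

    def forward(self, x, causal=True):
        # x: (batch, seq_len, embed_dim)
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
        if causal:  # GPT-style: each token attends only to earlier tokens
            mask = torch.triu(torch.ones_like(scores), diagonal=1).bool()
            scores = scores.masked_fill(mask, float("-inf"))
        return torch.softmax(scores, dim=-1) @ v

tokens = torch.randn(2, 10, 64)       # a dummy batch of embedded tokens
out = SelfAttention()(tokens)         # a BERT-style model would pass causal=False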

Diffusion Models

This model is best for image generation and video/image synthesis.

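Diffusion models learn to reverse a gradual noising process: training adds Gaussian noise to clean data and teaches a network to predict that noise, and generation then removes noise step by step. The sketch below shows the forward noising step and the noise-prediction loss, assuming PyTorch is installed; the schedule and the tiny denoiser are illustrative placeholders.

# Forward noising step and noise-prediction loss used to train diffusion models.
# A minimal sketch assuming PyTorch; schedule and denoiser are illustrative only.
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)           # variance schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative signal retention

def add_noise(x0, t):
    # Sample x_t from q(x_t | x_0) by mixing the clean input with Gaussian noise.
    noise = torch.randn_like(x0)
    a = alphas_bar[t].sqrt().view(-1, 1)
    b = (1 - alphas_bar[t]).sqrt().view(-1, 1)
    return a * x0 + b * noise, noise

denoiser = nn.Sequential(nn.Linear(784 + 1, 256), nn.ReLU(), nn.Linear(256, 784))

def training_loss(x0):
    t = torch.randint(0, T, (x0.size(0),))
    xt, noise = add_noise(x0, t)
    t_feat = (t.float() / T).unsqueeze(1)             # crude timestep conditioning
    pred = denoiser(torch.cat([xt, t_feat], dim=1))   # predict the added noise
    return nn.functional.mse_loss(pred, noise)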

Variational Autoencoders (VAEs)

This model is best for image, audio, and video content creation, especially when synthetic data needs to be photorealistic; it is designed with an encoder-decoder architecture.

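To make that encoder-decoder structure concrete, the sketch below shows a VAE that encodes an input into a mean and variance, samples a latent code with the reparameterization trick, decodes it back, and trains on a reconstruction term plus a KL penalty. PyTorch is assumed, and all layer sizes are illustrative.

# Encoder-decoder VAE sketch with the reparameterization trick.
# PyTorch assumed; layer sizes are illustrative only.
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, data_dim=784, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, latent_dim)
        self.to_logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, data_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
        return self.decoder(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    # Reconstruction term (inputs assumed scaled to [0, 1]) plus KL divergence
    # to the standard normal prior.
    recon_loss = nn.functional.binary_cross_entropy(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kl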

Unimodal Models

This model is set up to accept only one data input format; most generative AI models today are unimodal.


Multimodal Models

This model is designed to accept multiple types of inputs and prompts when generating outputs; for example, GPT-4 can accept both text and images as inputs.

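As a rough illustration of what accepting multiple input types involves, the sketch below projects image features and text tokens into one shared space and processes them as a single sequence. This is not GPT-4's actual architecture; the encoders, dimensions, and names are hypothetical placeholders, assuming PyTorch is installed.

# Illustrative fusion of text and image inputs into one shared representation.
# This is NOT GPT-4's architecture; encoders and dimensions are placeholders.
import torch
import torch.nn as nn

class TinyMultimodalModel(nn.Module):
    def __init__(self, vocab=1000, img_dim=2048, hidden=256):
        super().__init__()
        self.text_encoder = nn.Embedding(vocab, hidden)   # stands in for a language encoder
        self.image_proj = nn.Linear(img_dim, hidden)      # stands in for a vision encoder
        self.fuse = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.head = nn.Linear(hidden, vocab)              # predicts the next text token

    def forward(self, token_ids, image_features):
        text = self.text_encoder(token_ids)                   # (batch, seq, hidden)
        image = self.image_proj(image_features).unsqueeze(1)  # (batch, 1, hidden)
        fused = self.fuse(torch.cat([image, text], dim=1))    # joint sequence
        return self.head(fused[:, -1])                        # next-token logits

model = TinyMultimodalModel()
logits = model(torch.randint(0, 1000, (2, 12)), torch.randn(2, 2048))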

Large Language Models

The most popular and well-known type of generative AI model right now, large language models (LLMs) are designed to generate and complete written content at scale.

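For a concrete example of text completion with a pretrained LLM, the snippet below uses the Hugging Face transformers library (assumed to be installed) with the small GPT-2 model; the model choice and generation settings are illustrative only.

# Text completion with a small pretrained causal language model.
# Assumes the Hugging Face `transformers` library is installed; the model name
# and generation settings below are illustrative, not part of the competition rules.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
completion = generator(
    "Generative AI models are designed to",
    max_new_tokens=40,   # length of the completion
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.8,     # lower = more deterministic
)
print(completion[0]["generated_text"])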

Neural Radiance Fields (NeRFs)

This model is an emerging neural network technology that can be used to generate 3D imagery based on 2D image inputs.
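
At its core, a NeRF is a small MLP that maps a 3D position and viewing direction to a color and a volume density, and the densities and colors sampled along each camera ray are then composited into a pixel. The sketch below shows that MLP and a per-ray compositing step, assuming PyTorch is installed; positional encoding is omitted and all sizes are simplified placeholders.

# Core of a NeRF: an MLP mapping a 3D point (plus view direction) to color and
# density, composited along a camera ray. A minimal sketch assuming PyTorch;
# positional encoding and all sizes are simplified placeholders.
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + 3, hidden), nn.ReLU(),   # xyz position + view direction
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                  # RGB + density
        )

    def forward(self, points, dirs):
        out = self.mlp(torch.cat([points, dirs], dim=-1))
        rgb = torch.sigmoid(out[..., :3])
        sigma = torch.relu(out[..., 3])
        return rgb, sigma

def render_ray(rgb, sigma, deltas):
    # Alpha-composite sampled colors along one ray (classic volume rendering).
    alpha = 1.0 - torch.exp(-sigma * deltas)                       # opacity per sample
    trans = torch.cumprod(torch.cat([torch.ones(1), 1 - alpha[:-1]]), dim=0)
    weights = alpha * trans                                        # contribution per sample
    return (weights.unsqueeze(-1) * rgb).sum(dim=0)                # final pixel color

pts, view = torch.randn(64, 3), torch.randn(64, 3)   # samples along one camera ray
rgb, sigma = TinyNeRF()(pts, view)
pixel = render_ray(rgb, sigma, torch.full((64,), 0.05))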