🚀 CLIP Fine-Tune Registers Gated

Welcome to the GitHub repository for CLIP-fine-tune-registers-gated, where CLIP's Vision Transformer is extended with register tokens, gated MLPs, and roughly 20 million additional parameters. The payoff: a tiny modality gap between image and text embeddings!

Overview

This repository explores fine-tuning techniques for Vision Transformers. By adding register tokens and gated MLPs, and by experimenting with activation functions such as GELU and ReLU, we aim to push the boundaries of what is possible in transformer-based vision tasks and text-to-image generation.
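To make the two ingredients above concrete, here is a minimal NumPy sketch of a GEGLU-style gated MLP and of register tokens appended to a patch sequence. All shapes, function names, and the tanh GELU approximation are illustrative assumptions, not this repository's actual implementation:

```python
import numpy as np

def gelu(x):
    # tanh approximation of the GELU activation
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def gated_mlp(x, w_gate, w_value, w_out):
    """GEGLU-style gated MLP: GELU(x @ w_gate) gates (x @ w_value)."""
    return (gelu(x @ w_gate) * (x @ w_value)) @ w_out

def add_registers(patch_tokens, registers):
    """Append learnable register tokens to the ViT patch sequence."""
    return np.concatenate([patch_tokens, registers], axis=0)

rng = np.random.default_rng(0)
d, hidden, n_patches, n_reg = 8, 16, 4, 2
tokens = add_registers(rng.normal(size=(n_patches, d)),
                       rng.normal(size=(n_reg, d)))   # (6, 8) token sequence
out = gated_mlp(tokens,
                rng.normal(size=(d, hidden)),
                rng.normal(size=(d, hidden)),
                rng.normal(size=(hidden, d)))
print(out.shape)  # (6, 8): registers ride along through the block
```

The extra register tokens soak up global computation during attention and are simply discarded at readout; the gated MLP doubles the first projection so one branch can modulate the other.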

Repository Topics

  • clip
  • comfyui
  • fine-tune
  • finetune
  • flux1
  • gated
  • gelu
  • register-tokens
  • registers
  • relu
  • text-to-image
  • transformers
  • vision

Grab Your Copy

Ready to dive into the world of Vision Transformers with registers and gated MLPs? Download the necessary files from the Releases section, then run them to start exploring the advancements in this repository!

Explore Further

If you want to delve deeper into the content or discover more about the project, make sure to check the Releases section for additional updates and resources.

Download: https://github.com/kastalimohammed1965/CLIP-fine-tune-registers-gated/releases/tag/v1.2

What's Inside

Here's a sneak peek of what you can expect when you explore this repository:

  • Advanced Transformer Architectures: Get to know the intricacies of Vision Transformers and how they are enhanced with Registers and Gated MLPs.
  • Fine-Tuning Techniques: Dive into the world of fine-tuning and discover the impact of modality gap reduction through these methods.
  • Text-to-Image Generation: Witness the magic of transformer-based models in generating images from textual inputs.
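One common way to quantify the modality gap mentioned above is the distance between the centroids of L2-normalized image and text embeddings. The sketch below uses toy random embeddings; the function name and choice of metric are illustrative assumptions, not code from this repository:

```python
import numpy as np

def modality_gap(img_emb, txt_emb):
    """Euclidean distance between the centroids of L2-normalized
    image and text embedding clusters."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    return np.linalg.norm(img.mean(axis=0) - txt.mean(axis=0))

rng = np.random.default_rng(0)
img = rng.normal(size=(100, 32))            # toy "image" embeddings
txt = rng.normal(size=(100, 32)) + 2.0      # "text" cluster shifted away
gap_shifted = modality_gap(img, txt)
gap_same = modality_gap(img, rng.normal(size=(100, 32)))
print(gap_shifted > gap_same)  # the shifted cluster shows a larger gap
```

A fine-tune that "closes" the gap would drive this centroid distance toward zero while keeping the per-pair image–text alignment intact.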

Get Involved

If you're excited about the possibilities that Registers and Gated MLPs bring to Vision Transformers, you can get involved in the following ways:

  • Contribute: Feel free to contribute to the repository by opening issues or submitting pull requests to enhance the project.
  • Spread the Word: Share this repository with your network to spark discussions and collaborations in the field of vision tasks with transformers.
  • Feedback: Share your feedback on the project and let us know how these techniques have influenced your understanding of Vision Transformers.

Stay Connected

To stay updated with the latest developments and discussions around Vision Transformers, make sure to follow the repository and engage with the community through issues and discussions.

Let's embark on this exciting journey of exploring Vision Transformers with Registers and Gated MLPs - the future of vision tasks awaits!

