CLIP From Scratch

This is my from-scratch implementation of the CLIP (Contrastive Language-Image Pre-training) model, which learns a shared embedding space for images and text by training on image-caption pairs with a contrastive objective.
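At the heart of CLIP is a symmetric contrastive loss: matched image-caption pairs should score high cosine similarity, mismatched pairs low. Below is a minimal NumPy sketch of that objective for reference; the actual training code in the notebook may differ in framework and details (e.g. a learnable temperature).

```python
import numpy as np

def clip_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of matched image/text embeddings.

    image_emb, text_emb: arrays of shape (batch, dim); row i of each is a matched pair.
    """
    # L2-normalize so the dot product becomes cosine similarity
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    # Pairwise similarity logits, scaled by temperature
    logits = image_emb @ text_emb.T / temperature
    n = logits.shape[0]
    labels = np.arange(n)  # matched pairs sit on the diagonal

    def cross_entropy(lg):
        # Numerically stable softmax cross-entropy against the diagonal labels
        lg = lg - lg.max(axis=1, keepdims=True)
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(n), labels].mean()

    # Average the image->text and text->image directions
    return (cross_entropy(logits) + cross_entropy(logits.T)) / 2
```

Minimizing this loss pulls each image embedding toward its own caption's embedding and pushes it away from every other caption in the batch.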


Project Structure

  • clip_deployment/
    Contains a Gradio app that lets you upload an image and returns the top 5 matching captions from a pre-stored list of captions.

  • CLIP_training.ipynb
    A Jupyter notebook where the entire training process of the CLIP model is implemented.
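The deployment app's core task is retrieval: embed the uploaded image, then rank a pre-stored list of caption embeddings by cosine similarity. A minimal sketch of that ranking step (the function name and shapes are illustrative, not taken from the repo):

```python
import numpy as np

def top_k_captions(image_emb, caption_embs, captions, k=5):
    """Return the k captions most similar to the image embedding, best first.

    image_emb: array of shape (dim,) for one image.
    caption_embs: array of shape (num_captions, dim), precomputed once.
    captions: list of caption strings, parallel to caption_embs rows.
    """
    # Normalize so dot products are cosine similarities
    image_emb = image_emb / np.linalg.norm(image_emb)
    caption_embs = caption_embs / np.linalg.norm(caption_embs, axis=1, keepdims=True)

    sims = caption_embs @ image_emb
    # Indices of the k highest similarities, in descending order
    order = np.argsort(-sims)[:k]
    return [(captions[i], float(sims[i])) for i in order]
```

In the Gradio app, a function like this would be wrapped as the interface callback: the image encoder produces `image_emb` from the upload, and the returned (caption, score) pairs are displayed to the user.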


How to Use

  1. Training
    Open and run the CLIP_training.ipynb notebook to train the CLIP model on your dataset.

  2. Deployment
    Inside the clip_deployment folder, launch the Gradio app to interact with the trained model by uploading images and getting the most relevant captions.
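The repo does not name the app's entry-point file, so the commands below assume it is something like `app.py` inside `clip_deployment/` (a hypothetical name; adjust to the actual filename in that folder).

```shell
# Install dependencies (assumed: gradio and torch; check the repo for a requirements file)
pip install gradio torch

# Launch the app; "app.py" is a placeholder for the actual entry-point file
cd clip_deployment
python app.py
# Gradio prints a local URL (typically http://127.0.0.1:7860) to open in a browser
```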


Feel free to explore and experiment with this project!
