aunraza19/Image-Captioning
Image Captioning using CNN and LSTM

Overview

This project implements an image captioning model that uses a CNN (DenseNet201) for image feature extraction and an LSTM for caption generation. It is trained on the Flickr8k dataset, and performance is evaluated with BLEU scores.
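Since the model is evaluated with BLEU scores, here is a rough illustration of how unigram BLEU (BLEU-1) works. This is a simplified, single-reference sketch for intuition, not the project's evaluation code, which likely uses a library implementation such as NLTK's `corpus_bleu`:

```python
import math
from collections import Counter

def bleu1(reference, candidate):
    """Unigram BLEU with brevity penalty for one reference/candidate pair.

    Both arguments are lists of tokens. Illustrative only; real evaluation
    usually aggregates over a corpus and multiple references.
    """
    ref_counts = Counter(reference)
    cand_counts = Counter(candidate)
    # Clipped unigram matches: a candidate token counts at most as many
    # times as it appears in the reference.
    overlap = sum(min(n, ref_counts[tok]) for tok, n in cand_counts.items())
    precision = overlap / max(len(candidate), 1)
    # Brevity penalty discourages captions shorter than the reference.
    if len(candidate) > len(reference):
        bp = 1.0
    else:
        bp = math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * precision

ref = "a dog running through the water".split()
cand = "a dog running through water".split()
score = bleu1(ref, cand)  # penalized only for the missing "the"
```

An exact match scores 1.0; dropping one word keeps unigram precision at 1.0 but triggers the brevity penalty, so the score dips below 1.0.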

Features

Uses DenseNet201 for feature extraction.

LSTM-based decoder for caption generation.

Implements preprocessing, tokenization, and batch-wise data generation.

Supports evaluation and inference.
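As a sketch of the preprocessing and tokenization steps (helper names are hypothetical; the project's exact pipeline may differ), captions in this kind of model are typically lowercased, stripped of punctuation, wrapped in start/end tokens, and then mapped to integer ids:

```python
import re
from collections import Counter

def clean_caption(caption):
    """Lowercase, strip non-letters, and wrap with start/end markers."""
    caption = caption.lower()
    caption = re.sub(r"[^a-z ]", "", caption)           # drop punctuation/digits
    words = [w for w in caption.split() if len(w) > 1]  # drop stray single letters
    return "startseq " + " ".join(words) + " endseq"

def build_vocab(captions, min_count=1):
    """Map each word to an integer id; id 0 is reserved for padding."""
    counts = Counter(w for c in captions for w in c.split())
    words = sorted(w for w, n in counts.items() if n >= min_count)
    return {w: i + 1 for i, w in enumerate(words)}

captions = [clean_caption("A dog runs through the water!")]
vocab = build_vocab(captions)
encoded = [vocab[w] for w in captions[0].split()]
```

During training, each encoded caption is then split into (prefix, next-word) pairs and padded to a fixed length, which is what the batch-wise data generator produces.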

Results

Sample Output: "A dog running through the water."

Known issue: generated captions sometimes repeat phrases (e.g., "blue shirt").

Improvements: Train on larger datasets (Flickr30k/MSCOCO) and implement attention mechanisms.

Enhancements & Future Work

Improve accuracy using attention mechanisms and beam search decoding.

Experiment with transformers for better captioning.
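Beam search, mentioned above as a planned improvement, keeps the k most probable partial captions at each step instead of greedily committing to the single best next word. A minimal sketch with a stand-in bigram scorer (in the real model, `step_log_probs` would be a forward pass through the trained LSTM decoder):

```python
import math

def beam_search(step_log_probs, start, end, beam_width=3, max_len=10):
    """Generic beam search over a next-token distribution.

    `step_log_probs(seq)` returns {token: log_prob} for the next token
    given the partial sequence `seq`.
    """
    beams = [([start], 0.0)]  # (sequence, cumulative log-probability)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq[-1] == end:               # finished captions pass through
                candidates.append((seq, score))
                continue
            for tok, lp in step_log_probs(seq).items():
                candidates.append((seq + [tok], score + lp))
        # Keep only the top `beam_width` hypotheses by total log-probability.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
        if all(seq[-1] == end for seq, _ in beams):
            break
    return beams[0][0]

# Toy stand-in "model": a fixed bigram table, for demonstration only.
table = {
    "startseq": {"a": math.log(0.9), "the": math.log(0.1)},
    "a":        {"dog": math.log(0.8), "cat": math.log(0.2)},
    "the":      {"dog": math.log(0.5), "cat": math.log(0.5)},
    "dog":      {"endseq": math.log(1.0)},
    "cat":      {"endseq": math.log(1.0)},
}
best = beam_search(lambda seq: table[seq[-1]], "startseq", "endseq")
```

With beam_width=1 this reduces to greedy decoding; wider beams trade compute for caption quality and tend to reduce the repetitive-phrase problem noted above.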

About

Image captioning using LSTMs and a CNN on the Flickr8k dataset.
