Finetuning-whisper-small-odia-language

The fine-tuned model is available on Hugging Face.

More description:

This model is a fine-tuned version of Ranjit/Whisper_Small_Odia_10k_steps on the `or` (Odia) subset of the mozilla-foundation/common_voice_11_0 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4827
  • Wer: 23.4979
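The WER (word error rate) reported above is the word-level edit distance between the model's transcription and the reference, divided by the number of reference words, expressed as a percentage. A minimal self-contained sketch of the metric (the repo itself presumably uses a library such as `evaluate`/`jiwer`, so this is illustrative only):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length, in percent."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words and first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return 100.0 * dp[-1][-1] / len(ref)

# One substituted word out of a four-word reference -> 25.0
print(wer("this is a test", "this is a toast"))
```

A WER of 23.4979 therefore means roughly one word in four needs correction relative to the reference transcript.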

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 200
  • training_steps: 5000
  • mixed_precision_training: Native AMP
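With a linear scheduler, the learning rate ramps up over the warmup steps and then decays linearly toward zero over the remaining steps. A sketch of the resulting schedule using the values listed above (this mirrors the usual behavior of a linear warmup/decay scheduler; the exact implementation in the training script may differ):

```python
def lr_at_step(step: int,
               base_lr: float = 1e-5,
               warmup_steps: int = 200,
               training_steps: int = 5000) -> float:
    """Linear warmup to base_lr over warmup_steps, then linear decay to 0."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (training_steps - step) / (training_steps - warmup_steps))

print(lr_at_step(100))   # halfway through warmup: 5e-06
print(lr_at_step(200))   # peak learning rate: 1e-05
print(lr_at_step(5000))  # end of training: 0.0
```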

Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|---------------|-------|------|-----------------|---------|
| 0.0018        | 50.0  | 1000 | 0.3315          | 24.0903 |
| 0.0           | 100.0 | 2000 | 0.4098          | 23.7236 |
| 0.0           | 150.0 | 3000 | 0.4827          | 23.4979 |
| 0.0           | 200.0 | 4000 | 0.4914          | 23.8928 |
| 0.0           | 250.0 | 5000 | 0.4953          | 23.7800 |
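Note that the headline result (Loss 0.4827, WER 23.4979) matches the step-3000 row, which suggests the reported checkpoint was selected by lowest WER rather than lowest validation loss. A small sketch over the table data making that comparison explicit:

```python
# (step, validation_loss, wer) rows from the training-results table
results = [
    (1000, 0.3315, 24.0903),
    (2000, 0.4098, 23.7236),
    (3000, 0.4827, 23.4979),
    (4000, 0.4914, 23.8928),
    (5000, 0.4953, 23.7800),
]

best_by_wer = min(results, key=lambda r: r[2])
best_by_loss = min(results, key=lambda r: r[1])
print(best_by_wer)   # (3000, 0.4827, 23.4979) -- lowest WER, the reported result
print(best_by_loss)  # (1000, 0.3315, 24.0903) -- lowest validation loss
```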

About

A speech-to-text transcriptor: the Whisper ASR model fine-tuned for the Odia language.
