Attention-based Neural Networks for Handwriting Recognition

From CSclasswiki
== Summer 2021 Notes ==
Pretraining run notes:
See https://docs.google.com/document/d/19zcvR3Wa9pom84kB1bupQq5zMGBEhaNENAR0_l070fA/edit?usp=sharing
model_2021-06-18_11_53_47 is the first successful run, using ReLU on the fully connected layers. With 4x image subsampling on the 32x64 basic unit, letters fill little of the space.
model_2021-06-22_16_58_03 implements random pixel sampling to handle images that don't fit in memory.  2x image subsampling makes letters larger but decreases the random sampling fraction.  [SVN 2293]
model_2021-06-24_17_28_20 is a standard-size run; added character-frequency weighting.
6/24 Ford342-05:  standard res 32x64 unit, x2 subsampling, no character weights.  check_2021-06-24_23_20_34_E09.pt
6/24 Ford342-06:  lo-res 16x32 unit, x4 subsampling, no character weights.  check_2021-06-24_23_25_07_E14_lores.pt
6/25 Ford 354:  standard res 32x64 unit, x2 subsampling, linear character weights, argmax pixels
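The random pixel sampling and character-frequency weighting mentioned in the run notes might look roughly like the sketch below. This is an illustration only: the function names (`sample_pixels`, `linear_char_weights`), the sampling fraction, and the reading of "linear character weights" as inverse-frequency loss weights are assumptions, not the project's actual code.

```python
import torch
from collections import Counter

def sample_pixels(image, fraction=0.25, generator=None):
    # Randomly sample a fraction of pixel positions from a 2-D image tensor,
    # so that very large images need not be processed in full.
    # Returns (coords, values): coords is (N, 2) row/col indices, values is (N,).
    h, w = image.shape
    n = max(1, int(h * w * fraction))
    idx = torch.randperm(h * w, generator=generator)[:n]
    rows, cols = idx // w, idx % w
    coords = torch.stack([rows, cols], dim=1)
    return coords, image[rows, cols]

def linear_char_weights(texts, alphabet):
    # Per-character loss weights proportional to inverse corpus frequency,
    # normalized so the mean weight is 1 -- one plausible reading of
    # "linear character weights" / "character frequency weighting".
    counts = Counter(ch for t in texts for ch in t)
    freqs = torch.tensor([counts.get(c, 1) for c in alphabet], dtype=torch.float)
    return freqs.sum() / (len(alphabet) * freqs)

# Example usage: weights plug into a standard classification loss.
img = torch.rand(32, 64)
coords, vals = sample_pixels(img, fraction=0.25)
weights = linear_char_weights(["hello world"], list("abcdefghijklmnopqrstuvwxyz "))
loss_fn = torch.nn.CrossEntropyLoss(weight=weights)
```

Rarer characters receive larger weights, so the loss is not dominated by frequent letters; subsampling trades resolution for the fraction of pixels that can be randomly sampled per image, matching the trade-off noted for the 6/22 run.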

Latest revision as of 13:41, 30 June 2021


Recording of working system: https://smith.zoom.us/rec/share/841_Ne3snhwP3mSduKZu63ctTFzYvdDdCrwsdPvCQWOAFDxka9tsdDTwGGZM3fWw.n5T9sxD24vdlCBzQ Passcode: iZQc4=5s

== Fall 2020 ==

This honors thesis aims to improve handwritten text recognition (HTR) by refining the use of attention mechanisms in sequence-to-sequence and Transformer models.
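For context, the attention mechanism in a sequence-to-sequence HTR decoder typically scores each encoder position against the current decoder state and forms a weighted context vector. Below is a minimal content-based (Bahdanau-style) sketch in PyTorch; it is generic background, not this project's implementation, and all names and dimensions are illustrative.

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    # Content-based attention: score(h, s) = v^T tanh(W_enc h + W_dec s)
    def __init__(self, enc_dim, dec_dim, attn_dim):
        super().__init__()
        self.w_enc = nn.Linear(enc_dim, attn_dim, bias=False)
        self.w_dec = nn.Linear(dec_dim, attn_dim, bias=False)
        self.v = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, enc_out, dec_state):
        # enc_out: (B, T, enc_dim) encoder features; dec_state: (B, dec_dim)
        scores = self.v(torch.tanh(self.w_enc(enc_out)
                                   + self.w_dec(dec_state).unsqueeze(1)))  # (B, T, 1)
        alpha = torch.softmax(scores, dim=1)      # attention weights over T positions
        context = (alpha * enc_out).sum(dim=1)    # (B, enc_dim) weighted context
        return context, alpha.squeeze(-1)
```

At each decoding step, `context` summarizes the image features most relevant to the next output character, and `alpha` can be visualized to see where the model is "looking" along the text line.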

=== Week 1: 09/04 - 09/10 ===


- Install PyTorch and get something running
- Find a good starting point
  - Review more literature (particularly for sequence-to-sequence models)
  - Distill knowledge from currently cited papers in proposal
