Handwritten word-image retrieval with synthesized typed queries

Research output: Book chapter › Conference contribution › peer-review

13 Citations (Scopus)

Abstract

We propose a new method for handwritten word-spotting that requires neither prior training nor the collection of handwritten query examples. More precisely, a model is trained "on the fly" from images of the searched word rendered in one or more computer fonts. To reduce the mismatch between the typed-text prototypes and the candidate handwritten images, we make use of (i) local gradient histogram (LGH) features, which have been shown to model word shapes robustly, and (ii) semi-continuous hidden Markov models (SC-HMMs), in which the typed-text models are constrained to a "vocabulary" of handwritten shapes, thus learning a link between the two types of data. Experiments show that the proposed method is effective in retrieving handwritten words, and a comparison with alternative methods reveals that both the LGH features and the SC-HMM are crucial contributions. To the best of the authors' knowledge, this is the first work to address this problem in a non-trivial manner.
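As a concrete illustration of the two ingredients named in the abstract, the sketch below (not the authors' implementation) renders a typed-text query image with Pillow and extracts sliding-window local gradient histogram frames with NumPy. The window width, step, cell layout, number of orientation bins, and the use of Pillow's default font are illustrative assumptions; the resulting frame sequence is the kind of observation sequence on which a query model could be trained "on the fly".

```python
# Minimal sketch: render a typed-text query and extract sliding-window
# local gradient histogram (LGH) frames. All parameter values are
# illustrative assumptions, not the paper's settings.

import numpy as np
from PIL import Image, ImageDraw, ImageFont


def render_query(word, height=48, pad=8):
    """Render the searched word in a computer font as a grayscale image."""
    font = ImageFont.load_default()          # any available computer font would do
    dummy = ImageDraw.Draw(Image.new("L", (1, 1), 255))
    w = int(dummy.textlength(word, font=font))
    img = Image.new("L", (w + 2 * pad, height), 255)
    ImageDraw.Draw(img).text((pad, pad), word, font=font, fill=0)
    return np.asarray(img, dtype=np.float32) / 255.0


def lgh_frames(img, win=8, step=4, cells=4, bins=8):
    """Slide a window across the word image; for each window, build one
    gradient-orientation histogram per vertical cell and concatenate them
    into a frame vector (one observation for a left-to-right HMM)."""
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned gradient orientation
    h, w = img.shape
    cell_h = h // cells
    frames = []
    for x0 in range(0, w - win + 1, step):
        cols = slice(x0, x0 + win)
        feat = []
        for c in range(cells):
            rows = slice(c * cell_h, (c + 1) * cell_h)
            hist, _ = np.histogram(ang[rows, cols], bins=bins,
                                   range=(0.0, np.pi),
                                   weights=mag[rows, cols])
            feat.append(hist)
        v = np.concatenate(feat)
        frames.append(v / (np.linalg.norm(v) + 1e-8))
    return np.stack(frames)                   # shape: (T, cells * bins)


if __name__ == "__main__":
    obs = lgh_frames(render_query("retrieval"))
    print(obs.shape)   # e.g. (T, 32): frame sequence for training the query model
```

In the SC-HMM variant described by the abstract, the emission densities of such a query model would be tied to a shared pool of Gaussians estimated from handwritten data, so that the typed-text model is expressed in terms of handwritten shape prototypes rather than fitting its own densities.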

Original language: English
Title of host publication: ICDAR2009 - 10th International Conference on Document Analysis and Recognition
Pages: 351-355
Number of pages: 5
DOIs
Publication status: Published - 2009
Externally published: Yes
Event: ICDAR2009 - 10th International Conference on Document Analysis and Recognition - Barcelona, Spain
Duration: 26 Jul 2009 - 29 Jul 2009

Publication series

Name: Proceedings of the International Conference on Document Analysis and Recognition, ICDAR
ISSN (Print): 1520-5363

Conference

Conference: ICDAR2009 - 10th International Conference on Document Analysis and Recognition
Country/Territory: Spain
City: Barcelona
Period: 26/07/09 - 29/07/09
