We wanted to round off 2017 by celebrating some fantastic advances in the Handwritten Text Recognition (HTR) and Layout Analysis of historical documents.
In the field of computer science, official competitions give researchers the chance to refine new technologies and ensure that the best techniques rise to the fore. In fact, the READ project has its own platform for research competitions (ScriptNet), where computer scientists can participate in or organise competitions.
In the past few months, READ project partners from CITlab at the Universität Rostock and the PRHLT Research Centre at the Universitat Politècnica de València have generated impressive results worthy of competition prizes and conference awards.
The International Conference on Document Analysis and Recognition (ICDAR), which this year took place in Tokyo, is one of the biggest conferences in the field and was the site of two significant successes for the READ project.
Joan Puigcerver (PRHLT Research Centre, Universitat Politècnica de València) won the conference award for ‘Best Student Paper’, which was entitled ‘Are Multidimensional Recurrent Layers Really Necessary for Handwritten Text Recognition?’. Multidimensional Long Short-Term Memory (MDLSTM) units have been widely used for HTR in recent years: they are a powerful form of machine learning capable of processing images of any size. However, these units are much slower than other architectures and require a large amount of memory. The paper argued that MDLSTM units may not be necessary for HTR after all, and proposed a cheaper architecture, based on convolutional and one-dimensional recurrent layers, which is able to outperform the state-of-the-art MDLSTM model and significantly reduce the amount of time needed to train a model to read and process a set of handwritten documents.
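To give an intuition for why one-dimensional recurrent layers are so much cheaper, here is a small illustrative sketch (not the paper's own analysis; the image size and number of pooling layers are assumptions chosen for illustration). In a 2D-recurrent MDLSTM layer, every position depends on its left and upper neighbours, so only cells on the same anti-diagonal can be computed in parallel. In a convolution-plus-1D-recurrence design, the convolutional stack is fully parallel and only the recurrence over the (pooled) image width is sequential:

```python
def mdlstm_sequential_depth(height, width):
    # A 2D recurrence can only parallelise within anti-diagonals of the
    # feature map, so the sequential dependency chain over an
    # height x width image is height + width - 1 steps long.
    return height + width - 1

def conv_blstm_sequential_depth(width, num_poolings=3):
    # With convolutions (fully parallel) followed by a 1D recurrence over
    # the width, only the recurrence is sequential; each assumed 2x
    # pooling layer halves the width it has to traverse.
    return width // (2 ** num_poolings)

# Hypothetical text-line image of 64 x 1024 pixels:
print(mdlstm_sequential_depth(64, 1024))   # sequential steps for MDLSTM
print(conv_blstm_sequential_depth(1024))   # sequential steps for conv + 1D LSTM
```

For this hypothetical line image, the 2D recurrence has a sequential chain of 1087 steps, against 128 for the pooled 1D recurrence, which is one reason the cheaper architecture trains so much faster on modern parallel hardware.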
Another achievement at ICDAR 2017 came from Tobias Grüning (CITlab, Universität Rostock), who won the Competition on Layout Analysis for Challenging Medieval Manuscripts. Layout Analysis is an important part of HTR, since the latter technology requires lines of text in an image to be accurately matched with lines of transcribed text. This competition was organised by the Document, Image and Voice Analysis (DIVA) research group at the University of Fribourg. Participants had to analyse the layout and find the text lines in a challenging dataset of medieval manuscripts with complex layouts, including marginal and interlinear additions and corrections. Grüning and his team focused on the detection of lines of text and won two out of three tasks in this competition. Their effective layout analysis technology is now available in our Transkribus platform (choose ‘CITlab advanced’ in the ‘Layout Analysis’ section of the ‘Tools’ tab). As the image below shows, this technology can cope well with the complications common in medieval documents!
Our last achievement to mention came from Tobias Strauss (CITlab, Universität Rostock), who led his team to win a competition on Information Extraction in Historical Handwritten Records. The task was to extract information from handwritten marriage licences, such as names, locations and occupations, and then assign this information to the corresponding person, whether husband, wife or father of the bride. The team worked to extract and match this information from entire lines of text. This work was done with the same functionality that is now integrated in Transkribus as part of our new Keyword Spotting tool. Keyword Spotting is a powerful form of keyword searching where the technology analyses images of writing, rather than searching through transcriptions of these words generated either by humans or computers. This tool could therefore facilitate the searching of huge collections that have not yet been transcribed.
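The idea behind Keyword Spotting can be sketched in a few lines. This is a toy illustration, not the CITlab implementation: we assume a recogniser has already scored each line image with a confidence that a given query word appears in it, and the search then simply ranks lines by that confidence instead of matching against a transcript. All line identifiers and scores below are made up for the example:

```python
def keyword_spot(line_scores, query, threshold=0.8):
    """Return (line_id, confidence) pairs, best match first, for line
    images whose confidence for `query` meets the threshold."""
    hits = [(line_id, scores.get(query, 0.0))
            for line_id, scores in line_scores.items()
            if scores.get(query, 0.0) >= threshold]
    return sorted(hits, key=lambda hit: hit[1], reverse=True)

# Hypothetical per-line confidences produced by an HTR model:
line_scores = {
    "page1_line3": {"Barcelona": 0.93, "sastre": 0.41},
    "page2_line7": {"Barcelona": 0.85},
    "page2_line9": {"sastre": 0.88},
}

print(keyword_spot(line_scores, "Barcelona"))
```

Because the search operates on the model's confidences over images rather than on a fixed transcript, a low-confidence match (here ‘sastre’ at 0.41 on page 1, line 3) is filtered out rather than silently mistranscribed, and the threshold lets users trade precision against recall.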
These accomplishments demonstrate that the READ project is at the cutting-edge in the developing field of HTR. We are proud to make such innovations available in Transkribus, allowing our users to automatically transcribe and search all kinds of handwritten historical documents.