You can measure the accuracy of your model on specific pages from your Training and Validation Sets with the “Compute Accuracy” functionality in the “Tools” tab. To do so, an HTR transcript first needs to be generated. To compare the text versions, you need a “Reference” and a “Hypothesis”.
Reference
As “Reference”, choose a page version that was transcribed correctly (Ground Truth: a manual transcription as close to the original text as possible). To get the most meaningful value, it is best to use pages from a sample set that were not used in training and are therefore new to the model. Using pages from the Validation Set is also an option, although not as ideal. Using pages from the Training Set is not recommended, because it will output lower CER values than the model actually achieves on unseen material.
Hypothesis
As “Hypothesis”, choose the version that was generated automatically with an HTR model and whose quality you would like to assess.
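The comparison behind “Compute Accuracy” boils down to the Character Error Rate (CER): the number of character edits needed to turn the Hypothesis into the Reference, divided by the length of the Reference. The following is a minimal, illustrative sketch of that calculation, not Transkribus code; the function names and example strings are made up for demonstration.

```python
# Illustrative sketch: CER is the edit distance between the Reference
# (Ground Truth) and the Hypothesis (HTR output), normalised by the
# length of the Reference. Example strings below are invented.

def levenshtein(a: str, b: str) -> int:
    """Edit distance (insertions, deletions, substitutions) between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: edit distance divided by the Reference length."""
    return levenshtein(reference, hypothesis) / max(len(reference), 1)

reference = "the quick brown fox"    # Ground Truth transcription
hypothesis = "the quiet brown fox"   # HTR-generated transcription
print(f"CER: {cer(reference, hypothesis):.2%}")  # two substitutions -> about 10.5%
```

A lower CER means the HTR transcript is closer to the Ground Truth, which is why comparing against pages the model has already seen in training understates the real error rate.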