Overview of Project Steps
A step-by-step guide on how to make progress in your project
Below we give a high-level overview of the different steps you need to take to get your project up and running. We start with how to get your project from zero to production and end with how to maintain a successful project.
Building a new extraction model from scratch
Process for training the first model from scratch.
- To start, create a new project in the Overview
- Define the document types and entities you need in Document types & Entities
- Create annotation guidelines, taking Guidelines into account, so that documents are annotated correctly
- Upload documents in Training (a hypothetical scripted-upload sketch follows this list). We recommend uploading at least 500 documents from the start. You don't need to annotate all of them immediately; Duco Adaptive IDP will automatically select which ones are useful to annotate and add them to the suggested annotation tasks.
- Annotate your initial training data from scratch; best practices are explained in The data annotation process. It is enough to annotate only about 10-30 documents (for example, one per layout) before triggering the first training.
- Update the annotation guidelines based on your findings; they should not leave room for interpretation.
- Create a review task for training data to make sure it is correct (see Tasks)
- Train a model for the first time as described in Model management
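Uploading is normally done through the web interface, but an initial batch of several hundred documents is easier to script. The snippet below is a purely illustrative sketch: the base URL, endpoint path, project ID, and token variable are hypothetical placeholders, not Duco Adaptive IDP's documented API, so check the product's own API documentation (if available) for the real calls.

```python
# Illustrative only: the base URL, endpoint path, and auth header are hypothetical
# placeholders, not a documented Duco Adaptive IDP API.
import os

import requests

API_BASE = "https://your-instance.example.com/api"  # hypothetical base URL
API_TOKEN = os.environ["IDP_API_TOKEN"]              # hypothetical environment variable
PROJECT_ID = "my-extraction-project"                 # hypothetical project identifier


def upload_folder(folder: str) -> None:
    """Upload every PDF in a folder as training documents (sketch)."""
    headers = {"Authorization": f"Bearer {API_TOKEN}"}
    for name in sorted(os.listdir(folder)):
        if not name.lower().endswith(".pdf"):
            continue
        with open(os.path.join(folder, name), "rb") as f:
            resp = requests.post(
                f"{API_BASE}/projects/{PROJECT_ID}/documents",  # hypothetical endpoint
                headers=headers,
                files={"file": (name, f, "application/pdf")},
                data={"dataset": "training"},                    # hypothetical field
            )
        resp.raise_for_status()
        print(f"Uploaded {name}")


if __name__ == "__main__":
    upload_folder("./initial_batch")  # e.g. the first 500 documents
```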
After your first model training, you can use the suggested tasks in the Tasks module, where Duco Adaptive IDP uses automatic mis-annotation detection and active learning to further improve your model. Active learning selects the documents that add the most value to the model, so you don't waste time annotating documents that are already well supported.
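To make the active-learning idea concrete, here is a minimal, generic sketch of least-confidence sampling. It is not Duco Adaptive IDP's actual selection logic; the Prediction structure, the per-entity confidence scores, and the batch size of 50 are illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class Prediction:
    """One predicted entity with the model's confidence (illustrative structure)."""
    entity: str
    value: str
    confidence: float  # between 0.0 and 1.0


def document_uncertainty(predictions: list[Prediction]) -> float:
    """Least-confidence score: 1 minus the lowest confidence among a document's predictions."""
    if not predictions:
        return 1.0  # no predictions at all: maximally uncertain
    return 1.0 - min(p.confidence for p in predictions)


def suggest_annotation_batch(unlabeled: dict[str, list[Prediction]], batch_size: int = 50) -> list[str]:
    """Return the IDs of the documents the current model is least sure about."""
    ranked = sorted(unlabeled, key=lambda doc_id: document_uncertainty(unlabeled[doc_id]), reverse=True)
    return ranked[:batch_size]


# Example with two hypothetical documents: the uncertain one is suggested first.
pool = {
    "doc_001": [Prediction("total_amount", "1,250.00", 0.55)],
    "doc_002": [Prediction("total_amount", "980.00", 0.97)],
}
print(suggest_annotation_batch(pool, batch_size=1))  # -> ['doc_001']
```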
Process for iteratively improving an existing model until it is accurate enough
- Create a suggested review task for training data (see Tasks)
- Create a suggested annotation task for training data (see Tasks). We recommend retraining the model after you have added about 50 new documents; that way, the model recalculates which documents are optimal to add next.
- Train the model again
- If the accuracy is not OK, go back to the first step and start another iteration, correcting old annotations and adding new documents (the full loop is sketched after this list).
- Deploy the model if the accuracy is fine
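Written out as a loop, the iteration process looks roughly like the sketch below. Every function in it is a simulated stand-in for a manual step performed in the Duco Adaptive IDP interface (annotation, training, evaluation, deployment); the target accuracy and the simulated accuracy curve are made-up values for illustration only.

```python
import random

TARGET_ACCURACY = 0.95  # decide up front what "accurate enough" means for your use case
BATCH_SIZE = 50         # retrain after roughly 50 newly annotated documents


# Simulated stand-ins for the manual UI steps, so the loop runs end to end.
def annotate_suggested_batch(n: int) -> None:
    print(f"Annotating {n} suggested documents...")


def train_and_evaluate(iteration: int) -> float:
    # Made-up accuracy curve: each iteration tends to improve the model.
    return min(0.99, 0.70 + 0.06 * iteration + random.uniform(-0.02, 0.02))


def deploy() -> None:
    print("Deploying model to production.")


iteration = 0
while True:
    iteration += 1
    annotate_suggested_batch(BATCH_SIZE)
    accuracy = train_and_evaluate(iteration)
    print(f"Iteration {iteration}: accuracy = {accuracy:.2f}")
    if accuracy >= TARGET_ACCURACY:
        deploy()
        break
    # Accuracy not yet OK: correct old annotations, add new documents, and iterate again.
```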
An example of how accuracy evolves with each project step
Accuracy evolution on an unstructured document type with no recurring layouts in two languages.
Improving the model in production using human-in-the-loop corrections
To make sure your automation rate stays high and improves over time, it's important to maintain the models you have trained by making them learn from corrections.
A typical production process looks like this:
- You upload new documents in the production pipeline. If they are fully automatically processed, typically no action is needed.
- For documents that could not be processed automatically, go to the Human Validation section and validate the predictions to complete processing.
Documents that required human validation are automatically added as potential training data with the status Input needed. For the model to learn from them, they need to be validated in a review task before they are taken into account for training.
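Which documents end up needing human validation typically comes down to prediction confidence. The sketch below illustrates generic confidence-threshold routing; it is not Duco Adaptive IDP's internal logic, and the threshold value, field structure, and example document are assumptions for illustration.

```python
from dataclasses import dataclass


@dataclass
class ExtractedField:
    """One extracted entity with its prediction confidence (illustrative structure)."""
    entity: str
    value: str
    confidence: float


# Assumption for illustration: a single per-field confidence threshold decides routing.
AUTOMATION_THRESHOLD = 0.90


def route_document(fields: list[ExtractedField]) -> str:
    """Return 'automatic' if every field is confident enough, else 'human_validation'."""
    if fields and all(f.confidence >= AUTOMATION_THRESHOLD for f in fields):
        return "automatic"
    return "human_validation"


# One low-confidence field sends the document to the Human Validation queue; once
# corrected and reviewed, it can be promoted to training data via a review task.
doc = [
    ExtractedField("invoice_number", "INV-2024-001", 0.99),
    ExtractedField("total_amount", "1,250.00", 0.62),
]
print(route_document(doc))  # -> human_validation
```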
If you want to improve your models based on production validations, follow these steps:
- In the Tasks module, create a suggested review task for production data. This creates a task to verify all documents that required human validation so they can be promoted to "golden" training data.
- Verify all annotations, add missing ones (do not forget to label every occurrence of an entity value in the relevant context; a small helper sketch for this check follows the list), and mark the documents as Done. They will be included in the next training.
- After the task has been completed, retrain the model in the Model Management module. Depending on the number of documents and pages, this can take anything from 30 minutes to more than a day.
- After training has completed, check whether the accuracy is acceptable. Since you are only adding the hardest documents, the calculated accuracy might go down, but your production accuracy will go up. You can test a model without deploying it by looking at the newly created suggested tasks, which contain predictions from the most recently trained model.
- Deploy the model to start using it in production.
- New uploads in production will get better predictions because the model has learned from past corrections.
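When verifying annotations in a review task, it is easy to miss an occurrence of an entity value. The small helper below is a generic, hypothetical sketch (not part of Duco Adaptive IDP) that lists every character offset at which a value appears in a page's text, which can help double-check that no occurrence was left unlabeled.

```python
import re


def find_all_occurrences(text: str, value: str) -> list[tuple[int, int]]:
    """Return (start, end) character offsets of every occurrence of an entity value."""
    return [(m.start(), m.end()) for m in re.finditer(re.escape(value), text)]


# Hypothetical page text: the invoice number appears twice and both spans should be labeled.
page_text = "Invoice INV-2024-001 ... Please reference INV-2024-001 when paying."
print(find_all_occurrences(page_text, "INV-2024-001"))  # -> [(8, 20), (42, 54)]
```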