Signature Authenticity: Our Method Against Skilled Forgeries
The challenge
Problem description
We are using a public dataset published in:
Wen, J., Fang, B., Tang, Y. Y., Zhang, T. P., & Chen, H. X. (2007, November). Offline signature verification based on the Gabor transform. In 2007 International Conference on Wavelet Analysis and Pattern Recognition (Vol. 3, pp. 1173-1176). IEEE.
https://ieeexplore.ieee.org/abstract/document/4421610
We selected this publication because its dataset contains the most challenging skilled forgeries in the literature.
The collage of signatures at the top is composed only of authentic signatures, while the one in the bottom image is a compilation of the skilled forgeries. Can you distinguish between the two sets?
GETTING READY
Preparing the datasets
We converted the photos of the signatures into square black-and-white (B/W) images to use as input for our neural network, which works with sub-images of 256x256 pixels.
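The exact preprocessing pipeline is not spelled out above; as a minimal sketch, the conversion to a square 256x256 B/W image could look like the following (the threshold value and nearest-neighbour resizing are assumptions, not the authors' stated method):

```python
import numpy as np

def preprocess(gray, size=256, threshold=128):
    """Binarize a grayscale signature photo, pad it to a square,
    and resize to size x size with nearest-neighbour sampling.
    `gray` is a 2-D uint8 array; the threshold of 128 is illustrative."""
    bw = (gray < threshold).astype(np.float32)      # ink = 1, paper = 0
    h, w = bw.shape
    side = max(h, w)
    canvas = np.zeros((side, side), dtype=np.float32)
    top, left = (side - h) // 2, (side - w) // 2
    canvas[top:top + h, left:left + w] = bw         # center on a square canvas
    rows = np.arange(size) * side // size           # nearest-neighbour indices
    cols = np.arange(size) * side // size
    return canvas[np.ix_(rows, cols)]

img = (np.random.rand(300, 500) * 255).astype(np.uint8)  # stand-in photo
out = preprocess(img)
print(out.shape)  # (256, 256)
```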
The datasets are split as follows: 70% of the authentic signatures were randomly selected for training, 10% for validation, and the remaining 20% were reserved for testing.
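The 70/10/20 split can be sketched as a simple random permutation of sample indices (the dataset size here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                               # number of authentic signatures (illustrative)
idx = rng.permutation(n)              # shuffle once, then slice
n_train, n_val = int(0.7 * n), int(0.1 * n)
train = idx[:n_train]
val = idx[n_train:n_train + n_val]
test = idx[n_train + n_val:]
print(len(train), len(val), len(test))  # 70 10 20
```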
Training our neural network
Our neural networks are trained on the images of authentic signatures.
The number of available authentic signatures is finite, so we perform data augmentation: each signature is randomly translated, rotated, and slightly zoomed in or out within its square image. The right column of the figure shows the same signature after different augmentations. This procedure increases the variety of inputs to the network without introducing biases, and results in improved classification accuracy.
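The augmentations described above (random rotation, translation, and zoom) can be sketched with `scipy.ndimage`; the ranges used here (±10°, ±10 px, 0.9–1.1x) are illustrative assumptions, not the authors' stated values:

```python
import numpy as np
from scipy import ndimage

def augment(img, rng):
    """Randomly rotate, translate, and slightly zoom a square B/W signature."""
    out = ndimage.rotate(img, rng.uniform(-10, 10), reshape=False, order=1)
    out = ndimage.shift(out, rng.uniform(-10, 10, size=2), order=1)
    out = ndimage.zoom(out, rng.uniform(0.9, 1.1), order=1)
    side = img.shape[0]
    if out.shape[0] >= side:                 # zoomed in: center-crop back
        s = (out.shape[0] - side) // 2
        out = out[s:s + side, s:s + side]
    else:                                    # zoomed out: pad back
        pad = side - out.shape[0]
        out = np.pad(out, ((pad // 2, pad - pad // 2),) * 2)
    return np.clip(out, 0.0, 1.0)            # keep values in [0, 1]

rng = np.random.default_rng(0)
sig = np.zeros((256, 256)); sig[100:160, 60:200] = 1.0  # stand-in signature
aug = augment(sig, rng)
print(aug.shape)  # (256, 256)
```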
In the figure on the left-hand side of this text, the second column shows the signature after augmentation.
- We trained our model using only the authentic samples. For this task, we used two networks, an encoder and a decoder, which together reconstruct the signatures. After training converged, we saved the trained model and ran it on the test signatures to evaluate its performance.
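The encoder–decoder pair above is a learned neural network; as a minimal stand-in to illustrate the reconstruction idea, here is a linear encoder/decoder obtained in closed form via PCA (the optimal linear pair for reconstruction MSE), with the image size downscaled and the data synthetic. This is a sketch of the principle, not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 64, 8                  # flattened image size (downscaled) and latent dim (assumed)
X = rng.random((200, d))      # stand-in for flattened authentic signatures

mu = X.mean(axis=0)
# PCA: top-k right singular vectors give the best rank-k linear autoencoder.
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
W = Vt[:k]                              # encoder: z = W (x - mu)

def encode(x): return W @ (x - mu)
def decode(z): return W.T @ z + mu      # decoder reconstructs the input

x = X[0]
recon = decode(encode(x))
err = float(np.mean((recon - x) ** 2))  # reconstruction error per pixel
print(round(err, 4))
```

A signature the model reconstructs poorly (high `err`) is one unlike the authentic training set, which is what makes the encoder–decoder useful for forgery detection.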
The solution
Using our trained model to test the signatures
During testing, we compute the probability that each test signature is authentic. These signatures had not been seen by the network during training.
- Technically, the network measures the similarity between the feature maps obtained from each test signature and those extracted from the authentic signatures during training.
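The feature-map comparison can be sketched as a nearest-neighbour cosine similarity between a test signature's feature vector and the authentic training features; the scoring function and the synthetic feature vectors below are hypothetical stand-ins for the method described above:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def authenticity_score(z_test, Z_train):
    """Similarity of a test feature vector to the closest authentic training
    feature vector -- a hypothetical stand-in for the feature-map comparison."""
    return max(cosine(z_test, z) for z in Z_train)

rng = np.random.default_rng(1)
Z_train = rng.normal(size=(50, 16)) + 5.0   # authentic features (synthetic)
genuine = rng.normal(size=16) + 5.0         # near the authentic cluster
forgery = rng.normal(size=16) - 5.0         # far from the authentic cluster
print(authenticity_score(genuine, Z_train) > authenticity_score(forgery, Z_train))  # True
```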
For testing, we used 15 skilled forgeries and 5 authentic signatures, none of which were seen by the model during training. All of these signatures were correctly classified.
- The image shows the ROC curve. The larger the area under the red line (AUROC), the more accurate the classifier; an AUROC of 1 means the curve encloses 100% of the area, i.e. perfect separation.
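AUROC can be computed directly as the probability that a randomly chosen authentic signature scores higher than a randomly chosen forgery; the scores below are synthetic stand-ins (perfectly separated, so the AUROC comes out as 1.0, matching the perfect classification reported above):

```python
import numpy as np

def auroc(scores_pos, scores_neg):
    """AUROC as a rank statistic: P(score of a positive > score of a negative),
    counting ties as one half."""
    pos = np.asarray(scores_pos)[:, None]
    neg = np.asarray(scores_neg)[None, :]
    return float(np.mean((pos > neg) + 0.5 * (pos == neg)))

# 5 authentic and 15 forgeries, as in the test set described above
# (scores are synthetic and perfectly separated)
auth = [0.9, 0.8, 0.95, 0.85, 0.99]
forg = list(np.linspace(0.1, 0.6, 15))
print(auroc(auth, forg))  # 1.0
```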
The results
Our model has correctly classified all 20 signatures (5 authentic and 15 forgeries).
The signature images were created by independent third-party computer scientists, referenced at the top of this page. The signatures are shown clean, with no paper textures or other artifacts that could induce a bias in the classification. Moreover, we applied augmentation to all degrees of freedom: rotation, translation, and zoom.
The positive results with our method are consistent across datasets.
If you need to authenticate a signature, please contact us through our form or email us at [email protected] . We are currently offering the first verification for free.
[table id=3 /]
