The testing speed reported in the paper is 20ms/image. I modified demo.py, ran recognition on the ICDAR 2003 dataset, and printed the average recognition time. The following is the modified code (the red boxes indicate my modifications):

I used a single NVIDIA Tesla V100 GPU for recognition. I ignore the time consumption of reading images, so the recognition time is independent of disk speed. The output is:

Avg time = 693.183ms

which is much larger than 20ms. Is there something wrong with my modification? I would appreciate it if my question could be answered.
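For reference, a minimal sketch of the timing approach described above (the `recognize` function is a hypothetical stand-in for the model's forward pass in demo.py, not the actual code). One common cause of an inflated average is including the first call in the measurement: on a GPU, the first forward pass pays for CUDA context creation and kernel setup, which can add hundreds of milliseconds. Also note that if the model runs on CUDA via PyTorch, kernel launches are asynchronous, so an accurate measurement needs `torch.cuda.synchronize()` before reading the clock; the sketch below omits that for self-containedness.

```python
import time

def recognize(image):
    # Hypothetical placeholder for the recognition forward pass in demo.py;
    # replace with the actual model call.
    time.sleep(0.001)
    return "text"

def average_recognition_time(images, warmup=5):
    """Average per-image recognition time, excluding image loading.

    The first `warmup` calls are executed but not timed, so one-time
    initialization cost does not inflate a small-sample average.
    """
    for img in images[:warmup]:
        recognize(img)                       # warm-up, not timed
    start = time.perf_counter()
    for img in images[warmup:]:
        recognize(img)                       # timed region: recognition only
    elapsed = time.perf_counter() - start
    return elapsed / max(len(images) - warmup, 1)

# Images are preloaded beforehand so disk I/O stays outside the timed loop.
images = [None] * 20
avg = average_recognition_time(images)
print(f"Avg time = {avg * 1000:.3f}ms")
```

If the average drops sharply once warm-up iterations are excluded, the original 693ms figure was likely dominated by first-call initialization rather than steady-state inference.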