Hi
I experience this error with openjpeg version 2.3.0 when decompressing. The images in question were originally created by a bulk scanner; I am talking about millions of images. The previous version, openjpeg 2.1.0, has no problems with these images. I have included reports from both opj_dump and jpylyzer, both of which indicate that the images are fine. The behaviour is:
beva@vax04:~/Bilder$ opj_decompress -i db11217051900674.jp2 -o test.png
[INFO] Start to read j2k main header (111).
[INFO] Main header has been correctly decoded.
[INFO] No decoded area parameters, set the decoded area to the whole image
[ERROR] COD marker already read. No more than one COD marker per tile.
[ERROR] Fail to read the current marker segment (0xff52)
[ERROR] Failed to decode the codestream in the JP2 file
ERROR -> opj_decompress: failed to decode image!
I then downloaded the source (version 2.3.0) and found the part of the code responsible for this behaviour. In src/lib/openjp2/j2k.c, at about line 2660, you will find this snippet:
    /* Only one COD per tile */
    if (l_tcp->cod) {
        opj_event_msg(p_manager, EVT_ERROR,
                      "COD marker already read. No more than one COD marker per tile.\n");
        return OPJ_FALSE;
    }
    l_tcp->cod = 1;
Either commenting out this check (leaving only "l_tcp->cod = 1;" active) or changing "if (l_tcp->cod)" to "if (l_tcp->cod > 1)" and then recompiling lets it decompress my files without any problems.
I am not experienced in C or the JPEG 2000 standard, but it seems the test for multiple CODs may be broken somehow. Could it be that "l_tcp->cod" is not reset between tiles?
If the test turns out to be logically correct, and my files do in fact violate the JPEG 2000 standard, would it be better to emit a warning during decompression instead of aborting? Furthermore, in that case, shouldn't opj_dump report the violation rather than saying the file is fine?
With very best regards
Bent, Oslo, Norway
Anlyzes_of_jpeg2000_image.txt