Description
Is your feature request related to a use case or a problem you are working on? Please describe.
Some user-supplied BMP images use fake alpha: even though they use RGB compression (no bitfields), the otherwise-unused fourth byte of each 32 bpp pixel actually stores alpha values.
Describe the solution you'd like
Use a heuristic to determine if a BMP is using fake alpha. In particular, if any of the unused bits are non-zero for any pixel when using 32 bpp and a compression method of RGB (no bitfields), then assume it is used as alpha.
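As a rough illustration of the heuristic (class and method names here are hypothetical, not actual ImageIO internals), assuming pixels are packed as `0xAARRGGBB` ints:

```java
// Hypothetical sketch of the proposed heuristic: scan the decoded 32 bpp
// BI_RGB pixel data and treat the image as using "fake alpha" if any
// pixel's otherwise-unused high byte is non-zero.
public final class FakeAlphaHeuristic {
    /** Returns true if any pixel has non-zero bits in the unused byte. */
    static boolean usesFakeAlpha(int[] pixels) {
        for (int pixel : pixels) {
            if ((pixel & 0xFF000000) != 0) {
                return true; // unused bits are non-zero: assume alpha
            }
        }
        return false; // all unused bits zero: treat as opaque RGB
    }

    public static void main(String[] args) {
        int[] opaque = {0x00FF0000, 0x0000FF00};    // unused bits all zero
        int[] fake = {0x80FF0000, 0x0000FF00};      // one pixel sets them
        System.out.println(usesFakeAlpha(opaque));  // prints "false"
        System.out.println(usesFakeAlpha(fake));    // prints "true"
    }
}
```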
Describe alternatives you've considered
Another alternative would be to unconditionally assume that if there are 32 bpp it is using fake alpha. But that isn't necessarily a good assumption, and I don't even know if it is more common for BMPs with 32 bpp to use fake alpha or not.
**Possible implementations**
This is a little tricky because you can't know if the image is using an alpha channel until you have read the pixels.
I can think of a few ways to do this:
- Have `getRawImageType` do a pass over the pixels if the bitCount is 32 and `hasMasks` is false, to determine whether any unused bits are used.
- Have `getRawImageType` return `TYPE_INT_ARGB` if the bitCount is 32 and `hasMasks` is false. Then, in `read()`, keep track of whether we have encountered any pixels with a non-zero alpha. If all pixels have an alpha of zero, change them all to have an alpha of 255. Possibly, if we encounter a pixel with an alpha of zero but non-zero RGB, convert all previous pixels to be opaque and switch into a mode where we set all future pixels to be opaque. (Or maybe at the end we just create a new `BufferedImage` with a different `ColorModel` that ignores the alpha?)
- Similar to 2, but reversed: start by assuming the extra bits aren't used for alpha and set the alpha to fully opaque. If we find a non-zero value in the extra bits, change all previous pixels to have an alpha of 0 and switch into a mode where we assume the unused bits contain the alpha value.
Additional context
See `q/rgb32fakealpha.bmp` in the BMP Suite.