Background
I have COBOL binary files compressed with GZIP that can be up to 140 GB each. Because gzip is not a splittable format, Spark cannot decompress and read them in parallel, which makes processing these files very slow.
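For context, the workaround I am using today is to decompress each file up front so Spark can split the resulting plain file. A minimal sketch of that step (function name and chunk size are my own choices, not part of any library API):

```python
import gzip
import shutil


def decompress_gzip(src_path: str, dst_path: str) -> None:
    """Stream-decompress a .gz file so Spark can later split the plain file.

    Streaming with shutil.copyfileobj keeps memory use roughly constant
    even for archives in the 100+ GB range, since only one chunk is held
    in memory at a time.
    """
    with gzip.open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        shutil.copyfileobj(src, dst, length=1024 * 1024)  # 1 MiB chunks
```

This works, but it doubles the storage footprint and adds a serial pre-processing pass, which is why native handling inside Spark would help.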
Question
Is there a way to point Spark directly at the compressed files and have it handle the decompression and reading, or is support for this planned at some point?