Assuming the order of the data is unimportant, one way to do this (not faster per se, but at least somewhat parallel) would be to write a script that does the following.
- Open the zip file.
- Get the first file.
- Read the data out of the file, say in lines.
- For each CSV line, write it to the currently selected output zip file.
- Rotate the selected output file (say across five zip files), one line at a time.
- Once an output file reaches a certain size (say 50 GB), create a brand new zip file to take its place.
This isn't any faster than a sequential read of the big file, but it lets you split the data into smaller chunks that can be loaded in parallel while the rest is still being processed.
Like most compressed output, it's not seekable (you can't jump X bytes ahead), so the biggest downside is that if the process aborts for some reason, you'd have to restart the whole thing from scratch.
Python provides support for doing something like this via the zipfile module.
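Here's a minimal sketch of the idea using zipfile. The input file name, member layout, and shard count are assumptions for illustration; it also omits the size-based rollover described above and simply deals lines out round-robin across a fixed number of output archives. Streaming writes into a zip member (`ZipFile.open(name, "w")`) require Python 3.6+.

```python
import zipfile
from itertools import cycle

INPUT_ZIP = "big_data.zip"  # hypothetical input archive
NUM_SHARDS = 5              # number of output zip files to rotate across

def split_zip(input_path=INPUT_ZIP, num_shards=NUM_SHARDS):
    # Open the big zip and pick the first member inside it.
    with zipfile.ZipFile(input_path) as zin:
        first_member = zin.namelist()[0]

        # Create one output archive per shard, each with a single CSV member
        # opened for streaming writes.
        archives = [
            zipfile.ZipFile(f"chunk_{i}.zip", "w", zipfile.ZIP_DEFLATED)
            for i in range(num_shards)
        ]
        writers = [
            z.open(f"chunk_{i}.csv", "w") for i, z in enumerate(archives)
        ]

        # Read the big member line by line and deal lines out round-robin.
        with zin.open(first_member) as src:
            for line, writer in zip(src, cycle(writers)):
                writer.write(line)

        # Close the member streams first, then the archives.
        for w in writers:
            w.close()
        for z in archives:
            z.close()

if __name__ == "__main__":
    split_zip()
```

Each output archive only ever has one member open for writing at a time, which keeps zipfile happy; if you wanted the 50 GB rollover, you'd track bytes written per shard and swap in a fresh archive when the threshold is hit.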