How would that work for something like S3 range requests? Rather than reading an entire object sequentially (which would work fine with transparent compression) you can also ask to read an arbitrary byte range (give me bytes 1,000,000,000-1,000,001,000 from the original file). I guess you could maybe store the compressed file in chunks with metadata about the original byte range inside each chunk.
For MinIO (an S3 compatible server), we add an index for each part, which contains uncompressed -> compressed offset pairs.
Since we were already using a Snappy-derived method, each 1 MB block is stored without backreferences to earlier blocks. With this we only have to decode at most 1 MB-1 extra bytes to respond to a request at a specific range offset.
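A minimal sketch of that kind of index, assuming independently compressed 1 MiB blocks and a list of (uncompressed offset, compressed offset) pairs. MinIO's real format is Snappy-derived; I'm using zlib here purely for illustration, and the helper names and block size constant are mine:

```python
import bisect
import zlib

BLOCK = 1 << 20  # 1 MiB of uncompressed data per independent block

def compress_with_index(data):
    """Compress data in independent 1 MiB blocks and build an
    (uncompressed_offset, compressed_offset) index."""
    index = []
    out = bytearray()
    for off in range(0, len(data), BLOCK):
        index.append((off, len(out)))
        out += zlib.compress(data[off:off + BLOCK])
    return bytes(out), index

def read_range(blob, index, start, length):
    """Serve an uncompressed byte range by decoding only the blocks
    that cover it. At most BLOCK-1 extra bytes are decoded before
    `start`, matching the bound described above."""
    uoffs = [u for u, _ in index]
    # Find the block containing `start`.
    i = bisect.bisect_right(uoffs, start) - 1
    skip = start - index[i][0]  # bytes to discard in the first block
    out = bytearray()
    while len(out) < skip + length and i < len(index):
        c_start = index[i][1]
        # The next index entry bounds this block's compressed bytes.
        c_end = index[i + 1][1] if i + 1 < len(index) else len(blob)
        out += zlib.decompress(blob[c_start:c_end])
        i += 1
    return bytes(out[skip:skip + length])
```

A request for bytes 1,000,000,000-1,000,001,000 then only touches the one or two blocks covering that range instead of the whole object.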
Generally with filesystem-level compression you don't compress an entire multi-GB file as one stream: you compress segments of maybe a few hundred kilobytes each. This gives you a very slightly worse compression ratio but keeps random seeks efficient, since a read only has to decompress the segments it touches.