Path: utzoo!utgpu!news-server.csri.toronto.edu!cs.utexas.edu!uwm.edu!rpi!bu.edu!orc!decwrl!pacbell.com!pacbell!demo!jgk
From: jgk@demo.COM (Joe Keane)
Newsgroups: comp.graphics
Subject: Re: Compression of multi-bit images
Summary: Pre-processors are cool.
Message-ID: <2926@demo.COM>
Date: 12 Jun 90 21:55:59 GMT
References: <8099@b11.ingr.com> <1990Jun7.114905.1714@athena.mit.edu> <27663@pprg.unm.edu>
Reply-To: jgk@osc.COM (Joe Keane)
Organization: Object Sciences Corp., Menlo Park, CA
Lines: 27

In article <27663@pprg.unm.edu> krukar@pprg.unm.edu (Richard Krukar [CHTM]) writes:
>	I worked with various image compression schemes a few years ago
>and I found a very disgusting method.  Two steps:
>	1) Write up your favorite predictor.
>	2) Run compress ( LZW compression ) on the error image.

I don't think this is disgusting at all.  Compress is a useful, general
utility, so there's no point in re-implementing it.  Actually, squeeze is a
bit better, but it's the same idea.

I've had good success writing pre-processors for bitmap files that run
before compress or squeeze.  You make up a new file format and convert your
images to it before compression.  The important thing to keep in mind is
that compress uses bytes as input tokens, so you want to make each byte
meaningful.

One simple algorithm is just to express the bits in hex rather than binary.
This makes the uncompressed file twice as big, but usually reduces the size
of the compressed file.  A better way is to have each hex digit represent 4
bits stacked vertically, and then run these in the normal order.  I wrote
filters to convert to and from this HV (hex vertical) format.  Simple as it
is, this compression method beats any other I've seen overall.

Of course, if you're dealing with gray-scale or color images, you want to do
some arithmetic coding.  If you do, you're better off using characters for
tokens than writing your output in binary.  Rough sketches of the filters
mentioned above follow.
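
To make the two-step recipe concrete, here's a minimal sketch of step 1.
The particular predictor (previous pixel) and the raw 8-bit gray-scale
input format are just assumptions for illustration; any decent predictor
plugs into the same loop.

/* predict.c -- previous-pixel predictor, illustration only.
 * Reads raw 8-bit pixels on stdin, writes the error image on stdout.
 * Usage: predict < image.raw | compress > image.Z
 * The decoder is the same loop, except it adds each error back to the
 * previously reconstructed pixel instead of subtracting.
 */
#include <stdio.h>

int main()
{
    int c;
    int prev = 0;

    while ((c = getchar()) != EOF) {
        putchar((c - prev) & 0xff);  /* error = pixel - prediction, mod 256 */
        prev = c;
    }
    return 0;
}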
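
The filter for the simple hex format is trivial; something like this, where
the nibble order (high first) and lowercase digits are arbitrary choices:

/* tohex.c -- expand each input byte into two ASCII hex digits.
 * The output is twice as big, but each byte handed to compress now
 * carries only 4 bits of image, so its string matching works better.
 */
#include <stdio.h>

int main()
{
    int c;
    static const char hex[] = "0123456789abcdef";

    while ((c = getchar()) != EOF) {
        putchar(hex[(c >> 4) & 0xf]);  /* high nibble first */
        putchar(hex[c & 0xf]);
    }
    return 0;
}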
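
The to-HV direction looks roughly like this.  For brevity the width is
hard-coded, rows are assumed packed 8 pixels per byte, and heights that
aren't a multiple of 4 are ignored; a real filter would take the width from
the file header and handle the odd cases.

/* tohv.c -- convert a packed 1-bit-per-pixel bitmap to HV format:
 * each output hex digit holds the 4 bits from one column of 4
 * consecutive rows, top bit first, emitted left to right.
 */
#include <stdio.h>

#define WIDTH    512                /* pixels per row, hard-coded here */
#define ROWBYTES (WIDTH / 8)

int main()
{
    unsigned char row[4][ROWBYTES];
    static const char hex[] = "0123456789abcdef";
    int x, r;

    while (fread(row, 1, sizeof row, stdin) == sizeof row) {
        for (x = 0; x < WIDTH; x++) {
            int byte = x >> 3, mask = 0x80 >> (x & 7), d = 0;

            /* stack the bits from 4 rows into one nibble */
            for (r = 0; r < 4; r++)
                if (row[r][byte] & mask)
                    d |= 8 >> r;
            putchar(hex[d]);
        }
    }
    return 0;
}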