Path: utzoo!utgpu!news-server.csri.toronto.edu!mailrus!tut.cis.ohio-state.edu!cs.utexas.edu!uunet!ingr!hammondp
From: hammondp@ingr.com (Paul Hammond)
Newsgroups: comp.graphics
Subject: Re: Compression of multi-bit images
Message-ID: <10776@ingr.com>
Date: 12 Jun 90 19:30:06 GMT
References: <1990Jun7.220852.5994@laguna.ccsf.caltech.edu>
Organization: Intergraph Corp. Huntsville, Al
Lines: 84

In article <8099@b11.ingr.com> dan@b11.ingr.com (Dan Webb) writes:
>I'm looking for a space-efficient (not necessarily speed-efficient)
>algorithm for data compression of multi-bit (color) images.
>I would prefer that the technique be fully reversible, but I will
>consider anything.
>
>Thanks in advance.
>
>Dan Webb
>Intergraph Corp.

in article <1990Jun7.220852.5994@laguna.ccsf.caltech.edu>, gbrown@tybalt.caltech.edu (Glenn C. Brown) says:
>
> spencer@eecs.umich.edu (Spencer W. Thomas) writes:
>
>>The method currently getting all the noise is DCT (discrete cosine
>>transform) encoding.  The idea is to take a "Fourier transform"
>>(discrete cosine transform, actually, since the values are all real)
>>of little (typically 8x8, I think) blocks of the image.  Only the
>>first few coefficients for each block are saved.
>
> In Scientific American, a short article claimed that some national
> committee of something-or-other was trying to set up file standards for
> such forms of compression.  They also were able to get reasonable
> no-loss compression by Huffman encoding the coefficients.  (I believe
> they got about 4:1 with no loss, and could get 11:1 with very little
> visible loss of image quality in their 24-bit images.)

About a year ago, I read something about the DCT scheme in a "journal"
called "DSP Review", dated "winter, 1989".  This was published by the
manufacturer of some DSP chip (AT&T ?).  The author claimed that the
scheme described was able to give 32:1 compression.

The author of that article was:

	David M. Blaker
	Member of Technical Staff
	AT&T European Technology Center

A summary:

"To summarize, the image is successively subsampled to a one-quarter
size image, and to a one-sixteenth size image.  Each image is converted
from RGB to one luminance and two chrominance components.  Since it is
well known that the human eye is much less sensitive to high
spatial-frequency variations in color than in brightness, each
chrominance component is only one-quarter the size of the luminance
component.  Each component of the lowest resolution image is broken
into 8x8 blocks of data that are transformed by the DCT. ..."

This then leads to some (unspecified) "quantizing" of the frequency
coefficients, followed by Huffman encoding, difference images, and...
Oh well, you'll have to read it yourself.

There were several inconsistencies in the equations in the article,
which I attempted to correct.  Experimentation failed to produce the
specified compression ratio.  Perhaps I misunderstood the algorithm.

The article mentioned that an ISO standard for DCT image compression
was being prepared.  A document, ISO/JTC1/SC2/WG8 N800, titled
"Initial Draft for Adaptive Discrete Cosine Transform Technique for
Still Picture Data Compression Standard", was also mentioned.

Our "Information Center" (library) was unable to find any information
about this from ISO except the names of two members of the standards
committee:

	Donald Thelen, AT&T (office location unknown to me)
	Charles Touchtone (sp?), IBM / Tampa  (813) 878-3084

Perhaps this standard has proceeded to the point where a draft is
actually available from ISO.  Anyone know ?
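For anyone who wants to experiment, here is a rough C sketch of the forward
path as I understand it: RGB converted to one luminance and two chrominance
components, chrominance averaged down to one-quarter size, an 8x8 DCT per
block, and a crude uniform quantization.  The conversion weights, the
quantizer step, and the toy image are my own placeholders, not values from
the article or the draft standard, and the Huffman coding and
difference-image stages are left out entirely.

/*
 * Sketch only: RGB->Y/U/V weights, quantizer step, and the 16x16 test
 * image below are assumptions of mine, not taken from the article.
 */
#include <stdio.h>
#include <math.h>

#define W 16                    /* toy image width  (multiple of 8) */
#define H 16                    /* toy image height (multiple of 8) */

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* RGB -> one luminance (Y) and two chrominance (U,V) components. */
static void rgb_to_yuv(double r, double g, double b,
                       double *y, double *u, double *v)
{
    *y = 0.299 * r + 0.587 * g + 0.114 * b;
    *u = b - *y;                /* blue-difference chrominance */
    *v = r - *y;                /* red-difference chrominance  */
}

/* Forward 8x8 DCT of one block (direct, unoptimized form). */
static void dct_8x8(const double in[8][8], double out[8][8])
{
    int u, v, x, y;

    for (u = 0; u < 8; u++)
        for (v = 0; v < 8; v++) {
            double cu = (u == 0) ? 1.0 / sqrt(2.0) : 1.0;
            double cv = (v == 0) ? 1.0 / sqrt(2.0) : 1.0;
            double sum = 0.0;

            for (x = 0; x < 8; x++)
                for (y = 0; y < 8; y++)
                    sum += in[x][y]
                         * cos((2 * x + 1) * u * M_PI / 16.0)
                         * cos((2 * y + 1) * v * M_PI / 16.0);

            out[u][v] = 0.25 * cu * cv * sum;
        }
}

int main(void)
{
    static double ly[H][W], uplane[H/2][W/2], vplane[H/2][W/2];
    double block[8][8], coef[8][8];
    int i, j, bx, by;
    int quant = 16;             /* arbitrary uniform quantizer step */

    /* Make up a toy RGB gradient and convert it.  Chrominance is
       averaged over 2x2 pixels, so each chrominance plane is
       one-quarter the size of the luminance plane. */
    for (i = 0; i < H; i++)
        for (j = 0; j < W; j++) {
            double y, u, v;
            rgb_to_yuv(16.0 * i, 16.0 * j, 128.0, &y, &u, &v);
            ly[i][j] = y;
            uplane[i/2][j/2] += u / 4.0;
            vplane[i/2][j/2] += v / 4.0;
        }

    /* DCT and quantize each 8x8 luminance block (the chrominance
       planes would be handled the same way); print the quantized
       coefficients of the first block as a sanity check. */
    for (by = 0; by < H; by += 8)
        for (bx = 0; bx < W; bx += 8) {
            for (i = 0; i < 8; i++)
                for (j = 0; j < 8; j++)
                    block[i][j] = ly[by + i][bx + j];
            dct_8x8(block, coef);

            if (bx == 0 && by == 0)
                for (i = 0; i < 8; i++) {
                    for (j = 0; j < 8; j++)
                        printf("%5d", (int)floor(coef[i][j] / quant + 0.5));
                    printf("\n");
                }
        }
    return 0;
}

Most of the compression presumably comes from quantizing the high-frequency
coefficients to zero and then Huffman coding the result; with a single
uniform quantizer step like the one above, I would not expect anything close
to the 32:1 the article claims.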
-----------------------
Paul L. Hammond
Intergraph Corp.
ingr!b17b!ocelot!paul