Path: utzoo!attcan!utgpu!jarvis.csri.toronto.edu!rutgers!cs.utexas.edu!usc!apple!bbn!inmet!ishmael!inmet!rich
From: rich@inmet
Newsgroups: comp.graphics
Subject: Re: SigGraph Fractal Compression
Message-ID: <20400001@inmet>
Date: 12 Aug 89 20:01:00 GMT
References: <444@mit-amt.MEDIA.MIT.EDU>
Lines: 25
Nf-ID: #R:mit-amt.MEDIA.MIT.EDU:-44400:inmet:20400001:000:1539
Nf-From: inmet!rich    Aug 12 16:01:00 1989


I suppose people are referring to Michael Barnsley's demo at the AT&T
Pixel Machine booth.  Well, his work is well documented in his book
"Fractals Everywhere" and in "The Science of Fractal Images".  Even
Byte ran an article about it in early '88 or '89, I forget which.  The
basic idea is to "compress" an image by finding the probabilistic
coefficients of "attractor" equations which, when "decompressed", give
you back a representation of the image.

There are several important points.  One is that this is not
compression in the traditional sense; it is more like modeling.
Another is that the decompressed image can be as good as the output
device allows; it just takes longer to decompress for larger output.
Third, the compression ratio can be very high.  An image with
recurring parts (like a picture of a fern) can be described by 3
equations, each with 4 real-number coefficients, so you only need to
store 12 32-bit single-precision floating point numbers: 48 bytes.
The images he was displaying were "real life" pictures like a face, a
TV station logo, etc.  He crammed 40 or so compressed images onto one
high-density (1.2 meg) floppy, so each image is about 20 - 40K bytes.
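
To put numbers on the storage claim: the fern as usually published
(in "Fractals Everywhere") actually takes four maps of six
coefficients plus a probability each, a bit more than the 12 numbers
quoted above, but still only about a hundred bytes:

#include <stdio.h>

/* The fern as usually published: four maps, each row is
 * { a, b, c, d, e, f, p }.  The demo's own data surely differs;
 * these are just the textbook coefficients. */
static const float fern[4][7] = {
    {  0.00f,  0.00f,  0.00f, 0.16f, 0.0f, 0.00f, 0.01f },
    {  0.85f,  0.04f, -0.04f, 0.85f, 0.0f, 1.60f, 0.85f },
    {  0.20f, -0.26f,  0.23f, 0.22f, 0.0f, 1.60f, 0.07f },
    { -0.15f,  0.28f,  0.26f, 0.24f, 0.0f, 0.44f, 0.07f },
};

int main(void)
{
    printf("whole fern: %lu bytes\n", (unsigned long) sizeof fern);
    return 0;
}

That prints 112 bytes.  Compare with even a 512x512 1-bit bitmap of
the same fern: 32K bytes.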

Remember that decompressing really means applying the fractal
equations over and over again (like painting a Mandelbrot).  The
amazing thing is that he was "decompressing" the pictures at video
rate: 22 pics per second.

His commercial products may not work, sell, or have a market, but to
us that was certainly one of the high points of the exhibits.