Path: utzoo!utgpu!news-server.csri.toronto.edu!cs.utexas.edu!yale!think.com!sdd.hp.com!spool.mu.edu!uunet!mcsun!hp4nl!charon!dik
From: dik@cwi.nl (Dik T. Winter)
Newsgroups: comp.arch
Subject: sin(pi)=0.0 (was: Re: bizarre instructions) LONG
Message-ID: <3009@charon.cwi.nl>
Date: 23 Feb 91 00:34:53 GMT
References: <9102210042.AA12291@ucbvax.Berkeley.EDU> <2998@charon.cwi.nl>
Sender: news@cwi.nl
Organization: CWI, Amsterdam
Lines: 89

I wrote:
 > If sin(pi) = 0, the implementation is wrong!

If you are already thoroughly bored by this, hit n now.

I have already gotten a number of reactions to this statement, so I think
it is good to give some explanation.  I will give the arguments against
sin(pi)=0 and for sin(pi)=0, with some reasoning why one may be preferred
over the other.  In the sequel I will make some blanket statements that
ought to be qualified.  I am aware of that, but leave them as they are in
order to be concise.  I think further discussion can best be done by mail.

In article khb@chiba.Eng.Sun.COM (Keith Bierman fpgroup) writes:
 > No.  Since that is what people expect, many library writers go to
 > special pains to make sure that this case works out "right".

At least the current version of Sun's sin does not do that.  I do know of
one implementation that did range reduction by a multiplication by 1/pi,
where the actual value of 1/pi used was tweaked in its low-order bit to
assure that pi*1/pi = 1.0.  I will argue that such an implementation is
wrong.

The first consideration is that on a given machine (calculating in finite
precision) the value of pi is not exactly representable.  So when we enter
pi one way or the other, the internal value will be a close approximation
(say pi').  Now we can calculate a very good approximation to the
mathematical result of sin(pi'):

        sin(pi') = -sin(pi' - pi) ~= pi - pi'

So the mathematical result is in the order of the machine precision (if
the system converts the value of pi correctly).  Given this, it is still
not clear whether a returned value of 0.0 is allowable or not.

Traditionally, numerical mathematics uses two forms of error analysis:
forward and backward.  In forward error analysis, the numerical result of
an operation is compared to the exact mathematical result; if the
difference is small, there is a small absolute or relative error.  In
backward error analysis (introduced by J. H. Wilkinson in the late 50's
and early 60's), inputs to the operation are sought that would give the
computed result as an exact mathematical result.  If those inputs are
close to the actual inputs, it is said that the error is small.  Clearly,
in terms of backward error analysis sin(pi')=0.0 is allowed, while it is
disallowed in forward error analysis (if we require good relative
precision).

So the question is: should we allow backward error analysis here, or allow
forward error analysis only?  This depends on the situation.  If it is
known that the inputs to an operation are approximations, backward error
analysis is clearly allowed (the result is the exact mathematical result
of one of the possible inputs that were represented by the actual input).
If this is not known, backward error analysis is generally frowned upon.
If we look at sin (and the other elementary mathematical functions) as
black boxes, there is a tendency to disallow backward error analysis.
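To make the numbers concrete, here is a small C illustration (my sketch,
not code from the original post).  M_PI (POSIX, not strict ISO C) is the
double nearest to pi, i.e. the pi' above; a sine with good relative
(forward) accuracy returns roughly pi - pi' for it, about 1.22e-16, while
the "tweaked 1/pi" reduction described above returns 0.0, the backward
error analysis answer.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double pi_prime = M_PI;   /* pi': the double closest to pi         */

        /* With a forward-accurate sin this prints roughly 1.2246e-16,
           which is just pi - pi'; a library that tweaks 1/pi so that
           pi*1/pi = 1.0 prints 0.0 instead.                               */
        printf("pi'      = %.17g\n", pi_prime);
        printf("sin(pi') = %.17g\n", sin(pi_prime));
        return 0;
    }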
Furthermore, backward error analysis can have surprises.  Suppose we have
a machine that represents floating-point numbers with a 48-bit mantissa
(that is not a random choice); assume we have two floating-point numbers,
x=2^48 and y=2^48-1.  Is it allowed that x-y delivers 0.0 or 2.0 (as is
done on some machines)?  Using backward error analysis this is clearly
allowed: the result is the exact mathematical result if we flip the
low-order bit of one of the inputs, and that is a small change.  The
general consensus is, however, that such imprecise operations are not
allowed; all basic operations should have a small error in terms of
forward error analysis.

Traditionally, backward error analysis is allowable only in the field of
numerical algebra.  The reason is simply that forward error analysis
cannot give satisfactory results for the most important algorithms.
(Before Wilkinson it was estimated that the numerical solution of a linear
system with direct methods would not be possible if the order of the
system exceeded 10, due to numerical instability.  Nowadays systems of
order 1000 and more are solved routinely using direct methods.)  This is
also the reason that numerical algebraists are more willing to allow
backward error analysis in other fields of numerical mathematics.  (The
Karlsruhe people disagree here, and always require forward error analysis;
they are designing algorithms that allow it.)

There is one additional problem if sin(pi') != 0.0.  Most uses of the sine
function in heavy numeric calculations are of the form sin(pi * x).  If we
look at the function sinpi(x) == sin(pi * x) as a black box, an
implementation in terms of the standard sine function can only be
satisfactory if the function sinpi itself performs the range reduction
needed (an illustrative sketch is given at the end of this article).  This
has led, in the forthcoming Ada standard for elementary functions, to the
specification of a sine function with two parameters: the first giving the
x argument, the second giving the scale used for the measuring (with a
default of 2.0*pi %).  This allows the user to calculate sinpi(x) [Ada:
sin(x,2), because if x=2 we have a complete circle], but also sin(x) with
x in degrees [sin(x,360)], grads [sin(x,400)] and bams, but I disremember
what a bam is (I think there is more than one definition).
--
% Actually there are two functions sin, one with one argument (and cycle
  2.0*pi) and one with two arguments.  There are some reasons for this.
--
dik t. winter, cwi, amsterdam, nederland
dik@cwi.nl
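Illustrative sketch referred to above (my example, in C; it is neither the
Ada routine nor code from the post): a sinpi(x) = sin(pi*x) that does its
range reduction on x itself, using the exactly representable cycle 2.0,
instead of reducing the inexact product pi*x modulo 2*pi.  With this
reduction sinpi(1.0) is exactly 0.0, which is the behaviour one wants from
the black box sinpi even when sin(pi') itself is about 1.2e-16.

    #include <math.h>

    /* Sketch: sin(pi*x) with argument reduction done on x, where the
       cycle 2.0 is exact, rather than on the rounded product pi*x.        */
    double sinpi(double x)
    {
        double r = fmod(x, 2.0);          /* exact remainder, r in (-2, 2) */
        if (r > 1.0)        r -= 2.0;     /* fold into [-1, 1]; exact      */
        else if (r < -1.0)  r += 2.0;
        if (r > 0.5)        r = 1.0 - r;    /* sin(pi*r) = sin(pi*(1-r))   */
        else if (r < -0.5)  r = -1.0 - r;   /* sin(pi*r) = sin(pi*(-1-r))  */
        return sin(r * M_PI);             /* now |r| <= 0.5: no further
                                             large-argument reduction      */
    }

All the folds are exact floating-point subtractions, so integer arguments
reduce to exactly 0.0 before the approximate pi ever enters the picture.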