Comments on Eyes Above The Waves: "Rounding Towards (Or Away From) Zero Considered Harmful"

Ha Ha!
Posted by Anonymous, 2014-07-25

Choosing to round n.5 in a consistent direction addresses the case where the two coordinates are both of the form N.5, but it fails to address similar cases, such as when the two coordinates differ by, say, 0.98: depending on the coordinate values, the resulting line can have width 0 instead of the usually-desired width 1. This isn't a pathological example: it's actually fairly common when the input image was formed by slightly scaling down another image.

I've never implemented auto-hinting, but I suppose the solution (subject to CPU constraints) involves choosing hinted coordinates that minimize large proportional changes to any width (whether black or white).

This is computationally harder to do well than it sounds. Consider two roughly parallel lines whose endpoints differ, as in the upward stroke of the letter K: the width of that stroke isn't directly available from the corner points of the outline, so merely considering the x and y coordinates of the corner points independently won't give the desired guarantee. That starts to make the problem look like n² in general instead of n log n (even if it can probably be made fast for most cases). Thus, anyone who wants to do it well should look around (CiteSeer, say) for good algorithms.
For anyone too lazy to look for good algorithms, I suppose the n log n approach would still give most of the benefit, and it is fairly easy to implement and understand.
Posted by Peter Moulder, 2008-04-04

Please, stop it with the "$x Considered Harmful" for every little programming problem you have.
Posted by Dijkstra, 2008-03-02

You meant ceil(X - 0.5), didn't you?
Posted by Sted, 2008-02-21

I know that frustration: the standard behavior is clearly documented, but also stupid. See, as an example, twenty years of strcpy() bugs. It's such a pain having to always work around some poorly thought-out behavior. It's just one more place where a programmer will make a mistake. Maybe not this year, maybe not next year, but eventually, no matter how good they are.

I've grown to hate the inane number handling of the C-inspired languages. Want to add two numbers together? Sure. What if those two numbers are each 1.5 billion? Sure. You'll have the wrong answer. And no, we won't tell you. Can't imagine that would cause any problems in your application. Have a nice day!

There is an entire raft of decisions that were driven by the particular hardware platforms some language designer was using in the 1970s, and by a perceived need for "performance" back in those long-gone days, that are still causing massive problems today.
And almost everyone I know just accepts that that is the way the world is; many would even hazard that it is the way the world should be.

Smalltalk solved these problems what, 25 or more years ago? But still we soldier on with languages whose limitations are driven by decisions made for hardware that is now 30+ years old.

Somewhat on-topic: Smalltalk's ceiling and floor methods in the Number hierarchy round towards positive and negative infinity, as they should. "truncated" rounds towards zero.

(For some truly perverse numerical problems, you should see what denormalised numbers do on some x86 chips. I had some of those in a real-time thread once, and it quickly became a non-real-time thread, which was "not good".)
Posted by Edouard (edouardp.livejournal.com), 2008-02-14

No, you'd have it with IEEE 754 as well, since IEEE 754 uses round-to-nearest-even, which basically means you round up half the time and down half the time (unless I'm misremembering and this only applies to arithmetic truncations, not explicit rounding). This is convenient for numerical stability, because multiple errors tend not to accumulate as much, but it means here that the value of N will affect your result (about half the time it won't be what you want).
Posted by Jeff Walden (whereswalden.com), 2008-02-13

It seems to me that rounding towards zero is a very strange default, as you would need special (micro)code to detect the condition. In the common case, where the value is between 1 and 2^31-2, the floor function simply needs to shift off the bits to the right of the binary point.
To correctly round away from zero you then only need to add on any carry (the last bit shifted off the end), which will be set for values of the form n+½ ≤ x < n+1. If you can use two's-complement arithmetic then this algorithm will instead round towards infinity, which makes it a useful rounding algorithm.

I'm reminded of the ZX Spectrum, which had a bug in its floating-point/integer conversion routines that resulted in INT(-32768) becoming -1.
Posted by Neil (neil.rashbrook.org), 2008-02-13

Should you not have this problem with IEEE-754 double-precision floating point?
Posted by anne ô nyme, 2008-02-13

I just hit that modulus issue in the SVG filter code too!
Posted by Robert O'Callahan, 2008-02-13

A very similar misdesigned CPU operation is modulo: it's generally much simpler to design an algorithm around a modulo variant which guarantees that x mod n is in the range [0..n-1]. Instead, modern CPUs implement modulo to return a result in the range [-(n-1)..(n-1)], and that's just unhandy. For example, if you encounter a statement setting y to (x div 5) in a loop iterating over x (i.e. increasing x once each loop step), you'd be reasonable in assuming that y increases once every 5 loop steps... except it doesn't around 0, and your loop might break in unexpected ways.

I feel your exasperation. You'd think computers could, well, compute accurately!
Posted by Eamon Nerbonne (eamon.nerbonne.org), 2008-02-13