JavaScript

Deeper Explanation of Floating Point Arithmetic Issues

I understand that there can be problems with floating point numbers in JavaScript (and other programming languages), but I don't quite understand how to get around this problem in a best-practice sense. If I can use WolframAlpha and/or a graphing calculator to multiply 0.1 and 0.2 together without getting a weird answer (i.e., 0.020000000000000004), why can't a computer handle this in JavaScript? How do I get past this?
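For example, running something along these lines in the browser console or Node.js (a minimal sketch; the exact digits come from the IEEE 754 double-precision format JavaScript numbers use) shows the kind of result I mean:

    // 0.1 and 0.2 have no exact binary representation, so the
    // product and the sum come back with tiny rounding errors.
    console.log(0.1 * 0.2);         // 0.020000000000000004
    console.log(0.1 + 0.2);         // 0.30000000000000004
    console.log(0.1 + 0.2 === 0.3); // false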

2 Answers

First, I think a clearer picture of what you're trying to do will help others answer this question. Give a specific example of the problem and some context, and I'll try to help as best I can.

Let's say I am comparing variable values in a condition like if (a + b <= 0.3), and, at some point, a = 0.1 and b = 0.2. The condition would evaluate to false (since 0.1 + 0.2 returns 0.30000000000000004), when one would expect, thinking in ordinary arithmetic, the condition to evaluate to true.
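A minimal sketch of that situation (the variable names are just placeholders):

    var a = 0.1;
    var b = 0.2;

    console.log(a + b); // 0.30000000000000004, not 0.3

    if (a + b <= 0.3) {
      console.log("within limit");   // one would expect this branch...
    } else {
      console.log("over the limit"); // ...but this one runs instead
    }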

There are surely more floating point problems like this, and accuracy matters: simple math in code shouldn't return semi-erroneous answers.

So, what is the workaround for something like this?
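One common workaround (a sketch, not the only option) is to avoid exact comparisons and instead allow a tiny tolerance, for example the built-in Number.EPSILON when the values are around this magnitude; another is to work in scaled integer units (such as cents instead of dollars), where the arithmetic is exact:

    var a = 0.1;
    var b = 0.2;
    var limit = 0.3;

    // Compare within a tolerance instead of exactly.
    // Number.EPSILON (~2.22e-16) is fine at this scale; much larger
    // values may need a proportionally larger tolerance.
    if (a + b <= limit + Number.EPSILON) {
      console.log("within limit"); // now runs as expected
    }

    // Or scale to integers first (here, tenths), where addition is exact.
    if (Math.round(a * 10) + Math.round(b * 10) <= Math.round(limit * 10)) {
      console.log("within limit (integer units)");
    }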

Excellent. That's what I found when I got your response.