Our tests provide us with an outline for writing our functions, but they can also help us defend against situations we didn't expect. We'll write tests that demonstrate the breaking conditions of our functions.
- An edge case is a radical situation your function might end up in, but it isn’t how your function would normally work
- Edge cases occur at an extreme (maximum or minimum) operating parameter
- Predicting edge cases can be challenging
- Spend a little time thinking about the edge cases that are most likely to come up
Writing tests first as an outline can help us a lot in the development process. Sometimes you'll also have to write tests for existing code. It might have been written by other people, or you might have skipped the testing phase during a chunk of your workflow. You should write tests retroactively just the way we have been in BDD: decide what the function does, and focus on that rather than the implementation details. Write simpler expectations first and get them to pass before you write more involved ones.

One difference is that you might get to testing for an edge case faster than you would during BDD. An edge case is a radical situation your function might end up in, but it isn't how your function would normally work. For example, an email validator might work when users type properly formatted emails, but what if they type in nonsense by accident? An X-ray machine should accurately produce the amount of radiation a doctor asked for, but what happens if a doctor accidentally asks for a huge amount?

So far in our test suite, we've assumed that our functions will get the right kind of input. For example, we always call checkForShip using a valid player object and an array with two numbers. But what if something changes later, or someone tries to use our functions in a way we didn't intend? To show you what I mean, I've added a new function to our engine. To follow along using the latest workspace, launch the workspace on this page, or download the files if you're working outside of workspaces.

Players need a way to put their ships on the board. For each ship, they should be able to name a starting square and a direction, and save the ship's location for the game. In the latest files, I've added a test file in the test directory named player_test.js.
This new file contains a test suite for each of these new functions: here we have validateLocation, then validateLocations, and right below, placeShip. Since you haven't seen these functions yet, it's a good idea to run the tests first and see what information they give us about the new functions. Over in the console, I'll run the new tests in the player_test.js file only by typing the command mocha test/player_test.js.

Immediately we get a lot of information here. We have the three related functions: validateLocation, validateLocations, and placeShip. validateLocation sounds like it confirms that a single given coordinate is not occupied yet and is on the board. validateLocations sounds like it runs validateLocation on a list of coordinates. And placeShip takes a ship and changes its starting location; it responds with false if the proposed location would run off the board or overlap another ship. So without even looking at the code, I have a good general sense of what these functions do, how they work, and how they're related.

Now, let's dig into the test code. This is a new idea: this time, I wrap the entire test suite inside a describe block that names the suite. That makes it easier to see that these tests are grouped together when I run npm test and all of my test results are printed to the console at once. As you saw earlier, inside the main describe block, I have tests for the validateLocation and validateLocations functions. Feel free to pause the video and read these carefully, but we can skip over them for now. Instead, let's focus on the main function being tested: placeShip. In this suite, I import the placeShip method, then set up a new player object before each test spec. And notice that the player's ships have a new property of size.
There's one test spec in place here. It places the player's first ship at [0, 1] and expects the ship to have one location at [0, 1]. Again, we haven't even looked at the actual code for this function, and we already have a good understanding of what it does and how it works.

So we know that it works when everything is used as expected. But what happens if it's used in some way we didn't expect? For example, the function expects four arguments, and it's conceivable that someone might forget to use them all in the future. For instance, what happens if we don't pass a direction? Will the function throw an error and crash the game? Will it quietly add some nonsense like undefined or NaN into our ship's location? Or will it return a request for a new location?

Excluding an important argument like this is an edge case. Ideally, this would never happen, just like ideally, people would always enter valid email addresses into forms. But as developers, we can imagine it happening pretty easily, so it might be nice to test against this possibility. Now, we don't necessarily have to build a handler into this function, but we should definitely know what to expect when it happens. Having a test in place will demonstrate to other developers, or our future selves, what we were thinking about in this case. It might also provide useful guidance if they decide to handle this case in the function later; it will be like we wrote a BDD outline for them already.

So inside the placeShip test suite, I'll create a new spec right below the existing spec. I'll name this spec "should throw an error if no direction is specified". Now, let's check the Chai docs to see if there are any useful expectations I can use here. First I'll do a search for "error". The first result shows up under the method not.
But I see something that looks useful here: it says to.not.throw(Error). Now, I don't quite understand how to use the throw method yet, so I'll do a search for "throw" too. And cool, this tells me everything I need to use the throw method, with lots of examples.

It looks like the throw method requires me to wrap my function call in something that Chai can handle internally. If I didn't do this, my test spec would always fail, because placeShip will throw an error, and Mocha will count that error as a test failure, even though the error is exactly what I expect. So inside this new spec, I'll trap the error in a new function called handler that Chai will manage from within, preventing the error from failing the tests. We'll say var handler, and inside this function, we'll call placeShip. I'll also go ahead and save the ship and coordinates variables inside the spec, so that the function call below will be a little easier to read. Now I'll pass only three arguments to placeShip, with no direction string specified: player, ship, and coordinates. And now I can expect the handler function to throw an error, so right below, we'll say expect(handler).to.throw(Error).

Now if I save my player_test.js file, go over to the console, and run the same test as earlier, we can see that the test fails, because the placeShip function doesn't currently throw any errors. So I'll open up the player_methods.js file inside the game_logic folder and add a line at the top of the placeShip function: if there's no direction specified, throw an error. It's nice to give errors a useful message, so I'll pass one in here: "You left out the direction. I need that for math." Now if anyone ever messes this up in the future, they'll get an informative error about the problem.
And if I go over to the console and run the tests now, we can see that everything passes. Perfect.

The Chai docs for throw show me that I can also specify the error message I expect. So back inside the new spec in player_test.js, I'll write an extra expectation that specifies that. We'll type expect(handler).to.throw, then the error message I expect, which is "You left out the direction. I need that for math."

Predicting edge cases can be challenging. I could imagine all sorts of ways that our functions so far could fail; web development is hard, so pretty much anything can go wrong. Now, I don't want to pollute my test suite with tons of edge cases that don't really have to do with the main functionality of my functions, so instead, you should spend a little time thinking about the edge cases that are most likely to come up. Think about problems that would be really hard for another developer to notice, or that would break your programs really badly. If you can run a test for an edge case or two like this, that's often enough at first. Pushing yourself to think about the problems that other users or developers might have when dealing with your code will also make you a better developer and result in better software, so it's good for you too.