Regression and Configuration Testing (5:44) with Ryan Saul
Before new software is released, we need to confirm that the software our users already rely on is still functioning the way we want it to. To do this, we need to test all of the features that have come out in the past alongside the new features we have just built.
Test Plan: RSVP App - Edit Invitee Feature Release
Objectives:
- Verify the edit invitee feature for the RSVP app.
- Regression test previous features.
- Test on all supported browsers: Chrome, Firefox, and Edge on Windows 10.
Estimates:
- New feature testing: 1 day.
- Regression testing: 1 day.
- Supported browser testing: 1 day.
- Total: 3 days.
Deliverables:
- Test results.
- Bug report.
- Configuration test matrix.
Exit Criteria:
- Release must contain the edit feature.
- All critical defects must be fixed.
- 100% of regression tests pass.
- All supported browsers must pass.
Risks and Contingencies:
- Risk: Previous client data may not load.
- Contingency: Roll back to the previous version.
Test Cases:
- TEST1: Invite someone new
- TEST2: Confirm someone
- TEST3: Remove someone
- TEST4: Edit someone
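The four test cases above can be sketched as executable checks. This is a minimal illustration only; `RsvpList` is a hypothetical in-memory stand-in for the real RSVP app, not part of the course materials.

```python
# Hypothetical in-memory guest list standing in for the RSVP app.
class RsvpList:
    def __init__(self):
        self.guests = {}  # name -> confirmed flag

    def invite(self, name):
        self.guests[name] = False

    def confirm(self, name):
        self.guests[name] = True

    def remove(self, name):
        del self.guests[name]

    def edit(self, old_name, new_name):
        # The new feature under test: rename an invitee, keeping their status.
        self.guests[new_name] = self.guests.pop(old_name)

# TEST1: invite someone new (regression test)
rsvp = RsvpList()
rsvp.invite("Ada")
assert "Ada" in rsvp.guests

# TEST2: confirm someone (regression test)
rsvp.confirm("Ada")
assert rsvp.guests["Ada"] is True

# TEST4: edit someone (the feature test for this release)
rsvp.edit("Ada", "Ada Lovelace")
assert rsvp.guests["Ada Lovelace"] is True

# TEST3: remove someone (regression test)
rsvp.remove("Ada Lovelace")
assert rsvp.guests == {}
```

Note that TEST4 is the only feature test here; once the edit feature ships, it would be folded into the regression suite alongside the other three.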
Configuration test example:

| OS / Browser | Windows 10 | Windows 8 | OSX - High Sierra |
| ------------ | ---------- | --------- | ----------------- |
| Chrome       |            |           |                   |
| Firefox      |            |           |                   |
| Edge         |            |           |                   |
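Each cell in a matrix like this is one configuration to run the tests against. A small sketch of how the matrix expands into individual test runs (the OS and browser lists are taken from the example above):

```python
from itertools import product

browsers = ["Chrome", "Firefox", "Edge"]
operating_systems = ["Windows 10", "Windows 8", "OSX - High Sierra"]

# Every (browser, OS) pair is one cell of the matrix; test results
# (pass/fail) fill these cells in as the configurations are exercised.
configurations = list(product(browsers, operating_systems))
print(len(configurations))  # 9 cells to fill in
```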
Before software gets released, we usually want to get some confidence around whether the rest of the software that our customers already know and love still works. To do this, we basically need to test all of the features that have come out in the past alongside all of the new features.

The nice thing about testing older features is that you've probably already created test cases for them. We like to call running these older test cases regression tests, because we're looking for regressions in the software. To put it another way, if a test passed in a previous version of the software but is now suddenly failing, that's a regression. A regression is when a previous feature begins to act in unexpected ways in the new release.

Compare that to a feature test. A feature test is one where we test out new functionality in the software, or something that has been so heavily modified that it's new to us. Feature tests are new, and regression tests are old. Feature tests are usually run once or twice before being rewritten into regression tests, because eventually that new feature will become an old one. When I think of a feature test, I usually see it as a one-shot test to prove out the new functionality, but I'll probably reapproach the feature later on when creating a regression test for it.

Your test plan is going to have a mix of these feature tests and regression tests, assuming you're releasing some new feature with the next version of the software. Let's take our RSVP app again and the test plan that we've been creating for it. While I may have a feature test that checks the new edit someone functionality, I also have regression tests for invite someone, confirm someone, and delete someone.
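The definition above ("passed before, fails now") can be made concrete: comparing the previous release's results against the current run surfaces regressions mechanically. The result data here is invented for illustration.

```python
# Hypothetical test results from the previous release and the current run.
previous = {"invite": "pass", "confirm": "pass", "remove": "pass"}
current = {"invite": "pass", "confirm": "fail", "remove": "pass", "edit": "pass"}

# A regression is a test that passed previously but fails now. Brand-new
# tests (like "edit") can fail without being regressions: they have no
# prior passing result to regress from.
regressions = [test for test, result in current.items()
               if result == "fail" and previous.get(test) == "pass"]

print(regressions)  # ['confirm']
```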
With all four of these tests, I'm pretty confident that when we release the software, it's going to work as intended. But it turns out there's one more thing we haven't considered: what about all these other browsers? So far we've been testing our software on Google Chrome, but what about Firefox and Microsoft Edge? To cover these, we're also going to need to do some configuration testing.

Configuration testing is testing all the systems and configurations that your software should run on reliably. As you can imagine, this gets very complicated very quickly. Many companies employ a test matrix of the different configurations and the tests that will need to be run on each of them. We're not going to do a lot of that in this video, but you should know that a lot of configuration testing depends on the software. Some examples of things you'll need to test are the operating systems, different versions of those operating systems, drivers for your hardware, and sometimes even the hardware itself. Mostly, those apply to desktop applications, though.

For web applications, the main thing we need to worry about is the browsers. Typically on a web application, you need to worry about which operating system your users are on, the browsers they use, which versions of those browsers they're on, and any mobile devices you want to support. This can also get fairly complicated, so it's important to write all of these configurations down in a test plan.

In the test matrix I have here, I have all the supported operating systems at the top and all the supported browsers over on the left, and we can tell which configurations have passed and failed based on our test results within this matrix. It's also important to denote which test cases need to be tested on which configurations.
For example, tests for a REST API probably don't need any true configuration testing; you just need to make the API calls, and the browser doesn't really matter. But for any UI tests, we need to make sure that each browser handles the tests correctly. Many times, even simple features will break down between Firefox and Chrome.

Mobile testing is another important area. Web apps are often built to work on mobile devices, but different versions of these devices can often produce different results. I recommend two things to get some confidence around this. First, limit the scope of which tests you run on all of these different configurations, and possibly write mobile-specific tests. Second, use a VM-based browser testing tool, such as CrossBrowserTesting or BrowserStack, that allows you to choose all sorts of different devices, operating systems, browsers, and their various versions. We'll have links to these tools in the teacher's notes below. You'll see a huge list of different versions, like here on CrossBrowserTesting, so you can pick the versions that you need.

In configuration testing, you'll need to run many of the same tests over and over on different configurations. It can be tedious, but it builds a lot of confidence in the release. Remember to limit the scope of what you're testing to more of the bare-essential tests when planning for configuration tests. In our next video, we'll talk about ways to test smarter with exploratory testing and by automating your tests.
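One way to record which tests run on which configurations, as described above, is a simple mapping from test case to the configurations it needs. This is a sketch under assumed names: the test identifiers and the idea of expanding the plan into individual runs are illustrative, not part of the course.

```python
BROWSERS = ["Chrome", "Firefox", "Edge"]

# Which configurations each test case must run on. The REST API test is
# configuration-independent, so it runs only once; the UI test for the
# new edit feature repeats on every supported browser.
TEST_CONFIGS = {
    "TEST_api_guest_list": ["(no browser)"],
    "TEST4_edit_someone": BROWSERS,
}

def planned_runs():
    """Expand the plan into one (test, configuration) pair per run."""
    return [(test, cfg)
            for test, cfgs in TEST_CONFIGS.items()
            for cfg in cfgs]

for test, cfg in planned_runs():
    print(test, "on", cfg)
```

Keeping the configuration list next to each test makes the "limit the scope" advice enforceable: only the tests that genuinely need the full browser matrix pay for it.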