My development team at work jokes that bugs “are just features users don’t know they want yet”. 🤪
But as any good development team does, we try to prevent those bugs from reaching our users in the first place. We know that technical systems are not infallible: network requests fail, buttons are clicked multiple times, and users inevitably find that one edge case that no one (not the developers, the product managers, the user experience designers, or the QA testing team, even with all their powers combined) ever dreamed could happen.
We try to handle those errors gracefully so the application can continue to run and our users can do what they came there to do. And so we test: automated tests, manual tests, load tests, performance tests, smoke tests, chaos tests. Tests, tests, tests, tests, tests. 😫
While automated tests like unit and integration tests are considered standard best practices, we still have a tendency, even during testing, to only cover the happy paths (the paths where all the API calls return, all the data exists, all the functions work as expected) and ignore the sad paths (the paths where outside services are down, where data doesn't exist, where errors happen).
I’d argue, however, that those are the scenarios that need to be tested just as much if not more than when everything goes according to plan, because if our applications crash when errors happen, where does that leave our users? Up a creek without a paddle — or, more likely, leaving the app and going somewhere else to try and accomplish whatever task they set out to do.
Recently, I was working on a feature where a user could upload an Excel file to my team’s React application, our web app would parse through the file, validate its contents and then display back all valid data in an interactive table in the browser.
The catch, however, was that because it was an Excel file, we had a lot of validations to set up as guard rails to ensure the data was something our system could handle: we had to validate the products existed, validate the store numbers existed, validate the file headers were correct, and so on and so forth.
To keep that logic manageable, the validations lived in a separate, asynchronous helper function (validateUploadedFile()) that was imported into the component and took care of most of the heavy lifting.
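For illustration, here's a minimal sketch of what such a helper might look like. The function name comes from the post, but the specific checks, error messages, and plain-object "file" shape are hypothetical stand-ins (a real implementation would parse an actual Excel file):

```javascript
// Hypothetical sketch of a validation helper like the one described
// above. A real implementation would parse an actual Excel file; here
// the "file" is a plain object so the example stays self-contained.
async function validateUploadedFile(file) {
  const REQUIRED_HEADERS = ['product', 'storeNumber', 'quantity'];

  // Validate the file headers are correct
  const missingHeaders = REQUIRED_HEADERS.filter(
    (header) => !file.headers.includes(header)
  );
  if (missingHeaders.length > 0) {
    throw new Error(`Missing required headers: ${missingHeaders.join(', ')}`);
  }

  // Validate the store numbers exist (stand-in for a real lookup)
  for (const row of file.rows) {
    if (!row.storeNumber) {
      throw new Error('Store number does not exist');
    }
  }

  // All checks passed: return the valid rows for display in the table
  return file.rows;
}
```

Each failed check throws, and because the function is async, those throws surface to the caller as rejected promises, which is exactly what makes them tricky to assert on in tests.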
While it was very useful to separate out this business logic from the component responsible for initiating the upload, there were a lot of potential error scenarios to test for, and successfully verifying the correct errors were thrown during unit testing with Jest proved challenging. Contrary to what you might expect, there aren't a lot of examples or tutorials demonstrating how to expect asynchronous errors to happen (especially with code employing the newer ES6 async/await syntax).
But luckily, through trial and error and perseverance, I found the solution I needed, and I want to share it so you can test the correct errors are being thrown when they should be.