Interviewers are looking for:
- Big picture understanding
  - How to prioritize tests: some tests are more important than others
- Knowing how the pieces fit together
  - Don't just test the product; test the products it interacts with (integrations, etc.)
- Organization
  - Break the tests down into sections
- Practicality
- Manual vs. automated testing
  - Manual testing is sometimes necessary; human observation can reveal issues that no automated check was written to look for
- Black box testing vs. white box testing
  - How much access do we have to the software's internals?
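
To make the automated side concrete, here is a minimal black-box test sketch in Python. `normalize_email` is a hypothetical function under test, with a stand-in body so the example runs; the test touches only observable inputs and outputs, whereas a white-box test would also inspect or exercise the internals directly.

```python
# Black-box testing: exercise only the public interface, with no
# knowledge of the implementation. normalize_email is hypothetical;
# the stand-in body exists only to make this sketch runnable.

def normalize_email(address):
    return address.strip().lower()

def test_normalize_email_black_box():
    # Assert only on observable input/output behavior.
    assert normalize_email("  Alice@Example.COM ") == "alice@example.com"

if __name__ == "__main__":
    test_normalize_email_black_box()
    print("black-box test passed")
```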
An approach from start to finish:
- Are we doing black box testing or white box testing?
- Who will use it and why?
- What are the use cases?
- What are the bounds of use?
- What are the stress/failure conditions?
- Is it acceptable for the product to fail? If so, what should failure mean? (It shouldn't crash the computer.)
- What are the test cases? How would you perform the testing?
  - Define the test cases
    - Normal case: generating the correct output for typical inputs
    - Extreme cases: empty inputs, very small inputs, very large inputs
    - Nulls and 'illegal' input: e.g., the function expects a number but is given a string
    - Strange input: passing an already-sorted array to a sort function, or a reverse-sorted array
  - Define the expected results
  - Write the test code (see the sketch below)
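
A minimal sketch of such test code in Python, assuming a hypothetical `sort_numbers` function under test (a stand-in implementation is included so the example runs). Each test corresponds to one category above, which keeps the suite organized by intent.

```python
# Test cases for a hypothetical sort_numbers function, one test per
# category above; the stand-in implementation makes the sketch runnable.

def sort_numbers(values):
    """Return the values sorted ascending; reject non-numeric input."""
    if values is None or not all(isinstance(v, (int, float)) for v in values):
        raise TypeError("sort_numbers expects a list of numbers")
    return sorted(values)

def test_normal_case():
    # Normal: typical input yields the correct output
    assert sort_numbers([5, 3, 8, 1]) == [1, 3, 5, 8]

def test_extreme_cases():
    # Extreme: empty, single-element, and very large inputs
    assert sort_numbers([]) == []
    assert sort_numbers([42]) == [42]
    big = list(range(10_000, 0, -1))
    assert sort_numbers(big) == list(range(1, 10_001))

def test_null_and_illegal_input():
    # Nulls/illegal: None, or a string where a number is expected
    for bad in (None, [1, "two", 3]):
        try:
            sort_numbers(bad)
        except TypeError:
            continue
        raise AssertionError(f"expected TypeError for {bad!r}")

def test_strange_input():
    # Strange: already-sorted and reverse-sorted arrays
    assert sort_numbers([1, 2, 3]) == [1, 2, 3]
    assert sort_numbers([3, 2, 1]) == [1, 2, 3]

if __name__ == "__main__":
    for test in (test_normal_case, test_extreme_cases,
                 test_null_and_illegal_input, test_strange_input):
        test()
    print("all test cases passed")
```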
How to debug or troubleshoot an existing issue: instead of reinstalling the software, we can take a systematic approach.
- Understand the scenario
  - How long has the user been experiencing the issue?
  - What version of the browser is it? What OS?
  - Does the issue happen consistently? If not, how often, and when does it happen?
  - Does an error report launch?
- Break down the problem
  - Break the problem down into testable units
  - Create specific, manageable tests (see the sketch below)
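
As an illustration of that last step, here is a minimal Python sketch under an assumed scenario: a hypothetical report pipeline whose totals come out wrong. Each stage becomes its own testable unit with a specific, manageable test, so a failure points at exactly one stage instead of the whole pipeline.

```python
# Hypothetical example: a report pipeline produces wrong totals.
# Instead of testing the whole pipeline at once, isolate each stage
# as a testable unit so the failure can be pinned to one place.

def parse_row(line):
    """Stage 1: parse a 'name, amount' line into a (name, float) pair."""
    name, amount = line.split(",")
    return name.strip(), float(amount)

def apply_discount(amount, rate=0.25):
    """Stage 2: apply a flat discount to one amount."""
    return amount * (1 - rate)

def total(amounts):
    """Stage 3: sum the discounted amounts."""
    return sum(amounts)

def test_parse_row():
    assert parse_row("widget, 10.0") == ("widget", 10.0)

def test_apply_discount():
    assert apply_discount(10.0) == 7.5

def test_total():
    assert total([7.5, 2.5]) == 10.0

if __name__ == "__main__":
    for test in (test_parse_row, test_apply_discount, test_total):
        test()
    print("each stage verified in isolation")
```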