First Steps: Dice
Warning
I should warn you that this may be a little frustrating to read, especially if you already use TDD (Test Driven Development). It's akin to watching someone fumble around in a computer game when you know what the solution is. You may find yourself thinking, "Press the Red button twice, then the green button! It's so obvious!", or groaning when I start down an obvious dead end. My apologies in advance, and I am open to constructive criticism.
One of my frustrations as a mathematician was knowing that, given two possible paths to pursue, I would invariably choose the dead-end path first. (:
Starting Here, Starting Now
Ok, so, some infrastructure work first. Setting up a project, connecting to the website, installing a visual FTP app, etc. Done, hello world page uploaded. Excellent.
Baby Steps
According to TDD, I need to start with a failing test. This can include a test that fails because an object can't be created. I've created a first function called TestAll, that will eventually contain all the tests.
First Test - empty test (always passed)
This is a test Test. Whoo hoo! It passed (: I do a little refactoring (already!), deciding that test functions should take two output parameters (strTestResult, blnTestPassed).
(Yes, I still use Hungarian notation. I find it useful, although I probably simply haven't heard the right argument against it yet. I expect I'll give it up eventually.)
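As a sketch, the empty test ends up shaped something like this (the names here are my labels, not necessarily the exact ones in the code):

    ' Hypothetical sketch of the empty first test. VBScript parameters
    ' are ByRef by default, so the caller sees both output values.
    Sub Test_Empty(ByRef strTestResult, ByRef blnTestPassed)
        strTestResult = "Empty test ran"
        blnTestPassed = True
    End Sub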
Error checking
A decision must be made - do I disable fatal error checking (e.g. use On Error Resume Next) and catch errors and report them as failing tests? If so, does the test itself need to do this, or should the master function (TestAll) do this before and after each test?
Without thinking too much about it, I'll go with making the TestAll function disable (and reenable) error checking, assuming that it doesn't reset when crossing function boundaries. I'll test that, obviously.
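A sketch of that plan, assuming the On Error Resume Next set in TestAll really does catch errors raised inside the called test (the very thing I said I'd verify):

    Sub TestAll()
        Dim strTestResult, blnTestPassed
        On Error Resume Next                ' disable fatal error checking
        Err.Clear
        Test_Empty strTestResult, blnTestPassed
        If Err.Number <> 0 Then             ' the test blew up - count it as a failure
            blnTestPassed = False
            strTestResult = "Runtime error: " & Err.Description
            Err.Clear
        End If
        Response.Write strTestResult & " (passed: " & blnTestPassed & ")<br>"
        On Error GoTo 0                     ' re-enable fatal error checking
    End Sub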
Execute (not Copy.Paste)
There's a chunk of code in the TestAll function that clears errors, calls the test function, checks for thrown errors, checks the results of the function, adds the results to the list, and cleans up. I'm a bit tempted to copy/paste this block of 14 lines and just rename the function to be tested each time. The Execute command to the rescue! I can create a generic function for these 14 lines, pass the name of the function in, and use Execute to run it. Cool beans, although I'm a bit wary that any use of Execute is a too-clever solution. Time will tell.
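Roughly the shape I have in mind (a sketch; the trick is that Execute runs the statement in the current scope, so the generated call can see the local strTestResult and blnTestPassed variables):

    Sub RunGenericTest(strTestName, ByRef strAllTestResults)
        Dim strTestResult, blnTestPassed
        On Error Resume Next
        Err.Clear
        ' Build the call as a string and run it in this scope
        Execute strTestName & " strTestResult, blnTestPassed"
        If Err.Number <> 0 Then
            blnTestPassed = False
            strTestResult = "runtime error: " & Err.Description
            Err.Clear
        End If
        strAllTestResults = strAllTestResults & strTestName & ": " & strTestResult & "<br>"
        On Error GoTo 0
    End Sub

TestAll then becomes little more than a list of RunGenericTest "Test_Empty", strAllTestResults calls, one per test.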
Refactoring again
My first attempt at creating a RunGenericTest(strTestName, strAllTestResults) function doesn't work. Simple error (neglected to change a variable name), which reminds me that I need to specify Option Explicit to throw these errors.
Compilation errors are still fatal, as they should be. And embarrassing (as they should be).
I'm already enjoying my RunGenericTest function, as I've decided to reformat the results a bit to number the tests. Had I chosen the copy/paste route, that's at least two places I would have had to change (which would have increased the barrier to making that change). I now have two tests - one empty passing test, and one failing test that throws a VB runtime error (undeclared variable). Excellent. I don't need a failing VBScriptRuntimeError test, so I'll get rid of it.
Implementing RollDieGetValueBetween1And6
I'm currently torn between keeping the class files separate (e.g. CDie.asp, CCharacter.asp, CMap.asp, etc.) and the convenience of intellisense (which only works on classes defined in the same file). Ideally, I'd like to have the header information (function stubs) inline in my test file, and the implementation details separate.
Alternatively, I could define the tests in the class files (where intellisense will work). Perhaps a class.TestAll() function? I'll go down that route. But how will the class file have access to the RunGenericTest function? I'm sure there's an answer, but not sure what it is just yet.
For now, I can kludge together something like the following - copy/paste the contents of the CDie class into the testing page for convenience of working on them. Once I'm ready to move onto a different class, I'll "check in" the current inline version, and include the separate file. Not clean, but I think it'll work.
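For illustration, the inline version is something along these lines (a sketch from memory; the real class file may differ in detail):

    ' Hypothetical inline copy of the CDie class, pasted into the test page.
    ' Randomize should be called once, near the top of the page.
    Class CDie
        Public Function Roll(intSides)
            ' Whole number from 1 to intSides inclusive
            Roll = Int(Rnd * intSides) + 1
        End Function
    End Class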
I will probably eventually group functions by class, but I'm not sure yet if I will include them in the class.
Tests for random events
It's a little difficult to write a test where the expected result is in a range. For example, I have two tests which simulate rolling a 6-sided and a 10-sided die, respectively. In each case, I'm testing for results between 1 and 6 (or 10). But if I screw up and the 10-sided die is really only a 6-sided die, I'll never catch that bug.
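The 6-sided version looks roughly like this (hypothetical names; the 10-sided test is the same with 10 in place of 6):

    Sub Test_RollDie6(ByRef strTestResult, ByRef blnTestPassed)
        Dim objDie, intRoll
        Set objDie = New CDie
        intRoll = objDie.Roll(6)
        blnTestPassed = (intRoll >= 1) And (intRoll <= 6)
        strTestResult = "Rolled a " & intRoll & " on a 6-sided die"
        Set objDie = Nothing
    End Sub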
One answer would be to make the test check that each number in the range appears more or less the same number of times. But even this test will occasionally fail (for example, if all 10 rolls are a "3", just by pure chance). Is it acceptable to have tests that pass 90% of the time? (:
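For what it's worth, a sketch of that frequency check (the sample size and tolerance here are arbitrary choices on my part, picked so a fair die should almost always pass):

    Sub Test_RollDie6Distribution(ByRef strTestResult, ByRef blnTestPassed)
        Dim objDie, arrCounts(6), intRoll, i
        Const NUM_ROLLS = 600               ' expect each face roughly 100 times
        Set objDie = New CDie
        For i = 1 To NUM_ROLLS
            intRoll = objDie.Roll(6)
            arrCounts(intRoll) = arrCounts(intRoll) + 1
        Next
        blnTestPassed = True
        For i = 1 To 6
            ' Loose tolerance: each face should appear at least half as
            ' often as expected. This will still fail once in a while.
            If arrCounts(i) < 50 Then blnTestPassed = False
        Next
        strTestResult = "Distribution check over " & NUM_ROLLS & " rolls"
        Set objDie = Nothing
    End Sub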