
TestBox: Shooting for Test Reuse

For a recent project, I was trying to force myself to create unit and integration tests as part of the scaffolding. I had gotten to the point where my basic tests were moving along well, and I was moving up the diagram from unit tests to integration tests. The basic project makeup is here:

In working on the second tier CFCs (the image renderer, the document renderer and the message renderer) I found myself starting to repeat some of the same tests again and again. The reason for this was pretty straightforward. All three of the renderers needed to return a struct with the same keys to the Decision Maker*, so all three had a set of the same tests applied to them. This got tedious both to type out and to maintain as the needs of the structures morphed over time. Basically, the output from those renderers, and any others we'd develop in the future, needed to comply with an interface. All of them needed to have the same keys in the struct, but the values would change based on the type of file submitted, whether it was supported, bad characters in the file name and several other factors. This was compounded by the fact that I needed to submit several files of various extensions and formats to each of the renderers, looking for things that would break. The values returned from the renderers would change based on the file formats submitted.
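To make the shared expectations concrete, here is a rough sketch of the shape each renderer returns. The fileRendered and failedFiles keys appear in the tests later in this post; any other keys would vary by renderer and are not shown:

```cfml
// Hypothetical sketch of a renderer result, for illustration only.
// Each renderer returns an array of at least one struct; fileRendered and
// failedFiles are the keys exercised by the tests below.
result = [
    {
        "fileRendered" : true,  // did this file render successfully?
        "failedFiles"  : []     // any files that could not be processed
    }
];
```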

Just like we've been trained to look for code reuse, the desire for "test reuse" was making itself known. I'm calling it test reuse because "this is getting very tedious, it's a pain in the neck and I'm making mistakes" makes me sound like I'm lazy and have a bad attitude. But I digress.

When I started to break down the tests I was repeating I found that, even though they were all unit tests, they fell into three categories. 

1. "Interface" tests - common to all the renderers, these tested that the result complied with what the Decision Maker was expecting. This included testing for the existence of certain keys in the returned object and also some data integrity checks.

2. Format Tests - common to all file submissions to a particular renderer

3. Specific submission tests - these were unique tests for each submission. 

The question then was how to set up these tests in a way that they could be easily reused. It turned out to be pretty straightforward, but it took a little bit to wrap my head around it. Rather than go through the experimentation process, I'll just post it.

Here is the interface test level (PDFRenderedFileInterface.cfc):

component extends="testbox.system.BaseSpec"{

    function interfaceTests( filename, thisFileResult ){
        describe("#filename#", function(){
            it("should be an array", function(){
                expect( thisFileResult ).toBeTypeOf("array");
            });
            it("should be an array of at least 1 item", function(){
                expect( ArrayLen(thisFileResult) ).toBeGTE(1);
            });
        });
    }

}

Key points
  1. This extends BaseSpec as per usual.
  2. The main function, typically called run(), has been renamed to interfaceTests(). The reason for this is straightforward. If we are going to control which file is being tested, that file path has to originate from the most specific test suite, i.e. not this one. Therefore, any information needs to be passed in. If this CFC had a run() function and TestBox tried to run it as if it were a self-contained test suite, it would throw an error, not just fail, because there wouldn't be a filename or anything to test passed into it.
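For context, this is roughly how these suites get picked up. This is a sketch, and the directory path is an assumption; the point is that TestBox executes the run() method of each spec CFC it discovers, so a CFC whose main method has a different name isn't run directly:

```cfml
// Hypothetical directory runner (the path is an assumption for this post).
// TestBox calls run() on each spec CFC it finds; since
// PDFRenderedFileInterface.cfc has no run(), its tests only execute when a
// more specific suite calls interfaceTests() with real arguments.
testbox = new testbox.system.TestBox( directory = "tests.pdfCreation" );
writeOutput( testbox.run() );
```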



The "medium", format-specific test looks like this (BaseImageTest.cfc):

component extends="tests.pdfCreation.PDFRenderedFileInterface"{

/*********************************** LIFE CYCLE Methods ***********************************/

    // executes before all suites+specs in the run() method
    function beforeAll(){
    }

    // executes after all suites+specs in the run() method
    function afterAll(){
    }

/*********************************** BDD SUITES ***********************************/

    function imageTests( imagelist, thisFileResult ){
        describe("Interface Tests", function(){
            interfaceTests( imagelist, thisFileResult );
        });
    }

    function getPDF( imagelist ){
        pdfObj = createObject("component", "#application.cfcpath#.pdfCreation");
        filepath = "#application.pdfTestSampleFiles#\#imagelist#";
        result = pdfObj.pdfFromImage( filepath );
        thisFileResult = result[1];
        return thisFileResult;
    }

    function isPDF( data ){
        pdfService = new com.adobe.coldfusion.pdf();
        testPDF = pdfService.read( source = data );
        return testPDF;
    }

}

Key Points
  1. This extends the interface-level test CFC, so it still has all the functions from TestBox available to it.
  2. It also has its run function renamed to keep TestBox from calling it directly.
  3. It has a describe() call which invokes the interface tests from its parent. As long as there is a describe() here and a describe() in the interface tests, this works well.
  4. It has two other functions which are shared among all the specific image tests. This is simple code reuse, not "test reuse", since each of the specific-level tests runs this same process to get its testing result.
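The isPDF() helper isn't exercised in the snippets shown here, but a spec at the most specific level could lean on it like this. This is a hypothetical example; the renderedFilePath key is an assumption, not a real key from the result struct:

```cfml
it("should produce a readable PDF", function(){
    // isPDF() is inherited from BaseImageTest.cfc; pdfService.read() throws
    // if the source is not a valid PDF, so surviving the call is the check.
    // "renderedFilePath" is a hypothetical key used for illustration only.
    expect( function(){ isPDF( thisImageResult.renderedFilePath ); } ).notToThrow();
});
```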


And the most specific test is here (pdfcreationImagesBMP.cfc):
component extends="tests.pdfcreation.imagetests.baseImageTest"{

    /*********************************** LIFE CYCLE Methods ***********************************/

    // executes before all suites+specs in the run() method
    function beforeAll(){
    }

    // executes after all suites+specs in the run() method
    function afterAll(){
    }

    /*********************************** BDD SUITES ***********************************/

    function run(){
        describe("Image Tests BMP", function(){
            fileName = "testImage.bmp";
            thisImageResult = getPDF( fileName );
            imageTests( fileName, thisImageResult );
        });

        describe("Run the specific Tests", function(){
            it("should pass the fileRendered", function(){
                expect( thisImageResult.fileRendered ).toBeTrue();
            });
            it("should have a failedFiles array with a length of 0", function(){
                expect( ArrayLen(thisImageResult.failedFiles) ).toBe(0);
            });
        });
    }

}

Key Points
  1. This extends the middle-level tests, so it has access to both the TestBox functions (via its parent's parent) and the common functions needed for all image tests (via its parent).
  2. This level controls the path to the file being tested. When it runs, it submits the filename to the process, which returns a result. That result is then passed to imageTests() on BaseImageTest.cfc, which in turn runs the interface tests on PDFRenderedFileInterface.cfc.
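Put together, the inheritance chain looks like this (most specific suite at the top; only the top one has a run() for TestBox to execute):

```
pdfcreationImagesBMP.cfc                     run(): the entry point TestBox executes
  extends baseImageTest.cfc                  imageTests(), getPDF(), isPDF()
    extends PDFRenderedFileInterface.cfc     interfaceTests()
      extends testbox.system.BaseSpec        describe(), it(), expect(), etc.
```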
This is probably rough and violates more than a few best practices, but I think it shows how to extend test suites to reuse common items.


* Yes, there are probably more formal names and design patterns for these, but since I don't know what they might be at the moment, and that isn't the point of this write-up, I'm not going to worry about it.
