In 2005, my project duties included running unit tests as part of a larger suite every day: a morning and an evening build to identify merge conflicts within a code branch, and a weekly build to identify conflicts across multiple code branches along with unit test failures. Team members received an email with the build results, newly created conflicts, and unit test failures. Developers were expected to look for errors in their own code check-ins and for conflicts in their code modules, and then fix or merge the code accordingly.
In reality, developers ignored or missed these emails in their busy schedules, claiming that the integration build email ran into pages spanning all modules. A few engineers (myself included) were given shared responsibility for ensuring that unit test errors and merge conflicts got resolved the same day, and were allotted time to engage and remind developers until a fix actually landed. I was never happy with the explosion of incoming emails (code reviews, resolutions, and responses), most of which I did not understand in detail.
Still, I found benefits in this exercise. I gained a bigger picture of the solution architecture and a mental map of the entire code base, which let me be a helping hand for integration efforts at the end of agile sprints and milestone demos. Otherwise, it was a boring routine that demanded the highest attention and timely execution. It also gave rise to awkward situations: following up with developers who failed to fix issues in the required time created friction with them, on top of being pulled up for gaps by my manager.
On one of my consulting assignments, I found a project team whose development cycle was never complete and kept adding technical debt. I set myself to work on the mission below.
Unit tests were executed with the Microsoft unit test framework (MSTest.exe) on the TFS build engine. TFS allows a custom operation to run before a check-in is committed; based on the final status of that operation, the check-in is either added to the code repository or rejected. I decided to run an experiment: execute the unit tests prior to check-in to reach my goal.
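The decision logic of such a gate can be sketched in a few lines. This is a minimal Python model of the idea, not the real TFS mechanism (which is a gated check-in running MSTest.exe on the build agent); the function name and the result shape are hypothetical:

```python
def gate_check_in(test_results):
    """Model of a gated check-in decision.

    test_results maps test name -> True (passed) / False (failed).
    In this simple model, any failing test rejects the check-in and
    the code never reaches the repository.
    """
    failures = [name for name, passed in test_results.items() if not passed]
    if failures:
        return {"allowed": False, "failures": failures}
    return {"allowed": True, "failures": []}
```

The point of running this before check-in, rather than in a nightly build, is that the feedback reaches the developer while the change is still on their desk.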
I started the experiment with one team that already had good unit test success. Its developers liked the idea and said the approach was more transparent to the developer; they could see the benefit of validation happening as part of the check-in. Some of them pointed out that the same tests could catch potential errors earlier and reduce the cases where tests passed in isolation but failed in integration runs.
On this positive note we expanded to other teams, and new challenges emerged in running unit tests as part of check-in. To start with, developer check-ins got queued: check-in became a long, time-consuming process and demanded more build resources.
- Developers moving code from an old branch to a new version had not fixed unit test failures, citing a lack of time in sprints for thorough testing.
- Some teams had no failures in their own unit tests, yet were blocked from checking in because other teams' unit tests failed.
- Some teams' algorithm tests performed database operations to retrieve every test input and to store every test result, stretching out the unit test run. They could instead have read test data from an Excel sheet and stored results in the database in batch mode.
- The suite mixed simple unit tests specific to a class (no external interaction) with complex integration tests (interaction with external databases or queues). The complex tests further lengthened the unit test run.
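The batching point above can be illustrated with a small sketch (Python for brevity; the project was on .NET, and the bulk-write hook here is hypothetical). Instead of one database round trip per test, results accumulate in memory and are written once at the end of the run:

```python
class ResultRecorder:
    """Collects per-test results in memory and flushes them in one batch,
    replacing one database round trip per test with a single bulk write."""

    def __init__(self, store_batch):
        # store_batch is the (hypothetical) bulk-write function, e.g. a
        # single multi-row INSERT instead of one INSERT per test result.
        self._store_batch = store_batch
        self._pending = []

    def record(self, test_name, passed):
        self._pending.append((test_name, passed))

    def flush(self):
        if self._pending:
            self._store_batch(self._pending)
            self._pending = []

# Usage: record many results, write once at the end of the run.
written = []
recorder = ResultRecorder(store_batch=written.extend)
for name, ok in [("test_a", True), ("test_b", False)]:
    recorder.record(name, ok)
recorder.flush()  # one bulk write instead of one write per test
```

The same idea applies on the input side: load the test data set once up front rather than querying the database inside every test.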
How did we approach making unit tests run as part of code check-in?
We leveraged the Microsoft unit test framework's support for running selected tests to resolve the unit test mess and streamline unit test execution as part of code check-in.
First, test categories were created for each team. When team members checked in code, only that team's unit tests were executed. Each team could specify a unit test exempt counter that allowed check-ins with unit test failures on a short-term basis: if there were fewer test errors than the team's exempt counter, the check-in went through and the counter was reset down to that lower failure count; if the errors exceeded the counter, the check-in was blocked.
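The ratcheting behaviour of the exempt counter can be sketched as follows. This is a minimal Python model under my reading of the policy, with the exact boundary condition (strictly fewer vs. equal) an assumption:

```python
def check_in_allowed(failure_count, exempt_counter):
    """Exempt-counter policy sketch.

    Failures at or below the counter are tolerated short-term, and the
    counter ratchets down to the current failure count so that a team's
    tolerated failure level can only shrink over time. Failures above
    the counter block the check-in and leave the counter unchanged.
    (Whether the boundary is inclusive is an assumption.)
    """
    if failure_count <= exempt_counter:
        # Allowed; tighten the counter to the new, lower failure level.
        return True, failure_count
    return False, exempt_counter
```

The ratchet is the important design choice: a team can never quietly drift back up to a higher number of tolerated failures.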
Second, each team's tests were categorized as unit tests (simple) and integration tests (complex). We decided that the unit tests would run as part of code check-in and the integration tests as part of the daily integration build.
To bootstrap unit tests for code check-ins, every test in the current suite was initially marked as an integration test. Development teams were then to mark the tests that satisfied the simple definition as unit tests, to run as part of the team's code check-in.
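Conceptually this split works as category filtering. In MSTest it is done with the `TestCategory` attribute and a category filter on the test run; the Python sketch below only models the selection, and the test names are hypothetical:

```python
# Registry of tests tagged with a category, modeling MSTest's
# [TestCategory] attribute and a per-category test run filter.
TESTS = {}

def category(name):
    """Decorator that tags a test function with a category."""
    def tag(func):
        TESTS[func.__name__] = (name, func)
        return func
    return tag

@category("unit")           # simple: no external interaction
def test_discount_math():
    assert round(100 * 0.9) == 90

@category("integration")    # complex: would talk to a database or queue
def test_order_pipeline():
    pass

def select(cat):
    """Return the tests for a given phase: 'unit' runs at check-in,
    'integration' runs in the daily integration build."""
    return [func for (c, func) in TESTS.values() if c == cat]
```

Defaulting everything to "integration" and making teams opt tests *in* to the check-in gate kept the gate fast from day one, at the cost of starting with zero gated tests.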
We started with the first successful check-in, with no unit tests yet running as part of the gate. Teams then began marking qualifying integration tests as unit tests. We had to evangelize with the teams to set their own goals to grow the unit test count every day and every sprint, increasing code coverage.
Developers could add new tests and mark them as unit or integration tests. Code coverage measurement was implemented to additionally track the code covered by unit tests, and teams were given coverage goals to achieve.
Requests to raise the unit test exempt counter, allowing check-in despite failing unit tests, were still permitted. Every exemption request was published to the entire team, which helped team leads drive the focus on bringing unit test failures down to zero.
Third, we worked with the teams to rewrite their unit tests for the better.