In my previous question, I learned a term that was new to me: "defect density" (DD). I googled it and found that defect density = number of defects found / code size.
My questions are:
1- What types of defects should we count: only the defects the team accepted and must fix before release, or all reported bugs?
2- How do we measure code size? Do we count lines of code, or are there other ways?
3- What is the purpose of the defect density metric once we have it?
Phuoc, the metric can help you determine which features are stable and how much regression testing each one needs. It can also highlight which areas are at risk at release time, based on their defect density.
Regarding your concerns:
1 – You can decide yourself which types of bugs to count; I usually group by severity for each metric I build. Count only confirmed bugs (bugs the triage team agrees are real).
2 – Code size can be defined in units (for example, 1,000 lines of code = 1 unit); you then measure how many units a test area or function is worth. As a User Acceptance Tester, though, I am more familiar with another approach: function points. I judge the complexity of a function by the difficulty of testing, the domain, the time effort, and the scope, then assign it a number of points.
Defect density point (DDP) for a function or area = bug count / function points (the bug count can be categorized by severity 1, 2, 3…). For example, say that over the last 4 sprints you found 2 S1 bugs on the Login page (worth 1 FP) and 6 S1 bugs on the Welcome page (worth 2 FPs). Then:
DDP Login page = 2/1 = 2
DDP Welcome page = 6/2 = 3
The higher the DDP, the more regression testing the feature needs compared to the others. If the density is still climbing at the release date, the feature might not be ready to release.
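The calculation above can be sketched in a few lines of Python (the feature names and numbers are taken from the example, not from any real project):

```python
# DDP = severity-bucketed bug count / function points, per feature.
features = {
    "Login page":   {"s1_bugs": 2, "function_points": 1},
    "Welcome page": {"s1_bugs": 6, "function_points": 2},
}

ddp = {name: f["s1_bugs"] / f["function_points"] for name, f in features.items()}

# Rank features by DDP: the highest-density feature needs the most regression testing.
for name, score in sorted(ddp.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: DDP = {score:g}")
# Welcome page: DDP = 3
# Login page: DDP = 2
```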
3 – This screenshot illustrates my answer: https://drive.google.com/file/d/0B8ccy5GjZAhuYTk0djVuZ29odEk/view
This is how I refine the matrix myself: I count the high-priority bugs in each sprint and track the test areas for each feature across sprints. From the numbers in the matrix you can easily see that some related features show a climbing bug count after they are integrated. You can spot the pattern by combining the development history with the up/down movement of bug counts in the matrix.
The two sample charts also show which areas have a history of high bug counts and how stable each feature has been over the last 4 sprints. That tells you which features need more regression/integration testing when a new user story is opened around them.
Thong’s answer is good enough.
However, please be careful when using metrics. Numbers never lie, but they can be misleading. Before adopting a metric, it is better to understand what problem you are facing and why you think the metric can help.
Also, since you are curious about metrics, you can look up "Defect Removal Efficiency" and "Defect Leakage Rate". I think they are quite interesting.
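If you want a head start on those two metrics, their common textbook definitions can be sketched as below. The phase names and the numbers are made-up illustrations, not data from any real project:

```python
# Common textbook definitions (a sketch, not the formula from any specific tool):
#   DRE (Defect Removal Efficiency) = defects removed before release
#                                     / total defects (including post-release)
#   Defect Leakage Rate = defects that escaped to the next phase / total defects

def defect_removal_efficiency(found_before_release: int, found_after_release: int) -> float:
    total = found_before_release + found_after_release
    return found_before_release / total if total else 0.0

def defect_leakage_rate(leaked_to_next_phase: int, found_in_phase: int) -> float:
    total = found_in_phase + leaked_to_next_phase
    return leaked_to_next_phase / total if total else 0.0

# Hypothetical numbers: QA found 45 bugs before release, customers found 5 after.
print(f"DRE = {defect_removal_efficiency(45, 5):.0%}")   # DRE = 90%
print(f"Leakage = {defect_leakage_rate(5, 45):.0%}")     # Leakage = 10%
```

A high DRE and a low leakage rate together suggest that most defects are being caught before they reach the customer.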