CRAP Podcast: Romel's essay
The CRAP metric is an attempt to determine how crappy a piece of source code is through static analysis. CRAP stands for Change Risk Analysis and Prediction. The CRAP metric helps determine the risk of breaking something when you make a change to the source code.
Some developers think that the CRAP metric should consider code volume in the formula, that it should not punish case statements, and that dependencies should be detected and processed differently. Savoia says some of these suggestions are being considered, but developers should understand that a single metric won't solve all of their problems.
The foundations of the CRAP metric are McCabe's Cyclomatic Complexity metric and Carnegie Mellon's Software Maintainability Index. It also takes into account the automated test coverage of the piece of code being measured, as a way to reduce the reported complexity, or change risk.
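To make the combination of complexity and test coverage concrete, the published formula for a single method m is CRAP(m) = comp(m)² × (1 − cov(m)/100)³ + comp(m), where comp(m) is its cyclomatic complexity and cov(m) its automated test coverage in percent. The following Java sketch only illustrates that calculation; the class and method names are mine, not part of any CRAP tool.

```java
/**
 * Minimal sketch of the published CRAP formula:
 *   CRAP(m) = comp(m)^2 * (1 - cov(m)/100)^3 + comp(m)
 * where comp(m) is the cyclomatic complexity of method m and
 * cov(m) is its automated test coverage in percent.
 * Class and method names are illustrative only.
 */
public class CrapScore {

    /** Computes the CRAP score for one method. */
    static double crap(int complexity, double coveragePercent) {
        double uncovered = 1.0 - coveragePercent / 100.0;
        return complexity * complexity * Math.pow(uncovered, 3) + complexity;
    }

    public static void main(String[] args) {
        // Complex and completely untested: very crappy.
        System.out.println(crap(10, 0.0));   // 110.0
        // Same complexity, fully covered by tests: score drops to the complexity itself.
        System.out.println(crap(10, 100.0)); // 10.0
    }
}
```

Note how full coverage collapses the score to the raw complexity, while zero coverage makes complexity count quadratically; this is exactly how testing reduces the reported change risk.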
Crappy code represents a higher software maintenance cost for organizations.
Among the drawbacks of the CRAP metric are that it only works on Java and PHP, and that it is only available as an Eclipse plugin. Nevertheless, this is hoped to change.
The CRAP metric has been tested on free software projects and on projects using TDD (Test Driven Development). On projects developed using TDD you can expect a low CRAP score, because the TDD methodology forces you to write tests for everything you code. On the other hand, it is common to find free software projects with crappy code, because too often changes are made without much planning, just to accomplish specific tasks. Supporting multiple platforms can also lead to crappy code.
A high CRAP score represents a higher risk of breaking something when you make a change. CRAP load is another metric provided; it gives an indication of how much effort is needed to refactor crappy code, and a high CRAP load represents a high effort.
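Purely as an illustration (this is not Savoia's actual CRAP load formula, which I am not reproducing here), one way to picture "total refactoring effort" is to sum, over all methods, how far each method's CRAP score sits above the crappiness threshold (30 is the commonly cited default in crap4j). The method and map contents below are made up for the example.

```java
import java.util.Map;

/**
 * Illustrative only: NOT the real CRAP load formula, just a toy aggregate
 * that conveys the idea of "total refactoring effort" by summing how far
 * each method's CRAP score exceeds a crappiness threshold.
 */
public class CrapLoadSketch {

    /** Sums the excess CRAP over the threshold across all measured methods. */
    static double excessOverThreshold(Map<String, Double> crapScores, double threshold) {
        return crapScores.values().stream()
                .mapToDouble(score -> Math.max(0.0, score - threshold))
                .sum();
    }

    public static void main(String[] args) {
        Map<String, Double> scores = Map.of(
                "parseConfig", 110.0,  // complex and untested
                "getName", 1.0);       // trivial, harmless getter
        // Only parseConfig contributes; the higher the total, the more work ahead.
        System.out.println(excessOverThreshold(scores, 30.0)); // 80.0
    }
}
```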
In the end, the CRAP metric is a way to alert developers to how they can improve their code using available tools such as TDD frameworks, so the message is “keep it simple” and “keep it tested”.
The way feedback is gathered for the CRAP metric is very interesting, and I think it will prove to be useful. Software metrics in general too often end up as a paper that closes with “we hope others investigate more on the subject”, and those “others” never come. I think Savoia's approach to validating and improving the metric is the correct way to proceed.