Eclipse

This is the homepage of the work around the Eclipse Quality Model. More information can be found in the following places:

* Checked rules have been summarised in another page: [[Eclipse Rules]].
 
= Quality, the Eclipse way =
 
== Definitions of Software quality ==
 
== Perspectives on quality ==
 
== Defining a quality model ==
 
There are plenty of software quality models, from the early quality models of McCall and Boehm to the ISO 9126 and ISO SQuaRE (ISO/IEC 25000 series) standards. Since quality varies according to the domain (safety, testability, or usability do not always have the same importance in a software product), some quality models have been published for specific domains, such as HIS for automotive, DO-178 for aeronautics, or ECSS for space. The model we intend to use needs to be adapted because:
 
# Embedded systems share some characteristics, but may really differ according to their application.
# Eclipse (and thus Polarsys) is a free software organisation, and open-source projects have specific concerns, both in quality and in metrics availability. As examples, we may cite:
## Maintainability is of great importance, since the project is intended to be modified by anyone.
## Contributions may not be consistent in the conventions used, patterns applied, or process followed, which may introduce some bias in measurement.
## Community-related attributes (health, activity, popularity?) should be included in the model.
 
A quality model shall meet the following requirements:
 
* It should be usable from top to bottom: users shall be able to understand how quality is decomposed.
* It should be usable from bottom to top: quality shall be computable from the retrieved metrics up to the quality characteristics (see the sketch after this list).
* It should include the [http://maisqual.squoring.com/wiki/index.php/Garvin_Perspectives_on_Quality five perspectives on quality] as defined by Garvin.
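
As an illustration of the bottom-to-top requirement, here is a minimal sketch of how a quality characteristic could be computed from lower-level scores. The tree fragment, the node names and weights, and the weighted-mean aggregation are hypothetical choices for illustration, not part of the model:

<pre>
import java.util.List;

/**
 * Minimal sketch of a bottom-up quality computation: each node of the
 * quality tree is either a leaf holding a normalised metric value or an
 * aggregate whose score is the weighted mean of its children's scores.
 * Node names and weights are hypothetical.
 */
public class QualityNode {
    final String name;
    final double weight;          // weight of this node within its parent
    final double leafScore;       // used only when the node has no children
    final List<QualityNode> children;

    QualityNode(String name, double weight, double leafScore, List<QualityNode> children) {
        this.name = name;
        this.weight = weight;
        this.leafScore = leafScore;
        this.children = children;
    }

    /** Score in [0, 1]: the leaf value, or the weighted mean of the children. */
    double score() {
        if (children.isEmpty()) {
            return leafScore;
        }
        double sum = 0.0;
        double weights = 0.0;
        for (QualityNode child : children) {
            sum += child.weight * child.score();
            weights += child.weight;
        }
        return sum / weights;
    }

    public static void main(String[] args) {
        // Hypothetical fragment: a product characteristic aggregated from two
        // sub-characteristics whose scores were computed from metrics.
        QualityNode analysability = new QualityNode("Analysability", 1.0, 0.8, List.of());
        QualityNode changeability = new QualityNode("Changeability", 2.0, 0.5, List.of());
        QualityNode product = new QualityNode("Product", 1.0, 0.0,
                List.of(analysability, changeability));
        System.out.println(product.score()); // (1.0 * 0.8 + 2.0 * 0.5) / 3.0 = 0.6
    }
}
</pre>

Reading the tree downwards gives the top-to-bottom decomposition; calling score() on the root gives the bottom-to-top computation.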
 
= Model decomposition =

''Maisqual Quality Model decomposition''

== Product ==

=== Analysability ===

Goal:

Question:

Decomposition:

* Class Analysability Index: sum of class Analysability technical debts divided by the total number of classes (see the sketch after this list).
** Number of attributes
** Number of methods
** Depth of Inheritance Tree
** Weighted Methods per Class
** Comment rate
* Function Analysability Index: sum of function Analysability technical debts divided by the total number of functions.
** Control flow complexity
*** Maximum nested structures
*** Non-cyclic paths
*** Cyclomatic complexity
** Data flow complexity
*** Number of parameters: number of formal parameters in the function
*** Halstead's Program Vocabulary: number of distinct operators and operands (n1 + n2) [Halstead76]
*** Halstead's Program Length: total number of operators and operands (N1 + N2) [Halstead76]
** Comment rate
* Number of Analysability Non Compliances: number of non-conformities to Analysability rules found in the artefact.
* Compliance to Analysability Practices: ratio of compliant Analysability practices to the number of checked Analysability practices.
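
A minimal sketch of the index computation described above, assuming each class's Analysability technical debt has already been derived from the base metrics listed (attributes, methods, DIT, WMC, comment rate); the debt values used here are hypothetical:

<pre>
import java.util.List;

public class AnalysabilityIndex {

    /**
     * Class Analysability Index: sum of per-class Analysability technical
     * debts divided by the total number of classes.
     */
    static double classAnalysabilityIndex(List<Double> classDebts) {
        if (classDebts.isEmpty()) {
            return 0.0; // assumption: no classes means no debt
        }
        double total = 0.0;
        for (double debt : classDebts) {
            total += debt;
        }
        return total / classDebts.size();
    }

    public static void main(String[] args) {
        // Hypothetical debt values for three classes.
        System.out.println(classAnalysabilityIndex(List.of(2.0, 0.5, 1.5))); // 1.3333...
    }
}
</pre>

The Function Analysability Index follows the same pattern, with functions instead of classes.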

=== Changeability ===

Goal:

Question:

Decomposition:

* Class Changeability Index: sum of class Changeability technical debts divided by the total number of classes.
** Number of public attributes
** Number of public methods
** Weighted Methods per Class: sum of the cyclomatic complexities of the functions and methods defined in the class.
* Function Changeability Index: sum of function Changeability technical debts divided by the total number of functions.
** Control flow complexity
*** Maximum nested structures
*** Non-cyclic paths
*** Cyclomatic complexity
** Data flow complexity
*** Number of parameters: number of formal parameters in the function
*** Halstead's Program Vocabulary: number of distinct operators and operands (n1 + n2) [Halstead76] (see the sketch after this list)
*** Halstead's Program Length: total number of operators and operands (N1 + N2) [Halstead76]
** Function code cloning
* Number of Changeability Non Compliances: number of non-conformities to Changeability rules found in the artefact.
* Compliance to Changeability Practices: ratio of compliant Changeability practices to the number of checked Changeability practices.
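
Both the Analysability and Changeability indexes rely on Halstead's size measures. Here is a minimal sketch of their computation, assuming operators and operands have already been extracted by a lexer; the token lists are hypothetical:

<pre>
import java.util.HashSet;
import java.util.List;

public class HalsteadMetrics {

    public static void main(String[] args) {
        // Tokens of a hypothetical one-line function body: return a + b + 1;
        List<String> operators = List.of("return", "+", "+", ";");
        List<String> operands = List.of("a", "b", "1");

        int n1 = new HashSet<>(operators).size(); // distinct operators
        int n2 = new HashSet<>(operands).size();  // distinct operands
        int N1 = operators.size();                // total operators
        int N2 = operands.size();                 // total operands

        System.out.println("Program Vocabulary n = n1 + n2 = " + (n1 + n2)); // 3 + 3 = 6
        System.out.println("Program Length    N = N1 + N2 = " + (N1 + N2)); // 4 + 3 = 7
    }
}
</pre>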

=== Reusability ===

Goal: Measure the ability for others to reuse the software.

Question: How easy is it to reuse the software artefact?

Metrics:

* Class Reusability Index: sum of class Reusability technical debts divided by the total number of classes.
** Number of public attributes
** Number of public methods
** Weighted Methods per Class
** Class Javadoc: number of comment lines in the file minus the number of comment lines in the file's functions. TODO
** Class comment rate TODO
* Number of Reusability Non Compliances: number of non-conformities to Reusability rules found in the artefact.
* Compliance to Reusability Practices: ratio of compliant Reusability practices to the number of checked Reusability practices.

=== Reliability ===

Goal: Measure the capability of the software product to avoid failure as a result of faults in the software.

Question: How well does the software product avoid failure as a result of faults in the software?

Metrics:

* File Reliability Index: sum of file Reliability technical debts divided by the total number of files.
** Ratio of fix-related commits on the file.
** Number of public data in classes
** Number of test cases for classes. (NOT AVAILABLE FOR NOW)
* Function Reliability Index: sum of function Reliability technical debts divided by the total number of functions.
** Control flow complexity
** Data flow complexity
** Number of test cases for the current function. (NOT AVAILABLE FOR NOW)
* Registered bugs (NOT AVAILABLE FOR NOW)
** Mean time to fix bugs.
** Number of open bugs at the time of analysis.
** Defect density: total number of bugs on the product divided by the number of thousands of lines of code (KLOC); see the worked example after this list.
* Number of Reliability Non Compliances: number of non-conformities to Reliability rules found in the artefact.
* Compliance to Reliability Practices: ratio of compliant Reliability practices to the number of checked Reliability practices.
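
As an illustration of the defect density formula, a minimal sketch with hypothetical figures:

<pre>
public class DefectDensity {

    /** Defect density = total number of bugs / (lines of code / 1000). */
    static double defectDensity(int totalBugs, int linesOfCode) {
        return totalBugs / (linesOfCode / 1000.0);
    }

    public static void main(String[] args) {
        // Hypothetical product: 42 registered bugs in 120,000 lines of code.
        System.out.println(defectDensity(42, 120_000)); // 0.35 defects per KLOC
    }
}
</pre>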

== Process ==

=== Change and Configuration Management ===

Goal: Measure how the history and consistency of the software product are managed.

Question: Is any user able to retrieve any consistent and complete set of files from the project?

Metrics:

* Completeness
** Is there a change management tool?
** Is there a configuration management tool?
* Consistency and completeness
** Is everything included to build the project?
* Collaborative work
** Are there
** How many branches are there?
* Integrated CCM
** Are there conventions to track commits against CRs?
** Are there hooks to ensure coding or naming conventions?

=== Intellectual Property Management ===

Goal: Make sure that the intellectual property rights are correct.

Question: To what extent are intellectual property rights respected?

Metrics:

* User IP log
* Commits IP log
* Ratio of licences provided.

=== Test Management ===

Goal: Make sure that the software has enough testing.

Question: How thoroughly is the software tested?

Metrics:

* Are there tests?
* Number of tests
* Test coverage
* Continuous testing?

=== Project Planning ===

Goal: Make sure the project has predictable outputs, in time, content, and quality.

Question:

Metrics:

* Number of milestones
* Predictability
** Dates announcement
** Dates achievement

=== Build and Release Management ===

Goal: Make sure that

Question:

Metrics:

* Continuous Integration:
** Has a continuous integration (CI) server
** Has a continuous testing (CT) server
* Number of failed builds
* Total number of builds

== Community ==

=== Activity ===

Goal: Measure the recent activity of the project.

Question: How much was the project recently modified?

Metrics:

* Number of commits, computed on the last week, last month, and last three months (see the sketch after this list).
* Number of committed files, computed on the last week, last month, and last three months.
* Number of emails exchanged on the dev mailing list, computed on the last week, last month, and last three months.
* Number of emails exchanged on the user mailing list, computed on the last week, last month, and last three months.
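
A minimal sketch of how such windowed counts could be computed once the commit dates have been retrieved from the repository; the timestamps are hypothetical:

<pre>
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.List;

public class ActivityMetrics {

    /** Number of commits strictly more recent than the given threshold. */
    static long commitsSince(List<Instant> commits, Instant threshold) {
        return commits.stream().filter(c -> c.isAfter(threshold)).count();
    }

    public static void main(String[] args) {
        Instant now = Instant.now();
        // Hypothetical commit dates: 2, 20 and 80 days ago.
        List<Instant> commits = List.of(
                now.minus(2, ChronoUnit.DAYS),
                now.minus(20, ChronoUnit.DAYS),
                now.minus(80, ChronoUnit.DAYS));

        System.out.println(commitsSince(commits, now.minus(7, ChronoUnit.DAYS)));  // last week: 1
        System.out.println(commitsSince(commits, now.minus(30, ChronoUnit.DAYS))); // last month: 2
        System.out.println(commitsSince(commits, now.minus(90, ChronoUnit.DAYS))); // last three months: 3
    }
}
</pre>

The same windowing applies to committed files and to mailing list traffic.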

=== Diversity ===

Goal: Measure the diversity of actors in the project.

Question: How many different users are involved in the project?

Metrics:

* Number of distinct committers to the SCM repository, computed on the last week, last month, and last three months.
* Number of distinct authors on the user mailing list, computed on the last week, last month, and last three months.
* Number of distinct authors on the dev mailing list, computed on the last week, last month, and last three months.

=== Responsiveness ===

Goal: Measure the time to answer requests.

Question: How fast do users get answers to their support requests?

Metrics:

* Median response time on the dev mailing list: the median time to first answer to a request on the dev mailing list, computed on the last week, last month, and last three months (see the sketch after this list).
* Median response time on the user mailing list: the median time to first answer to a request on the user mailing list, computed on the last week, last month, and last three months.
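
A minimal sketch of the median computation, assuming the time to first answer of each thread has already been extracted from the mailing list archives; the durations are hypothetical:

<pre>
import java.time.Duration;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class Responsiveness {

    /** Median of the time-to-first-answer durations of mailing list threads. */
    static Duration medianResponseTime(List<Duration> delays) {
        List<Duration> sorted = new ArrayList<>(delays);
        sorted.sort(Comparator.naturalOrder());
        int n = sorted.size();
        if (n % 2 == 1) {
            return sorted.get(n / 2);
        }
        // Even count: mean of the two middle values.
        return sorted.get(n / 2 - 1).plus(sorted.get(n / 2)).dividedBy(2);
    }

    public static void main(String[] args) {
        // Hypothetical delays: 2 hours, 30 hours, 5 hours.
        List<Duration> delays = List.of(
                Duration.ofHours(2), Duration.ofHours(30), Duration.ofHours(5));
        System.out.println(medianResponseTime(delays)); // PT5H
    }
}
</pre>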

=== Support ===

Goal: Measure the ability to get support on the project.

Question: How many requests are satisfied?

Metrics:

* Response ratio on the dev mailing list: the average number of answers to a single request on the dev mailing list, computed on the last week, last month, and last three months.
* Response ratio on the user mailing list: the average number of answers to a single request on the user mailing list, computed on the last week, last month, and last three months.
* Number of subjects on the dev mailing list: the number of threads on the dev mailing list, computed on the last week, last month, and last three months.
* Number of subjects on the user mailing list: the number of threads on the user mailing list, computed on the last week, last month, and last three months.