2011.11 Milestone Report


This article summarises what has been done so far for the thesis, and what we want to do next.


Where we are

Since May 2011, we have gathered many resources on data mining and metrics, and we now have a clearer view of where we are and where we want to go.

From there, we identified the following steps in our work:

  1. Set up pertinent quality models for evaluation.
  2. Recommend actions according to the current state of the project.

Set up pertinent quality models

This step is required for two reasons:

  • We need to know whether a release is Good or Bad(tm) in order to classify practices, from a point of view that is as objective as possible.
  • It is one of the SQuORING deliverables.

Theoretical Foundations

We define the following axioms for our analysis model:

Axiom 1
A measure corresponds to the degree of achievement of a practice.
Axiom 2
An attribute of quality corresponds to a set of one or more practices.
Axiom 3
A practice can serve more than one attribute of quality.
Axiom 4
A model can be expressed as a set of attributes of quality.
Axiom 5
A model can be expressed as a set of practices.
Axiom 6
The conformance of a project to a model (i.e. an evaluation of quality) can be measured through metrics (i.e. the degree of achievement of its practices).
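To illustrate Axiom 6, here is a minimal Python sketch of how an evaluation of quality could be derived from the achievement of practices. Everything here is hypothetical (practice names, values, and the mean as aggregation function); it only shows the shape of the computation.

  # Achievement measures of practices, in [0, 1] (Axiom 1; made-up values).
  achievements = {"code_reviews": 0.8, "unit_tests": 0.6, "naming": 0.9}

  # A model as a set of attributes of quality (Axiom 4), each attribute
  # being served by one or more practices (Axioms 2 and 3).
  model = {
      "maintainability": ["unit_tests", "naming"],
      "reliability": ["code_reviews", "unit_tests"],
  }

  def evaluate(model, achievements):
      """Conformance to the model: mean achievement of each attribute's practices."""
      return {attribute: sum(achievements[p] for p in practices) / len(practices)
              for attribute, practices in model.items()}

  print(evaluate(model, achievements))
  # {'maintainability': 0.75, 'reliability': 0.7}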

Data

Given a release R_i, we consider:

  • A set of constraints for the release; constraints are invariable parameters, e.g. language, domain of application, certifications:

C_i = \begin{Bmatrix} c_{i,1} \\ \vdots \\ c_{i,n} \end{Bmatrix}

  • A set of practices[1] (and their achievement measures):

P_i = \begin{Bmatrix} p_{i,1} \\ \vdots \\ p_{i,n} \end{Bmatrix}

  • A set of quality model attribute evaluations for the release. Assume we have j quality/analysis models; each model has k quality attributes/subattributes[2].

Q_i = \begin{Bmatrix} q_{i,1,1} & \cdots & q_{i,1,j} \\ \vdots & \ddots & \vdots \\ q_{i,k,1} & \cdots & q_{i,k,j} \end{Bmatrix}

We have: R_i ( C_i, P_i, Q_i ) = R_i \left( \begin{Bmatrix} c_{i,1} \\ \vdots \\ c_{i,n} \end{Bmatrix}, \begin{Bmatrix} p_{i,1} \\ \vdots \\ p_{i,n} \end{Bmatrix}, \begin{Bmatrix} q_{i,1,1} & \cdots & q_{i,1,j} \\ \vdots & \ddots & \vdots \\ q_{i,k,1} & \cdots & q_{i,k,j} \end{Bmatrix} \right)
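In code, a release could be represented as follows. This is a minimal Python sketch of the triple (C_i, P_i, Q_i) defined above; all field names and values are hypothetical.

  from dataclasses import dataclass

  @dataclass
  class Release:
      constraints: dict[str, str]   # C_i: invariable parameters
      practices: dict[str, float]   # P_i: practice -> achievement measure
      quality: list[list[float]]    # Q_i: k attributes x j models

  r = Release(
      constraints={"language": "java", "domain": "desktop"},
      practices={"unit_tests": 0.6, "code_reviews": 0.8},
      quality=[[0.7, 0.65],   # attribute 1 as seen by models 1 and 2
               [0.5, 0.55]],  # attribute 2 as seen by models 1 and 2
  )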

Correlate quality evaluations (theoretical version)

We want to correlate the following inputs and outputs:

R_i ( C_i, P_i ) = R_i \left( \begin{Bmatrix} c_{i,1} \\ \vdots \\ c_{i,n} \end{Bmatrix}, \begin{Bmatrix} p_{i,1} \\ \vdots \\ p_{i,n} \end{Bmatrix} \right)

\Longrightarrow

Q_i = \begin{Bmatrix} q_{i,1,1} & \cdots & q_{i,1,j} \\ \vdots & \ddots & \vdots \\ q_{i,k,1} & \cdots & q_{i,k,j} \end{Bmatrix}

We want to build a transformation such that:


Q_i = \begin{Bmatrix} q_{i,1,1} & \cdots & q_{i,1,j} \\ \vdots & \ddots & \vdots \\ q_{i,k,1} & \cdots & q_{i,k,j} \end{Bmatrix}

\Longrightarrow

Q'_i = \begin{Bmatrix} q'_{i,1} \\ \vdots \\ q'_{i,n} \end{Bmatrix}
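A trivial candidate for such a transformation, given purely as an illustration (the weights and values are made up; the real transformation is precisely what we want to build), is a weighted combination of the j model evaluations of each attribute:

  import numpy as np

  # Q_i: k = 2 attributes evaluated by j = 2 models (made-up values).
  Q = np.array([[0.7, 0.65],
                [0.5, 0.55]])

  # One weight per model, e.g. how much we trust each model.
  w = np.array([0.6, 0.4])

  # Q'_i: one consolidated evaluation per attribute.
  Q_prime = Q @ w
  print(Q_prime)  # [0.68 0.52]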

Define an optimised model (human version)

Semantically, this model (Q') is:

  • Optimised: it evolves from known and recognised quality models.
  • Weighted: it defines rules, measures and bounds. In that respect, it is more an analysis model than a quality model.
  • Specific to a set of constraints (domain, language, etc.), but generic for all projects that belong to this category. This means that measures, rules, and bounds are identical, so different projects are analysed with the same parameters (and thus are comparable).

Every analysis should be tagged with its constraints; this is used to adapt the recommendations to the domain, language, etc.

Organisation

For this step, we only need to collect known metrics, i.e. those that enter into the evaluation of the different quality models. These models could be:

  • ISO/IEC 9126
  • Other norms (ISO/IEC 15939, ISO/IEC SQuARE, etc.)
  • User Satisfaction surveys (see Data_To_Mine)
  • Manual inputs (e.g. we believe this project should have a good mark).

Once the optimised model is built, we can correlate practices to quality (or what we consider to be quality, according to our optimised model).
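Once the optimised model yields Q' values for a set of releases, the correlation of a practice with an attribute of quality could, for instance, be computed with a rank correlation. A minimal sketch on made-up data:

  import numpy as np
  from scipy.stats import spearmanr

  # One value per release: achievement of a single practice, and the
  # consolidated evaluation of one attribute of quality (made-up data).
  practice = np.array([0.2, 0.4, 0.5, 0.7, 0.9])
  quality = np.array([0.3, 0.35, 0.5, 0.6, 0.8])

  rho, p_value = spearmanr(practice, quality)
  print(rho, p_value)  # rho close to 1: practice strongly tied to quality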

Recommend

We want to recommend actions based on two aspects:

  • Build a classification tree from the experience of projects, to give generic advice (e.g. this practice is highly correlated with this attribute of quality).
  • Use collaborative filtering to find similar contexts and situations (metrics landscape) and propose what worked for them (i.e. there was an improvement of this attribute of quality in this context).

Correlate practices and attributes of quality

The aim is to decide whether a practice is good or bad, considering the impact it has on the quality of the collected projects.

We want as many measures as possible for this step, since we will correlate them all with the quality results of each release. The data to mine for this step are listed on Data_To_Mine.

Classification algorithms will help us build a classification tree that is then used for the recommendations.
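As a minimal sketch of this step, using scikit-learn's decision tree on made-up data (practice achievement measures as features, a Good/Bad label per release):

  import numpy as np
  from sklearn.tree import DecisionTreeClassifier

  # One row per release: achievement measures of two practices (made up).
  X = np.array([[0.9, 0.8],
                [0.2, 0.3],
                [0.7, 0.9],
                [0.1, 0.4]])
  y = np.array([1, 0, 1, 0])  # 1 = Good release, 0 = Bad release

  tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
  print(tree.predict([[0.8, 0.7]]))  # classify a new release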

Note that this step is a SQuORING deliverable. The classification tree may be optional, since collaborative filtering is contextual and thus better adapted to each project's situation.

Collaborative filtering

A project's release history is analogous to a user's history in collaborative filtering[3] recommender systems.

From here, given a release R_c for the current project, we try to find a similar set of releases (i.e. with the same constraints and similar metrics/practices) and recommend actions according to what worked for them (i.e. do this, don't do that, because it did or did not work in those projects).

Let us reuse the previously defined material. We consider each pair of releases R_i, R_{i+1}.

R_i ( C_i, P_i, Q'_i ) = R_i \left( \begin{Bmatrix} c_{i,1} \\ \vdots \\ c_{i,n} \end{Bmatrix}, \begin{Bmatrix} p_{i,1} \\ \vdots \\ p_{i,n} \end{Bmatrix}, \begin{Bmatrix} q'_{i,1} \\ \vdots \\ q'_{i,n} \end{Bmatrix} \right)

\Longrightarrow

R_{i+1} ( C_{i+1}, P_{i+1}, Q'_{i+1} ) = R_{i+1} \left( \begin{Bmatrix} c_{i+1,1} \\ \vdots \\ c_{i+1,n} \end{Bmatrix}, \begin{Bmatrix} p_{i+1,1} \\ \vdots \\ p_{i+1,n} \end{Bmatrix}, \begin{Bmatrix} q'_{i+1,1} \\ \vdots \\ q'_{i+1,n} \end{Bmatrix} \right)

If the set of projects with the same constraints is too small (i.e. the panel is incomplete), we may also use content-based recommender systems. For example, if there are very few automotive embedded-systems projects, we may be able to use aeronautic embedded-systems projects to some extent.
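A minimal sketch of the neighbour search described above, assuming cosine similarity over practice vectors and a strict filter on constraints (all names and values are made up):

  import numpy as np

  def cosine(a, b):
      return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

  # Known releases: (constraints, practice achievement vector).
  releases = [
      ({"language": "java"}, np.array([0.8, 0.6, 0.9])),
      ({"language": "c"},    np.array([0.7, 0.7, 0.8])),
      ({"language": "java"}, np.array([0.2, 0.9, 0.4])),
  ]

  # The current release R_c we want recommendations for.
  current = ({"language": "java"}, np.array([0.75, 0.65, 0.85]))

  # Keep releases with identical constraints, rank them by similarity.
  candidates = [(cosine(current[1], p), c) for c, p in releases
                if c == current[0]]
  candidates.sort(key=lambda t: t[0], reverse=True)
  print(candidates[0])  # most similar release: reuse what worked for it

Relaxing the constraint filter (e.g. matching only on domain family) would implement the content-based fallback mentioned above.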


Where we want to go

For the end of 2011

Over the next month, we propose the following goals:

  • Confirm or refute our current ideas.
  • Validate goals and roadmap.
  • Propose a protocol for project data acquisition: method, tools, data.[4]

As background tasks, the following need to be addressed continuously:

  • Continue readings (data mining, metrics).
  • Write, explain the ongoing work.
  • Improve the Maisqual public and private wikis: make them clear, concise, meaningful.

For 2012

For 2012, we propose the following goals; these should be validated at the end of 2011.

  • Build a large database of software releases with varying constraints.
  • Deliver a first set of optimised models, at least for the most common constraints (e.g. open-source/closed, Java/C/C#, aeronautics/desktop/system).

As background tasks, the following need to be addressed continuously:

  • Continue readings (data mining, metrics).
  • Write, explain the ongoing work.
  • Improve the Maisqual public and private wikis: make them clear, concise, meaningful.

Final goals

For the final year of the thesis, we should attempt to achieve the following:

  • Improve quality models.
  • Improve experience database.
  • Refine the sampling granularity from the release level down to day-to-day activities.

Sampling frequency

As a first step, we want to set the sampling frequency at the release level. This allows us to study the evolution between releases: given the characteristics of release R_i, recommend actions to be performed for the target release R_{i+1}.

Our final target is a real-time analysis that gives advice between releases, for the day-to-day decisions that drive development. We assume that the mechanisms used at the release granularity can be reused quite easily at the real-time granularity.

As background tasks, the following need to be addressed continuously:

  • Continue readings (data mining, metrics).
  • Write, explain the ongoing work.
  • Improve the Maisqual public and private wikis: make them clear, concise, meaningful.


Communication

Maisqual public wiki

We tried to put as much material as possible on the public and private wikis.

The public wiki now has:

  • A glossary of more than 400 definitions.
  • Summaries of 21 papers and articles.
  • A reference of 94 standards related to software quality measurement.

This is not yet a publishable website. It should be reviewed and checked for consistency, professionalism and presentation (design, language).

Ideas for papers

We have considered the following subjects for papers:

  • Take a single project and propose an analysis of the evolution of metrics over the lifetime of the project.
  • EclipseCon[5]: present results of the analysis of some important Eclipse projects (JDT, EMF, XXX). Collaborate with SQuORING for this.

SQuORING open-source analyses

We want to create a new website to present public analyses of some known open-source projects.

The goals are twofold:

  • From the SQuORING point of view, assess the quality of highly reused pieces of software and propose an objective evaluation of the quality of software that is to be used in corporate contexts. Examples of projects include Apache Ant, JBoss.
  • From the research point of view, gather information about pertinent open-source projects, get feedback about them, and collaborate with the community. Let others know what we are doing, gain interest about our research work.

Note however that this may be risky. Communities do not always accept criticism, and great care is needed to make it a genuinely useful and constructive operation.

Nevertheless, this should have a huge communication and marketing impact, and I personally consider it a necessary step.


References

  1. Learn more about practices on the Practice page.
  2. Note that different models may have different numbers of characteristics/subcharacteristics.
  3. See maisqual:Recommender_Systems.
  4. Check the on-going work on the subject: Category:Data to Mine.
  5. Check EclipseCon Europe 2011: http://www.eclipsecon.org/europe2011/.