Machine Learning Techniques And Incremental Learning


Global and distributed software development makes it essential to find and connect developers with relevant expertise. Bug assignment is difficult to accomplish manually: a bug must first be assigned to a developer and, if that assignee cannot resolve it, reassigned to another promising developer, repeating this reassignment until the bug is fixed. In open source software development, the additional issue of assigning the bug to an active potential developer must also be tackled. Effective bug assignment has the potential to significantly reduce software evolution effort and cost.

Contemporary methods of automating bug assignment include various machine learning techniques and tossing graphs. Machine learning approaches use classifiers such as Naïve Bayes, Bayesian networks, C4.5, and SVM. The success of each machine learning approach depends on the training data set (the fixed bug reports) used. In many of these approaches, out-dated datasets, inactive developers, and imprecise single-attribute tossing graphs degrade prediction accuracy. The classifiers must also be kept up to date by making them learn from each new bug assignment.


We emphasize using a subset of the training data to achieve accurate yet efficient bug classification, reducing the computational effort associated with training. Our focus is to apply a slightly modified Naïve Bayes technique with a wide range of feature selection to provide high prediction accuracy while reducing training and prediction time.


Machine Learning Techniques And Incremental Learning

P. Bhattacharya proposed an experiment for automating bug assignment using a Naïve Bayes classifier and then further optimizing the result using Bayesian networks. According to their findings, incremental learning helps to improve accuracy. Their approach gives a prediction accuracy of about 27.67% for the top-1 developer and up to 65.7% for the top-5 developers on the Mozilla dataset. Our approach is similar, and tries to improve this accuracy before it is fed into the tossing graphs for optimization.

P. Bhattacharya (2010) is the prior work of P. Bhattacharya, which introduced the idea of fine-grained incremental learning and of merging multi-feature tossing graphs. They introduced the product-component pair as a feature, which gave better performance.

Lin conducted a bug triage experiment using the SVM classification algorithm with split-sample and cross-sample validation techniques on SoftPM, a proprietary Chinese bug dataset. They found that introducing "Module ID" (the module a bug belongs to) as a classification feature improved triage accuracy: their experiment reported an accuracy of 77.64% when the module ID was considered, which dropped to 63% when it was not. This feature appears in the method proposed by P. Bhattacharya (2010, 2012), as well as in our approach, as the product-component pair; but our approach includes many other features to give more specificity.

Matter et al. (2009) used a vocabulary-based model to classify developers by their expertise as a preprocessing step for bug triage. Their experiment created a vocabulary-based expertise and interest model of developers, which helped to set better triage criteria.

John Anvik (2006) gave a demonstrative approach to semi-automating bug assignment, with information on the use of different recommendation algorithms such as supervised machine learning algorithms, clustering algorithms, and expertise networks. J. Anvik et al. (2006) used SVM classifiers for automating bug triage; Naïve Bayes and C4.5 were also implemented and compared for accuracy.

Data Preprocessing Techniques

Amir et al. (2012) proposed an N-gram based algorithm for approximate string matching at the character level. It can assist human triagers with an accuracy of 52.76% in string matching, and performs data preprocessing using the CPMerge algorithm on the bug description.

D. Cubranic et al. (2004) are considered the first to attempt automating bug triage. They proposed the idea of truncating the vocabulary. Their approach used supervised Bayesian learning and gave an accuracy of up to 30% on the Eclipse dataset.


Problem Definition

A software bug is an error, flaw, mistake, failure, or fault in a computer program or system that produces an incorrect or unexpected result, or causes it to behave in unintended ways. The consequences of bugs can be extremely serious. It is common practice for software to be released with known bugs that are considered non-critical, that is, bugs that do not affect most users' main experience with the product. Hence bug triage becomes essential.


Bug triage is a process related to Bugzilla's bug reports. It means closing reports that are obviously about invalid, duplicate, or won't-fix bugs, and making sure the remaining reports are treated correctly.

The main objective of our project is to make bug triage more efficient by improving the accuracy of classification at the very first stage of prediction. Our approach omits out-dated datasets and inactive developers from the triage, and tries to give more specificity during classification with an increased number of selected features.

Life Cycle Of A Bug

Bugs move through a series of states over their lifetime. We illustrate these states using the life-cycle of a bug report for the Mozilla bug project.


FIG 1: Life Of Bug [Bugzilla bug dataset]

When a bug report is submitted to the repository, its status is set to NEW. Once a developer has either been assigned to or accepted responsibility for the report, the status is set to ASSIGNED. When a report is closed its status is set to RESOLVED. It may further be marked as being verified (VERIFIED) or closed for good (CLOSED). A report can be resolved in a number of ways; the resolution status in the bug report is used to record how the report was resolved. If the resolution resulted in a change to the code base, the bug is resolved as FIXED. When a developer determines that the report is a duplicate of an existing report, it is marked as DUPLICATE. If the developer was unable to reproduce the bug, this is indicated by setting the resolution status to WORKSFORME. If the report describes a problem that will not be fixed, or is not an actual bug, the report is marked as WONTFIX or INVALID, respectively. A formerly resolved report may be reopened at a later date, and will have its status set to REOPENED.
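As a hedged illustration, the status flow above can be captured as a small transition map. The state names follow the text; the exact transition set is an assumption for illustration, not the authoritative Bugzilla workflow:

```python
# Sketch of the bug-report status transitions described above.
TRANSITIONS = {
    "NEW": {"ASSIGNED"},
    "ASSIGNED": {"RESOLVED"},
    "RESOLVED": {"VERIFIED", "CLOSED", "REOPENED"},
    "VERIFIED": {"CLOSED", "REOPENED"},
    "CLOSED": {"REOPENED"},
    "REOPENED": {"ASSIGNED", "RESOLVED"},
}

# Ways a report can be resolved, per the text.
RESOLUTIONS = {"FIXED", "DUPLICATE", "WORKSFORME", "WONTFIX", "INVALID"}

def can_transition(current: str, target: str) -> bool:
    """Return True if a report may move from `current` to `target`."""
    return target in TRANSITIONS.get(current, set())
```

For example, `can_transition("NEW", "ASSIGNED")` holds, while a direct jump from NEW to CLOSED does not appear in this sketch.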


Our approach uses a wider range of data to train the classifier: the Mozilla bug dataset from May 1998 to July 2012. Hence it gives higher prediction accuracy. The architecture of this system is briefly described by the figure below:


FIG 2: Flowchart Depicting Bug Triage Procedure


Data Preprocessing

The bug dataset contains a large number of bug records, but not all are used to train the classifier, as some may degrade its performance. As proposed by Anvik et al. (2006), we keep only reports whose resolution is "FIXED" and whose status is "VERIFIED" or "RESOLVED". Our approach analyzes the short description and comments of a bug. The bug description is categorized to analyze the importance of each word in it and to find developers who have solved similar bugs based on word familiarity, using an approach similar to that described by Cubranic (2004). The data preprocessing techniques of tokenization, including stemming, stop-word and non-alphabetic word removal, and tf-idf are performed to assist this analysis.
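A minimal sketch of this preprocessing, assuming a toy stop-word list and a smoothed tf-idf variant; stemming (e.g. the Porter algorithm) is omitted here for brevity:

```python
import math
import re

# Tiny illustrative stop-word list; a real pipeline would use a full one.
STOP_WORDS = {"the", "a", "an", "is", "in", "of", "to", "and", "when", "on"}

def tokenize(text):
    """Lowercase, keep alphabetic tokens only, drop stop words."""
    words = re.findall(r"[a-z]+", text.lower())
    return [w for w in words if w not in STOP_WORDS]

def tf_idf(term, doc, corpus):
    """Smoothed tf-idf: term frequency in `doc` times log-scaled inverse
    document frequency over `corpus` (a list of token lists)."""
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in corpus if term in d)
    idf = math.log((1 + len(corpus)) / (1 + df)) + 1
    return tf * idf
```

A term that occurs in fewer bug descriptions (e.g. "crash") then scores higher than one shared across reports (e.g. "menu"), which is what makes rare description words discriminative for developer prediction.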

Naïve Bayes Classifier

Numerous machine learning techniques have been tried for bug triage, and their actual efficiency depends on the dataset used. As this task involves only text classification, according to the findings of P. Bhattacharya (2012), a simple Naïve Bayes classifier can perform with accuracy equal to classifiers requiring more complex calculations. A Naïve Bayes classifier assigns the bugs to potential developers. It uses the Bayesian formula as its base: Bayes' theorem gives the relationship between the probabilities of developer D and component C, P(D) and P(C), and the conditional probabilities of D given C and C given D, P(D|C) and P(C|D). In its most common form, it is:

P(D|C) = P(C|D) · P(D) / P(C)        Equation (1)

It expresses how a subjective degree of belief should rationally change to account for evidence. Using the Naïve Bayes classifier we calculate, for each developer, the probability:

P(Developeri | product_id, component_id, no_of_fixes, relevant_words) Equation (2)

This is the probability that developer i solves the bug, given the product-component (P-C) pair that the bug belongs to, the number of bugs fixed by that developer in that P-C pair, and the relevant words in the bug description obtained after tokenization. The top 5 experts, ranked by this probability, are selected.
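The ranking in Equation (2) can be sketched as follows. This is a minimal illustration (developer names and feature strings are hypothetical): each feature value is treated as conditionally independent given the developer, with Laplace smoothing to handle unseen features:

```python
import math
from collections import Counter, defaultdict

class DeveloperNB:
    """Minimal Naive Bayes sketch: rank developers for a new bug from past
    fixes. Features (product, component, relevant words) are treated as
    conditionally independent given the developer."""

    def __init__(self):
        self.dev_counts = Counter()              # bugs fixed per developer
        self.feat_counts = defaultdict(Counter)  # developer -> feature -> count
        self.vocab = set()

    def train(self, developer, features):
        self.dev_counts[developer] += 1
        for f in features:
            self.feat_counts[developer][f] += 1
            self.vocab.add(f)

    def log_score(self, developer, features):
        total = sum(self.dev_counts.values())
        score = math.log(self.dev_counts[developer] / total)  # prior P(D)
        n = sum(self.feat_counts[developer].values())
        for f in features:
            # Laplace-smoothed P(feature | developer)
            score += math.log((self.feat_counts[developer][f] + 1) / (n + len(self.vocab)))
        return score

    def top_k(self, features, k=5):
        """Return the k developers with the highest posterior score."""
        ranked = sorted(self.dev_counts, key=lambda d: self.log_score(d, features), reverse=True)
        return ranked[:k]
```

Log-probabilities are used instead of raw products to avoid floating-point underflow when many description words are multiplied together.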



Filtering Dataset

Our approach uses a filtered subset of the bug dataset for training the classifier: only reports whose resolution is "FIXED" and whose status is "VERIFIED" or "RESOLVED" are used. Our approach also filters out inactive users: developers who have been inactive for more than 4 months are excluded from the triage.
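The inactivity filter can be sketched as below; the 4-month window is approximated as 120 days, and the `last_fix_dates` layout (developer mapped to the date of their latest fixed bug) is an assumption for illustration:

```python
from datetime import datetime, timedelta

# The 4-month inactivity window from the text, approximated as 120 days.
INACTIVITY_LIMIT = timedelta(days=120)

def active_developers(last_fix_dates, now):
    """Keep developers whose most recent fix falls within the window.
    `last_fix_dates` maps developer -> datetime of their latest fixed bug."""
    return {d for d, t in last_fix_dates.items() if now - t <= INACTIVITY_LIMIT}
```

Filtering before training keeps the classifier from recommending developers who have left the project, one of the accuracy problems noted in the introduction.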

Feature Selection

Classifier performance is highly dependent on feature selection. We select the product-component pair and the number of fixes in it as the major attributes. A record of the number of components fixed by a developer in a particular product, and of the number of fixes made by the developer in each such component, is taken as an additional parameter for classification. We also include text categorization as proposed by Bettenburg et al. (2008) for extracting relevant words from bug reports. We employ tf-idf, stemming, and stop-word and non-alphabetic word removal (Manning et al., 2008), using the Porter stemming algorithm for stemming.

Multi-Feature Classification

Our approach uses the selected features for classification. The probability that developer i solves a newly arrived bug b is calculated with respect to each selected feature. The probability P in Equation (2) is calculated as described below. For each developer d, the probability of solving a bug in component c or product p is calculated as:

P(c|d) = (number of bugs fixed by developer d in component c) / (total number of bugs fixed by d)        Equation (3)

P(p|d) = (number of bugs fixed by developer d in product p) / (total number of bugs fixed by d)        Equation (4)

The probability that developer d solves a bug, given that he has fixed n bugs in the same P-C pair, is:

P(n|d) = (1 / (σ · √(2π))) · e^(−(n − µ)² / (2σ²))        Equation (5)

where µ is the mean and σ is the standard deviation of the fix count with respect to each P-C pair for each developer. The record of the number of components fixed by a developer in a particular product, and of the number of fixes made by the developer in each such component, helps to suggest developers when no developer has fixed any bug in the P-C pair of the newly arrived bug.
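The Gaussian likelihood of Equation (5) transcribes directly into code; a minimal sketch:

```python
import math

def fix_count_likelihood(n, mu, sigma):
    """Gaussian likelihood that a developer whose fix count has mean `mu`
    and standard deviation `sigma` (per P-C pair) fixes `n` bugs."""
    coeff = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return coeff * math.exp(-((n - mu) ** 2) / (2.0 * sigma ** 2))
```

The density peaks at n = µ, so a developer whose historical fix count in the bug's P-C pair matches the observed count receives the highest contribution from this feature.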

Incremental Learning

Incremental learning, or inter-fold updating, involves updating the classifier and the tossing graphs after each fold validation. Our approach splits the dataset into multiple chronological buckets, each forming a fold for one run. After each run the predictions are validated and added to the knowledge base of the training dataset.
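The fold loop can be sketched as below. `predict` and `train` are caller-supplied callbacks standing in for the Naïve Bayes model (hypothetical interface, for illustration): each bucket is predicted first, then folded into the training data, which is the inter-fold update described above:

```python
def chronological_folds(bugs, n_folds):
    """Split a chronologically ordered bug list into roughly equal buckets."""
    size = max(1, len(bugs) // n_folds)
    return [bugs[i:i + size] for i in range(0, len(bugs), size)]

def incremental_run(bugs, n_folds, predict, train):
    """For each fold: predict every bug against the current training set,
    then extend the training set with the validated fold and re-train."""
    training, predictions = [], []
    for fold in chronological_folds(bugs, n_folds):
        predictions.append([predict(training, b) for b in fold])
        training.extend(fold)   # inter-fold update
        train(training)         # classifier re-learns from the grown set
    return predictions
```

Because the buckets are chronological, each bug is always predicted using only reports filed before it, mirroring how the triage system would run in practice.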


FIG 3: Inter-Fold Update / Incremental Learning



TABLE 1: Comparison With Previous Work

Reference/Year | Prediction accuracy | Method used | Feature selection | # bug reports
Matter et al. (2009) | ~33.6% for top 1, ~71% for top 10 | Vocabulary-based expertise model | Bug description vocabulary | -
P. Bhattacharya (2010, 2012) | 27.67% for top 1, 65.7% for top 5 | Machine learning and tossing graphs with incremental learning | Product-component pair | -
Amir et al. (2012) | 52.76% | N-gram based algorithm for approximate string matching at character level | Data preprocessing using CPMerge algorithm on bug description | -
Our approach | - | Machine learning (Naïve Bayes classifier) with incremental learning | Product-component pair; # components fixed by a developer in a particular product; # fixes made by the developer in each such component | 777,034

BUG : the new bug set (unassigned)
TrainingSet : the set of existing bugs (training data set)
T : the similarity threshold

1: Create database for TrainingSet
2: TrainingSet := filtered bug dataset (omit inactive users and irrelevant bug records)
3: Split the unassigned BUG set into small buckets
4: for each bucket do
5:   for each new bug do
6:     perform text categorization
7:     classify the developers using the Naïve Bayes classifier
8:     select the top 5 developers as the prediction result
9:   done
10:  validate the result using the existing bug dataset
11:  update TrainingSet
12: done



TABLE 2: Prediction Accuracy For Top 1 To Top 5 Developers

#Developers Predicted | Prediction Accuracy
Top 1 |
Top 2 |
Top 3 |
Top 4 |
Top 5 |

The main focus of our project is to improve prediction accuracy for bug triage. Our approach successfully applied machine learning techniques to the Mozilla bug dataset and reported an increased accuracy rate. The average precision and recall over all reports in the test set are computed based on the equation:

Precision = (number of correct recommendations) / (number of recommendations made),
Recall = (number of correct recommendations) / (number of relevant developers)        Equation (6)

The calculations made for our experiment reported an average accuracy of up to 69.8%. The prediction accuracy was calculated for the top-1 developer through the top-5 developers, as shown in the table.
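Those top-k figures can be computed as below; a minimal sketch, where `predictions` holds the ranked developer list produced for each bug and `actual` the developer who really fixed it:

```python
def top_k_accuracy(predictions, actual, k):
    """Fraction of bugs whose actual fixer appears in the top-k ranked list."""
    hits = sum(1 for ranked, dev in zip(predictions, actual) if dev in ranked[:k])
    return hits / len(actual)
```

By construction the metric is monotone in k, which is why the top-5 accuracy reported above is always at least the top-1 accuracy.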

A comparison of our approach to previous work is listed in the table. The accuracy listed here does not include the tossing graph features implemented by P. Bhattacharya.


The assignment of bugs is still primarily a manual process. Often bugs are assigned incorrectly to a developer, or need to be discussed among several developers before the one responsible for the fix is identified. These situations typically lead to bug tossing.

The project analyzes 777,034 bug reports and their detailed activity from Mozilla projects. We find that it takes a long time to assign and toss bugs. When bugs are assigned to developers, the integrated system can recommend additional developers based on history.

We have currently automated the machine learning techniques of bug triage and proposed a bug tossing algorithm that can be integrated with them to improve prediction accuracy.

The implemented Naïve Bayes classifier with a wide range of feature selection provides a prediction accuracy of up to 66%, and can be combined with tossing graphs to improve the prediction accuracy further.

The project faces some threats to validity, as load balancing is not performed among the developers. Also, the approach used here is domain dependent: we have applied it to the Mozilla bug dataset alone.