# The Weighted Tree Similarity Algorithm


This paper presents the weighted tree similarity algorithm for searching e-learning courses. Each course is represented as a tree with labeled nodes, labeled branches, and weighted branches. The weighted tree similarity algorithm computes the similarity of two such tree representations of a course. The trees are encoded in XML using Object-Oriented RuleML. Through several examples of computing the similarity of two course trees, we show that users can obtain search results for e-learning modules that are relevant to their needs and preferences.

Keywords: Weighted tree similarity, E-learning, Tree, XML, Object-Oriented RuleML.

## INTRODUCTION

Amid the rapid development of Information Technology (IT), IT-based concepts and mechanisms for learning and teaching have become an inevitable requirement. This concept became known as e-Learning. E-Learning has transformed conventional education into digital education, in terms of both content and systems. E-Learning is now widely accepted by the world community, as evidenced by its rapid adoption in education (schools, training centers, and universities) and in industry (Cisco Systems, IBM, HP, Oracle, etc.) [10].


Contrary to the positive effects of the widespread implementation of e-Learning, particularly in education, its users often encounter problems. When about to take one or several e-learning courses, users are faced with many available choices. Confusion inevitably arises when selecting an e-Learning course that is suitable, appropriate, and relevant to the user's needs and preferences. A solution is therefore needed that can return e-learning courses appropriate and relevant to the user's search.

This paper builds on many previous studies. These studies concern semantic search methods using the weighted tree similarity algorithm, which computes the semantic similarity between two weighted trees. The algorithm has been applied to matching e-business transactions [14], searching for learning objects [7], a virtual marketplace for electricity networks [16], four-wheeled vehicle transactions [11], project cost estimation [6], information search on handheld devices [5], and automatic document audits for the International Organization for Standardization (ISO) [4].

This paper presents a tree-based search method for e-Learning that uses the weighted tree similarity algorithm to return search results more relevant to the user's needs and preferences. We describe the tree representation of e-Learning courses that underlies user search with the weighted tree similarity algorithm, so that users obtain e-learning search results that are appropriate and relevant to their needs and wants.

## THEORY

This section explains the basic theories underlying this research: the AgentMatcher architecture, the weighted tree representation, the tree similarity computation, and similarity based on weight variance.

## AgentMatcher Architecture

In multi-agent system architectures such as Acorn [15], agents carry buyer and seller information and bring buyers and sellers together. Buyer and seller agents communicate with each other through an intermediary agent to complete a transaction [16]. In this paper, the sellers are developers of e-Learning courses and the buyers are users searching for e-Learning courses to take.

Developers and users each enter their course information into the virtual Learning Management System (LMS) that manages the e-Learning. Developers enter information about the courses they offer, and users enter information about the courses they are searching for, to be matched against their needs and wants.

Once the course information from developers and users has been defined, the course similarity between the user's input and the developers' courses is measured. This similarity measurement produces the courses that are appropriate and relevant to what the user really wants.

Figure 1. Match-making on the LMS: developers 1..m are matched against users 1..n.

## Weighted Tree Representation

The courses whose similarity will be computed are represented by trees with labeled nodes, labeled branches, and weighted branches, where the labels and weights are normalized. Labels are sorted alphabetically from left to right. The branch weights at each level of the same subtree sum to one [12].
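A minimal sketch of this representation in Python follows; the class and method names are illustrative, not from the paper. It encodes the normalization rule (weights per subtree level sum to one) and the alphabetical arc ordering:

```python
class WeightedTree:
    """Node-labeled, arc-labeled, arc-weighted tree (illustrative sketch)."""

    def __init__(self, label, branches=None):
        # branches maps an arc label to a (weight, WeightedTree) pair
        self.label = label
        self.branches = branches or {}

    def weights_normalized(self, tol=1e-9):
        """Check recursively that the branch weights at every subtree level sum to 1."""
        if not self.branches:
            return True
        total = sum(w for w, _ in self.branches.values())
        return abs(total - 1.0) <= tol and all(
            child.weights_normalized(tol) for _, child in self.branches.values())

    def sorted_arcs(self):
        """Arc labels sorted alphabetically, as the representation requires."""
        return sorted(self.branches)

# A Course tree like the one in Figures 2 and 3: four branches summing to 1.
course = WeightedTree("Course", {
    "Credit":   (0.2, WeightedTree("3")),
    "Duration": (0.2, WeightedTree("18 meetings")),
    "Level":    (0.3, WeightedTree("bachelor")),
    "Tuition":  (0.3, WeightedTree("$1500")),
})
```

Here `course.weights_normalized()` holds because 0.2 + 0.2 + 0.3 + 0.3 = 1, and `course.sorted_arcs()` returns the branches in alphabetical order.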


Figure 2 illustrates a simple example of the tree representation of a user query on an e-Learning course.

Figure 2. Examples of weighted tree representation

By default the tree is represented as a Weighted Object-Oriented (Woo) RuleML file, which conforms to the XML standard. An example can be seen in Figure 3. The symbols are described as follows [13]:

<cterm> = the whole tree

<_opc> = the root of the tree

<ctor> = the label of the root node

<_r> = the role of each arc/edge; its attributes n and w represent the arc label and the arc weight, respectively

<ind> = the label filling the role

<cterm>
  <_opc><ctor>Course</ctor></_opc>
  <_r n="Credit" w="0.2"><ind>3</ind></_r>
  <_r n="Duration" w="0.2">
    <cterm>
      <_opc><ctor>18 meetings</ctor></_opc>
      <_r n="Start" w="0.5"><ind>january</ind></_r>
      <_r n="End" w="0.5"><ind>april</ind></_r>
    </cterm>
  </_r>
  <_r n="Level" w="0.3"><ind>bachelor</ind></_r>
  <_r n="Tuition" w="0.3"><ind>$1500</ind></_r>
</cterm>

Figure 3. Tree representation in Woo RuleML

The subtree under a role has the same structure as the whole tree: it begins with <cterm> and continues identically.
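A fragment like Figure 3 can be read with a standard XML parser. The sketch below is a hypothetical reader using Python's `xml.etree.ElementTree`; the sample document and the helper name `parse_cterm` are illustrative, assuming the element names described above:

```python
import xml.etree.ElementTree as ET

# A small Woo RuleML fragment in the style of Figure 3.
SAMPLE = """<cterm>
  <_opc><ctor>Course</ctor></_opc>
  <_r n="Credit" w="0.2"><ind>3</ind></_r>
  <_r n="Level" w="0.3"><ind>bachelor</ind></_r>
  <_r n="Tuition" w="0.5"><ind>$1500</ind></_r>
</cterm>"""

def parse_cterm(elem):
    """Turn a <cterm> element into (root_label, {arc_label: (weight, subtree)}).
    Leaf subtrees become (leaf_label, {}). Decimal commas are normalized."""
    root_label = elem.find("_opc/ctor").text
    arcs = {}
    for r in elem.findall("_r"):
        weight = float(r.get("w").replace(",", "."))
        child = r.find("cterm")          # nested subtree, if any
        leaf = r.find("ind")             # otherwise a leaf label
        arcs[r.get("n")] = (weight,
                            parse_cterm(child) if child is not None
                            else (leaf.text, {}))
    return (root_label, arcs)

label, arcs = parse_cterm(ET.fromstring(SAMPLE))
```

After parsing, `label` is "Course" and `arcs["Credit"]` is `(0.2, ("3", {}))`, the tuple form used throughout the sketches in this paper's later sections.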

## Weighted Tree Similarity

The algorithm for computing the similarity between two weighted trees is described in [14] and [12]. Figure 4 shows an example of two trees, T1 and T2, whose similarity is computed.

The similarity value of each pair of subtrees lies in the interval [0,1]. A value of 0 means completely different, while 1 means identical. The depth and width of the trees are not restricted. The tree similarity algorithm recursively explores each pair of trees top-down, from left to right. When it reaches the leaf nodes, it computes the similarity bottom-up. The similarity value of each pair of top-level subtrees is calculated from the subtree similarities at the level beneath.

Figure 4. Example of the basic similarity calculation between trees T1 and T2

During the calculation, the branch weights are also taken into account. Branch weights are averaged using the arithmetic mean (wi + wi')/2. This average is then multiplied by the branch similarity si obtained recursively. The initial value comes from the similarity of leaf nodes and can be adjusted using the function A(si). Initially, the weighted tree similarity algorithm returned 1 if the leaf nodes were equal and 0 if they differed [14]. The tree similarity is formulated in the following equation:

S(T, T') = Σi A(si) * (wi + wi')/2        (1)

where A(si) is the leaf node similarity value and wi and wi' are the weights of the i-th pair of weighted arcs. Evaluating A(si) is analogous to a logical AND, while combining the pairs is analogous to a logical OR.

In the example of Figure 4, the behavior of the algorithm can be explained as follows. As a first step, the similarity of the two Credit branch nodes is computed, yielding 1. This similarity is multiplied by the average Credit branch weight (0.2 + 0.2)/2, producing the branch similarity. The algorithm then looks for the next branch node similarity, Duration. Because this is not a leaf node, the algorithm moves down the branch to compute its similarity. It computes the similarities of the Start and End branches and accumulates them; this accumulation is the similarity value of the 18meetings subtree. The algorithm then moves back up to the branch level and accumulates this with the similarities of the other branches at the same level to produce the total similarity.
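The top-down traversal and bottom-up accumulation just described can be sketched recursively. This is a simplified, hypothetical `treesim`: trees are (label, arcs) tuples, A(si) is 1 for identical leaf labels and 0 otherwise, only arcs sharing a label are paired, and the root-node fraction is omitted:

```python
def treesim(t1, t2):
    """Simplified weighted tree similarity (equation (1)): each paired
    arc contributes A(si) * (wi + wi') / 2, computed recursively."""
    label1, arcs1 = t1
    label2, arcs2 = t2
    if not arcs1 and not arcs2:
        # Leaf nodes: A(si) = 1 if the labels match, else 0.
        return 1.0 if label1 == label2 else 0.0
    sim = 0.0
    for name in set(arcs1) & set(arcs2):
        # Pair arcs by label; average the two weights, then scale
        # the recursively computed subtree similarity by that average.
        (w1, c1), (w2, c2) = arcs1[name], arcs2[name]
        sim += treesim(c1, c2) * (w1 + w2) / 2
    return sim

# Two course trees differing only in the Tuition leaf.
t1 = ("Course", {"Credit": (0.2, ("3", {})),
                 "Level": (0.3, ("bachelor", {})),
                 "Tuition": (0.5, ("$1500", {}))})
t2 = ("Course", {"Credit": (0.2, ("3", {})),
                 "Level": (0.3, ("bachelor", {})),
                 "Tuition": (0.5, ("$1000", {}))})
```

Here treesim(t1, t2) gives 0.2 + 0.3 + 0 = 0.5: the Credit and Level branches match fully, while the differing Tuition leaf contributes nothing.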

## Similarity Based on Weight Variance

The similarity algorithm using average weights was discussed in section 2.3. This section discusses an extension of the similarity algorithm in which a variance is computed for each pair of weights, and the sum of these variances is then averaged. For the special case in which the sum of the weights at each tree level equals 1, the variance is computed as follows [16]:

V = (1/n) Σi vi        (2)

where

vi = (wi - w̄i)² + (wi' - w̄i)²,  with w̄i = (wi - wi')/2        (3)

n = the number of weight pairs

The average variance is very useful for selecting the better result when several results have the same similarity. Among results with the same similarity value, a smaller variance indicates the better choice.
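The variance computation can be sketched as follows. Note that this follows the per-pair arithmetic printed in the experiment tables of Section 4 (the half-difference of each weight pair, squared deviations from it, averaged over the n pairs), which is one interpretation of equations (2) and (3); the function name is illustrative:

```python
def average_variance(weight_pairs):
    """Average weight variance over n weight pairs (wi, wi'):
    square each weight's deviation from the pair's half-difference,
    sum the squares, and average the grand total over n."""
    n = len(weight_pairs)
    total = 0.0
    for w, w2 in weight_pairs:
        half_diff = (w - w2) / 2   # the per-pair wi value printed in the tables
        total += (w - half_diff) ** 2 + (w2 - half_diff) ** 2
    return total / n

# Two identical (0.5, 0.5) weight pairs, as in the Table 1 row.
v = average_variance([(0.5, 0.5), (0.5, 0.5)])
```

For the Table 1 case this reproduces the tabulated variance of 0.5.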

Figure 5. Trees with matching and with extreme weights

Figure 5 illustrates the benefit of the average variance. For simplicity, all A(si) equal 1. In Figures 5 (a) and (b), T1 is the weighted tree representing the buyer, while T2 and T3 are the weighted trees of two sellers. Figures 5 (a) and (b) both yield a similarity value of 1. In this case it would be confusing if the buyer had to choose one seller, because both sellers have the same similarity value. On closer inspection, however, T2 should be chosen as the more suitable seller than T3, because the weights in Figure 5 (a) are the same. This confusion can be resolved by applying the average variance: the average variance of the tree similarity in Figure 5 (a) is 0, while in Figure 5 (b) it is 0.18. Thus the seller T2 is preferred, as it has the smaller average variance.

Thus, the pair of trees with the smaller average variance is considered the more suitable and is selected. The example above is a special case, but the average variance can be used in any situation where similarity values are equal.

## METHODOLOGY

Before searching for modules with the weighted tree similarity algorithm, a standard course tree is first prepared, representing the metadata of all course data in the e-Learning system. The next steps are computing the similarity of two trees, matching several course trees, and scoring the results.

## Course Standard Tree Scheme

The courses the user wishes to find are represented in the standard tree form established for e-Learning modules, so that their similarity to the course developers' trees can be computed. Figures 6 and 7 show the standard tree schema for e-Learning courses and an instance of it.

Figure 6. Standard tree schema for e-Learning courses

Figure 7. An instance of a user tree

## Tree Design

In tree similarity measurement, differences in tree shape affect the similarity computation. The following five schemes compare different tree pairs [14].

## Scheme 1

Figure 8. Tree pairs with different node labels

In Figure 8, trees T1 and T2 each have a root node with the same label, Course, but the node labels in their subtrees differ. In this example, there is considered to be no subtree similarity. Similarity is obtained only from the root nodes, which both describe a course.

## Scheme 2

Figure 9. Tree pairs with opposite branch weights: (a) a tree pair whose branch weights are opposite; (b) the pair from (a) with identical node labels

In Figure 9 (a), the tree pair has one subtree with the same label, Bachelor, but their weights are opposite. The similarity between the two can be determined as follows:

Similarity of the first branch:

1.0*(0.0+1.0)/2=0.5

Similarity of the second branch:

0.0*(1.0+0.0)/2=0.0

The two branch similarities are then accumulated:

S(T1,T2)=0.5+0.0=0.5

In Figure 9 (b), the two branches of the tree pair have the same node labels in their subtrees, again with opposite branch weights. The similarity between the two trees can then be calculated:

Similarity of the first branch:

1.0*(0.0+1.0)/2=0.5

Similarity of the second branch:

1.0*(1.0+0.0)/2=0.5

The two branch similarities are then accumulated:

S(T1,T2)=0.5+0.5=1.0
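The Scheme 2 arithmetic above can be checked with a small helper; `pair_sim` is an illustrative name, not from the paper:

```python
def pair_sim(leaf_sims, weights1, weights2):
    """Accumulate branch similarities: each branch contributes its
    leaf similarity A(si) times the average of the paired weights."""
    return sum(a * (w1 + w2) / 2
               for a, w1, w2 in zip(leaf_sims, weights1, weights2))

# Figure 9(a): only one label (Bachelor) matches; the weights are opposite.
sim_a = pair_sim([1.0, 0.0], [0.0, 1.0], [1.0, 0.0])
# Figure 9(b): both labels are identical; the weights are still opposite.
sim_b = pair_sim([1.0, 1.0], [0.0, 1.0], [1.0, 0.0])
```

Here sim_a reproduces S(T1, T2) = 0.5 from Figure 9 (a) and sim_b reproduces S(T1, T2) = 1.0 from Figure 9 (b).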

## Scheme 3

Figure 10. Tree pairs distinguished by branch weights (panels (a) and (b))

In Figures 10 (a) and (b), based on the calculation of scheme 2, both pairs would have the same similarity. However, consider the branch weights at the level of the nodes labeled Diploma and Bachelor: in Figure 10 (a), T1 has a weight of 1.0 while T2 has 0.1, whereas in Figure 10 (b), T3 has 1.0 and T4 has 0.9. The similarity of pair (a) should properly be higher than the similarity of pair (b). It is concluded that:

S(T1,T2) > S(T3,T4).

## Scheme 4

Figure 11. Tree pairs with left-tree improvement (panels (a) and (b))

In Figure 11, T1 and T3 each have one branch that itself has two branches, while T2 and T4 are identical trees in terms of branch weights and labels on all subtree nodes. In Figure 11 (a), the tree pair differs in only one node label: Diploma in T1 versus Bachelor in T2. In Figure 11 (b), the pair has two differing node labels between T3 and T4: Diploma versus Bachelor, and $800 versus $1,000. The similarities of the two pairs can thus be compared: (a) has a greater similarity than (b) because it has fewer differences, i.e. S(T1,T2) > S(T3,T4).

## Scheme 5

Figure 12. Tree pairs with the same structure (panels (a) and (b))

In Figures 12 (a) and (b), the pairs T1-T2 and T3-T4 have almost the same tree structure, except that one pair of Tuition branches has different node labels. One might therefore say that S(T1,T2) and S(T3,T4) have the same similarity. But because the Tuition branch node labels differ within each pair, and the branch weights of the two pairs differ, a calculation based on scheme 2 shows that S(T1,T2) < S(T3,T4).

## Algorithm Design

In the weighted tree similarity algorithm, similarity levels are denoted by real numbers between 0 and 1: the similarity value is 0 if the trees are not at all equal and 1 if they are exactly the same. The form of tree this algorithm can process follows the Weighted Object-Oriented RuleML model.

The weighted tree similarity algorithm has three main functions: treesim, treemap, and treeplicity [17]. The treesim function is invoked as "treesim[N,A](t, t')" and produces a real number in [0,1]. A detailed description of this function follows [14]:

The parameter "N" is the node identity fraction, a number between 0 and 1, which determines how large a fraction of the similarity is granted to common root labels, relative to the result of comparing their subtree lists.

The parameter "A" is the arc function, which adjusts the similarity results to compensate for the degradation of similarity in nested trees.

The arguments "t" and "t'" are the two trees whose similarity is sought.

The treemap function, invoked as "treemap[N,A](l, l')", recursively compares two lists l and l', each representing the set of arcs at a given level under identically labeled roots identified by treesim. This function produces a real number in [0,1]. The following equation is executed recursively to compute the similarity within a tree:

S(l, l') = Σi A(si) * (wi + wi')/2        (4)

where si is the leaf node similarity value, A(si) is the function adjusting the leaf node similarity value, and wi and wi' are the weights of the i-th pair of weighted arcs.

The treeplicity(I, t) function recursively measures the simplicity of a tree as a value between 0 and 1: a value close to 1 means the tree is simpler, and a value close to 0 means it is more complex. This value decreases as the number of arcs grows and the levels get deeper (breadth and depth). The argument I of this function is the depth degradation value, initialized to (1 - N) by the treemap function. For every additional level, I is multiplied by a global factor treeplideg whose value is <= 0.5. This function also produces a real number in [0,1].
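The paper does not spell out treeplicity's recursion, so the following is only a hypothetical sketch consistent with the stated behavior: the value starts from I, shrinks by the factor treeplideg at every extra level, and shrinks further as the number of arcs grows. The breadth penalty (the division by the squared arc count) is an assumption of this sketch, not the published definition:

```python
def treeplicity(i, tree, treeplideg=0.5):
    """Hypothetical simplicity measure in (0, 1]: deeper and broader
    trees score lower. i is the depth-degradation value, (1 - N)."""
    _label, arcs = tree
    if not arcs:
        return i                       # a bare leaf is maximally simple
    # Each extra level multiplies the value by treeplideg <= 0.5,
    # and more arcs shrink it further (assumed breadth penalty).
    return sum(treeplicity(i * treeplideg, child)
               for _, child in arcs.values()) / len(arcs) ** 2

leaf = ("bachelor", {})
shallow = ("Course", {"Level": (1.0, leaf)})
deep = ("Course", {"Duration":
                   (1.0, ("18 meetings", {"Start": (1.0, ("january", {}))}))})
broad = ("Course", {"Credit": (0.5, ("3", {})), "Level": (0.5, leaf)})
```

With these illustrative trees, the shallow one scores higher than both the deeper and the broader one, matching the qualitative behavior described above.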

For the special case in which the sum of the weights at each tree level equals 1, the variance is computed as in equations (2) and (3). The average variance helps select the better result when several results have the same similarity: the pair with the smaller variance is the better choice.

## EXPERIMENT RESULT AND ANALYSIS

This section analyzes the results of comparing the trees discussed in Section 3.2 using the algorithm of Section 3.3.

Table 1. Experiment results for scheme 1

| Tree | Calculation | Similarity (sim) | Variance (var) |
| --- | --- | --- | --- |
| T1-T2 | w12 = (0.5-0.5)/2 = 0; var = {((0.5-0)² + (0.5-0)²) + ((0.5-0)² + (0.5-0)²)}/2 = 0.5; sim = [0.0*(0.5+0.5)/2] + [0.0*(0.5+0.5)/2] = 0.0 + 0.1 = 0.1 | 0.1 | 0.5 |

This experiment uses the tree pair from the example above: trees T1 and T2 each have a root node with the same label, Course, but different node labels in their subtrees. In this example, the similarity calculation using average weights finds no subtree similarity, i.e. a value of 0. Similarity is obtained only from the root nodes, which both describe a Course. Therefore, in Table 1, although the calculation gives 0.0, an additional root similarity of 0.1 can be granted, so the value is no longer 0.0 but 0.1.

Using the variance-based calculation, in contrast, the pair is no longer characterized by the similarity of 0.0 or 0.1, but by a variance value of 0.5 rather than 0.

Table 2. Experiment results for scheme 2

| Tree | Calculation | Similarity (sim) | Variance (var) |
| --- | --- | --- | --- |
| T1-T2 | w1 = (0.0-1.0)/2 = -0.5; w2 = (1.0-0.0)/2 = 0.5; var = {((0.0+0.5)² + (1.0+0.5)²) + ((1.0-0.5)² + (0.0-0.5)²)}/2 = 3; sim = [1.0*(0.0+1.0)/2] + [0.0*(1.0+0.0)/2] = 0.5 | 0.5 | 3 |
| T3-T4 | w1 = (0.0-1.0)/2 = -0.5; w2 = (1.0-0.0)/2 = 0.5; var = {((0.0+0.5)² + (1.0+0.5)²) + ((1.0-0.5)² + (0.0-0.5)²)}/2 = 3; sim = [1.0*(0.0+1.0)/2] + [1.0*(1.0+0.0)/2] = 1.0 | 1.0 | 3 |

In the scheme 2 experiment, the T1-T2 pair has one subtree with the same label, Bachelor, but with opposite weights, while in the T3-T4 pair both branches have the same node labels in their subtrees, also with opposite weights. Thus, from the similarity calculation using average weights in Table 2, S(T1,T2) < S(T3,T4).

Using the variance-based calculation, however, both pairs are considered equal, each with a variance value of 3.

Table 3. Experiment results for scheme 3

| Tree | Calculation | Similarity (sim) | Variance (var) |
| --- | --- | --- | --- |
| T1-T2 | w1 = (0.0-0.45)/2 = -0.225; w2 = (1.0-0.1)/2 = 0.45; w3 = (0.0-0.45)/2 = -0.225; var = {((0.0+0.225)² + (0.45+0.225)²) + ((1.0-0.45)² + (0.1-0.45)²) + ((0.0+0.225)² + (0.45+0.225)²)}/2 = 1.4375; sim = [0.0*(0.0+0.45)/2] + [0.0*(1.0+0.1)/2] + [0.0*(0.0+0.45)/2] = 0.2823 | 0.2823 | 1.4375 |
| T3-T4 | w1 = (0.0-0.05)/2 = -0.025; w2 = (1.0-0.9)/2 = 0.05; w3 = (0.0-0.05)/2 = -0.025; var = {((0.0+0.025)² + (0.05+0.025)²) + ((1.0-0.05)² + (0.9-0.05)²) + ((0.0+0.025)² + (0.05+0.025)²)}/2 = 1.6375; sim = [0.0*(0.0+0.05)/2] + [0.0*(1.0+0.9)/2] + [0.0*(0.0+0.05)/2] = 0.1203 | 0.1203 | 1.6375 |

In the third experiment, the two tree pairs would have the same similarity. However, looking at the branch weights at the level of the nodes labeled Diploma and Bachelor: in the first pair, T1 has a value of 1.0 while T2 has 0.1, and in the second pair, T3 has 1.0 while T4 has 0.9. The T1-T2 pair deserves a higher similarity than the T3-T4 pair. Based on the scheme 3 similarity calculation using average weights in Table 3, we can conclude that S(T1,T2) > S(T3,T4).

The variance-based calculation agrees: S(T1,T2) > S(T3,T4), since the variance of the T1-T2 pair is smaller than that of T3-T4.

Table 4. Experiment results for scheme 4

| Tree | Calculation | Similarity (sim) | Variance (var) |
| --- | --- | --- | --- |
| T1-T2 | w1 = (0.0-0.333)/2 = -0.1665; w2 = (1.0-0.334)/2 = 0.167; w3 = (0.0-0.333)/2 = -0.1665; var = {((0.0+0.1665)² + (0.333+0.1665)²) + ((1.0-0.167)² + (0.334-0.167)²) + ((0.0+0.1665)² + (0.333+0.1665)²)}/2 = 1.276; sim = [0.0*(0.0+0.333)/2] + [0.0*(1.0+0.334)/2] + [0.0*(0.0+0.333)/2] = 0.2350 | 0.2350 | 1.276 |
| T3-T4 | w1 = (0.0-0.333)/2 = -0.1665; w2 = (0.5-0.334)/2 = 0.083; w3 = (0.5-0.333)/2 = -0.0835; var = {((0.0+0.1665)² + (0.333+0.1665)²) + ((0.5-0.083)² + (0.334-0.083)²) + ((0.5+0.0835)² + (0.333+0.0835)²)}/2 = 0.7498; sim = [0.0*(0.0+0.333)/2] + [0.0*(0.5+0.334)/2] + [0.0*(0.5+0.333)/2] = 0.1675 | 0.1675 | 0.7498 |

In experiment 4, T1 and T3 each have one branch that itself has two branches, while T2 and T4 are identical trees in terms of branch weights and labels on all subtree nodes. The T1-T2 pair differs in only one node label: Diploma in T1 versus Bachelor in T2. The T3-T4 pair has two differing node labels: Diploma versus Bachelor, and $800 versus $1,000. The similarities of the two pairs can thus be compared: the first pair has the greater similarity because it has fewer differences. Based on Table 4, which uses the similarity calculation with average weights, it is confirmed that S(T1,T2) > S(T3,T4).

If the variance-based calculation is used instead, the comparison between the two pairs becomes S(T1,T2) < S(T3,T4): the T3-T4 pair has the smaller variance (0.7498 versus 1.276) and so ranks better.

Table 5. Experiment results for scheme 5

| Tree | Calculation | Similarity (sim) | Variance (var) |
| --- | --- | --- | --- |
| T1-T2 | w1 = (0.3-0.3)/2 = 0; w2 = (0.2-0.2)/2 = 0; w3 = (0.5-0.5)/2 = 0; var = {((0.3-0)² + (0.3-0)²) + ((0.2-0)² + (0.2-0)²) + ((0.5-0)² + (0.5-0)²)} = 0.76; sim = [1.0*(0.3+0.3)/2] + [1.0*(0.2+0.2)/2] + [0.1*(0.5+0.5)/2] = 0.55 | 0.55 | 0.76 |
| T3-T4 | w1 = (0.3334-0.3334)/2 = 0; w2 = (0.333-0.333)/2 = 0; w3 = (0.333-0.333)/2 = 0; var = {((0.3334-0)² + (0.3334-0)²) + ((0.333-0)² + (0.333-0)²) + ((0.3334-0)² + (0.3334-0)²)} = 0.67; sim = [1.0*(0.3334+0.3334)/2] + [1.0*(0.333+0.333)/2] + [0.1*(0.333+0.333)/2] = 0.7 | 0.7 | 0.67 |

In experiment 5, the pairs T1-T2 and T3-T4 have almost the same tree structure, except that one pair of Tuition branches has different node labels. One might therefore say that S(T1,T2) and S(T3,T4) have the same similarity. But because the Tuition branch node labels differ within each pair, and the branch weights of the two pairs differ, Table 5 shows that S(T1,T2) < S(T3,T4).

The same holds when using the variance-based calculation: S(T1,T2) < S(T3,T4), since the variance of the T3-T4 pair (0.67) is smaller than that of T1-T2 (0.76).

## Similarity and Pairing

Trees representing buyer and seller information in the real world are complex. Figure 14 below shows a tree representing the information carried by buyer and seller agents.

Figure 14. Form of the tree information carried by buyer and seller agents

Using the tree algorithm described above, the similarity value is computed for each tree pair. Thus, after buyer and seller agents enter the market, the similarity between them is calculated, and AgentMatcher decides, based on a similarity threshold dynamically determined by current demand, whether they should begin negotiations according to their similarity value. On the other hand, in a very large market this does not involve only one buyer and one seller. Just as in the real world, there are many shops in a shopping center with many buyers inside. Generally, more than one buyer visits a store to choose a product that satisfies them; conversely, sellers in the store look for buyers who can also satisfy them. This is called the matching process. Whenever a seller's and a buyer's preferences and interests are close, they can negotiate and try to obtain the best possible outcome from the negotiation process [16].

Table 6. Case during the matching process

| Rank | b1 | b2 | b3 | b4 |
| --- | --- | --- | --- | --- |
| 1 | s1 (0.73) | s2 (0.64) | s4 (0.85) | s5 (0.58) |
| 2 | s4 (0.69) | s5 (0.61) | s1 (0.76) | s4 (0.56) |
| 3 | s2 (0.52) | s3 (0.61) | s2 (0.69) | s1 (0.49) |
| 4 | s3 (0.44) | s1 (0.42) | s3 (0.60) | s3 (0.44) |
| 5 | s5 (0.30) | s4 (0.27) | s5 (0.56) | s2 (0.41) |

Suppose there are four buyer agents and five seller agents in a shopping center, and suppose every buyer evaluates offers from every seller. Then, for each buyer agent, we need to compute the similarity between that buyer and all seller agents, so that a scoring table can be made for the buyers, as shown in Table 6. For each buyer, the similarity algorithm ranks the similarities from the greatest value to the smallest.

As can be seen in the first-ranked row of Table 6, the similarity value of b3 and s4 is the largest, so s4 should definitely be recommended for b3. Once a buyer agent gets a recommendation, the recommended buyer and seller agents are marked Unavailable in each table, as shown in Table 7. 'Unavailable' means they can no longer take part in the matching process during this cycle. A cycle starts after the four buyer agents and five seller agents enter the shopping center, and ends when no more recommendations can be made.

Table 7. Table after s4 is recommended

| Rank | b1 | b2 | b3 | b4 |
| --- | --- | --- | --- | --- |
| 1 | s1 (0.73) | s2 (0.64) | s4 (0.85) | s5 (0.58) |
| 2 | s4 (0.69) | s5 (0.61) | s1 (0.76) | s4 (0.56) |
| 3 | s2 (0.52) | s3 (0.61) | s2 (0.69) | s1 (0.49) |
| 4 | s3 (0.44) | s1 (0.42) | s3 (0.60) | s3 (0.44) |
| 5 | s5 (0.30) | s4 (0.27) | s5 (0.56) | s2 (0.41) |

Once a recommendation is made, the first available seller in each buyer's table is selected. The previous process is repeated, and s1 is found to be the recommendation for b1, as Table 8 shows.

Table 8. Table after s1 is recommended

| Rank | b1 | b2 | b3 | b4 |
| --- | --- | --- | --- | --- |
| 1 | s1 (0.73) | s2 (0.64) | s4 (0.85) | s5 (0.58) |
| 2 | s4 (0.69) | s5 (0.61) | s1 (0.76) | s4 (0.56) |
| 3 | s2 (0.52) | s3 (0.61) | s2 (0.69) | s1 (0.49) |
| 4 | s3 (0.44) | s1 (0.42) | s3 (0.60) | s3 (0.44) |
| 5 | s5 (0.30) | s4 (0.27) | s5 (0.56) | s2 (0.41) |

In the next step, s2 is recommended for b2. Table 9 shows this step.

Table 9. Table after s2 is recommended

| Rank | b1 | b2 | b3 | b4 |
| --- | --- | --- | --- | --- |
| 1 | s1 (0.73) | s2 (0.64) | s4 (0.85) | s5 (0.58) |
| 2 | s4 (0.69) | s5 (0.61) | s1 (0.76) | s4 (0.56) |
| 3 | s2 (0.52) | s3 (0.61) | s2 (0.69) | s1 (0.49) |
| 4 | s3 (0.44) | s1 (0.42) | s3 (0.60) | s3 (0.44) |
| 5 | s5 (0.30) | s4 (0.27) | s5 (0.56) | s2 (0.41) |

Table 10. Table after all the buyers get their recommendations

| Rank | b1 | b2 | b3 | b4 |
| --- | --- | --- | --- | --- |
| 1 | s1 (0.73) | s2 (0.64) | s4 (0.85) | s5 (0.58) |
| 2 | s4 (0.69) | s5 (0.61) | s1 (0.76) | s4 (0.56) |
| 3 | s2 (0.52) | s3 (0.61) | s2 (0.69) | s1 (0.49) |
| 4 | s3 (0.44) | s1 (0.42) | s3 (0.60) | s3 (0.44) |
| 5 | s5 (0.30) | s4 (0.27) | s5 (0.56) | s2 (0.41) |
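The matching cycle traced in Tables 6 through 10 (recommend the best available buyer/seller pair, mark both unavailable, repeat) can be sketched as a greedy loop; `match_cycle` is an illustrative name, and the scores dict abridges Table 6 to the pairs that drive the cycle:

```python
def match_cycle(scores):
    """Greedy matching: repeatedly recommend the available buyer/seller
    pair with the highest similarity until no pair remains."""
    available_b = {b for b, _ in scores}
    available_s = {s for _, s in scores}
    recommendations = []
    while True:
        candidates = [(sim, b, s) for (b, s), sim in scores.items()
                      if b in available_b and s in available_s]
        if not candidates:
            break
        sim, b, s = max(candidates)        # highest similarity first
        recommendations.append((b, s, sim))
        available_b.discard(b)             # mark both sides Unavailable
        available_s.discard(s)
    return recommendations

# Abridged similarity values from Table 6.
scores = {("b3", "s4"): 0.85, ("b3", "s1"): 0.76, ("b1", "s1"): 0.73,
          ("b1", "s4"): 0.69, ("b2", "s2"): 0.64, ("b2", "s5"): 0.61,
          ("b4", "s5"): 0.58, ("b4", "s4"): 0.56}
```

On this data the recommendation order reproduces the steps above: s4 for b3, then s1 for b1, then s2 for b2, and finally s5 for b4.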

## CONCLUSION AND FUTURE WORK

From the ideas and discussion in this paper, we conclude that the weighted-tree-based algorithm with average weights and variance is capable of producing more relevant results for e-Learning courses, in this case results in accordance with the wishes and needs of the user.

If two tree pairs have the same similarity value under the average-weight calculation, the variance can be computed to determine which pair is the better, namely the one with the smaller variance.

Furthermore, this algorithm can be implemented as a search facility for e-Learning in Learning Management Systems such as Moodle. Users should then find it easy to locate e-learning modules in accordance with their wishes and needs.