A Report on Modern Recommender Systems
Modern recommender systems can be classified into three broad categories: content-based recommender systems, collaborative filtering systems, and hybrid systems. The following sections provide a brief description of each category, accompanied by some of the most recent representative systems proposed in the literature.
1.1 Content-Based Recommender Systems
Content-based filtering approaches recommend items to the user based on the descriptions of previously evaluated items. In other words, they recommend items because they are similar to items the user has liked in the past (Montaner, 2003).
Representatives (After 2008)
(Zenebe, 2009) A system developed using fuzzy modeling for content-based recommender systems. The proposed method consists of a representation scheme for item features and user feedback based on fuzzy sets, a content-based algorithm built on various fuzzy set theoretic similarity measures, and aggregation methods for computing recommendation confidence scores.
(Felfernig, 2008) Bases his research on one particular technology for recommender systems: constraint-based recommendation. In this paradigm recommendation is viewed as a process of constraint satisfaction, where some constraints come from users while others come from the product domain.
1.2 Collaborative Filtering Recommender Systems
The collaborative filtering technique matches people with similar interests and then makes recommendations on this basis. Recommendations are commonly extracted from the statistical analysis of patterns and analogies in data gathered either explicitly, from evaluations of items given by different users, or implicitly, by monitoring the behavior of the different users in the system (Montaner, 2003).
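As a concrete illustration of the neighborhood-based idea described above, here is a minimal sketch of user-based collaborative filtering in Python. The toy ratings, user names, and helper functions are all invented for illustration; similarity is measured with the Pearson correlation over co-rated items, and a rating is predicted as a similarity-weighted average of the neighbors' mean-centered ratings.

```python
from math import sqrt

# Toy explicit-feedback data (invented for illustration): user -> {item: rating}
ratings = {
    "alice": {"i1": 5, "i2": 3, "i3": 4},
    "bob":   {"i1": 4, "i2": 2, "i3": 5, "i4": 4},
    "carol": {"i1": 1, "i2": 5, "i4": 2},
}

def pearson(u, v):
    """Pearson correlation between users u and v over their co-rated items."""
    common = set(ratings[u]) & set(ratings[v])
    if len(common) < 2:
        return 0.0
    mu_u = sum(ratings[u][i] for i in common) / len(common)
    mu_v = sum(ratings[v][i] for i in common) / len(common)
    num = sum((ratings[u][i] - mu_u) * (ratings[v][i] - mu_v) for i in common)
    den = sqrt(sum((ratings[u][i] - mu_u) ** 2 for i in common)) * \
          sqrt(sum((ratings[v][i] - mu_v) ** 2 for i in common))
    return num / den if den else 0.0

def predict(user, item):
    """Predict a rating as a similarity-weighted average of the
    neighbors' mean-centered ratings, added to the user's own mean."""
    mu_user = sum(ratings[user].values()) / len(ratings[user])
    num = den = 0.0
    for other in ratings:
        if other == user or item not in ratings[other]:
            continue
        sim = pearson(user, other)
        mu_other = sum(ratings[other].values()) / len(ratings[other])
        num += sim * (ratings[other][item] - mu_other)
        den += abs(sim)
    return mu_user + num / den if den else mu_user

print(round(predict("alice", "i4"), 2))
```

Real systems differ mainly in how the similarity measure and the neighborhood are chosen; the papers summarized below each replace or augment some part of this basic pipeline.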
Representatives (After 2008)
(Acilar, 2009) Propose a collaborative filtering model constructed on the Artificial Immune Network Algorithm (aiNet). aiNet is chosen because it is capable of reducing sparsity and providing scalability by describing the data structure, including its spatial distribution and cluster interrelations. In addition, to investigate the effect of clustering-based neighborhood formation on system performance, the datasets are clustered using the k-means algorithm and the resulting cluster partitions are used as neighborhoods.
(Campos, 2008) The proposed system uses two main Soft Computing techniques to model the uncertainty and the tolerance for imprecision inherent in the recommendation process: the Bayesian network formalism to model the way the users' ratings are related, and fuzzy logic to deal with the ambiguity and vagueness of the ratings.
(Shang, 2009) By applying a diffusion process, a new index is proposed to quantify the similarity between two users in a user-object bipartite graph. To deal with the discrete ratings on objects, a multi-channel representation is used in which each object is mapped to several channels, with the number of channels equal to the number of different ratings. Each channel represents a certain rating, and a user who has rated an object is connected to the channel corresponding to that rating. The diffusion process taking place on such a user-channel bipartite graph yields a new similarity measure for user pairs, which is further demonstrated to be more accurate than the classical Pearson correlation coefficient under the standard collaborative filtering framework.
(Chen, 2009) A framework for collaborative filtering is proposed based on orthogonal nonnegative matrix tri-factorization (ONMTF), which alleviates the sparsity problem via matrix factorization and addresses the scalability problem by simultaneously clustering the rows and columns of the user-item matrix.
(Lee, 2009) Propose an approach that combines user-based and item-based CF, associating the two predictions, which come from the different CF algorithms, by weighted averaging.
(Jeong, 2009) A memory-based collaborative filtering technique that incorporates the level of a user's credit instead of using similarity between users. The user credit is the degree of one's rating reliability, measuring how closely the user's ratings adhere to those of others.
(Yang, 2009) A collaborative filtering approach based on heuristically formulated inferences. The proposed approach rests on the observation that any two users may share some interest genres while differing in others. It introduces a more reasonable similarity metric, considers users' preferences and rating patterns, and promotes rational individual prediction, thus measuring the relevance between user and item more comprehensively.
(Bonnin, 2009) Propose a recommender system that considers the context of the recommendation. It uses Markov models inspired by those used in language modeling, while integrating skipping techniques to handle noise during navigation. Weighting schemes are also used to attenuate the importance of distant resources.
(Zhang, 2008) Suggest a Topical PageRank based algorithm, which takes item genre into account to rank items for users and recommends the top-ranked items accordingly. The basic idea lies in the correlation between ranking and recommendation: since top-ranked items can be recommended to users, an attempt is made to carry ranking algorithms for web search over to recommender systems. Specifically, Topical PageRank, a recently proposed ranking algorithm, is leveraged to rank items and recommend the top-ranked ones to users.
(Rendle, 2008) In this research, regularized matrix factorization is generalized to regularized kernel matrix factorization (RKMF). Kernels provide a flexible method for deriving new matrix factorization methods, and with kernels nonlinear interactions between feature vectors become possible. A generic method for learning RKMF models is proposed, from which an online update algorithm for RKMF models is derived that allows solving the new-user/new-item problem.
(Umyarov, 2009) Propose a more general class of methods that combine external aggregate information with individual ratings in a novel way. Unlike previously proposed methods, one of the defining features of this approach is that it takes into consideration not only the aggregate average ratings but also the variance of the aggregate rating distribution. The proposed methods estimate unknown ratings by finding a combination of individual-level and aggregate-level rating estimators in the form of a hierarchical regression model grounded in the theory of statistics and machine learning.
(Takacs, 2009) Focuses his research on different matrix factorization (MF) techniques applied to the recommendation problem. He proposes the use of an incremental gradient descent method for weight updates, the exploitation of the chronological order of ratings, and the use of a semi-positive version of the MF algorithm.
(Yildirim, 2008) Propose an item-based algorithm, Random Walk Recommender, which first infers transition probabilities between items based on their similarities and then models finite-length random walks on the item space to compute predictions.
(Weimer, 2008) Suggest several extensions to maximum margin matrix factorization. First, the use of arbitrary loss functions, which paves the way to structured prediction. Within this framework, an algorithm for the optimization of the ordinal ranking loss is presented.
(Koren, 2009) Introduces the tracking of temporal changes in customer preferences in order to improve the quality of the recommendations provided.
(Hijikata, 2009) Propose a discovery-oriented collaborative filtering algorithm. The biggest difference between this algorithm and a pure CF algorithm is that it uses not only the profile of preferences employed by pure CF but also a so-called profile of acquaintance, used to map the user's knowledge, or lack of it, about items.
(Schclar, 2009) Propose the use of an ensemble regression method. In each iteration, interpolation weights for all nearest neighbors are simultaneously derived by minimizing the root mean squared error. From iteration to iteration, instances that are hard to predict are reinforced by manipulating their weights in the goal function being minimized.
(Koren, 2010) Introduces a new neighborhood model based on formally optimizing a global cost function. A second, factorized version of the neighborhood model is also suggested, aiming to improve the scalability of the algorithm.
(Kwon, 2008) Aims to find new recommendation approaches that can take into account the rating variance of an item when selecting recommendations.
(Amatriain, 2009) Present an approach to improve the system's accuracy by reducing the natural noise in the input data via a preprocessing step. In order to quantitatively understand the impact of natural noise, they first analyze the response of common recommendation algorithms to this noise. They then propose an algorithm to denoise existing datasets by means of re-rating, i.e. by asking users to rate previously rated items again.
(Ma, 2009) Propose a semi-nonnegative matrix factorization method with global statistical consistency. The method offers a new understanding of the generation, or latent composition, of the user-item rating matrix; under this interpretation, the task can be formulated as a semi-nonnegative matrix factorization problem. Moreover, they propose a novel method of imposing consistency between the statistics given by the predicted values and the statistics given by the data, and they develop an optimization algorithm that determines the model complexity automatically.
(Massa, 2009) Propose to replace the step of finding similar users, on which the recommendation will be based, with the use of a trust metric: an algorithm able to propagate trust over the trust network in order to find users that can be trusted by the active user. Items appreciated by these trustworthy users can then be recommended to the active user.
(Lakiotaki, 2008) Propose a system that exploits multi-criteria ratings to better model users' preference behavior and enhance the accuracy of the recommendations.
1.3 Hybrid Recommender Systems
Hybrid recommender systems combine two or more recommendation techniques to gain better performance with fewer of the drawbacks of any individual one (Burke, 2002).
Representatives (After 2008)
(Albadvi, 2009) A recommendation technique in the context of an online retail store, called hybrid recommendation technique based on product category attributes (HRPCA), which extracts user preferences in each product category separately and provides more personalized recommendations. The overall procedure of HRPCA is divided into six phases: product taxonomy formation, grain specification, extraction of product category attributes, user (customer) profile creation, user-user and user-product similarity calculation, and recommendation generation. The input data consist of web server log files, a product database, a user database, and a purchase database.
(Porcel, 2009) A fuzzy linguistic recommender system designed using a hybrid approach and assuming multi-granular fuzzy linguistic modeling.
(Al-Shamri, 2008) A hybrid, fuzzy-genetic approach to recommender systems. The user model is employed to find a set of like-minded users, within which a memory-based search is carried out. This set is much smaller than the entire user set, thus improving the system's scalability.
(Gunawardana, 2009) Make use of unified Boltzmann machines, which are probabilistic models that combine collaborative and content information in a coherent manner. They encode collaborative and content information as features and then learn weights reflecting how well each feature predicts user actions. In doing so, information of different types is automatically weighted, without the need for careful feature engineering or for post-hoc hybridization of distinct recommender systems.
(Lam, 2008) Focusing their research on solving the user-side cold start problem, the authors develop a hybrid model based on the analysis of two probabilistic aspect models, using pure collaborative filtering combined with user information.
(Givon, 2009) Propose a method for using social tags, alone or in combination with collaborative filtering-based methods, to improve recommendations and to solve the cold-start problem in recommending books when few or no ratings are available. In their approach, tags for a new book are automatically generated from the content of its text and are used to predict its similarity to other books.
Table 1 presents a synopsis of the different approaches discussed in this chapter.
| Researcher / Year | Type | Main Techniques Used | Problem Focused | Data Sets Used |
| --- | --- | --- | --- | --- |
| Zenebe 2009 | Content-Based | Fuzzy sets | Accuracy | MovieLens |
| Felfernig 2008 | Content-Based | Constraint-driven recommendation | Use in domains with complex, rarely rated items | |
| Acilar 2009 | Collaborative Filtering | Artificial Immune Network algorithm | Data sparsity, Scalability | MovieLens |
| Campos 2008 | Collaborative Filtering | Bayesian networks, Fuzzy logic | Processing the uncertainty involved in the recommendation | MovieLens |
| Shang 2009 | Collaborative Filtering | Multi-channel representation, diffusion process on the user-channel bipartite graph | Accuracy | Netflix, MovieLens |
| Chen 2009 | Collaborative Filtering | Orthogonal nonnegative matrix tri-factorization | Data sparsity, Scalability | MovieLens |
| Lee 2009 | Collaborative Filtering | Combination of user-based and item-based CF | Data sparsity, Accuracy | EachMovie, MovieLens |
| Jeong 2009 | Collaborative Filtering | Use of "user credit" as degree of rating reliability | Cold start, Accuracy | MovieLens |
| Yang 2009 | Collaborative Filtering | Heuristically formulated inferences | Accuracy | EachMovie, MovieLens |
| Bonnin 2009 | Collaborative Filtering | Markov model, skipping techniques to handle noise | Accuracy | Bank intranet web logs |
| Zhang 2008 | Collaborative Filtering | Topical PageRank algorithm | Accuracy | MovieLens |
| Rendle 2008 | Collaborative Filtering | Regularized kernel matrix factorization | Cold start, Scalability | Netflix, MovieLens |
| Umyarov 2009 | Collaborative Filtering | Combination of external aggregate information with user ratings | Accuracy | Netflix, MovieLens |
| Takacs 2009 | Collaborative Filtering | Matrix factorization | Scalability | Netflix, Jester, MovieLens |
| Yildirim 2008 | Collaborative Filtering | Random-walk item-based algorithm | Data sparsity, Scalability | MovieLens |
| Weimer 2008 | Collaborative Filtering | Maximum margin matrix factorization | Data privacy, Cross-domain predictions | WikiLens |
| Koren 2009 | Collaborative Filtering | Tracking of temporal changes in customer preferences | Modeling drifting user preferences | Netflix |
| Hijikata 2009 | Collaborative Filtering | Discovery-oriented CF algorithm | Recommendation diversity | Music ratings dataset built for the experiment |
| Schclar 2009 | Collaborative Filtering | Ensemble regression method | Accuracy | MovieLens |
| Koren 2010 | Collaborative Filtering | Optimization of a global cost function | Accuracy, Scalability | Netflix |
| Kwon 2008 | Collaborative Filtering | Rating variance consideration | Accuracy, Diversity | MovieLens |
| Amatriain 2009 | Collaborative Filtering | Natural data noise reduction | Accuracy | Customized movie rating dataset |
| Massa 2009 | Collaborative Filtering | Use of trust in neighbor finding | Cold start, Data sparsity | Dataset from Epinions.com |
| Lakiotaki 2008 | Collaborative Filtering | Multi-criteria ratings | Accuracy | Dataset from Yahoo! Movies |
| Ma 2009 | Collaborative Filtering | Semi-nonnegative matrix factorization | Accuracy, Scalability | EachMovie |
| Albadvi 2009 | Hybrid | Hybrid recommendation based on product category attributes | More personalized recommendations | Web logs from an online retail store |
| Porcel 2009 | Hybrid | Fuzzy linguistic modeling | Accuracy | Digital library dataset |
| Al-Shamri 2008 | Hybrid | Fuzzy-genetic | Data sparsity, Scalability | MovieLens |
| Gunawardana 2009 | Hybrid | Boltzmann machines | Cold start | MovieLens, Ta-Feng supermarket dataset |
| Lam 2008 | Hybrid | Combination of pure CF with user information | Cold start | MovieLens |
| Givon 2009 | Hybrid | Automatic tag generation from text | Cold start | Corpus of full-text books |

Table 1. Approaches proposed in the recommender systems literature after 2008
2. Challenges
Recommender systems suffer from some common problems. The most usual ones, and those that have drawn most of the researchers' attention, are the cold start and data sparsity problems, both of which can potentially lead to poor recommendations. Also, due to the nature of their implementation, recommender systems often face scalability problems. Beyond these, there are a number of smaller problems that can also negatively affect the performance of the system and that have motivated the introduction of some of the more innovative techniques in the recommender systems landscape.
2.1 Cold Start
Cold start is the problem of having to propose items to a new user without previous usage patterns to support these recommendations (Rashid, 2002). It may occur either when a new user enters the system or when a new item is introduced into the dataset.
2.2 Data Sparsity
In a large e-commerce site such as Amazon.com there are millions of products, so customers may rate only a very small portion of them. Most similarity measures used in CF work properly only when there exists an acceptable number of ratings on items that customers have in common. Such sparsity in ratings makes the formation of neighborhoods inaccurate, thereby resulting in poor recommendations (Acilar, 2009).
2.3 Scalability
Modern recommender systems are applied to very large datasets of both users and items. They therefore have to handle very high-dimensional profiles to form the neighborhood, and the nearest neighbor algorithm is often very time-consuming and scales poorly in practice (Acilar, 2009).
2.4 Other
Apart from the three commonly faced challenges mentioned above, researchers have tried to address a number of other problems. In this section we briefly present some of them.
- The "interest drift" problem. By interest drift in the recommender systems context we refer to the phenomenon that the tastes and interests of users may change over time or under changing circumstances, leading to inaccurate recommendation results (Ma, 2007). A once valid recommendation may no longer be accurate after the user has changed his preference patterns. To counteract this, the recommendation model should not be static; it must evolve and adapt itself to the changing preference environment in which it is called to work.
- The noisy data problem. In the case of systems where the input data are explicit (e.g. ratings) rather than implicit (like web logs), extra noise is added by the vagueness of the ratings themselves as a product of human perception. The given ratings are only an approximation of the user's approval of the artifact being rated and are restricted by the rating scale's granularity. For example, on a five-star scale a user may give a movie three stars, but given the opportunity to rate the same movie on a percentage scale he might give something other than 60%. Results may differ even more if the scale he was asked to rate the movie on was something like "I hated it / it was OK / I loved it".
Moreover, fuzziness is also introduced by the user himself, whose ratings may differ at another time, place, or emotional state. It has been shown (Amatriain, 2009) that when users are asked to re-rate movies they have seen and rated in the past, their new ratings differ considerably from their original ones. Cases where users did not even remember seeing a movie they had rated in the past were also not uncommon.
Finally, there are deviations in the ratings characterizing the overall voting trends of either the user or the items. For example, a user may be strict and tend to give lower ratings than the average reviewer; or, from the item's point of view, there may be deviations affecting its ratings positively or negatively. A movie that is considered a "classic" may tend to receive higher ratings than it would if its reputation did not affect the audience. To make things worse, these observed trends may or may not be static over time. For example, there is a chance that a viewer becomes stricter as he grows older, so that his ratings become more biased towards the lower end of the scale than the ratings he was giving some time ago.
All these factors introduce noise in the data and can have a negative effect on the accuracy of the recommendations.
- The lack of diversity problem. Most research effort is focused on making the recommendations produced by a recommender system more accurate. Lately, though, arguments have been raised that accurate recommendations are not always what the user expects from a recommender system. To start with, the purpose of such a system is to help the user select items on which he has not yet formed an opinion of his own. If the system keeps suggesting items that are too similar to the ones he is already familiar with, then the system, to a point, defeats its own purpose. We can assume that the user can guess the rating of an item very similar to one he already knows, without the need for an elaborate recommender system. What the user is looking for is help in estimating the rating of an item that he could not rate by himself, based solely on his own experience.
More generally, diversity may be a positive attribute for a recommender system, though this depends heavily on the context in which the system is deployed. For example, in a recommender system that suggests songs, diversity can be a welcome change that helps the user stay interested and satisfies his mood for experimentation. Here a song that is not too close to the user's taste still stands a good chance of being liked, and even if the suggestion does not bear fruit the consequences will not be severe. On the other hand, in fields like financial services recommendation there is no room for experimentation: the user is looking for a suggestion that accurately fits his needs, is already tested, and involves the least possible risk.
3. Metrics
Evaluating a recommender system can be a more complex procedure than it initially appears. Measuring the accuracy of the results is just one side of the coin. Thus, many different metrics have been proposed and used to evaluate the success of recommender systems. In the following sections the most common ones are presented.
3.1 Accuracy Metrics
Accuracy is the most widely used metric for recommender systems. It measures how close the values predicted by the system are to the true values. It can be expressed informally as in (1).
$$\text{accuracy} = \frac{\text{number of successful recommendations}}{\text{number of recommendations made}} \qquad (1)$$
Equation (1) can be formulated more formally as in (2),
$$\text{accuracy} = \frac{1}{R}\sum_{u,i} r(u,i)\,p(u,i) \qquad (2)$$
where P(u, i) is the prediction of the recommender system for a particular user u and item i, p(u, i) is the real preference, and R is the number of recommendations shown to the user. In the accuracy metric, P(u, i) and p(u, i) are considered binary functions, and r(u, i) is 1 if the recommender presented the item to the user and 0 otherwise.
One common accuracy metric is the mean absolute error (MAE), defined by equation (3), which measures the average absolute deviation between each predicted rating P(u, i) and the user's real rating p(u, i), where N is the total number of items observed.
$$\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left|P(u,i) - p(u,i)\right| \qquad (3)$$
Variations of MAE include the mean squared error, the root mean squared error, and the normalized mean absolute error (Goldberg, 2001).
Of these, the most widely used, especially after being chosen as the judging metric for the Netflix Prize contest, is the root mean squared error, defined as in equation (4).
$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(P(u,i) - p(u,i)\right)^{2}} \qquad (4)$$
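Both error measures are straightforward to compute; the following sketch evaluates MAE and RMSE over a made-up list of (predicted, actual) rating pairs.

```python
from math import sqrt

# Invented prediction/rating pairs, purely for illustration
predicted = [3.5, 4.0, 2.0, 5.0]
actual    = [4.0, 4.0, 1.0, 4.5]

n = len(actual)
# MAE: average absolute deviation between predicted and real ratings
mae = sum(abs(p - a) for p, a in zip(predicted, actual)) / n
# RMSE: square root of the mean squared deviation
rmse = sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / n)

print(mae, rmse)
```

Because the errors are squared before averaging, RMSE weighs large individual errors more heavily than MAE does.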
3.2 Information Retrieval Metrics
Since the logic and techniques of recommender systems are close to the Information Retrieval (IR) discipline, it comes as no surprise that some IR metrics are also present in the recommender systems field. Two of the most widely used are precision and recall (Cleverdon, 1968).
The calculation of precision and recall is based on a contingency table, such as Table 2 below, that holds the different possibilities of any retrieval decision (Hernandez del Olmo, 2008).

| | Relevant | Non-Relevant |
| --- | --- | --- |
| Retrieved | a | b |
| Non-Retrieved | c | d |

Table 2. Confusion matrix of retrieval decision outcomes
In recommender system terminology, a relevant item translates to a useful item (close to the user's taste), while a non-relevant one would be an item not satisfying the user.
Precision (eq. 5) is defined as the ratio of relevant items selected to the total number of items selected.
$$\text{precision} = \frac{a}{a+b} \qquad (5)$$
Precision represents the probability that a selected item is relevant. It reflects the capability of the system to present only useful items, excluding the non-relevant ones.
Recall (eq. 6) is defined as the ratio of relevant items selected to the total number of relevant items available. Recall represents the probability that a relevant item will be selected and is an indication of the coverage of useful items that the system can achieve.
$$\text{recall} = \frac{a}{a+c} \qquad (6)$$
The F-measure metrics (eq. 7) follow the same line of thought as precision and recall, attempting to combine the behavior of both metrics in a single equation.
$$F_{\beta} = \frac{(1+\beta^{2})\cdot\text{precision}\cdot\text{recall}}{\beta^{2}\cdot\text{precision}+\text{recall}} \qquad (7)$$
The most commonly used F-measure metric is F1, obtained for β = 1 and defined as in eq. (8).
$$F_{1} = \frac{2\cdot\text{precision}\cdot\text{recall}}{\text{precision}+\text{recall}} \qquad (8)$$
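Given the counts a, b, and c from the confusion matrix, precision, recall, and F1 reduce to simple ratios; the counts used below are hypothetical.

```python
# Hypothetical confusion-matrix counts: a = relevant retrieved,
# b = non-relevant retrieved, c = relevant not retrieved
a, b, c = 8, 2, 4

precision = a / (a + b)                                    # eq. (5)
recall    = a / (a + c)                                    # eq. (6)
f1        = 2 * precision * recall / (precision + recall)  # eq. (8)

print(precision, recall, f1)
```

Note that F1 is the harmonic mean of precision and recall, so it is always pulled towards the smaller of the two.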
Another metric originating from the information retrieval field and often used in recommender system evaluation is the Receiver Operating Characteristic (ROC) analysis (Hanley, 1982). The ROC curve plots the recall against the fallout (eq. 9).
$$\text{fallout} = \frac{b}{b+d} \qquad (9)$$
The objective of ROC analysis is to maximize the recall while at the same time minimizing the fallout (fig. 1).
3.3 Rank Accuracy Metrics
The output of a recommender is often a list of suggestions presented to the user from the most relevant to the least relevant. To measure how successful the system is at this, a category of metrics called rank accuracy metrics was introduced. Rank accuracy metrics measure how accurately the recommender system can predict the ranking of a list of items presented to the user.
Two of the most commonly used rank accuracy metrics are the half-life utility metric and the Normalized Distance-based Performance Measure (NDPM) (Herlocker, 2004).
The half-life utility metric attempts to evaluate the utility of a ranked list to the user. The utility is defined as the difference between the user's rating for an item and the "default rating" for that item, where the default rating is generally a neutral or slightly negative rating. The likelihood that the user will view each successive item is described by an exponential decay function whose strength is set by a half-life parameter. The expected utility R_a is shown in equation (10), where r_{a,j} represents the rating of user a on item j of the ranked list, d is the default rating, and α is the half-life, i.e. the rank of the item on the list for which there is a 50% chance that the user will view it.

$$R_{a} = \sum_{j}\frac{\max(r_{a,j}-d,\;0)}{2^{(j-1)/(\alpha-1)}} \qquad (10)$$
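A small sketch may make the decay explicit; the ratings list, default rating, and half-life value below are all hypothetical.

```python
def half_life_utility(ranked_ratings, default=3.0, alpha=5.0):
    """Expected utility R_a of one user's ranked list: the positive part of
    (rating - default) at rank j is discounted by 2 ** ((j - 1) / (alpha - 1)),
    so the discount doubles every alpha - 1 ranks."""
    return sum(
        max(r - default, 0.0) / 2 ** ((j - 1) / (alpha - 1))
        for j, r in enumerate(ranked_ratings, start=1)
    )

# A list that front-loads the highly rated items scores higher:
print(half_life_utility([5, 4, 3, 2]), half_life_utility([2, 3, 4, 5]))
```

Ratings at or below the default contribute nothing, reflecting the assumption that only above-default items have utility for the user.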
NDPM (Eq. (11)) can be used to compare two different weakly ordered rankings.
$$\mathrm{NDPM} = \frac{2C^{-}+C^{u}}{2C^{i}} \qquad (11)$$
C^{-} is the number of contradictory preference relations between the system ranking and the user ranking; a contradictory preference relation occurs when the system says that item 1 will be preferred to item 2 while the user ranking says the opposite. C^{u} is the number of compatible preference relations, where the user rates item 1 higher than item 2 but the system ranking has items 1 and 2 at equal preference levels. C^{i} is the total number of "preferred" relationships in the user's ranking (i.e., pairs of items rated by the user for which one is rated higher than the other).
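The pairwise counting behind NDPM can be sketched as follows. Both rankings are given as hypothetical item-to-score mappings; pairs the user ties are skipped (they contribute no "preferred" relation), while a system tie on a pair the user ordered counts as a compatible relation, as described above.

```python
from itertools import combinations

def ndpm(system_scores, user_scores):
    """NDPM = (2 * C_minus + C_u) / (2 * C_i) over all item pairs."""
    contradictory = compatible = preferred = 0
    for i, j in combinations(list(user_scores), 2):
        u = user_scores[i] - user_scores[j]
        s = system_scores[i] - system_scores[j]
        if u == 0:
            continue            # user expresses no preference for this pair
        preferred += 1          # C_i: pair ordered by the user
        if s == 0:
            compatible += 1     # C_u: system ties a pair the user ordered
        elif (u > 0) != (s > 0):
            contradictory += 1  # C_minus: system reverses the user's order
    return (2 * contradictory + compatible) / (2 * preferred)

user   = {"a": 5, "b": 3, "c": 1}   # hypothetical user ranking
system = {"a": 2, "b": 2, "c": 4}   # hypothetical system ranking
print(ndpm(system, user))
```

A system ranking that perfectly agrees with the user scores 0, and a fully reversed one scores 1, so lower is better.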
3.4 Suggesting the Non-Obvious
While accuracy metrics provide a good indication of a recommender system's performance, a distinction must be made between accurate and useful results (Herlocker, 2004). For example, a recommendation algorithm may be adequately accurate by suggesting popular items with high average ratings to the user, but often this is not enough. To some extent such predictions are self-evident and offer no useful information, as they concern the items the user would be least likely to need help discovering by himself.
Coverage can be defined as a measure of the domain of items over which the system can make recommendations. In its simplest form, coverage is expressed as the percentage of items for which the system can form a prediction out of the total number of items.
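In code, this simplest form of coverage is just a ratio; the catalog and the set of predictable items below are invented for illustration.

```python
catalog = ["i1", "i2", "i3", "i4", "i5"]   # all items in the system
predictable = {"i1", "i2", "i4"}           # items with enough data to predict

# Coverage as a percentage of the catalog
coverage = 100.0 * len(predictable) / len(catalog)
print(coverage)
```

A system can trade coverage for accuracy by refusing to predict when data is scarce, which is why the two are usually reported together.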
Along the same line of thought, other metrics such as novelty and serendipity have been proposed for measuring how effectively the system recommends interesting items that the user might not otherwise come across.
It should be noted at this point that the importance of the metrics discussed in this section depends greatly on the context of the implementation. For example, while in a song recommendation system proposing something slightly outside the user's listening trends may be a welcome change that helps break the monotony and stimulates the broadening of his horizons, in a financial services recommender system things would be different: there the user is looking for a suggestion fitting his personal needs as closely as possible, safe and tested, taking into consideration his willingness, or not, to take risks with his investment.
References
Acilar, M. Arslan, A. (2009) ‘A collaborative filtering method based on artificial immune network' Expert Systems with Applications 36, pp 83248332
AlShamri, M. and Bharadwaj, K. (2008) ‘Fuzzygenetic approach to recommender systems based on a novel hybrid user model' Expert Systems with Applications 35, pp 13861399
Aldbavi, A. and Shahbazi, M. (2009) ‘A hybrid recommendation technique based on product category attributes' Expert Systems with Applications 36, pp 1148011488
Amatriain, A. Pujol, J. Tintarev, N. and Oliver, N. (2009) ‘Rate it Again: Increasing Recommendation Accuracy by User reRating' RecSys'09, October 2325, 2009, New York, New York, USA
Bonnin, G. Brun, A. and Boyer, A. (2009) ‘A LowOrder Markov Model integrating LongDistance Histories for Collaborative Recommender Systems' IUI'09, February 811, 2009, pp 5766
Burke, R. (2002) ‘Hybrid Recommender Systems: Survey and Experiments' User Modeling and UserAdapted Interaction 12, pp 331370
Campos, L. FernandezLuna, J. and Huete, J. (2008) ‘A collaborative recommender system based on probabilistic inference from fuzzy observations' Fuzzy Sets and Systems 159, pp 1554  1576
Chen, G. Wang, F. and Zhang, C. (2009) ‘Collaborative filtering using orthogonal nonnegative matrix trifactorization' Information Processing and Management 45, pp 368379
Cleverdon, C. and Kean, M. (1968) ‘Factors Determining the Performance of Indexing Systems' Aslib Cranfield Research Project, Cranfield, England.
Felfernig, A. and Burke, R. (2008) ‘Constraint-based Recommender Systems: Technologies and Research Issues' 10th Int. Conf. on Electronic Commerce (ICEC) '08, Innsbruck, Austria
Givon, S. and Lavrenko, V. (2009) ‘Predicting Social-tags for Cold Start Book Recommendations' RecSys'09, October 23–25, 2009, New York, New York, USA
Goldberg, K. Roeder, T. Gupta, D. and Perkins, C. (2001) ‘Eigentaste: A constant time collaborative filtering algorithm' Information Retrieval 4 (2), pp 133–151
Gunawardana, A. and Meek, C. (2009) ‘A Unified Approach to Building Hybrid Recommender Systems' RecSys'09, October 23–25, 2009, pp 117–124
Hanley, J. and McNeil, B. (1982) ‘The meaning and use of the area under a receiver operating characteristic (ROC) curve' Radiology 143, pp 29–36
Herlocker, J. Konstan, J. Terveen, L. and Riedl, J. (2004) ‘Evaluating Collaborative Filtering Recommender Systems' ACM Transactions on Information Systems 22 (1), pp 5–53
Hernandez del Olmo, F. and Gaudioso, E. (2008) ‘Evaluation of recommender systems: A new approach' Expert Systems with Applications 35, pp 790–804
Hijikata, Y. Shimizu, T. and Nishida, S. (2009) ‘Discovery-oriented Collaborative Filtering for Improving User Satisfaction' IUI'09, February 8–11, 2009, Sanibel Island, Florida, USA
Jeong, B. Lee, J. and Cho, H. (2009) ‘User credit-based collaborative filtering' Expert Systems with Applications 36, pp 7309–7312
Koren, Y. (2010) ‘Factor in the Neighbors: Scalable and Accurate Collaborative Filtering' ACM Transactions on Knowledge Discovery from Data 4 (1), Article 1
Kwon, Y. (2008) ‘Improving Top-N Recommendation Techniques Using Rating Variance' RecSys'08, October 23–25, 2008, Lausanne, Switzerland, pp 307–310
Lakiotaki, K. Tsafarakis, S. and Matsatsinis, N. (2008) ‘UTA-Rec: A Recommender System based on Multiple Criteria Analysis' RecSys'08, October 23–25, 2008, Lausanne, Switzerland, pp 219–225
Lam, X. Vu, T. Le, T. and Duong, A. (2008) ‘Addressing cold-start problem in recommendation systems' Proceedings of the 2nd International Conference on Ubiquitous Information Management and Communication, January 31–February 1, 2008, Suwon, Korea
Lee, J. and Olafsson, S. (2009) ‘Two-way cooperative prediction for collaborative filtering recommendations' Expert Systems with Applications 36, pp 5353–5361
Ma, H. Yang, H. King, I. and Lyu, M. (2009) ‘Semi-Nonnegative Matrix Factorization with Global Statistical Consistency for Collaborative Filtering' CIKM'09, November 2–6, 2009, Hong Kong, China, pp 767–775
Ma, S. Li, X. Ding, Y. and Orlowska, M. (2007) ‘A Recommender System with Interest-Drifting' Lecture Notes in Computer Science 4831, pp 633–642
Massa, P. and Avesani, P. (2009) Computing with Social Trust, Springer-Verlag London Limited, pp 259–285
Montaner, M. Lopez, B. and de la Rosa, J. (2003) ‘A Taxonomy of Recommender Agents on the Internet' Artificial Intelligence Review 19 (4), pp 285–330
Porcel, C. Moreno, J. and Herrera-Viedma, E. (2009) ‘A multi-disciplinar recommender system to advice research resources in University Digital Libraries' Expert Systems with Applications 36, pp 12520–12528
Rashid, A. Albert, I. Cosley, D. Lam, S. McNee, S. Konstan, J. and Riedl, J. (2002) ‘Getting to Know You: Learning New User Preferences in Recommender Systems' In Proc. of ACM IUI 2002, ACM Press
Rendle, S. and Schmidt-Thieme, L. (2008) ‘Online-Updating Regularized Kernel Matrix Factorization Models for Large-Scale Recommender Systems' RecSys'08, October 23–25, 2008, Lausanne, Switzerland
Schclar, A. Tsikinovsky, A. Rokach, L. Meisels, A. and Antwarg, L. (2009) ‘Ensemble Methods for Improving the Performance of Neighborhood-based Collaborative Filtering' RecSys'09, October 23–25, 2009, New York, New York, USA
Shang, M. Jin, C. Zhou, T. and Zhang, Y. (2009) ‘Collaborative filtering based on multi-channel diffusion' Physica A 388, pp 4867–4871
Takacs, G. Pilaszy, I. Nemeth, B. and Tikk, D. (2009) ‘Scalable Collaborative Filtering Approaches for Large Recommender Systems' Journal of Machine Learning Research 10, pp 623–656
Umyarov, A. and Tuzhilin, A. (2009) ‘Improving Rating Estimation in Recommender Systems Using Aggregation and Variance-based Hierarchical Model' RecSys'09, October 23–25, 2009, pp 37–44
Weimer, M. Karatzoglou, A. and Smola, A. (2008) ‘Adaptive Collaborative Filtering' RecSys'08, October 23–25, 2008, Lausanne, Switzerland
Yang, J. Li, K. and Zhang, D. (2009) ‘Recommendation based on rational inferences in collaborative filtering' Knowledge-Based Systems 22, pp 105–114
Yıldırım, H. and Krishnamoorthy, M. (2008) ‘A Random Walk Method for Alleviating the Sparsity Problem in Collaborative Filtering' RecSys'08, October 23–25, 2008, Lausanne, Switzerland
Zenebe, A. and Norcio, A. (2009) ‘Representation, similarity measures and aggregation methods using fuzzy sets for content-based recommender systems' Fuzzy Sets and Systems 160, pp 76–94
Zhang, L. Zhang, K. and Li, C. (2008) ‘A Topical PageRank Based Algorithm for Recommender Systems' SIGIR'08, July 20–24, 2008, pp 713–714