Top 5 takeaways from ACM RecSys Conference 2012

The sixth ACM Conference on Recommender Systems (RecSys 2012) just took place in Dublin in early September.

As usual, the conference brought together researchers and experts from academia and industry, and it remains the most important annual gathering for the recommender systems community worldwide. During the five-day program, the latest research results and findings were presented and discussed, and new trends and challenges were sketched out.

Here are the takeaways I consider most interesting from this year’s edition of the conference:

1. Experiments and testing

A/B testing seems to be a very hot topic in both academic and industry circles. But did you know that the first A/B experiments were introduced right here in Dublin, at the Guinness brewery, by the famous chemist known as “Student” (the pseudonym of William Sealy Gosset), who invented the Student’s t-test? A funny coincidence!

A/B testing was part of Xavier Amatriain’s tutorial on “Building Industrial-scale Real-world Recommender Systems”, which literally filled the room. Xavier, a veteran of the RecSys conference, shared Netflix’s experience with large-scale industrial recommender systems, where everything is a recommendation. Beyond recommendation algorithms, he described Netflix’s software architecture and discussed hot topics such as explanation, diversity, and novelty of recommendations. Once again, Xavier highlighted the importance of conducting A/B experiments to find the recommendation settings that optimize long-term overall evaluation criteria, such as member retention.

A/B testing was also brilliantly discussed in the industry keynote by Ron Kohavi (Microsoft), who presented a set of inspiring examples and guidelines on how to run split experiments.
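
To make the split-experiment idea concrete, here is a minimal sketch of how the outcome of an A/B test might be checked for statistical significance, fittingly using the very test Gosset devised at Guinness. Everything in it (the engagement metric, sample sizes, and effect size) is invented for illustration; it is not Netflix’s or Microsoft’s pipeline.

```python
# A minimal sketch of evaluating a split (A/B) experiment with a two-sample
# t-test. The metric values are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical per-user engagement metric (e.g., hours watched) per variant.
control = rng.normal(loc=2.00, scale=0.8, size=5000)    # current recommender
treatment = rng.normal(loc=2.05, scale=0.8, size=5000)  # candidate recommender

# Welch's t-test: is the observed lift likely to be more than noise?
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

print(f"lift: {treatment.mean() - control.mean():+.3f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Significant difference between variants.")
else:
    print("No significant difference; keep collecting data or stop the test.")
```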

From a more theoretical point of view, a conceptual model for conducting user-centric experiments was presented by Bart Knijnenburg (UC Irvine) during his tutorial, providing an alternative (or maybe a complement) to the ResQue framework presented at previous RecSys editions by Pearl Pu’s research group at EPFL.

2. Context

As abundantly discussed by Francesco Ricci during his keynote at the “Context-Aware Recommender Systems” workshop, context influences how useful a user perceives an item to be. As an example, while I might be interested in watching a romantic movie with my girlfriend, I would never watch the same movie with my friends.

A domain where context plays an important role is music, as highlighted by music data expert Paul Lamere (The Echo Nest): I might love listening to classical music while I am working, but I would definitely prefer rock music while I am jogging, and maybe pop songs while I am driving.
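
As a toy illustration of one common way to bring context into the loop (contextual post-filtering), the scores of a context-free recommender can be re-weighted by how well each item fits the current situation. All item names, scores, and weights below are invented; this is a sketch of the general technique, not any system presented at the conference.

```python
# A minimal sketch of contextual post-filtering: context-free recommendation
# scores are re-weighted by each item's fit to the current context.

baseline_scores = {          # context-free relevance from some recommender
    "classical_playlist": 0.90,
    "rock_playlist": 0.85,
    "pop_playlist": 0.80,
}

context_fit = {              # e.g., learned P(item is appropriate | context)
    "working": {"classical_playlist": 0.9, "rock_playlist": 0.3, "pop_playlist": 0.5},
    "jogging": {"classical_playlist": 0.1, "rock_playlist": 0.9, "pop_playlist": 0.6},
    "driving": {"classical_playlist": 0.3, "rock_playlist": 0.6, "pop_playlist": 0.9},
}

def recommend(context, n=2):
    """Rank items by baseline score weighted by contextual fit."""
    weighted = {
        item: score * context_fit[context][item]
        for item, score in baseline_scores.items()
    }
    return sorted(weighted, key=weighted.get, reverse=True)[:n]

for ctx in ("working", "jogging", "driving"):
    print(ctx, "->", recommend(ctx))
```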

Among the different types of context, modal context (e.g., the user’s mood) is probably one of the most difficult to capture. A user’s emotions are linked to their personality, a factor that is hard to model in computer systems. The tutorial by Maria Augusta S. N. Nunes (UF Sergipe) and Rong Hu (EPFL) summarized the state of the art in personality-based recommender systems; however, I don’t expect practical applications to come out of it in the short term.

3. Decision-making process and interfaces

A full workshop was dedicated to the user decision process, where recommender systems are seen as tools for driving users towards optimal decisions. Have a look at the paper “Decision-Making in Recommender Systems: The Role of User’s Goals and Bounded Resources” (P. Cremonesi, A. Donatacci, F. Garzotto, and R. Turrin), where we considered the impact of users’ goals and the dynamic nature of the resource space, exemplified in the setting of online hotel booking.

In recent years, most researchers have focused on recommendation algorithms and quality evaluation, paying less attention to how users interact with the system. Well-designed interfaces can enhance the user experience and overall satisfaction. The problem of designing and evaluating novel intelligent interfaces was the subject of the workshop on “Interfaces for Recommender Systems”, which featured the interactive application “TopicLen” by Laura Devendorf and others, a graphical tool that lets users control and inspect recommendations.

4. Top-N recommendation and social networks

Some years after the Netflix contest, which resulted in a stream of recommendation solutions evaluated with RMSE, the recommender systems community now seems more focused on ranking and related metrics than on rating prediction. As an example, a whole session was dedicated to top-N recommendation, where we can also find the work of Yue Shi et al. – “CLiMF: Learning to Maximize Reciprocal Rank with Collaborative Less-is-More Filtering” – which won the best-paper award.
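
For readers unfamiliar with the metric CLiMF optimizes, here is a minimal sketch of (mean) reciprocal rank: the score rewards placing the first relevant item as high as possible in each user’s list. The lists and relevance sets below are invented for illustration.

```python
# Mean reciprocal rank (MRR): average of 1/rank of the first relevant item.

def reciprocal_rank(ranked_items, relevant):
    """Return 1/rank of the first relevant item, or 0.0 if none is ranked."""
    for rank, item in enumerate(ranked_items, start=1):
        if item in relevant:
            return 1.0 / rank
    return 0.0

recommendations = {
    "alice": ["i3", "i7", "i1"],
    "bob":   ["i5", "i2", "i9"],
}
relevant_items = {
    "alice": {"i7"},        # first hit at rank 2 -> RR = 0.5
    "bob":   {"i5", "i9"},  # first hit at rank 1 -> RR = 1.0
}

mrr = sum(
    reciprocal_rank(recommendations[u], relevant_items[u])
    for u in recommendations
) / len(recommendations)
print(f"MRR = {mrr:.2f}")  # (0.5 + 1.0) / 2 = 0.75
```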

In the era of social networks, RecSys couldn’t miss a session on social recommender systems. Interestingly, two papers – “On Top-k Recommendation Using Social Networks” (Yang et al.) and “Real-Time Top-N Recommendation in Social Streams” (Diaz-Aviles et al.) – both focused on top-N recommendation.

5. Ratings

Ratings, either explicitly expressed by users or implicitly inferred from their activity, are the key building blocks of a recommender system.

This year, some of the research work took into account the popularity and positivity biases of ratings, i.e., the fact that missing ratings are not missing at random. This point was addressed by Bruno Pradel et al. in “Ranking with Non-Random Missing Ratings: Influence of Popularity and Positivity on Evaluation Metrics”, a work that received an honorable mention. It is worth noting that the best paper cited in the top-N recommendation section also took this issue into account, by discarding negative ratings.
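
To see why “missing not at random” matters, here is a toy simulation (not the setup of the cited paper) in which users are more likely to rate popular items and items they liked; the observed ratings then paint a rosier, more head-heavy picture than the true preferences.

```python
# A toy simulation of popularity and positivity biases in observed ratings.
# All distributions are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items = 2000, 100

# True (mostly unobserved) ratings on a 1-5 scale.
true_ratings = rng.integers(1, 6, size=(n_users, n_items))

# Item popularity: a few head items and a long tail.
popularity = rng.dirichlet(np.full(n_items, 0.3))

# A rating is more likely to be observed for popular items (users encounter
# them more often) and for high ratings (users rate what they like).
p_observe = 0.5 * popularity[None, :] / popularity.max() \
          + 0.1 * (true_ratings - 1) / 4.0
observed = rng.random((n_users, n_items)) < p_observe

print(f"true mean rating:     {true_ratings.mean():.2f}")            # ~3.0
print(f"observed mean rating: {true_ratings[observed].mean():.2f}")  # higher
```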

Two papers tackled the problem of how much information we should require from users. The MovieLens group analyzed the issue from a theoretical point of view in an interesting paper trying to answer the question: “How Many Bits Per Rating?”. Our work “User Effort vs. Accuracy in Rating-based Elicitation” (P. Cremonesi, F. Garzotto, and R. Turrin) investigates how many ratings we should collect from users to produce accurate-enough recommendations: all conference attendees already know the answer is 10!
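
As a back-of-the-envelope illustration of the effort-versus-accuracy trade-off (not the methodology of either paper), one can predict a user’s held-out ratings from the mean of their first k elicited ratings and watch the error flatten as k grows; on simulated data the diminishing returns show up quickly.

```python
# A toy rating-elicitation curve: prediction error vs. number of elicited
# ratings, using a per-user mean predictor on simulated data.
import numpy as np

rng = np.random.default_rng(7)
n_users, n_ratings = 1000, 40

# Each user has a personal mean; individual ratings are noisy around it.
user_means = rng.normal(3.5, 0.5, size=(n_users, 1))
ratings = np.clip(user_means + rng.normal(0, 1.0, size=(n_users, n_ratings)), 1, 5)

train, test = ratings[:, :20], ratings[:, 20:]

for k in (1, 2, 5, 10, 20):
    pred = train[:, :k].mean(axis=1, keepdims=True)  # mean of first k ratings
    rmse = np.sqrt(((test - pred) ** 2).mean())
    print(f"k = {k:2d} elicited ratings -> RMSE = {rmse:.3f}")
```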
