When would it not make sense to use item-item collaborative filtering?
When there are many more items than users.
F1 Score Calculation
F1 = 2*(P*R)/(P+R)
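A quick worked example (values chosen purely for illustration, with P = precision and R = recall):
P = 0.8, R = 0.42 → F1 = 2*(0.8*0.42)/(0.8+0.42) = 0.672/1.22 ≈ 0.55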
What factors do we consider when deciding whether to assign weights to the item vectors being incorporated into a user’s profile?
we should consider ALL of these factors
Which of the following types of users have been the source of data for making recommendations in recommender systems?
All system users who have expressed opinions.
People with similar tastes to the target user.
Matrix Factorization (Check all that apply)
aims to decompose the user-item interaction matrix into the product of two lower-dimensional rectangular matrices.
builds a model that exploits the history of user-item interactions.
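A minimal illustrative sketch of that decomposition (not part of the card; the toy ratings, rank k, learning rate, and regularization are arbitrary choices): factor a small user-item rating matrix R into user factors P and item factors Q so that R ≈ P @ Q.T, fitted by gradient descent on the observed entries only.

import numpy as np

# Toy user-item rating matrix; 0 means "not rated".
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)
mask = R > 0                          # only observed ratings enter the loss

n_users, n_items = R.shape
k = 2                                 # number of latent factors (arbitrary)
rng = np.random.default_rng(0)
P = rng.normal(scale=0.1, size=(n_users, k))   # user factor matrix
Q = rng.normal(scale=0.1, size=(n_items, k))   # item factor matrix

lr, reg = 0.01, 0.02                  # learning rate and L2 regularization
for _ in range(5000):
    E = (R - P @ Q.T) * mask          # prediction error on observed entries
    P += lr * (E @ Q - reg * P)       # gradient step for user factors
    Q += lr * (E.T @ P - reg * Q)     # gradient step for item factors

print(np.round(P @ Q.T, 2))           # reconstructed / predicted ratings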
We’ve discussed the Netflix Competition. Which of the following statements about the competition and the winning solution is most correct?
The winning algorithm was a complex hybrid that used statistical/machine learning techniques to blend a variety of general-purpose and special-purpose algorithms, resulting in significantly improved prediction performance on the competition data.
Recall the notion of recall in recommender systems. Assume that a movie catalogue contains 19 movies that are relevant for user u. What is the recall of the top-10 recommendations if 8 of them are relevant for user u?
Recall = (Relevant_Items_Recommended in top-k) / (Relevant_Items)
Precision = (Relevant_Items_Recommended in top-k) / (k_Items_Recommended)
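Worked out for this card: the top-10 list contains 8 of the 19 relevant movies, so Recall = 8/19 ≈ 0.42, while Precision = 8/10 = 0.8.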
When using item-item CF with binary data, we usually just sum the similarities between the item and its neighbors, rather than computing a weighted average. Why?
Since there are no rating values, the weighted average is effectively an average of a set of 1s, which is always 1. Summing the similarities instead produces a meaningful score.
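A small illustrative sketch of that scoring rule (the toy interaction matrix and the use of cosine similarity are assumptions, not from the card): each candidate item is scored by summing its similarities to the items the user has already consumed.

import numpy as np

# Binary (implicit) interactions: rows = users, columns = items; 1 = consumed.
X = np.array([
    [1, 1, 0, 0, 1],
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [0, 1, 0, 1, 1],
], dtype=float)

# Cosine similarity between item columns.
norms = np.linalg.norm(X, axis=0)
sim = (X.T @ X) / np.outer(norms, norms)
np.fill_diagonal(sim, 0.0)            # an item is not its own neighbor

user = 0
consumed = X[user] > 0
# Score every item by summing similarities to the user's consumed items;
# a weighted average of all-1 "ratings" would just equal 1 for everything.
scores = sim[:, consumed].sum(axis=1)
scores[consumed] = -np.inf            # don't re-recommend known items
print("top recommendation for user 0:", int(np.argmax(scores)))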
Why would you want to conduct A/B tests of a recommender system?
To measure how the recommender (or a change to it) affects real user behaviour in a live setting, e.g. click-through, conversion, or retention, since offline accuracy metrics do not always predict online performance.