The changes are OK; they do not affect the overall behavior of the recommenders.

About your questions, Sean:

First, I think it is good not to assume the preference value is in the range 1 to 5. The final evaluation will be a little worse, but I think that clamping the final predictions to bounded values should be the job of some "outer" recommender that knows the domain of the predictions.
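A minimal sketch of that idea: a wrapper that clamps a raw estimate into a known preference range before returning it. The class name `CappingRecommender` and the bounds are hypothetical, just to illustrate the "outer" recommender concept, not part of the patch under discussion.

```java
// Hypothetical "outer" recommender that knows the domain of predictions
// and clamps raw estimates into it. Not part of the actual patch.
public class CappingRecommender {

    private final double minPref;
    private final double maxPref;

    public CappingRecommender(double minPref, double maxPref) {
        this.minPref = minPref;
        this.maxPref = maxPref;
    }

    /** Clamp a raw estimate into [minPref, maxPref]. */
    public double cap(double rawEstimate) {
        return Math.max(minPref, Math.min(maxPref, rawEstimate));
    }

    public static void main(String[] args) {
        // Assuming a 1-to-5 rating scale for illustration only.
        CappingRecommender capper = new CappingRecommender(1.0, 5.0);
        System.out.println(capper.cap(5.7)); // above the scale, clamped to 5.0
        System.out.println(capper.cap(3.2)); // already in range, unchanged
    }
}
```

The inner recommender stays domain-agnostic; only this wrapper needs to know the valid range.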

Second, the Knn recommender is a variation of the generic item-based recommender. It first takes item i and computes the k nearest neighbors of item i. The difference is in the calculation of the final prediction: instead of taking the similarity of each neighbor as the weight of each preference, it first performs a linear interpolation to compute the best weight to apply to each preference. The paper by Bell and Koren has the details. As you can see, this is an item-centric recommender, and we could create a different recommender by taking the neighbors of users instead of the neighbors of items. In that second case, the recommender would be a variation of the generic user-based recommender, but taking the values of a linear interpolation as the weights of each neighbor.
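To make the difference concrete, here is a sketch contrasting the two weighting schemes for a toy case with k = 2 neighbors. The method names and the tiny numbers are illustrative assumptions; the real Bell and Koren method builds a k-by-k least-squares system from co-rated items, whereas this sketch just solves a hand-sized 2x2 system by Cramer's rule.

```java
// Illustrative contrast between similarity weighting (generic item-based)
// and interpolation weights (Bell & Koren style), for k = 2 neighbors.
// The matrices and names below are made-up examples, not the real code.
public class InterpolationSketch {

    /** Generic item-based step: the similarities themselves are the weights. */
    static double similarityWeighted(double[] neighborPrefs, double[] sims) {
        double num = 0.0;
        double den = 0.0;
        for (int j = 0; j < neighborPrefs.length; j++) {
            num += sims[j] * neighborPrefs[j];
            den += sims[j];
        }
        return num / den;
    }

    /**
     * Knn step for k = 2: solve the 2x2 system A w = b by Cramer's rule,
     * then apply the resulting interpolation weights directly.
     * A holds neighbor-neighbor agreement, b neighbor-to-item-i agreement.
     */
    static double interpolationWeighted(double[] neighborPrefs,
                                        double[][] A, double[] b) {
        double det = A[0][0] * A[1][1] - A[0][1] * A[1][0];
        double w0 = (b[0] * A[1][1] - A[0][1] * b[1]) / det;
        double w1 = (A[0][0] * b[1] - b[0] * A[1][0]) / det;
        return w0 * neighborPrefs[0] + w1 * neighborPrefs[1];
    }

    public static void main(String[] args) {
        double[] prefs = {4.0, 2.0};             // user's prefs for the 2 neighbors
        double[] sims = {0.9, 0.3};              // similarities to item i
        double[][] A = {{1.0, 0.5}, {0.5, 1.0}}; // neighbor-neighbor agreement
        double[] b = {0.8, 0.4};                 // neighbor-to-i agreement
        System.out.println(similarityWeighted(prefs, sims));
        System.out.println(interpolationWeighted(prefs, A, b));
    }
}
```

Note how the two schemes can disagree: with these made-up numbers the interpolation step gives the second neighbor almost no weight even though its similarity is nonzero, which is exactly the kind of adjustment the least-squares fit is meant to make.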

I think the discussion of which one is better is beyond this comment, but the item-centric approach is much more used in practice because the data sets are limited in the number of items. In those cases, an item-centric approach has better results and better performance. But, as a researcher of social and complex networks, I don't think this is the general case: consider YouTube, Last.fm, and also P2P clients, where the number of items is much greater than the number of users. In those cases, I think a user-centric approach would give better results.

This is the version I'm working with. There are still some fixes to make, and some additions such as documentation and a license.

Some work is still needed in the Knn implementation to follow the paper's description exactly, but I'm already getting very good results with it.