4/28/2023

Ranking and comparing items is crucial for collecting information about preferences in many areas, from marketing to politics. The Mallows rank model is among the most successful approaches to analyse rank data, but its computational complexity has limited its use to a particular form based on the Kendall distance. We develop new computationally tractable methods for Bayesian inference in Mallows models that work with any right-invariant distance. Our method performs inference on the consensus ranking of the items, also when based on partial rankings, such as top-k items or pairwise comparisons.

Learning preference distributions is a critical problem in many areas (e.g., recommender systems, IR, social choice). However, many existing learning and inference methods impose restrictive assumptions on the form of user preferences that can be admitted as evidence. We relax these restrictions by considering as data arbitrary pairwise comparisons of alternatives, which represent the fundamental building blocks of ordinal rankings. We develop the first algorithms for learning Mallows models (and mixtures thereof) from pairwise comparison data. At the heart of our technique is a new algorithm, the generalized repeated insertion model (GRIM), which allows sampling from arbitrary ranking distributions, and conditional Mallows models in particular. While we show that sampling from a Mallows model with pairwise evidence is computationally difficult in general, we develop approximate samplers that are exact for many important special cases and have provable bounds. With pairwise evidence, we also derive algorithms for evaluating log-likelihood, learning Mallows mixtures, and non-parametric estimation. Experiments on real-world data sets demonstrate the effectiveness of our approach.
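The abstracts above give no code, but the repeated insertion model (RIM) that GRIM generalizes is simple to sketch. Below is a minimal, illustrative Python implementation of RIM sampling from a Kendall-distance Mallows model centred on the identity ranking; the function name, the choice of dispersion parameter `phi`, and the identity reference ranking are all assumptions for this example, not details from the papers.

```python
import random

def sample_mallows_rim(n, phi, rng=random):
    """Draw one ranking of items 1..n from a Mallows model
    (Kendall distance, dispersion phi in (0, 1]) centred on the
    identity ranking, via the repeated insertion model (RIM)."""
    ranking = []
    for i in range(1, n + 1):
        # Item i can be inserted at positions 1..i of the partial
        # ranking. Inserting at position j creates i - j pairwise
        # discordances with the reference order, so the insertion
        # weight is phi**(i - j); phi -> 0 concentrates the mass
        # on the reference ranking, phi = 1 gives a uniform draw.
        weights = [phi ** (i - j) for j in range(1, i + 1)]
        r = rng.random() * sum(weights)
        acc = 0.0
        for j, w in enumerate(weights):
            acc += w
            if r <= acc:
                ranking.insert(j, i)
                break
    return ranking

# Example: with a very small phi the sample is (almost surely)
# the reference ranking itself.
print(sample_mallows_rim(5, 1e-12, random.Random(0)))
```

GRIM extends this idea by letting the insertion probabilities depend on the partial ranking built so far, which is what makes conditioning on pairwise evidence possible.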