# Learning Decision-Making Functions Given Cardinal and Ordinal Consensus Data

## DOI:

https://doi.org/10.1609/aaaiss.v3i1.31280

## Keywords:

Human-like Learning, Decision-making, Modeling Consensus, Learning Theory

## Abstract

Decision-making and reaching consensus are an integral part of everyday life, and studying how individuals reach these decisions is an important problem in psychology, economics, and social choice theory. Our work develops methods and theory for learning, from data, the nature of decisions reached by individual decision makers or groups of individuals. We consider two tasks, in which we have access to data on: 1) cardinal utilities for d individuals together with the cardinal consensus values that the group or decision maker arrives at; 2) cardinal utilities for d individuals for pairs of actions, with ordinal information about the consensus, i.e., which action is better according to the consensus. Under some axioms of social choice theory, the set of possible decision functions reduces to the set of weighted power means, M(u, w, p) = (∑ᵢ₌₁ᵈ wᵢ uᵢᵖ)¹ᐟᵖ, where the uᵢ denote the d utilities, w ∈ ∆_{d - 1} denotes the weights assigned to the d individuals, and p ∈ ℝ (Cousins 2023). For instance, p = 1 corresponds to a weighted utilitarian welfare function, and p = -∞ to the egalitarian welfare function. Our goal is to learn w ∈ ∆_{d - 1} and p ∈ ℝ for the two tasks given data. The first task is analogous to regression, and we show that, owing to the monotonicity of M in w and p (Qi 2000), learning these parameters given cardinal utilities and social welfare values is a PAC-learnable task. For the second task, we wish to learn w and p such that, given pairs of actions u, v ∈ ℝ₊ᵈ, the preference is given by C((u, v), w, p) = sign(ln(M(u, w, p)) - ln(M(v, w, p))). This is analogous to classification; however, convexity of the loss function in w and p is not guaranteed. We analyze two related cases: one in which the weights w are known, and another in which they are unknown. We prove that both cases are PAC-learnable given positive u, v by giving an O(log d) bound on the VC dimension in the known-weights case, and an O(d log d) bound in the unknown-weights case.
We also establish PAC-learnability for noisy data under IID (Natarajan 2013) and logistic noise models for this task. Finally, we demonstrate, through experiments on simulated data, how simple algorithms can learn w and p up to moderately high d.

## Downloads
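The weighted power mean M(u, w, p) and the ordinal comparator C from the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the function names and the explicit handling of the limiting cases p → 0 and p → ±∞ are our own choices.

```python
import numpy as np

def power_mean(u, w, p):
    """Weighted power mean M(u, w, p) = (sum_i w_i * u_i**p) ** (1/p).

    Limiting cases: p -> 0 gives the weighted geometric mean,
    p -> -inf the egalitarian (min) welfare, p -> +inf the max.
    Assumes positive utilities u and weights w on the simplex.
    """
    u = np.asarray(u, dtype=float)
    w = np.asarray(w, dtype=float)
    if p == 0:
        # Geometric-mean limit of the power mean as p -> 0.
        return float(np.exp(np.sum(w * np.log(u))))
    if np.isinf(p):
        return float(u.min() if p < 0 else u.max())
    return float(np.sum(w * u ** p) ** (1.0 / p))

def consensus_sign(u, v, w, p):
    """C((u, v), w, p) = sign(ln M(u, w, p) - ln M(v, w, p)):
    +1 if the consensus prefers action u, -1 if it prefers v, 0 if tied."""
    return int(np.sign(np.log(power_mean(u, w, p)) - np.log(power_mean(v, w, p))))
```

For example, with equal weights w = (0.5, 0.5), `power_mean([2, 8], w, 1)` returns the utilitarian average 5.0, while `power_mean([2, 8], w, float("-inf"))` returns the egalitarian value 2.0, showing how p interpolates between welfare functions.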

## Published

2024-05-20

## Section

Symposium on Human-Like Learning