An unethical optimisation principle


  • Prof. Anthony Davison
  • École Polytechnique Fédérale de Lausanne (EPFL)
  • Venue: FCUL – Bloco C6, Floor 4, Room 6.4.30 – Thursday – 17:00
  • Thursday, 17 October 2019
  • Project reference: UID/MAT/00006/2019

Artificial intelligence (AI) is increasingly deployed in commercial situations. Consider, for example, using AI to set prices of insurance products to be sold to a particular customer. There are legitimate reasons for setting different prices for different people, but it may also be profitable to game their psychology or willingness to shop around. The AI has a vast number of potential strategies to choose from, but some are unethical, by which we mean, from an economic point of view, that there is a risk that stakeholders will apply some penalty, such as fines or boycotts, if they subsequently understand that such a strategy has been applied. We consider this situation and show that if the AI is used to maximise risk-adjusted return, then under mild conditions it is disproportionately likely to pick an unethical strategy unless the objective function allows sufficiently for this risk. Even if the proportion of unethical strategies is small, the probability of picking one can become large; indeed, unless returns are fat-tailed, this probability tends to unity as the strategy space grows. We discuss this and related results, which draw on classical results from the statistics of extremes. The work is joint with Heather Battey, Nicholas Beale and Robert MacKay.
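The limiting claim in the abstract can be illustrated with a short simulation. The sketch below is not from the talk or the associated paper: the Gaussian return model, the 1% unethical fraction, the mean uplift of 1.0 and the function name prob_best_is_unethical are all illustrative assumptions. It draws thin-tailed (Gaussian) returns for N strategies, gives the small unethical subset a modest mean advantage, and estimates how often the return-maximising strategy turns out to be unethical; consistent with the abstract, the estimate rises with N and tends to one in the limit.

    import numpy as np

    rng = np.random.default_rng(0)

    def prob_best_is_unethical(n_strategies, frac_unethical, uplift, n_trials=1000):
        """Monte Carlo estimate of P(the return-maximising strategy is unethical).

        Returns are standard normal (thin-tailed); the unethical subset gets a
        mean uplift.  All parameter values are illustrative, not from the talk.
        """
        n_u = max(1, int(frac_unethical * n_strategies))
        hits = 0
        for _ in range(n_trials):
            ethical = rng.standard_normal(n_strategies - n_u)   # honest strategies
            unethical = rng.standard_normal(n_u) + uplift       # slight edge
            if unethical.max() > ethical.max():
                hits += 1
        return hits / n_trials

    # Only 1% of strategies are unethical, yet the chance that the single
    # best-performing strategy is unethical rises steadily with the size of
    # the strategy space, heading to 1 in the thin-tailed limit.
    for n in (100, 1_000, 10_000, 100_000):
        p = prob_best_is_unethical(n, frac_unethical=0.01, uplift=1.0)
        print(f"N = {n:>6}: P(best strategy is unethical) ~ {p:.2f}")

With heavier-tailed (e.g. regularly varying) returns the same experiment behaves differently, since a single extreme draw from the large ethical pool can dominate; this is the fat-tailed exception noted in the abstract.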