Ekaterina Svetlova
Associate Professor at the University of Twente
Résumé
Ekaterina Svetlova is an Associate Professor at the University of Twente in the Netherlands. Previously, she held positions as a researcher and lecturer at the University of Leicester, the University of Constance (Constance, Germany), Zeppelin University (Friedrichshafen, Germany) and the University of Basel (Basel, Switzerland). She also gained practical experience as a portfolio manager and financial analyst in Frankfurt am Main, Germany. Her interdisciplinary research sits at the intersection of finance, Science and Technology Studies (STS) and economic sociology.
Abstract
AI Ethics in Finance: Knowledge and Responsibility for AI Failure
Dr. Ekaterina Svetlova
There are numerous ethical issues related to the rise of artificial intelligence (AI) in finance, yet many of them have not been sufficiently addressed so far. The issue of responsibility seems to be of particular importance. This paper aims to develop a framework for analysing responsibility in AI ethics in finance. To do so, it discusses a real-life case of an AI that made independent investment decisions and lost a client's money (the case of Tyndaris). The question is: under which conditions can AI designers, programmers and portfolio managers be held responsible for AI decisions and actions? The point of departure is the observation that AI programmers and users in financial markets might possess incomplete knowledge about how AI systems work and make decisions; as a result, the assignment of responsibility is hampered. At the same time, ethical intuition suggests that 'it's complicated' or 'I don't know how it works' are quite unsatisfying answers to the responsibility question (Martin 2019, p. 837). AI produces states of incomplete knowledge that require systematic ethical consideration. This observation necessitates bringing together AI ethics and epistemology (the branch of philosophy that studies the general principles of knowledge and knowledge production). This is also the direction taken by The Ethics and Epistemology of Artificial Intelligence initiative at the University of Twente (https://www.utwente.nl/en/bms/aiethicsandepistemology). Bringing ethics and epistemology together, the paper evaluates the requirements formulated by EU regulators to keep a human in the loop: someone who is directly responsible for AI-supported decisions and must have sufficient knowledge of how the AI works, as well as valid explanations for its recommendations. Further developing the concept of epistemological responsibility (van Baalen et al. 2021), the paper systematically analyses under which conditions the human-in-the-loop concept makes sense, what responsible use of AI in finance implies, and which epistemic and moral obligations can reasonably be assumed by AI programmers and users in the field of investments. Implications for regulators will also be discussed.
References:
van Baalen, S., Boon, M., & Verhoef, P. (2021). From clinical decision support to clinical reasoning support systems. Journal of Evaluation in Clinical Practice, 27(3), 520–528.
Martin, K. (2019). Ethical implications and accountability of algorithms. Journal of Business Ethics, 160, 835–850.