Online ZOOM Link:

https://us02web.zoom.us/meeting/register/tZUsd-CrqjMvEt1dKDRWtOyNsb2woaanD_BH

Summary

A forum for interdisciplinary discussion and the exchange of ideas on the adoption of innovative technologies in finance, bringing together academic and industry experts. The topics we will cover include (but are not limited to):

  • Opportunities for AI adoption by the financial sector;
  • Challenges emerging from the fast adoption of novel technologies in the provision of financial services;
  • State-of-the-art solutions.

The conference is free of charge. We kindly ask you to register: Conference Registration

Conference Chairs and Organization Team

Prof. Dr. Branka Hadji Misheva, Professor of Applied Data Science and Finance, BFH, Switzerland
Prof. Dr. Jörg Osterrieder, Professor of Sustainable Finance, BFH, Switzerland, and Associate Professor of Finance and Artificial Intelligence, University of Twente, Netherlands
Prof. Dr. Christian Hopp, Professor of Finance and Head of the Institute for Applied Data Science and Finance, BFH, Switzerland

Schedule

Artificial Intelligence in Finance

September 30, 2022, 13:00 – 17:30 (BFH Bern Business School, and online)

12:00 – 13:00

Registration and Check-In

13:00 – 13:10

Welcome and Opening

Prof. Dr. Christian Hopp, Prof. Dr. Branka Hadji Misheva, Prof. Dr. Jörg Osterrieder

Session Finance I

Chair: Prof. Dr. Jörg Osterrieder

13:10 – 13:30

Applying differential deep learning to calibrate stochastic volatility models

Dr. Paul Bilokon, Imperial College and The Thalesians, London, UK

Dr. Abir Sridi, Imperial College, London, UK

13:30 – 13:50

Direct Indexing and Bespoke Indexing using Optimization under Uncertainty

Prof. Dr. Ronald Hochreiter, WU Vienna University of Economics and Business, Vienna, Austria

13:50 – 14:10

Attention and sentiment around scheduled macroeconomic news announcements and the volatility on the U.S. Equity market

Prof. Dr. Štefan Lyócsa, Masaryk University, Brno, Czech Republic

14:10 – 14:30

Coffee Break

Session Finance II

Chair: Prof. Dr. Branka Hadji Misheva

14:30 – 14:50

A Time Series Approach to Explainability for Neural Nets with Applications to Risk-Management and Fraud Detection

Prof. Dr. Marc Wildi, ZHAW, Zurich, Switzerland

14:50 – 15:10

A Modular Framework for Reinforcement Learning Optimal Execution

Dr. Jan-Alexander Posth, ZHAW, Zurich, Switzerland

Fernando de Meer Pardo, ZHAW, Zurich, Switzerland

15:10 – 15:30

Data for Explainable AI: Transparent, Green, and Secured

Dr. Tuan Trinh, EIT Digital, Budapest, Hungary

15:30 – 15:50

Algorithmic biases in financial applications

Prof. Dr. Catarina Silva, University of Coimbra, Portugal

15:50 – 16:20

Coffee Break

Session Finance III

Chair: Prof. Dr. Christian Hopp

16:20 – 16:40

Industry Keynote: Credit selection in the bond market using machine learning

Mustafa Modjib, BEKB, Switzerland

16:40 – 17:00

Transformers for electronic trading

Dr. Paul Bilokon, Imperial College and The Thalesians, London, UK

17:00 – 17:20

Why AI in Finance: A critical reflection and outlook in the year 2022

Sandro Schmid, LPA, Zurich, Switzerland

17:20 – 17:30

Closing Remarks: Artificial Intelligence in Finance – The European COST Network

Prof. Dr. Jörg Osterrieder

17:30 – 18:30

Apéro

18:30 – 20:30

UNESCO World Heritage Walk and Speakers’ Dinner


The COST Action Fintech and Artificial Intelligence in Finance

The COST (Cooperation in Science and Technology) Action 19130
49 countries, 200+ researchers from 100+ universities
Funded by the Horizon Europe Framework Programme of the EU
The network: Management Committee and Working Group members

The Research Topics
Fintech, Artificial Intelligence in Finance, Transparency in Finance, Machine Learning, Blockchain, Cryptocurrencies
How to join our COST Action
Sign up for working group membership here.

Programme Committee

Prof. Dr. Nguyen Cuong, Lincoln University, New Zealand
Prof. Dr. Christian Hopp, BFH Bern University of Applied Sciences, Switzerland
Prof. Dr. Branka Hadji Misheva, BFH Bern University of Applied Sciences, Switzerland
Prof. Dr. Audrius Kabasinskas, Kaunas University of Technology, Lithuania 
Prof. Dr. Jörg Osterrieder, BFH Bern University of Applied Sciences, Switzerland and University of Twente, Netherlands
Prof. Dr. Valerio Poti, University College Dublin, Ireland
Prof. Dr. Catarina Silva, University of Coimbra, Portugal
Prof. Dr. Alessandra Tanda, University of Pavia, Italy
Prof. Dr. Simon Trimborn, City University of Hong Kong, Hong Kong
Prof. Dr. Ania Zalewska, University of Bath, United Kingdom

Info and Location

The conference is hosted at the BFH Bern University of Applied Sciences.

The Institute for Applied Data Science & Finance aims to establish itself as a leading Swiss research institute for data-driven, finance-based and strategic insights, analysis and value creation. To this end, an interdisciplinary team of around 25 researchers conducts research and teaching on topics relating to data science, data ethics, data and technology management, data-based business models, corporate financing, digital financing, taxation, accounting and financial reporting.

Bern University of Applied Sciences

Brückenstrasse 73
3005 Bern

Bios and Abstracts

Prof. Dr. Štefan Lyócsa, Masaryk University, Czech Republic 

Štefan Lyócsa is a professor at the Department of Finance of Masaryk University, Brno, Czech Republic, where he supervises PhD students and teaches courses on Financial Management and Artificial Intelligence in Finance. He is also a researcher at the Slovak Academy of Sciences, where he supervises research on ‘Systemic risk on financial markets: interconnectedness of financial institutions (APVV-18-0335)’. His main research interest centres on market and credit risk. Over the past five years he has published more than 30 peer-reviewed papers in journals including the International Journal of Forecasting, the Journal of Economic Dynamics and Control, the Journal of the Operational Research Society, The European Journal of Finance, Energy Economics, and Expert Systems with Applications.

Attention and sentiment around scheduled macroeconomic news announcements and the volatility on the U.S. Equity market

Most of the literature recognizes a relationship between the attention and sentiment of the general public and market price fluctuations. Yet, for forecasting purposes, it is unclear what information we should look for. Scheduled macroeconomic news announcements are regular and potentially price-moving events, and thus represent a natural target for information retrieval. We analyze how information related to scheduled macroeconomic news announcements, retrieved from multiple data sources, improves volatility forecasts for over 400 major U.S. stocks. Specifically, we extract attention and/or sentiment from social media, news articles, information consumption, and search-engine data. Working within penalized regression, complete subset regression, and random forest frameworks, we identify which data sources and measures of public interest drive future price fluctuations.
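To make the setup concrete, the sketch below shows a toy penalized-regression volatility forecast in the spirit of the abstract: HAR-style lags of realized variance augmented with attention and sentiment predictors, fitted with a cross-validated Lasso. All data and column names (sent_news, att_search, att_social) are synthetic placeholders, not the study's actual variables.

# Minimal sketch of a penalized-regression volatility forecast in the spirit of the
# abstract above. Column names (sent_news, att_search, ...) are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n = 1_000
df = pd.DataFrame({
    "rv": rng.gamma(2.0, 0.5, n),          # daily realized variance (toy data)
    "sent_news": rng.normal(size=n),       # news sentiment around announcements
    "att_search": rng.normal(size=n),      # search-engine attention
    "att_social": rng.normal(size=n),      # social-media attention
})

# HAR-style lags of realized variance plus attention/sentiment predictors
df["rv_d"] = df["rv"].shift(1)
df["rv_w"] = df["rv"].rolling(5).mean().shift(1)
df["rv_m"] = df["rv"].rolling(22).mean().shift(1)
df = df.dropna()

X = df[["rv_d", "rv_w", "rv_m", "sent_news", "att_search", "att_social"]]
y = df["rv"]

# Cross-validated Lasso shrinks irrelevant attention/sentiment measures toward zero,
# indicating which data sources carry predictive content for next-day volatility.
model = LassoCV(cv=5).fit(X, y)
print(dict(zip(X.columns, model.coef_.round(4))))

In such a setup the surviving non-zero coefficients indicate which attention and sentiment measures carry incremental predictive content beyond the volatility lags.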

Prof. Dr. Catarina Silva, University of Coimbra, Portugal

Catarina Silva is Assistant Professor at the Department of Informatics Engineering of the University of Coimbra. She holds a PhD in Computer Engineering and has 20 years of experience teaching Computer Engineering at BSc and MSc level, while also supervising MSc and PhD students. She is a senior researcher in the Adaptive Computation Group of CISUC, with machine learning and pattern recognition as her main areas of research. She is skilled at managing projects and scientific entrepreneurship of different sizes, involving people with different backgrounds, namely faculty, students, alumni, and companies. She is the author or co-author of 4 books, around 20 journal articles, and 50 conference papers, and serves on scientific committees and as a reviewer for several conferences and journals. She is President of the General Assembly of the Portuguese Association for Pattern Recognition, an IEEE Senior Member of the Computational Intelligence Society, and Past Chair of the IEEE Portugal Section.

https://www.cisuc.uc.pt/en/people/catarina

Algorithmic biases in financial applications

Intelligent methods such as deep neural networks are becoming the standard go-to algorithms for a wide range of financial applications. However, their applicability in critical applications, e.g., credit scoring, risk analysis, and investment performance, has faced hurdles due to the lack of model interpretability.

Such systems suffer from interpretability/explainability issues. In this talk, an overview of the challenges and current approaches is presented, including case studies.

Prof. Dr. Jörg Osterrieder

Joerg Osterrieder is Professor of Sustainable Finance at BFH and Associate Professor of Finance and Artificial Intelligence at University of Twente. He has been working in the area of financial statistics, quantitative finance, algorithmic trading, and digitisation of the finance industry for more than 15 years.

Joerg is the Action Chair of the European COST Action 19130 Fintech and Artificial Intelligence in Finance, an interdisciplinary research network comprising more than 200 researchers from 38 European countries and five international partner countries. He was the director of studies for an executive education course on “Big Data Analytics, Blockchain and Distributed Ledger”, co-director of studies for “Machine Learning and Deep Learning in Finance”, and has been the main organizer of an annual research conference series on Artificial Intelligence in Industry and Finance since 2016. He is a founding associate editor of Digital Finance, an editor of Frontiers in Artificial Intelligence in Finance, and a frequent reviewer for academic journals.

In addition, he serves as an expert reviewer for the European Commission’s “Executive Agency for Small & Medium-sized Enterprises” and “European Innovation Council Accelerator Pilot” programmes.

Previously, he worked as an executive director at Goldman Sachs and Merrill Lynch, as a quantitative analyst at AHL, and as a member of senior management at Credit Suisse Group. Joerg is now also active at the intersection of academia and industry, focusing on the transfer of research results to the financial services sector in order to implement practical solutions.

Artificial Intelligence in Finance – The European COST research network

Joerg will give an overview of the European COST Action Fintech and Artificial Intelligence in Finance. COST stands for Cooperation in Science and Technology and is the longest-running research funding framework in Europe. He is the Action Chair of this COST Action, a network of more than 200 researchers from 38 European countries and 5 international partner countries. The research focuses broadly on Artificial Intelligence in Finance, with a specific emphasis on transparent financial markets, methods, and products.

Prof. Dr. Ronald Hochreiter, WU Vienna
Ronald Hochreiter is Docent at WU Vienna University of Economics and Business. He has been President of the Academy of Data Science in Finance since 2017 and Vice-President of the Austrian Society of Operations Research since 2013. His research is based on Decision Science (Operations Research and Optimization under Uncertainty) as well as Data Science (Artificial Intelligence and Machine Learning), applied to Algorithmic & Quantitative Finance, Social Science, Public Management, and Health Management. He serves as Principal Investigator and partner for various national and international (EU) research projects. He is Program Director of the Data Science module within the Professional MBA in Digitalization and Data Science at WU Executive Education.

Human-centered AI-based Bespoke Indexing Methodologies

 

This talk presents contemporary Direct Indexing and Bespoke Indexing methods based on Optimization under Uncertainty. It will be shown that optimization approaches offer many advantages over classical ranking approaches. Optimization techniques for the sub-selection of assets, for the optimal allocation, and for modifiers will be presented, and numerical results substantiate the proposed methods.
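As a rough illustration of how an optimization-based indexing pipeline can be organized, the sketch below greedily pre-selects a small basket and then solves a long-only tracking-error minimization on synthetic returns. It is a toy example under simplifying assumptions, not the methodology presented in the talk.

# Illustrative sketch of optimization-based bespoke indexing: greedily pre-select a
# small basket, then solve a long-only tracking-error minimization. Purely synthetic
# data; not the specific methodology of the talk.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T, N, K = 500, 50, 10                         # days, universe size, basket size
asset_ret = rng.normal(0.0003, 0.01, (T, N))
index_ret = asset_ret @ np.full(N, 1.0 / N)   # benchmark = equal-weighted universe

# Step 1: sub-selection -- keep the K assets most correlated with the index
corr = [np.corrcoef(asset_ret[:, i], index_ret)[0, 1] for i in range(N)]
basket = np.argsort(corr)[-K:]
R = asset_ret[:, basket]

# Step 2: optimal allocation -- minimize tracking error, long-only, fully invested
def tracking_error(w):
    return np.mean((R @ w - index_ret) ** 2)

w0 = np.full(K, 1.0 / K)
res = minimize(tracking_error, w0, method="SLSQP",
               bounds=[(0.0, 1.0)] * K,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
print("basket:", basket)
print("weights:", res.x.round(3))

Replacing the greedy pre-selection and the quadratic objective with richer formulations (cardinality constraints, scenario-based uncertainty, client-specific modifiers) is where optimization-based approaches gain their flexibility over simple ranking schemes.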

Sandro Schmid, LPA
Sandro has over 20 years of experience in the financial services industry, where he held various senior front-office, risk, operations, and IT functions such as CEO, CRO, and COO. He was also a partner at two Big Four firms, where he built up advisory and risk consulting practices. Further, he founded AAAccell, a global Top-100 FinTech, and the Swiss Risk Association, which he chaired as president for almost 10 years. Sandro studied Economics and holds an MBA and a MAS as well as FRM and AI diplomas. He also lectures, or has lectured, at several universities, including the University of Zurich and SFI.

Artificial intelligence (AI) is rapidly advancing in most industries around the world. To what extent has it transformed, and will it transform, financial services, and why or why not?

Dr Paul Alexander Bilokon
Dr Paul Bilokon is CEO and Founder of Thalesians Ltd and an academic at Imperial College, where his work focuses on machine learning, high performance computing, big data, and electronic trading. His career in quantitative finance spans Morgan Stanley, Lehman Brothers, Nomura, Citigroup, Deutsche Bank, and BNP Paribas, and he and his team at Thalesians continue to provide consulting services to numerous financial institutions, both on the buy-side and on the sell-side. He was one of the e-credit pioneers and has co-authored several books, including Machine Learning in Finance: From Theory to Practice and Big Data and High-Frequency Data with kdb+/q. Paul is fluent in C++, Java, Python, and kdb+/q and enjoys building distributed software systems powered by ML and applied mathematics.

Applying differential deep learning to calibrate stochastic volatility models

Stochastic volatility models, in which the volatility is itself a stochastic process, can capture most of the essential stylized facts of implied volatility surfaces and give more realistic dynamics of the volatility smile/skew. However, they come with the significant drawback of being slow to calibrate.

Alternative calibration methods based on deep learning have recently been used to build fast and accurate solutions to the calibration problem. Huge and Savine developed a Differential Deep Learning (DDL) approach, in which machine learning models are trained on samples containing not only features and labels but also the differentials of labels with respect to features. Our project applies the DDL technique to price vanilla European options (the calibration instruments), specifically puts whose underlying asset follows a Heston model, and then calibrates the model on the trained network. DDL allows for fast training and accurate pricing, and the trained neural network dramatically reduces the computation time of the Heston calibration.

In this work, we also introduce different regularization techniques and apply them notably in the DDL setting. We compare their performance in reducing overfitting and improving the generalization error. The performance of DDL is also compared with that of classical DL (without differentials) for feed-forward neural networks, and we show that DDL outperforms DL.
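The core idea of differential training, fitting a network on values and on the differentials of labels with respect to features, can be sketched in a few lines. The example below uses a synthetic quadratic payoff in place of option prices and is an illustration of the idea only, not the authors' implementation.

# Minimal PyTorch sketch of the differential training idea: fit a network on
# (features, labels) and on differentials d(label)/d(features) obtained via autograd.
# A synthetic quadratic payoff stands in for option prices; not the authors' code.
import torch

torch.manual_seed(0)
x = torch.rand(2048, 1) * 2 - 1
y = x ** 2                       # "price"
dy = 2 * x                       # known pathwise differential d(price)/d(feature)

net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Softplus(),
    torch.nn.Linear(64, 64), torch.nn.Softplus(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    xi = x.clone().requires_grad_(True)
    pred = net(xi)
    # differentials of predictions w.r.t. inputs, kept in the graph for backprop
    dpred = torch.autograd.grad(pred.sum(), xi, create_graph=True)[0]
    loss = torch.mean((pred - y) ** 2) + torch.mean((dpred - dy) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(float(loss))

The smooth Softplus activation is used here because the derivative term in the loss requires a differentiable network; in the option-pricing setting the differentials would be pathwise sensitivities rather than the analytic 2x used in this toy.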

Dr Paul Alexander Bilokon, Imperial College, UK
(Biography as above.)

Transformers for electronic trading

 

For some time, LSTMs have been considered the go-to deep learning architecture for financial time series forecasting. Recently, a new type of deep learning architecture called the Transformer was introduced, which has shown state-of-the-art results in NLP and, more recently, in time series forecasting. Can Transformers outperform LSTMs in high-frequency financial time series forecasting? After comparing the Transformer, Autoformer, FEDformer, Temporal Fusion Transformer, and LSTM architectures, we propose a new model design called the HFformer. The HFformer has a Transformer encoder, spiking activations, no positional encoding, and a direct linear decoder, and it uses the quantile loss function; it combines elements from the previously studied architectures. We then compare the HFformer to the LSTM, the most popular architecture. R² and classification accuracy are used to assess both the regression and classification properties of the HFformer for log-returns using high-frequency BTC-USDT limit order book snapshots. We find that the HFformer achieves an R² similar to the LSTM’s, but outperforms the LSTM on classification tasks (buys and sells) for forecasting horizons from 1 to 30 ticks.
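For orientation, the toy sketch below wires together three of the ingredients named in the abstract: a Transformer encoder with no positional encoding, a direct linear decoder, and the quantile (pinball) loss. It runs on random tensors and is a minimal illustration only, not the authors' HFformer.

# Toy sketch of ingredients named above: a Transformer encoder without positional
# encoding, a direct linear decoder, and the quantile (pinball) loss. This is an
# illustration, not the authors' HFformer implementation.
import torch
import torch.nn as nn

class TinyHFModel(nn.Module):
    def __init__(self, n_features: int, d_model: int = 32, horizon: int = 1):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.decoder = nn.Linear(d_model, horizon)   # direct linear decoder

    def forward(self, x):                # x: (batch, seq_len, n_features)
        h = self.encoder(self.embed(x))  # no positional encoding added
        return self.decoder(h[:, -1])    # forecast from the last time step

def quantile_loss(pred, target, q: float = 0.5):
    err = target - pred
    return torch.mean(torch.maximum(q * err, (q - 1) * err))

model = TinyHFModel(n_features=10)
x = torch.randn(8, 64, 10)               # e.g. 64 LOB snapshots with 10 features
y = torch.randn(8, 1)                    # next-tick log-return (synthetic)
loss = quantile_loss(model(x), y, q=0.5)
loss.backward()
print(float(loss))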

Abir Sridi, Imperial College, UK
Abir Sridi is currently completing an MSc in AI at Imperial College London. She holds an MSc in modelling and mathematical methods in economics and finance from La Sorbonne (Paris I) and obtained a PhD with honours, also from La Sorbonne (Paris I), in financial modelling, focused on Monte Carlo methods and volatility smile modelling in a multidimensional framework. Abir worked as a research associate in the Department of Mathematics at King’s College London with Professor Brigo on multivariate mixtures of distributions.

Applying differential deep learning to calibrate stochastic volatility models
(Abstract as above; joint work with Dr. Paul Bilokon.)

Prof. Dr. Marc Wildi, ZHAW, Switzerland

 

Marc Wildi is Professor of Econometrics at the Zurich University of Applied Sciences. His research interests centre on forecasting, real-time signal extraction, business-cycle analysis, algorithmic trading, and risk management. His recent work emphasizes hybrid approaches (mixing real-time filter designs and generic trading concepts) as well as explainability (XAI) of computationally intensive approaches (NN) in the context of longitudinal data (time series).

A Time Series Approach to Explainability for Neural Nets with Applications to Risk-Management and Fraud Detection
Artificial intelligence (AI) is driving one of the biggest revolutions across technology-driven application fields. For the finance sector, it offers many opportunities for significant market innovation, yet broad adoption of AI systems relies heavily on our trust in their outputs. Trust in technology is enabled by understanding the rationale behind the predictions made. To this end, the concept of eXplainable AI (XAI) emerged, introducing a suite of techniques that attempt to explain to users how complex models arrive at a certain decision. For cross-sectional data, classical XAI approaches can lead to valuable insights about a model’s inner workings, but these techniques generally cannot cope well with longitudinal data (time series) in the presence of dependence structure or non-stationarity. In this talk I will present joint work with Branka Hadji Misheva on a novel XAI technique for deep learning methods (DL) that preserves and exploits the natural time ordering of the data. After a brief introduction to the main concepts, simple applications to financial data illustrate the potential of the approach in the context of risk management and fraud detection.

Dr. Jan-Alexander Posth, ZHAW, Switzerland 

Dr. Jan-Alexander Posth is a senior lecturer at the Institute for Wealth and Asset Management at the ZHAW School of Management and Law, with a research focus on GreenTech and AI in finance. He has more than 12 years of professional experience in the financial industry, where he gained extensive expertise as a risk manager, quant, and portfolio manager.

Starting at Deutsche Postbank as a credit risk manager, Alexander moved on to Landesbank Baden-Württemberg, where he led the fund derivatives trading desk. Joining STOXX Ltd. in 2012, he was responsible for the development of smart-beta equity indices before becoming Head of Research and Portfolio Management at a start-up hedge fund in 2015.

Alexander started at ZHAW in 2017; he holds a PhD in theoretical physics.

A Modular Framework for Reinforcement Learning Optimal Execution
One of the most crucial challenges of applying Reinforcement Learning to market problems is that the true environment (the market) is inaccessible, so we have to decide how to simulate its behaviour, be it via historical data or data-generation approaches. In this work, we develop a modular framework for the application of Reinforcement Learning to the problem of Optimal Trade Execution. The framework is designed with flexibility in mind, in order to ease the implementation of different simulation setups.

Rather than focusing on agents and optimization methods, we focus on the environment and break down the requirements for simulating Optimal Trade Execution in a Reinforcement Learning framework (data pre-processing, construction of observations, action processing, child-order execution, simulation of benchmarks, reward calculation, etc.). We give examples of each component, explore the difficulties that their individual implementations and the interactions between them entail, and discuss the phenomena that each component induces in the simulation, highlighting the divergences between the simulation and the behaviour of a real market.

We showcase our modular implementation through a setup that, following a Time-Weighted Average Price (TWAP) order-submission schedule, allows the agent to place limit orders, simulates their execution by iterating over snapshots of the Limit Order Book (LOB), and calculates rewards as the dollar improvement over the price achieved by a TWAP benchmark algorithm following the same schedule. We also develop evaluation procedures that incorporate iterative re-training and evaluation of a given agent over intervals of the training horizon, mimicking how an agent may behave when continuously retrained as new market data becomes available and emulating the monitoring practices that algorithm providers are bound to perform under current regulatory frameworks.
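To give a flavour of the modular structure described above, the sketch below sets up a gym-style execution environment with a TWAP child-order schedule and a terminal reward defined as the cost improvement over a TWAP benchmark. The limit-order-book dynamics are replaced by a random-walk placeholder, so this is an illustrative skeleton only, not the authors' framework.

# Stripped-down sketch of a gym-style optimal-execution environment with a TWAP
# submission schedule and a reward defined as price improvement over a TWAP
# benchmark. LOB dynamics are replaced by a random-walk placeholder.
import numpy as np

class ToyExecutionEnv:
    def __init__(self, total_qty=1_000, n_slices=10, seed=0):
        self.total_qty, self.n_slices = total_qty, n_slices
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.t, self.remaining = 0, self.total_qty
        self.mid = 100.0
        self.benchmark_cost = 0.0          # cost of a plain TWAP schedule
        self.agent_cost = 0.0
        return self._obs()

    def _obs(self):
        return np.array([self.t / self.n_slices,
                         self.remaining / self.total_qty,
                         self.mid])

    def step(self, limit_offset):
        # child-order size follows the TWAP schedule; the agent only chooses the
        # limit-price offset relative to the mid
        qty = self.total_qty / self.n_slices
        self.mid += self.rng.normal(0, 0.05)            # placeholder for LOB replay
        fill_price = self.mid + max(limit_offset, 0.0)  # crude fill model
        self.agent_cost += qty * fill_price
        self.benchmark_cost += qty * self.mid           # TWAP executes at the mid
        self.t += 1
        self.remaining -= qty
        done = self.t >= self.n_slices
        reward = (self.benchmark_cost - self.agent_cost) if done else 0.0
        return self._obs(), reward, done, {}

env = ToyExecutionEnv()
obs, done = env.reset(), False
while not done:
    obs, reward, done, _ = env.step(limit_offset=0.01)
print("improvement over TWAP benchmark:", round(reward, 2))

In the modular framework described in the abstract, each placeholder here (observation construction, fill simulation, benchmark, reward) would be a swappable component driven by historical LOB snapshots rather than a random walk.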

Fernando De Meer Pardo, ZHAW, Switzerland
Fernando De Meer Pardo is a PhD student who specialized in deep generative models during his MSc studies. After working in industry for a couple of years, he joined ZHAW for his PhD. His research focuses on applications of AI in financial settings, including Reinforcement Learning, Generative Models, and NLP.

A Modular Framework for Reinforcement Learning Optimal Execution
(Abstract as above; joint work with Dr. Jan-Alexander Posth.)

Tuan Trinh, EIT Digital, Hungary
Tuan Trinh has more than 15 years of experience as a tech leader and innovator with a strong background in R&D and a broad background in ICT, both in industry and in academia. He received his MSc degree in Computer Science and his PhD in the same field from the Budapest University of Technology and Economics (BME), Hungary. He worked as an associate professor in Computer Science at Corvinus University of Budapest and as a visiting professor at the University of Bern, Switzerland. Tuan Trinh has been invited as a visiting scientist at Ericsson Research Budapest (Hungary), British Telecom (UK), the Eurecom Institute (France), the University of Luxembourg, and ILNAS Luxembourg, among others. He has also acted as an evaluator of funded projects for the European Commission and the Hungarian government. One of the main highlights of his career was taking a leading part in founding two innovation centres focused on Education-Research-Business integration at universities: the Network Economics Group at BME, focusing on innovation in disruptive technologies at the frontier between economics and network technology, and the Fintech Centre at Corvinus University of Budapest, an innovation centre focusing on financial technology. Tuan Trinh is a recipient of the Pollák-Virág prize from the Hungarian Scientific Association for Infocommunications.

Data for Explainable AI: Transparent, Green, and Secured

Transparency has its price, and transparent and explainable AI in finance is no exception. This talk addresses the issue by examining the trade-off between transparency and other key aspects of AI in finance, focusing on the data issues involved in the green and security dimensions. Can all of these be achieved at once? What data issues and challenges need to be handled? Finally, we discuss future research directions on this topic.

Registration

The conference is free of charge. We kindly ask you to register: Conference Registration