
Looking through the Turbulent Sky

Executive Summary

Images captured by ground-based telescopes are blurred by atmospheric turbulence. We have been developing algorithms for deformable mirrors, which must be adjusted 3000 times per second to obtain sharp images.

Problem overview

Turbulence in the atmosphere leads to variations in air density and therefore in the local speed of light, which blurs uncorrected images from ground-based telescopes.


Left: Uncorrected image of the star HIC 59206. Right: After switching on the Adaptive Optics System of the VLT, it becomes visible that it is in fact a double star. Credit: ESO.


The Very Large Telescope (VLT) of the European Southern Observatory (ESO) has a primary mirror 8.2 m in diameter. With the planned European Extremely Large Telescope (E-ELT, 39 m primary mirror), the angular separation at which two objects can still be distinguished becomes much smaller, and the computational requirements on Adaptive Optics systems become much more demanding. To be specific, for certain applications 60,000 actuators of deformable mirrors have to be adjusted 3000 times per second.

The turbulence is measured by so-called wavefront sensors. The art of algorithm development lies in meeting the performance requirements for online control. Credit: ESO.



Results and achievements

Austria joined ESO in 2008. As part of Austria’s contribution to ESO, a team doing research and algorithm development on Adaptive Optics was formed, consisting of the Industrial Mathematics Institute (Kepler University), the Radon Institute for Computational and Applied Mathematics of the Austrian Academy of Sciences, and MathConsult.

From 2009 to 2013, a variety of algorithms for different Adaptive Optics settings was developed: our Cumulative Reconstructor (CuRe) was improved by domain decomposition (CuReD) and parallelization.

Typical speed-ups over the previously widely used matrix-vector multiplication (MVM) were between 20 and 1100. This makes adaptive optics systems feasible also for the E-ELT, where response times of 100 microseconds should be achieved.
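The baseline against which these speed-ups are measured is the classical MVM reconstructor: a reconstruction matrix is precomputed, and each frame of sensor slopes is mapped to actuator commands by one matrix-vector product. A minimal sketch follows; the sizes and the random matrix are purely illustrative, since a real reconstruction matrix comes from the sensor and mirror geometry.

```python
import numpy as np

def mvm_reconstruct(R, slopes):
    """Classical matrix-vector-multiplication (MVM) reconstructor:
    actuator commands = reconstruction matrix @ sensor slopes.
    Cost per frame is O(n_act * n_slopes), which is what CuReD-style
    algorithms improve on."""
    return R @ slopes

# Hypothetical small system for illustration only.
rng = np.random.default_rng(0)
n_slopes, n_act = 1000, 500
R = rng.standard_normal((n_act, n_slopes)) / n_slopes  # stand-in matrix
slopes = rng.standard_normal(n_slopes)                 # one sensor frame

commands = mvm_reconstruct(R, slopes)
print(commands.shape)  # (500,)
```

At E-ELT scale, this product has to be carried out for tens of thousands of actuators within roughly 100 microseconds, which is why faster reconstructors were needed.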

Artist’s impression of the E-ELT. With its 39m primary mirror, it will be the largest ground-based telescope on earth. Credit: ESO/L. Calçada

Adaptive Optics Video

Schematic video on adaptive optics. (Source: ESO)



Further Reading:


Matthias Rosensteiner: Advanced algorithms for astronomical adaptive optics, Ph.D. thesis, Johannes Kepler University, 2013.

Temperature Control in the Production of Heavy Plates

Executive Summary

Excellent mechanical properties of heavy steel plates depend on tight monitoring and control of the thermal history during the production steps “hot rolling” and “accelerated cooling”.

Problem overview

Heavy steel plates are used, e.g., for sour service pipelines, as structural steels for offshore constructions like jackets or wind towers, for pressure shafts of pumped storage plants, and for the mobile-crane industry as well as for concrete pumps. Typical thicknesses are up to 40 mm for line pipes and up to 150 mm for offshore constructions.

To obtain (high strength) plates that are able to resist high stresses and show very good toughness properties, the control of the recrystallization and the softening behavior is essential. This softening is influenced by the strains and strain rates during hot rolling, the temperature history and the alloying elements, especially niobium.


4.2 m quarto stand for hot rolling. Source: voestalpine



For the control and prognosis of temperatures at different locations on the surface and within the plate, a fast and reliable solver for the nonlinear heat transport equation should be developed, one that is able to resolve the massive temperature drops at the surface when the plate enters the (cooled) rollers of the stand.


Accelerated cooling device. Source: voestalpine


Results and achievements

A careful dimensional analysis makes it possible to replace the transient three-dimensional problem by a family of transient one-dimensional problems that can be solved numerically with satisfactory accuracy in 20 milliseconds and can thus form the basis for an online control system.
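The core of such a one-dimensional solver is a heat-equation time step. Below is a minimal sketch for the linear constant-coefficient case, using backward Euler in time and a tridiagonal Thomas solve; the production solver handles nonlinear, temperature-dependent material data, and all numbers here are illustrative.

```python
import numpy as np

def heat_step(T, dt, dx, alpha, T_surface):
    """One backward-Euler step for u_t = alpha * u_xx on [0, L], with a
    fixed surface temperature at x = 0 (cooled roller contact) and an
    insulated mid-plane at x = L. The tridiagonal system is solved with
    the Thomas algorithm."""
    n = len(T)
    r = alpha * dt / dx**2
    # Tridiagonal coefficients of (I - r*D2) T_new = T_old.
    a = np.full(n, -r)         # sub-diagonal
    b = np.full(n, 1 + 2*r)    # diagonal
    c = np.full(n, -r)         # super-diagonal
    d = T.copy()
    b[0], c[0], d[0] = 1.0, 0.0, T_surface   # Dirichlet at the surface
    a[-1] = -2*r                             # mirror node: insulated mid-plane
    # Forward elimination.
    for i in range(1, n):
        m = a[i] / b[i-1]
        b[i] -= m * c[i-1]
        d[i] -= m * d[i-1]
    # Back substitution.
    T_new = np.empty(n)
    T_new[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        T_new[i] = (d[i] - c[i] * T_new[i+1]) / b[i]
    return T_new

# Illustrative run: 20 mm half-thickness, plate initially at 1000 degrees C,
# surface chilled to 200 degrees C for 0.1 s.
n, L = 81, 0.020
T = np.full(n, 1000.0)
for _ in range(100):
    T = heat_step(T, dt=1e-3, dx=L/(n-1), alpha=1e-5, T_surface=200.0)
print(round(float(T[0]), 1), round(float(T[-1]), 1))  # surface held cold, center barely affected
```

The implicit step is unconditionally stable, which is what allows large time steps even through the abrupt surface-temperature drops at roller contact.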


Calculated temperature at the surface (blue), in the center (yellow) and at 75% height (red). In the specific example, fifteen hot rolling passes were performed. The drop at the very right comes from accelerated cooling.



Further Reading

E. Parteder, A. Binder, G. Wollendorfer, T. Antretter, K. Zeman: Hot Rolling and Accelerated Cooling Simulations using ABAQUS - a Fertile Basis for Fast Online Algorithms in Heavy Steel Plate Production, Simulia Community Conference, 2013.

A Kinetic Model of Blast Furnace Automation

Executive summary

Blast furnaces have been used for iron production for centuries. In today’s competitive engineering markets, math-based furnace simulation can cover operating conditions that are not accessible by experiment.


Challenge overview

The modernisation of steel mills around the world requires a thorough understanding of the processes going on in a blast furnace and quantitative simulation tools for analysing the influence of different raw materials and different operating conditions.


The problem

The mathematical model of a blast furnace should cover at least: (a) the transient movement of layers of coke and of iron ore and the shrinking of the coke layers, (b) the movement of gas through the furnace, (c) the chemical reactions taking place (up to 50 of them taken into account), (d) balances of energy.


This leads to a system of (around 50) highly nonlinear partial differential equations with the unknowns depending on position and time. Assuming rotational symmetry seems reasonable, leading to a (2D + time) problem. The discretised version (finite elements combined with the method of lines for some reactions) led to systems with up to 800,000 unknowns.
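The method-of-lines idea can be illustrated on a toy problem far simpler than the actual multi-species 2D model: discretize a single transport-reaction equation in space and integrate the resulting ODE system in time. All names and coefficients below are hypothetical.

```python
import numpy as np

def rhs(c, v, k, dx):
    """Semi-discrete right-hand side of c_t + v*c_x = -k*c (downward
    transport of a single reacting species), after first-order upwind
    differencing in space: the method-of-lines idea in miniature."""
    dc = np.empty_like(c)
    dc[0] = 0.0                                       # inlet concentration held fixed
    dc[1:] = -v * (c[1:] - c[:-1]) / dx - k * c[1:]
    return dc

# Toy setup: 200 cells over a 20 m column, explicit Euler in time
# (CFL number v*dt/dx = 0.25, so the scheme is stable).
n, height = 200, 20.0
dx = height / n
c = np.zeros(n)
c[0] = 1.0            # normalized feed concentration at the top
v, k, dt = 0.5, 0.05, 0.05
for _ in range(2000):
    c = c + dt * rhs(c, v, k, dx)
print(round(float(c[-1]), 3))  # outlet value, roughly exp(-k*height/v) = exp(-2)
```

The real model couples dozens of such equations through the chemical reaction terms and the energy balance, but the structure of the semi-discretisation is the same.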


Results and achievements

The simulation tool which was developed is able to simulate, e.g., various mixes of raw materials. The computation of one real-world blast-furnace day typically takes 3 hours on a standard PC, which makes it possible to simulate different operating conditions and thus to optimise, e.g., energy consumption. The kinetic blast furnace model is part of Siemens VAI’s automation offerings.



Further Reading:

Gerald Gökler: Moving layers - tracking characteristics: A mathematical model and the numerical simulation of two ironmaking processes. Ph.D. thesis, Johannes Kepler University, 2005.

Robust Calibration of Local Volatility Models

Executive summary

For the so-called local volatility model, Bruno Dupire derived a closed-form solution which, when applied naively, delivers cliffy volatility surfaces. A robust and fast parameter calibration scheme had to be found.

Problem overview

Call and/or put options on liquid assets or equity indices are traded for different expiries and for a range of strike prices. It turns out that the traded option prices do not fit into the constant-volatility world of Black-Scholes but exhibit so-called “volatility smiles” or “volatility skews”.

A model for which such a behavior can be obtained without the need of stochastic volatility is the local volatility model: Here the stochastic movement of the price of the underlying follows

$$dS = \mu S dt + \sigma(S,T) S\; dW$$

with the drift rate \(\mu \), the volatility function \(\sigma\), and the increment \(dW\) of the Wiener process. The local volatility function cannot be measured directly but has to be identified from the quoted option prices mentioned above. Bruno Dupire showed in 1994 that if these call prices were available as a function, then \(\sigma\) must satisfy

$$\sigma_\mathrm{loc}(K,T) = \sqrt{ \frac{ \frac{\partial C}{\partial T} + r K \frac{\partial C}{\partial K }} {\frac{K^2}{2} \frac{\partial^2 C}{\partial K^2}}}$$

When we apply this inversion formula directly, we obtain


Local volatility surface obtained by applying Dupire’s inversion formula on a 50x50 grid: strikes from 50 to 150 percent of spot, expiries up to 5 years, synthetic implied (annual) volatilities between 25 and 35%, noise level up to 0.1% absolute.


There are two main reasons for this cliffy behavior even at a low noise level: (1) Differentiation per se is an unstable process that amplifies high-frequency noise. (2) The second-derivative term in the denominator is close to zero if the traded options under consideration are deep in the money or deep out of the money.
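Reason (1) is easy to demonstrate: a centred finite-difference second derivative divides differences by the squared grid spacing, so even tiny data noise swamps the true curvature. A small synthetic experiment:

```python
import numpy as np

rng = np.random.default_rng(42)
h = 0.01
x = np.arange(0.0, 1.0 + h / 2, h)
f = np.sin(x)                                  # smooth stand-in for a price curve
noise = 1e-4 * rng.standard_normal(len(x))     # tiny quotation noise

def second_derivative(y, h):
    """Centred finite-difference second derivative on the interior points."""
    return (y[2:] - 2 * y[1:-1] + y[:-2]) / h**2

clean = second_derivative(f, h)
noisy = second_derivative(f + noise, h)
# The true curvature is bounded by 1, but the noise is amplified by ~1/h^2.
print(np.abs(clean).max() < 1.0, np.abs(noisy - clean).max() > 1.0)
```

With noise of absolute size 1e-4 on a grid of spacing 0.01, the error in the second derivative is of order 1e-4 / 0.01^2 = 1, already as large as the quantity being computed; this is exactly the effect visible in the cliffy surface above.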


Results and achievements

Mathematically, two conflicting targets should be achieved. On the one hand, the fit for the traded option data should be as good as possible (“model prices close to market prices”), on the other hand, the local volatility surface should be as smooth as possible.

Nonlinear Tikhonov regularization with an appropriate regularization parameter choice fulfills the requirement of a fast and robust identification procedure. The computing time for a market data set of 2500 options was 8 seconds on a Windows 7 laptop.
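The trade-off between data fit and smoothness can be illustrated in one dimension: Tikhonov regularization with a second-difference penalty minimizes ||x - y||^2 + alpha * ||D2 x||^2 and is solved via the normal equations. This is a sketch with a hypothetical regularization parameter, not the actual (nonlinear) calibration code.

```python
import numpy as np

def tikhonov_smooth(y, alpha):
    """Minimise ||x - y||^2 + alpha * ||D2 x||^2, where D2 is the
    second-difference operator: a linear Tikhonov-regularised fit
    balancing data fidelity against smoothness."""
    n = len(y)
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i+3] = [1.0, -2.0, 1.0]
    A = np.eye(n) + alpha * D2.T @ D2   # normal equations matrix
    return np.linalg.solve(A, y)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 101)
truth = np.sin(2 * np.pi * x)                    # smooth ground truth
y = truth + 0.1 * rng.standard_normal(len(x))    # noisy observations

smooth = tikhonov_smooth(y, alpha=50.0)
rms = lambda e: float(np.sqrt(np.mean(e**2)))
print(rms(smooth - truth) < rms(y - truth))  # regularisation reduces the error
```

In the actual calibration, the data-fidelity term compares model option prices with market prices (a nonlinear map of the volatility surface), and the regularization parameter is chosen adaptively, but the structure of the trade-off is the same.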


Regularised local volatility surface which can then be used for the valuation and risk analysis of more complex (non vanilla) financial instruments.


Further Reading

Aichinger, Binder: A Workout in Computational Finance, Wiley, 2013.

Egger, Engl: Tikhonov regularization applied to the inverse problem of option pricing: convergence analysis and rates, Inverse Problems 21, 1027–1045, 2005.

UnRisk website: www.unrisk.com

Speeding Up Simulation for Automotive Industries

Executive summary

The numerical simulation of parts of car engines may be quite time-consuming. For running hardware-in-the-loop systems, computation times must be reduced by a factor of 100.


Challenge overview

By connecting automotive hardware and simulation software in a testbed environment, the requirements on computation time become extremely high, as every engine cycle should be replicated by numerical simulation of the parts under consideration in real time. The idea for achieving this is to introduce surrogate models in the form of support vector machines.


The problem

For the various components of a car engine, the powertrain, and the virtual driver system, sophisticated software tools are available to study, e.g., fuel consumption, exhaust gas aftertreatment, or optimal gearing. If a hardware-in-the-loop system is used, some components are realized in hardware and some as simulation programs, to study, e.g., various designs of hybrid engines.

In such a combined testbed environment, it is essential that the simulation software runs at least as fast as the hardware, meaning that every millisecond of real time must be simulated in not more than a millisecond.


Results and achievements

The approach realised in the project was based on surrogate models, here in the form of so-called support vector machines. These surrogate models evaluate a function that is easy to calculate instead of numerically solving a partial differential equation. They require a training phase during which the shape of the ansatz function and its parameters are determined. After this training phase, which can be carried out offline, the surrogate model may achieve speed-ups of a factor of 1000 and more compared to the full numerical simulation.
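The offline-training, cheap-online-evaluation pattern can be sketched as follows. The sketch uses an RBF kernel ridge surrogate, a kernel method standing in for the support vector machines used in the project, and an analytic stand-in for the expensive simulation; all names and parameters are hypothetical.

```python
import numpy as np

def expensive_model(x):
    """Stand-in for a costly engine-component simulation (in reality a
    PDE solve); an analytic curve keeps the sketch self-contained."""
    return np.sin(3 * x) + 0.5 * x

def train_surrogate(X, y, gamma=25.0, reg=1e-6):
    """Fit an RBF kernel ridge surrogate: solve (K + reg*I) coef = y,
    then evaluate new points against the training inputs."""
    K = np.exp(-gamma * (X[:, None] - X[None, :])**2)
    coef = np.linalg.solve(K + reg * np.eye(len(X)), y)
    return lambda x: np.exp(-gamma * (x[:, None] - X[None, :])**2) @ coef

# Offline training phase: a modest number of expensive evaluations ...
X_train = np.linspace(0.0, 2.0, 30)
surrogate = train_surrogate(X_train, expensive_model(X_train))

# ... then cheap online evaluation, as needed in the testbed.
X_test = np.linspace(0.05, 1.95, 200)
err = np.abs(surrogate(X_test) - expensive_model(X_test)).max()
print(err < 0.05)
```

Online, each evaluation is just one kernel row times a coefficient vector, which is what makes millisecond-scale real-time budgets attainable.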


Torque measurements (black dots) and surface obtained from the surrogate model using a support vector machine (SVM). 


Further Reading:

Roman Heinzle: Machine learning methods and their application to realtime engine simulation. Ph.D. thesis, Johannes Kepler University, 2009.

A Workout in Computational Finance

Executive Summary

Michael Aichinger and Andreas Binder present their survival kit of numerical methods for finance. This book, published by Wiley in 2013, is also the basis of the UnRisk ACADEMY.


The Workout Chapters

1 Introduction and Reading Guide

2 Binomial Trees

3 Finite Differences and the Black-Scholes PDE

4 Mean Reversion and Trinomial Trees

5 Upwinding Techniques for Short Rate Models

6 Boundary, Terminal and Interface Conditions and their Influence

7 Finite Element Methods

8 Solving Systems of Linear Equations

9 Monte Carlo Simulation

10 Advanced Monte Carlo Techniques

11 Valuation of Financial Instruments with Embedded American/Bermudan Options within Monte Carlo Frameworks

12 Characteristic Function Methods for Option Pricing

13 Numerical Methods for the Solution of PIDEs

14 Copulas and the Pitfalls of Correlation

15 Parameter Calibration and Inverse Problems

16 Optimization Techniques

17 Risk Management

18 Quantitative Finance on Parallel Architectures

19 Building Large Software Systems for the Financial Industry

Reviewers’ Endorsements

“Mathematical Finance needs both: a well-founded theory based on stochastic calculus as well as numerical valuation schemes that work. In A Workout in Computational Finance the authors put emphasis on the numerical aspects and present an impressive range of numerical methods. All these techniques have been implemented by their group and can be used as a starting point for building a professional software system.”

—Walter Schachermayer, Full Professor for Mathematical Finance, University of Vienna


“With their strong background in numerical simulation of industrial problems, the authors succeed to develop the concepts of different numerical schemes which are useful for computational finance and essential for valuation, risk analysis and the risk management of financial instruments. Especially in times of difficult market environments, the mathematical and algorithmic foundation of software used in banking must be a solid one which avoids additional traps of poor implementation. A Workout in Computational Finance gives clear recommendations for the preferred numerical methods for various models and instruments. The book will be utmost useful for practitioners but it also will be of great interest for researchers in the field.”

—Gerhard Larcher, Institute of Mathematical Finance, Kepler Universität


“The authors cover a broad range of numerical techniques for differential equations in computational finance, such as finite elements, trees, Monte Carlo, Fourier techniques and parameter calibration. Using sound, yet compact mathematical reasoning, they capture the substance of models for interest rate and equity derivatives, and provide hands-on guidance to numerics, covering all sorts of practical challenges. A vast number of numerical results illustrate potential implementation pitfalls and the mitigation techniques presented. With its strong focus on tangible usability this book is a highly valuable manual for students as well as professionals.”

—Robert Maringer, Head Valuation Control Switzerland, Credit Suisse


“For shaping your body you should go to a gym, while for building up your numerical toolkit you need a workout in computational finance. This modern treatment of numerical methods in quantitative finance addresses problems that professionals working in the field face on a daily basis. The very clear presentation of the material also makes it a perfect fit for students having a background in the theory of mathematical finance who want to gain insight on how practical problems are tackled in the industry.”

—Philipp Mayer, Financial Modeling, ING Financial Markets, Brussels


UnRisk Software Solutions for the Financial Industries

Executive summary

Financial institutions (banks, asset management firms, insurance companies) must valuate their assets and liabilities to analyse their financial risk on a regular (often daily) basis. MathConsult has been developing its UnRisk® product family to achieve these tasks.


Depending on the specific details of financial instruments, valuation (meaning the calculation of a fair value) may be fairly easy, e.g., by looking up exchange traded equity prices or by discounting the cashflows of a fixed rate bond. Over the counter (OTC) instruments, on the other hand, are often quite complex and require the solution of a stochastic or a partial differential equation, equipped with appropriate terminal, interface and boundary conditions.

MathConsult has been working on a wide variety of valuation problems since 1997. The UnRisk ENGINE, first released in 2002, now covers a wide range of financial instruments from various asset classes and allows different financial models to be applied to the valuation tasks at hand.

Image source: Aichinger, Binder: A Workout in Computational Finance, Wiley, 2013.


Workflow in valuation and risk analysis

The typical workflow for the valuation of a structured financial instrument is the following:

  1. Choose a model for the stochastic movement of the underlying. In the case of fixed income instruments, an interest rate model (e.g., Black76, Bachelier, Hull-White, Black-Karasinski, Libor market model) is chosen.  
  2. Calibrate the parameters of the model by robust parameter identification techniques using market data of liquidly traded basic instruments.  
  3. Apply the appropriate forward valuation routines for the structured instrument. In UnRisk, Green’s functions, finite element techniques, (Quasi)Monte Carlo simulation, and Fourier techniques are implemented.  
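Step 3 in its very simplest form, plain Monte Carlo valuation of a vanilla call under constant-volatility risk-neutral dynamics, can be sketched as follows; the actual UnRisk routines (Green's functions, finite elements, (Quasi-)Monte Carlo, Fourier techniques) are of course far more general, and all numbers here are illustrative.

```python
import math
import numpy as np

def mc_call_value(S0, K, r, sigma, T, n_paths, seed=0):
    """Plain Monte Carlo value of a European call: simulate the terminal
    price under the risk-neutral measure, then discount the average payoff."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    S_T = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
    return math.exp(-r * T) * float(np.maximum(S_T - K, 0.0).mean())

value = mc_call_value(S0=100.0, K=100.0, r=0.02, sigma=0.25, T=1.0,
                      n_paths=200_000)
print(round(value, 2))  # near the Black-Scholes closed-form value (about 10.9)
```

For structured instruments with callability or path dependence, the same workflow applies, but the forward valuation step becomes a PDE solve or a Monte Carlo scheme with early-exercise handling.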


Model uncertainty for a callable reverse floater under Hull-White and under Black-Karasinski. Top Curve(s): For the Non-callable bond, the two models deliver the same prices. Lower curves: Different prices arising from different models when callability is introduced. 


Risk management and UnRisk FACTORY automation

In financial risk management, an instrument portfolio must be valuated daily and the influence of changing market conditions must be analysed (Value at Risk, scenario tests, stress tests). The UnRisk FACTORY is an automated and scalable system that loads the market data, calibrates the models, valuates the instruments, and applies various predefined and user-defined scenarios. To achieve these tasks, millions of single valuations may be necessary.
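Once the scenario valuations are done, a Value at Risk figure reduces to a quantile of the resulting profit-and-loss distribution. A minimal sketch with synthetic scenario P&L (the numbers are made up):

```python
import numpy as np

def value_at_risk(pnl, confidence=0.99):
    """Value at Risk at the given confidence level: the loss threshold
    exceeded in only (1 - confidence) of the scenarios."""
    return -float(np.quantile(pnl, 1.0 - confidence))

# Hypothetical scenario P&L: one entry per scenario, each obtained in
# practice by revaluing the whole portfolio under that scenario.
rng = np.random.default_rng(7)
pnl = rng.normal(loc=0.0, scale=1e6, size=500)
print(f"99% VaR: {value_at_risk(pnl):,.0f}")
```

The computational weight lies entirely in producing the scenario P&L entries, which is why an automated, scalable valuation system is needed in the first place.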


Further Information

UnRisk website: www.unrisk.com

Aichinger, Binder: A Workout in Computational Finance, Wiley, 2013.

A - 4040 Linz . Austria . Altenbergerstraße 69 . Phone +43 (0)732 / 2468-4210 . e-mail: mathinfo@mathconsult.co.at