Imaginary butterfly

Last night, I finished reading a book. This morning, I was wondering what to do next. The weather was not good enough to play golf, so I was reduced to starting a new book or letting my imagination wander. I chose the latter, and from imagination to imaginary there was only a short step. And from imaginary to imaginary capital and FRTB, another short step.

So I decided to create a simple swap position with imaginary capital according to the standard approach of the FRTB.
  • Creating the matrix in rule 77: 10 lines of (Matlab) code.
  • Finding the non-positive-definite sub-matrices of dimension 3 of the above: 10 lines of (Matlab) code.
  • Creating an actual position and computing its risk-weighted sensitivity and FRTB capital: 50 lines of (Java) code.
  • Result: priceless!
“Priceless” has to be understood as: the capital required to hold that position is imaginary, and you cannot buy imagination at any price!

Maybe a few more details about the computation. The first step is creating the matrix described in rule 77. The result is:
Tenor   0.25Y  0.50Y     1Y     2Y     3Y     5Y    10Y    15Y    20Y    30Y
0.25Y  1.0000 0.9704 0.9139 0.8106 0.7189 0.5655 0.4000 0.4000 0.4000 0.4000
0.50Y  0.9704 1.0000 0.9704 0.9139 0.8607 0.7634 0.5655 0.4190 0.4000 0.4000
1Y     0.9139 0.9704 1.0000 0.9704 0.9418 0.8869 0.7634 0.6570 0.5655 0.4190
2Y     0.8106 0.9139 0.9704 1.0000 0.9851 0.9560 0.8869 0.8228 0.7634 0.6570
3Y     0.7189 0.8607 0.9418 0.9851 1.0000 0.9802 0.9324 0.8869 0.8437 0.7634
5Y     0.5655 0.7634 0.8869 0.9560 0.9802 1.0000 0.9704 0.9418 0.9139 0.8607
10Y    0.4000 0.5655 0.7634 0.8869 0.9324 0.9704 1.0000 0.9851 0.9704 0.9418
15Y    0.4000 0.4190 0.6570 0.8228 0.8869 0.9418 0.9851 1.0000 0.9900 0.9704
20Y    0.4000 0.4000 0.5655 0.7634 0.8437 0.9139 0.9704 0.9900 1.0000 0.9851
30Y    0.4000 0.4000 0.4190 0.6570 0.7634 0.8607 0.9418 0.9704 0.9851 1.0000
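For readers who want to reproduce the matrix, it is consistent with a floored exponential decay in the tenors, ρ(Tk, Tl) = max(exp(−θ·|Tk − Tl| / min(Tk, Tl)), 40%) with θ = 3% (my reading of rule 77; every entry above matches to four decimals). A minimal sketch in Python rather than the Matlab used here:

```python
import math

# the ten risk-factor tenors, in years
tenors = [0.25, 0.5, 1, 2, 3, 5, 10, 15, 20, 30]
theta = 0.03  # decay parameter

def rho(tk, tl):
    # exponential decay in relative tenor distance, floored at 40%
    return max(math.exp(-theta * abs(tk - tl) / min(tk, tl)), 0.40)

cor = [[rho(tk, tl) for tl in tenors] for tk in tenors]

# e.g. round(cor[0][1], 4) == 0.9704 and cor[0][6] == 0.4,
# matching the (0.25Y, 0.50Y) and (0.25Y, 10Y) entries above
```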

The next step is checking whether there are sub-matrices of dimension 3 that are not positive definite. There are in total 120 possible combinations of that type of sub-matrix. A number small enough that one can scan all of them and look for negative eigenvalues. This is probably not the most efficient numerical procedure, but as it takes less than a second to run, efficiency is not the most important part here.

Out of those 120 possible matrices, 34 (!) are not positive definite. I selected the “worst”, according to an arbitrary criterion, for the remainder of this analysis. This is the sub-matrix with indices (2, 5, 8), i.e. the nodes 0.5Y, 3Y and 15Y. The smallest eigenvalue is -0.0442 and the associated eigenvector [-0.4401, 0.7638, -0.4721]. The Matlab code for that part is provided below and should be applied to the “cor” matrix presented above.

% Copyright (C) 2016 by Marc Henrard

 c3 = nchoosek(1:10, 3);            % all 120 combinations of 3 tenor indices
 nbProb = 0;
 for i = 1:size(c3, 1)
   e = eig(cor(c3(i,:), c3(i,:)));  % eigenvalues in ascending order
   if e(1) < 0                      % sub-matrix not positive definite
     nbProb = nbProb + 1;
     b3(nbProb, :) = c3(i,:);
     be3(nbProb, :) = e';
   end
 end
 [m, ind] = min(be3(:,1))           % most negative eigenvalue and its position
 b3(ind, :)                         % tenor indices of the worst sub-matrix
 be3(ind, :)                        % its three eigenvalues
 B3 = cor(b3(ind, :), b3(ind, :));
 [V, D] = eig(B3);
 p3 = V(:,1)                        % eigenvector of the negative eigenvalue
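For readers without Matlab, the same scan can be sketched in pure Python. A small simplification is available: for a 3×3 correlation matrix with unit diagonal and off-diagonal entries strictly below one in absolute value, the first two leading minors of Sylvester's criterion are automatically positive, so positive definiteness reduces to the sign of the determinant. (The matrix is rebuilt here from the floored exponential form so the snippet is self-contained.)

```python
import itertools
import math

# rebuild the rule-77-style correlation matrix (floored exponential decay)
tenors = [0.25, 0.5, 1, 2, 3, 5, 10, 15, 20, 30]
cor = [[max(math.exp(-0.03 * abs(a - b) / min(a, b)), 0.40) for b in tenors]
       for a in tenors]

def det3(m):
    # determinant of a 3x3 matrix by cofactor expansion
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

bad = []
for c in itertools.combinations(range(10), 3):
    sub = [[cor[i][j] for j in c] for i in c]
    # Sylvester's criterion reduces to det > 0 for these sub-matrices
    if det3(sub) <= 0:
        bad.append(c)

print(len(bad))           # count of non-positive-definite sub-matrices
print((1, 4, 7) in bad)   # 0-based indices of 0.5Y, 3Y, 15Y -> True
```

The printed count can be compared with the 34 reported above.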

Creating a positive definite matrix that can be used as a correlation matrix and at the same time looks plausible for finance applications is not trivial. Throwing random numbers or made-up functions at the different elements is not likely to produce an acceptable result. From the above description, the functional form proposed by the BCBS is clearly not appropriate.

We now have enough information to create a simple example with imaginary capital. We need a position whose risk-weighted sensitivities are aligned closely enough with the above eigenvector. I selected the following positions in EONIA ATM swaps with maturities 0.5Y, 3Y and 15Y: -400m, 150m and -20m, with negative numbers for receivers and positive numbers for payers. This is a type of butterfly position. Entering the above position in my efficient tools, I immediately get the following sensitivity vector:
[0.00, -202516257.66, 0.00, 0.00, 460337138.18, 0.00, 0.00, -290770969.62, 0.00, 0.00]
The above figures were obtained with data from 20-Nov-2015. The sensitivities are computed by algorithmic differentiation; they do not fit the “bad” definition of sensitivity used in the FRTB, but are close enough for the result I want to illustrate.

Now let the FRTB capital computation tool apply the risk weights and the (pseudo-)correlation to that position and you get:

FRTB capital computed involves the square root of a negative number
  |--> Partial capital is: 0.0 + 2100828.445881 i
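The mechanism behind that output can be sketched in a few lines: a capital formula of the shape sqrt(sᵀ ρ s) produces an imaginary number as soon as the quadratic form goes negative, which happens exactly when the weighted sensitivities point along an eigenvector with negative eigenvalue. A toy illustration (the actual FRTB risk weights are omitted here; the unit eigenvector stands in for the weighted-sensitivity direction):

```python
import cmath

# worst 3x3 sub-matrix (0.5Y, 3Y, 15Y) of the rule-77 matrix
rho = [[1.0000, 0.8607, 0.4190],
       [0.8607, 1.0000, 0.8869],
       [0.4190, 0.8869, 1.0000]]
# unit eigenvector associated with the negative eigenvalue, used as a
# stand-in direction for the risk-weighted sensitivities
s = [-0.4401, 0.7638, -0.4721]

# quadratic form s' * rho * s: negative, since rho is not positive definite
q = sum(s[i] * rho[i][j] * s[j] for i in range(3) for j in range(3))

capital = cmath.sqrt(q)   # sqrt of a negative number: purely imaginary
```

With actual notionals in the hundreds of millions, the same mechanism scales up to the 2,100,828 i shown above.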

What does that mean in practice? I don't have the slightest idea, but it certainly looks very cool! And ...

I have my imaginary butterfly!

Beyond the funny result and the tone used in this blog, there is a serious problem. The careless construction of the approach makes the result completely meaningless. The credibility of the financial regulators will decrease further, if that is possible, and the numbers produced by this new regulation will fill the boxes of costly and pointless reports.

If you have an interest in reducing the capital of your swap book, don't hesitate to contact me for consulting work.


Sometimes you win, sometimes you lose

The BCBS published its new “Minimum capital requirements for market risk”. When the last consultative paper on the Fundamental Review of the Trading Book was published in December 2014, I sent my comments to the BCBS. The blog related to my comments is here.

From my comments, I would say there are some wins and some losses. Unfortunately, the losses outweigh the wins.


One of my comments was regarding the scaling of the numbers. The Committee had clearly proposed incoherent numbers, with final capital 10,000 times larger than the expected figures. A missing basis point in some formula. Fortunately, the Committee has corrected that.

The correction was not in the sense I expected. Roughly the capital required for interest rate risk is the “PV01” multiplied by a risk weight. The choice was between describing PV01 as the “present value of one basis point” or dividing the weights by 10,000. The Committee selected the latter!

This means that the official definition of PV01 for capital computation is the “present value of one”, with one being 100% or 10,000 basis points.

From a quantitative library developer's point of view, this is very good news. It has been one of my battles to ensure that the numbers computed by the libraries I develop are non-scaled. The sensitivities are the partial derivatives (in the mathematical sense of derivatives) of the outputs with respect to the inputs. No scaling is done anywhere in the quant library; no one-basis-point multiplication here and one-percent multiplication there. Everything is non-scaled inside the quant part; only the loaders of external data and the display/export layers are allowed to do any scaling. Allowing even one scaling within the library would force all the numbers to be augmented with metadata indicating which scaling was used. That would create a real nightmare and a heavy computational burden on the library.
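The metadata point can be made concrete with the chain rule (a toy sketch with made-up numbers): unscaled partial derivatives compose directly, while a single quietly scaled number silently breaks the composition unless every value carries its scaling convention around with it.

```python
# sensitivities as true partial derivatives, no scaling anywhere
dv_dr = -4.52   # dV/dr for some toy instrument
dr_dq = 0.97    # dr/dq, e.g. one entry of a market-rate-to-curve Jacobian

# chain rule composes with no extra information needed
dv_dq = dv_dr * dr_dq

# if dv_dr had been quietly stored "per basis point" somewhere inside
# the library, the same composition would be off by a factor of 10,000
dv_dr_bp = dv_dr * 1.0e-4
wrong = dv_dr_bp * dr_dq
```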

Now I can use the “official” definition of PV01 by the Committee to claim that no scaling should be done. That will allow me to use an authority and legality argument with people who don’t understand rational arguments, like accountants and lawyers (sorry to my accountant and lawyer friends).

But this victory is a Pyrrhic victory!


In the document, the definition of the sensitivity is, with simplified notations,
 s = ( V(r+0.0001) - V(r) ) / 0.0001
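To make the difference concrete, here is a toy comparison between that forward finite-difference sensitivity and the exact derivative that Algorithmic Differentiation would return (a sketch, not the regulatory computation: a single zero-coupon bond under continuous compounding, all names invented for illustration):

```python
import math

def present_value(r, maturity=5.0, notional=1.0e8):
    # toy zero-coupon bond, continuously compounded
    return notional * math.exp(-r * maturity)

def sensitivity_frtb(r, shift=1.0e-4):
    # the document's definition: forward finite difference, fixed 1bp shift
    return (present_value(r + shift) - present_value(r)) / shift

def sensitivity_exact(r, maturity=5.0):
    # exact partial derivative, what AD reproduces to machine precision
    return -maturity * present_value(r, maturity)

r = 0.02
fd, exact = sensitivity_frtb(r), sensitivity_exact(r)
# fd and exact agree to roughly four significant digits, but are not equal:
# the regulation mandates the former even when the latter is available.
```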

In other words, the official definition of sensitivity is a forward finite-difference ratio with a fixed shift of one basis point. As I wrote in my previous blog on the subject, if you have implemented Algorithmic Differentiation in your libraries, too bad for you: you cannot use it. You have to implement a new approach using finite differences and multiply your computation time by 5 on your vanilla swap book, and maybe by 10 for cross-currency swaps. The required formula creates numerical instability in the sensitivities from your tree methods? Never mind, this is the regulation.

One could think that this is simply a quick draft and that this is a minor mistake by a drafter confusing the goal with one of the methods available to achieve it, but it is not. There were several versions of the document, with more than a year since the latest draft, and the above comments were officially made in the request for comments. It is thus a deliberate decision. This means that the numbers computed for regulatory purposes will not be used for any other purpose and will add to the cost of business. Why would a risk manager use the regulatory numbers if it takes him 10 times longer to produce them than his correct numbers? New code will have to be written in firm-wide systems just to compute those numbers. Regulation becomes a drag on business without upside. A separate process will be run on a monthly basis and largely ignored. Any error in the process will not be scrutinized on a permanent basis by traders and risk managers and is thus very likely to be ignored.

Funnily, the regulation requires (47. (c)) the computation of the delta to be based on “pricing models that an independent risk control unit within a bank uses to report market risks or actual profits and losses to senior management”, but the requirement indicated on pricing models is not extended to the delta! The document adds (49.) that “a bank’s pricing model used in actual profit and loss reporting provide an appropriate basis for the determination of regulatory capital requirements for all market risks” but again does not extend the “appropriateness” to the delta/sensitivity computation.

This is a bad decision for Algorithmic Differentiation (and for the sales of my forthcoming book). This is a bad decision for unified risk management: banks will produce several sets of numbers for different purposes. This is bad for the credibility of regulators.

I’m pointing at the finite-difference part of the formula, mainly because I have my efforts in developing Algorithmic Differentiation in mind. But I could have taken any part of the formula and spent paragraphs discussing it. What is “r” in the formula? A market rate! Which market rate? Not indicated! What if there is no liquid rate in the market that satisfies the requirements? Not indicated! What is the convention/day count for “r” and “0.0001”? Not indicated! What is the 0.25-year market instrument for a 6M Libor? Bad question! If the market quote is in spread (e.g. cross-currency), how should it be used? Not indicated!

Again, this is not a drafting mistake in a first quick draft. This is a document that took several years to write, has been commented on by numerous people, and those questions were asked explicitly in the request for comments.

Overall, I’m disappointed. Ninety pages of detailed requirements, but from a quant perspective missing the essential. Prescriptive where it should be principle-based and vague where it should be precise.

I may review the details of the capital computation at some stage. It is based on a two-layer delta-normal-like approach with conventional volatilities (weights) and “correlations”. I put correlations in inverted commas as there is no indication that the numbers proposed actually form a positive definite matrix.


Books - Rant on books

My LinkedIn headline could have been

Compulsive buyer, reader and collector of books.

I built (or more exactly, I designed and had built for me) a two-storey library. It is roughly six meters wide and five meters high. It contains thousands of books and weighs several tons. All that to say: I love books, I’m addicted to them.

My addiction took a new turn recently when I published my first full book. Now I’m also addicted to writing. You may have seen my announcement for a new book. The new headline is

Compulsive buyer, reader, collector and writer of books.

Being in love and addicted, you could expect subjectivity from me. Nevertheless, find below my objective and dispassionate opinion about books.

Books are expensive, too expensive … if you want or need to buy them. For a hardcover book you pay 60 GBP; 15% of it is the actual cost of the physical book, the rest is immaterial: author royalties and services (editor and retailer). And believe me, the author royalties will not make him (me, in this case) rich. Nevertheless, this is a lot of money to fill one centimeter of library shelf.

On the other side, books are not only ornaments for a bookshelf; they also have content. For technical books, the saying is that one page of content takes one day to write. I learned the hard way that this is a realistic estimate. The content is, for some books, a very efficient access to real expertise, to the deep thoughts of enthusiastic people about their trade. You don’t even have to listen to the annoying and arrogant authors; you can read their thoughts directly, at your pace, in the sequence you want and where you want. Obviously you have to make the effort to read and decipher what the author really means, but nobody said that thorough reading was easy.

You have 200 days of expert thoughts for GBP 60. Maybe you don’t really need the 200 days, only 100 of them are useful to you, but is that expensive or cheap? In the finance industry, what else can you have for 60 quid? You can hire a senior consultant, of untested expertise related to your problem, for 2,400 quid a day (I round the numbers to make my computation easy), i.e. you can speak to him for 12 minutes for the price of the book. Your book’s value is gone by the time you have exchanged business cards. You can also have lunch with business acquaintances. A nice lunch in London for two: another 180 quid, or three books. By the end of the starters you had better have got a lot of expert opinion on your subject of interest, not only gossip. Maybe you can travel to see a client: a short flight, a hotel and a couple of lunches, that is 600 quid. I hope your client will bring you a lot of information about your industry, enough to fill ten books, and that he will write the back-to-office reports for you.

How come the banking industry, supposed to be full of experts at detecting arbitrage, is not full of books? If I were a board member at any bank, I would ask: where is the library? Do you buy all the good books that come out? If you have not bought and read all of them, how do you know that they are not the best arbitrage opportunity in the market?

Books are cheap, too cheap … if you buy them and use them correctly. They almost look like open source software (by the way, check Strata). You get them and you can use all the ideas in them without paying anything to the people who have spent days, months, and maybe years developing them. You just have to choose wisely which ones you want to use the content of and which ones stay as ornaments on your bookshelf.

Buy books, read them, love them or hate them, whatever they deserve. If you like a book, contact the author; maybe just to say that you like the book, maybe to invite him for a business lunch, maybe to invite him to speak at a seminar, maybe to hire him as a consultant from whom you have just read a 200-page CV, or maybe to play a round of golf (I know this is completely unrelated, but I appreciate a round of golf over the week-end and my golf ball offer is still on).

I’m a compulsive buyer, reader and collector of books (I know I repeat myself). But I’m even happier when I receive a book. Not that I’m stingy and do not want to buy them. What I like in receiving a book is that I often receive books that I would not buy myself … and I love it. That is the important part: discovering something new that I would not have come across by myself. After my call for a round of golf, here is my call for books. If you have written a book and would like me to read it (or simply use it as an ornament in my library), don’t hesitate to contact me. I will be happy to barter a “Multi-curve Framework” copy for a different book; the offer is also valid for PhD theses. Of course you would need to pitch your book to me, but this is part of the preliminaries. Offer valid within the limits of the available stock (just to protect myself in case millions of people are interested in swapping their book for mine and I don’t have enough space in my library to store all those books).

Buy books, they are cheap.
Offer me books, I love it (and them).

Forthcoming book on Algorithmic Differentiation!

The first draft of my forthcoming book on Algorithmic Differentiation has been sent to the editor.

The optimistic expectation is that it will be available in print in May 2016.

Only four months to wait!