Sometimes you win, sometimes you lose
The BCBS published its new “Minimum capital requirements for market risk”. When the last consultative paper on the Fundamental Review of the Trading Book was published in December 2014, I sent my comments to the BCBS. The blog post related to my comments is here.
From my comments, I would say there are some wins and some losses. Unfortunately, the losses outweigh the wins.
Win
One of my comments was regarding the scaling of the numbers. The Committee had clearly proposed incoherent numbers, with a final capital 10,000 times larger than expected, due to a missing basis point in some formula. Fortunately, the Committee has corrected that.
The correction was not in the sense I expected. Roughly, the capital required for interest rate risk is the “PV01” multiplied by a risk weight. The choice was between describing PV01 as the “present value of one basis point” or dividing the weights by 10,000. The Committee selected the latter!
This means that the official definition of PV01 for capital computation is the “present value of one”, with one being 100% or 10,000 basis points.
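To make the arithmetic concrete, here is a minimal sketch, with purely hypothetical numbers, showing that the two conventions produce the same weighted sensitivity as long as the factor of 10,000 sits in exactly one place:

```python
# Hypothetical numbers, for illustration only.
pv01_bp = 450.0    # "present value of one basis point", in currency units
weight = 1.5       # a hypothetical risk weight, in whatever unit the draft used

# Option 1: keep the sensitivity per basis point and keep the weight as is.
ws_1 = pv01_bp * weight

# Option 2 (the one retained): the sensitivity is the "present value of one",
# i.e. 10,000 times the per-basis-point number, and the weight is divided by 10,000.
ws_2 = (pv01_bp * 10_000.0) * (weight / 10_000.0)

assert abs(ws_1 - ws_2) < 1e-9   # the weighted sensitivity is the same either way
```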
From a quantitative library developer point of view, this is very good news. It has been one of my battles to ensure that the numbers computed by the libraries I develop are non-scaled. The sensitivities are the partial derivatives (in the mathematical sense of derivatives) of the outputs with respect to the inputs. No scaling is done anywhere in the quant library; no one-basis-point multiplication here and one-percent multiplication there. Everything is non-scaled inside the quant part; only the loader of external data and the display/export are allowed to do any scaling. Allowing even one scaling within the library would force all the numbers to be augmented by some metadata indicating which scaling was used. That would create a real nightmare and a heavy computation burden on the library.
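As an illustration of that design, here is a minimal sketch with a toy pricer and hypothetical function names: the quant core only ever returns the unscaled partial derivative, and any basis-point scaling happens at the display/export boundary.

```python
import math

# Toy pricer and sensitivity, hypothetical names: the core returns the unscaled
# partial derivative dPV/dr (the "present value of one"); no basis point anywhere.
def present_value(rate: float, notional: float, time: float) -> float:
    """Present value of a single cash flow under continuous compounding."""
    return notional * math.exp(-rate * time)

def pv_rate_sensitivity(rate: float, notional: float, time: float) -> float:
    """Unscaled partial derivative of the present value with respect to the rate."""
    return -time * notional * math.exp(-rate * time)

def export_pv01_bp(unscaled_sensitivity: float) -> float:
    """Scaling lives only in the display/export layer: report per basis point."""
    return unscaled_sensitivity * 1.0e-4

s = pv_rate_sensitivity(rate=0.02, notional=1_000_000.0, time=5.0)
print(export_pv01_bp(s))   # scaled only for display; the core value s is never rescaled
```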
Now I can use the “official” definition of PV01 by the Committee to claim that no scaling should be done. That will allow me to use an authority and legal argument with people who don’t understand rational arguments, like accountants and lawyers (sorry to my accountant and lawyer friends).
But this is a Pyrrhic victory!
Loss
In the document, the definition of the sensitivity is, with simplified notations,
s = ( V(r+0.0001) - V(r) ) / 0.0001
In other words, the official definition of sensitivity is a forward finite difference quotient with a fixed shift of one basis point. As I wrote in my previous blog on the subject, if you have implemented Algorithmic Differentiation in your libraries, too bad for you, you cannot use it. You have to implement a new approach using finite difference and multiply your computation time by 5 on your vanilla swap book and maybe by 10 for cross-currency swaps. The required formula creates numerical instability in the sensitivities from your tree methods; never mind, this is the regulation.
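For concreteness, here is a minimal sketch of what the prescribed bump-and-revalue computation looks like, with a hypothetical toy pricer; the point is that it costs one full revaluation per bumped rate, which is exactly the cost that adjoint Algorithmic Differentiation avoids.

```python
SHIFT = 1.0e-4   # the fixed one basis point shift prescribed by the formula

def fd_sensitivities(price, rates):
    """Forward finite difference as prescribed: len(rates) + 1 pricer calls."""
    base = price(rates)
    sensitivities = []
    for i in range(len(rates)):
        bumped = list(rates)
        bumped[i] += SHIFT
        sensitivities.append((price(bumped) - base) / SHIFT)
    return sensitivities

def toy_price(rates):
    """Hypothetical linear toy pricer, for illustration only."""
    return sum((i + 1) * r for i, r in enumerate(rates))

# One revaluation per bumped node; with adjoint Algorithmic Differentiation the
# whole vector of exact partial derivatives would come from roughly one extra pass.
print(fd_sensitivities(toy_price, [0.010, 0.015, 0.020]))
```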
One could think that this is simply a quick draft and that this is a minor mistake from the drafter, confusing the goal with one of the methods available to achieve it, but it is not. There were several versions of the document, with more than a year since the latest draft, and the above comments were officially made in the request for comments. It is thus a deliberate decision. This means that the numbers computed for regulatory purposes will not be used for any other purpose and will add to the cost of business. Why would a risk manager use the regulatory numbers if it takes him 10 times longer to produce them than his correct numbers? New code will have to be written in firm-wide systems just to compute those numbers. Regulation becomes a drag on business without upside. A separate process will be run on a monthly basis and largely ignored. Any error in the process will not be scrutinized on a permanent basis by traders and risk managers and is thus very likely to be ignored. Funnily enough, the regulation requires (47. (c)) the computation of the delta to be based on “pricing models that an independent risk control unit within a bank uses to report market risks or actual profits and losses to senior management”, but the requirement indicated on the pricing model is not extended to the delta! The document adds (49.) that “a bank’s pricing model used in actual profit and loss reporting provide an appropriate basis for the determination of regulatory capital requirements for all market risks” but again does not extend the “appropriateness” to the delta/sensitivity computation.
This is a bad decision for Algorithmic Differentiation (and the sales of my forthcoming book). This is a bad decision for unified risk management, as banks will produce several sets of numbers for different purposes. This is bad for the credibility of regulators.
I’m pointing at the finite difference part of the formula mainly because I have my efforts in developing Algorithmic Differentiation in mind. But I could have taken any part of the formula and spent paragraphs discussing it. What is “r” in the formula? A market rate! Which market rate? Not indicated! What if there is no liquid rate in the market that satisfies the requirements? Not indicated! What is the convention/day count for “r” and “0.0001”? Not indicated! What is the 0.25-year market instrument for a 6M Libor? Bad question! If the market quote is a spread (e.g. cross-currency), how should it be used? Not indicated!
Again, this is not a drafting mistake in a first quick draft. This is a document that took several years to write, has been commented on by numerous people, and those questions have been asked explicitly in the request for comments.
Overall, I’m disappointed. 90 pages of detailed requirements, but from a quant perspective missing the essential. Prescriptive where it should be principle-based and vague where it should be precise.
I may review the details of the capital computation at some stage. It is based on a two-layer delta-normal-like approach with conventional volatilities (weights) and “correlations”. I put correlation between inverted commas as there is no indication that the proposed numbers actually form a positive definite matrix.
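When I do review it, one basic check will be whether the proposed numbers can be used as a correlation matrix at all. A minimal sketch of that check, with a purely hypothetical matrix:

```python
import numpy as np

# Hypothetical correlation matrix, for illustration only: a proposed set of
# "correlations" is usable as such only if the matrix is positive definite,
# which can be checked through its smallest eigenvalue.
corr = np.array([
    [1.0, 0.7, 0.4],
    [0.7, 1.0, 0.9],
    [0.4, 0.9, 1.0],
])

smallest_eigenvalue = np.linalg.eigvalsh(corr).min()
print(smallest_eigenvalue > 0.0)   # True means the matrix is positive definite
```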