Full-text resources of CEJSH and other databases are now available in the new Library of Science.
Visit https://bibliotekanauki.pl

Results found: 4


Search results

Search:
in the keywords:  LGD
EN
Credit risk analysis is largely based on the principles set out by the Basel Committee. Next to the probability of default and the recovery rate, one of the most important elements of risk management systems is the value of the exposure at default. This paper presents the issue of EAD (Exposure at Default) estimation with respect to both balance sheet and off-balance sheet items. The author addresses the problem of assessing the quality of EAD forecasts and presents the results of simulations conducted for the most common retail portfolios. The results show that the expected value of the exposure at default is significantly lower than the book value at the moment of capital requirement calculation. This leads to the conclusion that the approach recommended by the Basel Committee, based on the current book value of the exposure, may overestimate EAD. The paper proposes a method applied to retail products (cash loans, car loans, mortgage loans).
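The EAD concept for items with an off-balance sheet component is commonly expressed as the drawn balance plus a credit conversion factor (CCF) applied to the undrawn limit. The following is a minimal sketch of that identity, not the method proposed in the paper; the function name and the numbers are illustrative:

```python
def estimate_ead(drawn: float, undrawn_limit: float, ccf: float) -> float:
    """Exposure at Default as the drawn balance plus a credit
    conversion factor (CCF) applied to the undrawn part of the limit."""
    if not 0.0 <= ccf <= 1.0:
        raise ValueError("CCF must lie in [0, 1]")
    return drawn + ccf * undrawn_limit

# Illustrative retail exposure: 8,000 drawn, 2,000 undrawn, CCF = 0.75
ead = estimate_ead(drawn=8_000, undrawn_limit=2_000, ccf=0.75)
print(ead)  # 9500.0
```

With CCF = 1 this reduces to the full limit; the paper's finding that expected EAD falls below the current book value corresponds to an effective conversion of the exposure below this conservative benchmark.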
EN
Loss Given Default (LGD) estimation plays an essential role in credit risk modeling. LGD is treated as a random variable with a bimodal distribution. Advanced statistical models such as beta regression can be applied to LGD estimation. Unfortunately, the parametric methods require amendments of the “inflation” type, which lead to a mixed modeling approach. In contrast to classical statistical methods based on probability distributions, families of classifiers such as gradient boosting or random forests operate on information and allow for more flexible model adjustment. The problem encountered is the comparison of the obtained results. The aim of the paper is to present and compare the results of LGD modeling using statistical methods and a data mining approach. Calculations were done on real-life data sourced from one of the large Polish banks.
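The bimodal shape and the “inflation”-type amendment mentioned in the abstract can be illustrated with a small simulation: a zero- and one-inflated beta mixture, in which LGD has point masses at 0 (full recovery) and 1 (total loss) and a beta-distributed interior. This is a sketch with illustrative weights, not the paper's model or data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Illustrative mixture weights for the point masses at 0 and 1.
p0, p1 = 0.35, 0.25
u = rng.random(n)
lgd = np.where(u < p0, 0.0,
      np.where(u < p0 + p1, 1.0,
               rng.beta(2.0, 3.0, size=n)))  # interior losses

# Expected LGD under the zero/one-inflated beta mixture:
# E[LGD] = p1 * 1 + (1 - p0 - p1) * E[Beta(2, 3)]
expected = p1 + (1 - p0 - p1) * (2.0 / (2.0 + 3.0))
print(round(lgd.mean(), 3), round(expected, 3))
```

A plain beta regression cannot produce the point masses at the endpoints, which is exactly why the inflated (mixed) variant is needed.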
PL
Banks applying the recommendations of the Basel II/III Accords are obliged to determine risk on the basis of a number of parameters. One of them is the loss rate, Loss Given Default (LGD). In the literature, LGD is treated as a random variable with a bimodal distribution. Advanced statistical regression models are used to estimate LGD. An alternative approach is to use data mining methods. Particularly attractive are ensemble-of-classifiers estimators, which average the results of many “weak classifiers” and yield more precise results. Families of classifiers operate on so-called information. The problem is interpreting this information in business terms. The aim of the article is to reconcile the two approaches and their interpretations. Estimation results are presented for the following models: fractional logistic regression, beta regression, gradient boosting, and random forests. The properties of the estimators are compared. Calculations were performed on real data.
EN
According to the Capital Requirements Directive, banks applying the internal ratings-based approach are obliged to estimate risk based on a set of risk parameters. One of these parameters is Loss Given Default (LGD). LGD is treated as a random variable with a bimodal distribution. Advanced statistical models can be applied to LGD estimation. An alternative approach is to use data mining methods. The most promising seem to be families of classifiers, which allow for averaging the results of many weak classifiers and obtaining more precise results. Families of classifiers are built based on an information criterion. The problem encountered is the interpretation of the obtained results in terms of business applications. The aim of the paper is to compare both approaches. We present the results of LGD estimation using two regression models, fractional and beta regression, and two ensemble methods, gradient boosting and random forests. Calculations were done on real-life data.
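The comparison described above can be sketched on synthetic data (assuming scikit-learn is available; the features, target, and parameter values below are made up, not the real bank data from the paper, and a plain linear baseline stands in for the fractional/beta regression models). A target with a threshold effect shows why tree ensembles such as random forests can adjust more flexibly than a parametric regression:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 4_000

# Synthetic stand-in for loan-level drivers (e.g. LTV, seasoning, score).
X = rng.random((n, 3))
# LGD-like target with a threshold effect that a linear model cannot
# capture but trees can; clipped to [0, 1] like a loss rate.
raw = (0.15 + 0.6 * (X[:, 0] > 0.5) - 0.4 * X[:, 1]
       + 0.1 * rng.standard_normal(n))
y = np.clip(raw, 0.0, 1.0)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

lin = LinearRegression().fit(X_tr, y_tr)  # parametric baseline
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

mse_lin = mean_squared_error(y_te, lin.predict(X_te))
mse_rf = mean_squared_error(y_te, rf.predict(X_te))
print(f"linear MSE={mse_lin:.4f}  random forest MSE={mse_rf:.4f}")
```

On real portfolios the trade-off is the one the paper raises: the ensemble's gain in fit comes at the cost of a less direct business interpretation than the regression coefficients provide.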