
Results found: 10


Search results

EN
Since its inception at the end of the 20th century, the VaR risk measure has gained massive popularity. It is synthetic, easy to interpret and offers comparability of risk levels reported by different institutions. However, the crucial idea of comparability of reported VaR levels stands in contradiction with the differences in estimation procedures adopted by companies. The choice of estimation method is an internal company decision and is not regulated by international banking supervision. The paper is dedicated to a comparative analysis of the prediction errors connected with competing VaR estimation methods. Four methods, among which two stationarity-based – variance-covariance and historical simulation – and two time series methods – GARCH and RiskMetrics™ – were compared through a Monte Carlo study. The analysis was conducted with respect to the method choice, series length and VaR tolerance level. The study outcomes showed the superiority of the sigma-based variance-covariance method over the quantile-based historical simulation. Furthermore, the comparison of the stationarity-based estimates with the time series results showed that allowing for time-varying parameters in the estimation technique significantly reduces the estimator bias and variance.
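The two stationarity-based estimators compared in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; function names and the 1% default tolerance level are illustrative.

```python
import numpy as np
from statistics import NormalDist

def var_variance_covariance(returns, alpha=0.01):
    """Sigma-based (variance-covariance) VaR: fit a normal distribution
    to the return sample and report its alpha-quantile as a positive loss."""
    mu = returns.mean()
    sigma = returns.std(ddof=1)
    return -(mu + sigma * NormalDist().inv_cdf(alpha))

def var_historical_simulation(returns, alpha=0.01):
    """Quantile-based (historical simulation) VaR: the empirical
    alpha-quantile of past returns, reported as a positive loss."""
    return -np.quantile(returns, alpha)
```

For normally distributed data both estimators target the same quantile; the study's point is that the sigma-based estimator, which uses the whole sample rather than a single order statistic, attains lower bias and variance.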
EN
The paper addresses the issue of estimation risk in VaR testing. The occurrence of estimation risk (also called parameter uncertainty) implies that the observed VaR violation process may not fulfil the standard requirements that underpin the testing framework. As a result, VaR tests may reject correct VaR models due to estimation errors committed when predicting the VaR. The paper examines the robustness of VaR tests to estimation risk. The research is based on an observation indicating that certain elements of a forecasting scheme have a significant influence on estimation risk. Thus, the article extends the previous studies to include several more realistic forecasting schemes than those based solely on a fixed window. The aim of the research is twofold: firstly, to find methods of mitigating the negative impact of estimation risk on VaR tests, and secondly, to provide a comprehensive comparison of VaR testing methods with reference to the issue of estimation risk. The conducted analyses demonstrate that a proper adjustment of the forecasting scheme yields better results in terms of the accuracy of the tests than correcting estimation errors by means of the subsampling technique.
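The forecasting schemes discussed above differ only in which sample is used to estimate VaR at each forecast origin. A minimal sketch of the resulting violation ("hit") process, assuming a historical-simulation VaR and illustrative defaults (250-day window, 5% tolerance):

```python
import numpy as np

def violation_series(returns, window=250, alpha=0.05, scheme="rolling"):
    """Hit sequence I_t = 1{r_t < -VaR_t}, where VaR_t is the
    historical-simulation forecast under a given forecasting scheme."""
    hits = []
    for t in range(window, len(returns)):
        if scheme == "fixed":        # parameters estimated once, never updated
            sample = returns[:window]
        elif scheme == "rolling":    # re-estimated on the most recent window
            sample = returns[t - window:t]
        elif scheme == "expanding":  # re-estimated on all data seen so far
            sample = returns[:t]
        else:
            raise ValueError(scheme)
        var_t = -np.quantile(sample, alpha)
        hits.append(int(returns[t] < -var_t))
    return np.array(hits)
```

Estimation risk enters through the finite estimation sample: even for a correct model the hit sequence deviates from an i.i.d. Bernoulli(alpha) process, which is what can mislead standard VaR tests.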
EN
Although regulatory standards, currently developed by the Basel Committee on Banking Supervision, anticipate a shift from VaR to ES, the evaluation of risk models currently remains based on the VaR measure. Motivated by the Basel regulations, we address the issue of VaR backtesting and contribute to the debate by exploring statistical properties of the exponential autoregressive conditional duration (EACD) VaR test. We show that, under the null, the tested parameter lies at the boundary of the parameter space, which can profoundly affect the accuracy of this test. To compensate for this deficiency, a mixture of chi-square distributions is applied. The resulting accuracy improvement allows for the omission of the Monte Carlo simulations used to implement the EACD VaR test in earlier studies, which dramatically improves the computational efficiency of the procedure. We demonstrate that the EACD approach to testing VaR has the potential to enhance statistical inference in most problematic cases - for small samples and for those close to the null.
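The boundary correction mentioned above can be illustrated in isolation. When the tested parameter lies on the boundary of the parameter space under the null, the likelihood-ratio statistic is asymptotically distributed not as chi-square with 1 df but as a 50:50 mixture of chi2_0 (a point mass at zero) and chi2_1. A minimal sketch of the resulting p-value, using only the standard library (function names are illustrative):

```python
from math import erf, sqrt

def chi2_1_sf(x):
    """Survival function of the chi-square distribution with 1 df,
    via P(Z**2 > x) = 2 * (1 - Phi(sqrt(x))) for standard normal Z."""
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(sqrt(x) / sqrt(2.0))))

def boundary_pvalue(lr_stat):
    """p-value under the 0.5*chi2_0 + 0.5*chi2_1 mixture: the chi2_0
    component is a point mass at zero, so only half the chi2_1 tail counts."""
    if lr_stat <= 0.0:
        return 1.0
    return 0.5 * chi2_1_sf(lr_stat)
```

At the 5% level the mixture critical value is about 2.71 (the chi2_1 10% point) rather than 3.84; ignoring the boundary therefore makes the test conservative, with actual size near 2.5%.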
EN
Dynamic development in the area of value-at-risk (VaR) estimation and the growing implementation of VaR-based risk valuation models in investment companies stimulate the need for statistical methods of evaluating VaR models. Following recent changes in the Basel Accords, current EU banking supervisory regulations require internal VaR model backtesting, which provides another strong incentive for research on relevant statistical tests. Previous studies have shown that the commonly used Markov-chain-based test of VaR independence exhibits low power, which constitutes a particularly serious problem in finite-sample settings. In the paper, as an alternative to the popular Markov test, an overview of duration-based VaR backtesting procedures is presented, along with an exploration of their statistical properties once the unrealistic assumption of an infinite sample size is rejected. The Monte Carlo test technique was adopted to provide exact tests, in which asymptotic distributions were replaced with simulated finite-sample distributions. A Monte Carlo study, based on the GARCH model, was designed to investigate the size and the power of the tests. Through the comparative analysis we found that, in the light of the observed statistical properties, the duration-based approach was superior to the Markov test.
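The Monte Carlo test technique referred to above replaces the asymptotic null distribution of a test statistic with a simulated finite-sample one. A generic sketch (function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def monte_carlo_pvalue(stat_obs, simulate_stat, n_sim=999, seed=0):
    """Monte Carlo (exact) test: draw n_sim statistics under the null and
    compute the p-value as the rank of the observed statistic among them."""
    rng = np.random.default_rng(seed)
    sims = np.array([simulate_stat(rng) for _ in range(n_sim)])
    return (1 + np.sum(sims >= stat_obs)) / (n_sim + 1)
```

For example, with the test statistic taken as the number of VaR violations in 250 trading days under correct 1% coverage, `simulate_stat` would draw from a Binomial(250, 0.01) distribution; the p-value is then exact for the given sample size rather than asymptotic.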
EN
In the presented paper GARCH class models were considered for describing and forecasting market volatility in the context of the economic crisis. The sample composition was designed to emphasize model performance in two groups of markets: well-developed and transition. As a preliminary to our results, we presented the procedure of model selection from the GARCH family. We distinguished three subperiods in the time series in such a way that the dependencies between forecast outcomes and the scale of market volatility were emphasized. The comparison of the forecast errors revealed a serious problem with volatility prediction in times of high market instability. The crisis impact was particularly apparent in transition markets. Our findings showed that GARCH models allowed risk control, with risk understood as the relation of the forecast error to the level of predicted volatility.
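The volatility forecasts in the GARCH family rest on a simple variance recursion. A minimal sketch of the GARCH(1,1) filter, with illustrative (not estimated) parameter values:

```python
import numpy as np

def garch11_variance(returns, omega=1e-6, alpha=0.08, beta=0.90):
    """GARCH(1,1) variance filter:
    sigma2[t+1] = omega + alpha * r_t**2 + beta * sigma2[t].
    The last element of the output is the one-step-ahead forecast."""
    sigma2 = np.empty(len(returns) + 1)
    sigma2[0] = np.var(returns)   # initialise at the sample variance
    for t, r in enumerate(returns):
        sigma2[t + 1] = omega + alpha * r * r + beta * sigma2[t]
    return sigma2
```

The RiskMetrics model mentioned elsewhere in these abstracts is the restricted case omega = 0, alpha = 1 - lambda, beta = lambda (typically lambda = 0.94), which is why the two approaches often produce similar forecasts.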
EN
Research background: The Russian invasion of Ukraine on February 24, 2022 sharply raised the volatility in commodity and financial markets. This had an adverse effect on the accuracy of volatility forecasts. The scale of the negative effects of the war was, however, market-specific, and some markets exhibited a strong tendency to return to usual levels in a short time. Purpose of the article: We study the volatility shocks caused by the war. Our focus is on the markets highly exposed to the effects of this conflict: the stock, currency, cryptocurrency, gold, wheat and crude oil markets. We evaluate the forecasting accuracy of volatility models during the first stage of the war and compare the scale of forecast deterioration among the examined markets. Our long-term purpose is to analyze the methods that have the potential to mitigate the effect of forecast deterioration under such circumstances. We concentrate on the methods designed to deal with outliers and periods of extreme volatility which, so far, have not been investigated empirically under the conditions of war. Methods: We use robust methods of estimation and a modified Range-GARCH model, which is based on opening, low, high and closing prices. We compare them with the standard maximum likelihood method of the classic GARCH model. Moreover, we employ the MCS (Model Confidence Set) procedure to create the set of superior models. Findings & value added: Analyzing the market specificity, we identify both some common patterns and substantial differences among the markets, which is the first comparison of this type relating to the ongoing conflict. In particular, we discover the individual nature of the cryptocurrency markets, where the reaction to the outbreak of the war was very limited and the accuracy of forecasts remained at a similar level before and after the beginning of the war.
Our long-term contribution consists of findings about the suitability of methods that have the potential to handle extreme volatility but have not been examined empirically under the conditions of war. We reveal that the Range-GARCH model compares favorably with the standard volatility models, even when the latter are evaluated in a robust way. This yields valuable implications for future research connected with military conflicts, showing that in such periods the gains from using more market information outweigh the benefits of using robust estimators.
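The abstract's Range-GARCH model itself is not reproduced here; a minimal illustration of the extra market information that range-based models exploit is the Parkinson (1980) high-low variance estimator, on which the range-based approach builds (the function name is illustrative):

```python
import numpy as np

def parkinson_variance(high, low):
    """Parkinson (1980) daily variance estimate from high/low prices:
    var_t = (ln(H_t / L_t))**2 / (4 ln 2). It uses the full intraday
    range, so it is markedly more efficient than the squared
    close-to-close return as a proxy for daily variance."""
    hl = np.log(np.asarray(high) / np.asarray(low))
    return hl ** 2 / (4.0 * np.log(2.0))
```

The abstract's finding that gains from using more market information outweigh the benefits of robust estimation is precisely about feeding such range information, rather than closing prices alone, into the volatility model.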