
Results found: 25


Search results

Search: in the keywords: Monte Carlo
EN
In many statistical applications the main point of interest is estimating some central population characteristic such as the mean or the median. Shewhart's X control chart is based on monitoring the average process level. In some areas, however, the main interest lies in estimating the maximum or the minimum. The paper presents a proposal for monitoring processes based on the properties of the Gumbel distribution. The properties of the proposed method have been analyzed in a Monte Carlo study.
PL
Classical methods for monitoring the average level of production processes usually rely on the assumption that the distribution of the examined variable is normal and that consecutive measurements are independent. In many statistical analyses the quantity of interest is the average level of the examined characteristic, as is the case when processes are monitored with the X control chart. In many applications, however, the maximum or minimum values may be of interest. The article proposes using the properties of the extreme value distribution in process monitoring. The theoretical considerations are supplemented by simulation analyses of the properties of the proposed method.
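To make the idea concrete, the following is a minimal sketch (not the authors' exact procedure) of a control chart for subgroup maxima: a Gumbel distribution is fitted to Phase I subgroup maxima, control limits are taken from its quantiles, and the false-alarm rate is checked by Monte Carlo. The subgroup size, the nominal false-alarm rate and the assumed in-control normal process are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, m, alpha = 10, 5000, 0.0027        # subgroup size, Phase I subgroups, nominal false-alarm rate

# Phase I: collect subgroup maxima from an in-control process (assumed standard normal here)
phase1_max = rng.normal(size=(m, n)).max(axis=1)

# Fit a Gumbel (extreme value type I) distribution to the subgroup maxima
loc, scale = stats.gumbel_r.fit(phase1_max)

# Control limits for the maxima chart taken from Gumbel quantiles
lcl = stats.gumbel_r.ppf(alpha / 2, loc, scale)
ucl = stats.gumbel_r.ppf(1 - alpha / 2, loc, scale)

# Monte Carlo check of the in-control false-alarm rate
phase2_max = rng.normal(size=(20000, n)).max(axis=1)
fa_rate = np.mean((phase2_max < lcl) | (phase2_max > ucl))
print(f"LCL={lcl:.3f}, UCL={ucl:.3f}, empirical false-alarm rate={fa_rate:.4f}")
```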
PL
Classical methods for monitoring the average level of production processes usually rely on the assumption that the distribution of the examined variable is normal. This is because Shewhart control charts are constructed from sequences of parametric tests, which require that assumption to hold. Permutation tests do not need such strict assumptions. The article proposes replacing the sequence of parametric tests with a sequence of permutation tests and presents the construction of a control chart based on such sequences. The theoretical considerations are supplemented by simulation analyses, which showed that the proposed control chart can be particularly useful for small samples drawn from strongly asymmetric distributions.
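A minimal sketch of a permutation-test-based chart is given below; it is an illustration under assumed settings, not the construction proposed in the article. Each new subgroup is compared with a Phase I reference sample by a two-sample permutation test on the difference in means, and a signal is raised when the permutation p-value falls below a chosen false-alarm level.

```python
import numpy as np

rng = np.random.default_rng(1)

def perm_pvalue(reference, subgroup, n_perm=2000):
    """Two-sided permutation p-value for the difference in means."""
    pooled = np.concatenate([reference, subgroup])
    k = len(subgroup)
    observed = abs(subgroup.mean() - reference.mean())
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(pooled[:k].mean() - pooled[k:].mean())
        count += diff >= observed
    return (count + 1) / (n_perm + 1)

# Phase I reference sample from a skewed (exponential) in-control process
reference = rng.exponential(scale=1.0, size=100)

alpha = 0.005                       # per-subgroup false-alarm rate (illustrative)
for t in range(1, 21):
    shift = 0.8 if t > 15 else 0.0  # a level shift is introduced after subgroup 15
    subgroup = rng.exponential(scale=1.0, size=5) + shift
    p = perm_pvalue(reference, subgroup)
    if p < alpha:
        print(f"subgroup {t}: signal (p = {p:.4f})")
```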
EN
Multiple regression analysis is a statistical tool for investigating relationships between a dependent variable and independent variables. Several procedures exist for selecting a subset of given predictors and are widely available in statistical software packages; the most commonly used are forward selection, backward selection and stepwise selection. These procedures rely on testing the significance of parameters. If assumptions such as the normality of errors are not fulfilled, the results of these significance tests may not be trustworthy. The main goal of this paper is to present a permutation test for the significance of the coefficients in regression analysis. Permutation tests can be used even if the normality assumption is not fulfilled. The properties of this test were analyzed in a Monte Carlo study.
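The following sketch illustrates one possible permutation test for a single regression coefficient; the permutation scheme (shuffling the values of the tested predictor while holding the response and the other predictor fixed) is an assumption for illustration and need not match the test proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def coef(X, y):
    """Ordinary least squares coefficients (intercept prepended)."""
    Xd = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(Xd, y, rcond=None)[0]

# Simulated data with skewed (non-normal) errors: y depends on x1 but not on x2
n = 60
X = rng.normal(size=(n, 2))
y = 1.0 + 0.5 * X[:, 0] + rng.exponential(1.0, n) - 1.0

obs = abs(coef(X, y)[2])              # observed |coefficient| of x2 (the tested predictor)

# Permutation test: shuffle the values of x2 while keeping y and x1 fixed
n_perm, exceed = 4000, 0
Xp = X.copy()
for _ in range(n_perm):
    Xp[:, 1] = rng.permutation(X[:, 1])
    exceed += abs(coef(Xp, y)[2]) >= obs
print("permutation p-value for x2:", (exceed + 1) / (n_perm + 1))
```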
EN
A non-financial enterprise with receivables or liabilities denominated in a foreign currency is exposed to currency risk. To calculate a financial reserve securing its receivables or liabilities, an enterprise can use the concept of value at risk. To determine value at risk, the enterprise has to know the probability distribution of the future value of the receivable or liability at a specific moment in the future. Using a geometric Brownian motion to reflect exchange rate changes is one of the possible solutions. The aim of the paper is to show that using Monte Carlo simulation for forecasting the currency risk of an enterprise is a clear, easy-to-implement approach that is flexible in terms of its assumptions. The flexibility of the Monte Carlo approach lies in the possibility of assuming that changes in the currency position caused by exchange rate fluctuations follow a probability distribution other than the normal one.
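A minimal sketch of the approach described above, under illustrative assumptions (a EUR-denominated liability, an assumed spot rate, drift and volatility, and a 30-day horizon): the exchange rate at the horizon is simulated under geometric Brownian motion and the Value at Risk is read off the simulated loss distribution.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative assumptions: a EUR-denominated liability, current EUR/PLN spot rate,
# GBM drift and volatility estimated elsewhere, a 30-day horizon
liability_eur = 1_000_000
s0, mu, sigma = 4.30, 0.00, 0.10      # spot rate, annual drift, annual volatility
horizon = 30 / 365
n_sim = 100_000

# Simulate the exchange rate at the horizon under geometric Brownian motion
z = rng.standard_normal(n_sim)
s_t = s0 * np.exp((mu - 0.5 * sigma**2) * horizon + sigma * np.sqrt(horizon) * z)

# Change in the domestic-currency value of the liability (a loss if the rate rises)
loss = (s_t - s0) * liability_eur

# 95% Value at Risk: the loss level exceeded in only 5% of scenarios
var_95 = np.quantile(loss, 0.95)
print(f"30-day 95% VaR of the FX position: {var_95:,.0f} PLN")
```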
EN
The paper presents the possibility of applying stochastic methods to support the selection of project team members. According to the research of M. Belbin, effective collaboration of project team members requires 8 team roles (describing soft skills) in the team's composition, which enables a synergy effect to occur. For the purpose of the research, original software was developed which uses stochastic methods in the process of assembling teams that meet these criteria. The paper presents the results of experiments based on anonymized surveys conducted among students of different faculties. Teams obtained through the simulations fulfilled the criterion of completeness of team roles. It was pointed out that using stochastic methods to support the selection of employees for project teams may improve the efficiency of resource allocation through the appropriate assignment of roles and responsibilities, for example by avoiding situations in which a qualified professional is attached to a team where his or her potential would be wasted.
PL
The article proposes a nonparametric test for verifying a hypothesis about the form of the distribution of the examined variable. The proposed test is a modification of the well-known empty cells test. In the empty cells test the area of variability is divided into fixed cells and the number of cells containing no sample element is counted. In the proposed modification the position of the cell is variable: a function indicating whether the cell is empty at a given position is determined, and the decision about the verified hypothesis is based on the course of that function. The special case of testing the hypothesis of normality is considered. Critical values for the proposed test were determined and the proposed modification was compared with the classical empty cells test.
EN
The paper presents a proposal of a nonparametric test to verify a hypothesis about the distribution of a random variable. The proposed test is a modification of the well-known empty cells test. In the empty cells test the area of variability of the random variable is divided into fixed cells and the number of empty cells is counted. In the proposed modification the cell moves over the whole area of variability of the random variable. An analysis of testing the hypothesis of normality is presented, together with a table of critical values of the test statistic and a comparison of the empty cells test with the proposed modification.
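For orientation, the sketch below implements only the classical empty cells test for normality, with a Monte Carlo critical value; the moving-cell modification proposed in the paper is not reproduced here, and fitting the normal parameters from the same sample is a simplification.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def empty_cells(sample, k=10):
    """Number of empty cells among k equiprobable cells of a normal distribution fitted to the sample."""
    loc, scale = sample.mean(), sample.std(ddof=1)
    # interior cell boundaries at quantiles 1/k, ..., (k-1)/k of the fitted normal
    edges = stats.norm.ppf(np.arange(1, k) / k, loc, scale)
    cell_index = np.searchsorted(edges, sample)    # which of the k cells each point falls into
    counts = np.bincount(cell_index, minlength=k)
    return np.sum(counts == 0)

n, k = 30, 10

# Monte Carlo critical value of the statistic under the null hypothesis of normality
null_stats = np.array([empty_cells(rng.normal(size=n), k) for _ in range(5000)])
crit = np.quantile(null_stats, 0.95)

# Apply the test to a clearly non-normal (exponential) sample
test_stat = empty_cells(rng.exponential(size=n), k)
print(f"statistic = {test_stat}, MC critical value (5%) = {crit}")
```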
EN
In this paper we present the problem of the forecasting efficiency of TAR models. Three forecasting methods are compared in terms of accuracy: the Monte Carlo method and two versions of the bootstrap technique. The basic models are two- or three-regime stationary threshold autoregressive models with an endogenous or exogenous switching variable. The time series set consists of weekly stock returns of the banking sector quoted on the Warsaw Stock Exchange.
PL
The aim of the article is to compare forecasting methods for nonlinear threshold models. Two forecasting methods were used: the bootstrap method in two variants and the Monte Carlo method. The analysis covers weekly returns of banking-sector companies listed on the Warsaw Stock Exchange. The study concludes that predicting the exact values of the returns is very difficult, whereas threshold models give very good results in predicting the direction of future changes.
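The Monte Carlo forecasting idea can be illustrated with the following sketch for an assumed two-regime SETAR(1) model with a self-exciting threshold; the coefficients, threshold and innovation variance are illustrative, not the models estimated in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative two-regime SETAR(1) model with a self-exciting (endogenous) threshold:
#   y_t = 0.02 + 0.6*y_{t-1} + e_t  if y_{t-1} <= 0
#   y_t = -0.01 + 0.2*y_{t-1} + e_t otherwise,     e_t ~ N(0, sigma^2)
phi_low, phi_high = (0.02, 0.6), (-0.01, 0.2)
threshold, sigma = 0.0, 0.03

def simulate_path(y_last, horizon):
    """Simulate one future path of the SETAR process by Monte Carlo."""
    y, path = y_last, []
    for _ in range(horizon):
        c, phi = phi_low if y <= threshold else phi_high
        y = c + phi * y + rng.normal(0.0, sigma)
        path.append(y)
    return path

# Monte Carlo forecast: average many simulated paths
y_last, horizon, n_paths = -0.01, 4, 20000
paths = np.array([simulate_path(y_last, horizon) for _ in range(n_paths)])
point_forecast = paths.mean(axis=0)
direction_up = (paths > 0).mean(axis=0)     # probability of a positive return at each step
print("point forecasts:", np.round(point_forecast, 4))
print("P(return > 0):  ", np.round(direction_up, 3))
```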
EN
Modern enterprises use various spreadsheet financial models to project their financial situation as well as to address potential exposure to entrepreneurial risk. The most advanced solution is provided by the Monte Carlo approach, which offers much broader possibilities for measuring entrepreneurial risk than traditional methods do. One of the most significant problems of the Monte Carlo approach is to identify, quantify and reflect the interdependencies between variables that act as risk factors in any risk analysis. The aim of this paper is to discuss possibilities for identifying and quantifying interdependencies depending on the availability of historical data, and to present a spreadsheet solution that reflects interdependencies in risk simulation and is easy to implement. The solution presented is not the only one available, but it does not require much effort to implement in any financial model developed in the form of a spreadsheet, especially by individuals responsible for risk management in small and medium-sized enterprises.
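One common way to reflect interdependencies in a Monte Carlo risk simulation is to draw correlated risk factors via a Cholesky factor of an assumed correlation matrix; the sketch below (with illustrative risk factors and parameters) shows the mechanism that a spreadsheet solution would mirror.

```python
import numpy as np

rng = np.random.default_rng(6)

# Assumed correlation structure between two risk factors (e.g. sales volume and unit price)
corr = np.array([[1.0, -0.6],
                 [-0.6, 1.0]])
means = np.array([10_000.0, 25.0])     # expected sales volume and unit price (illustrative)
stdevs = np.array([1_500.0, 2.0])

# The Cholesky factor of the correlation matrix turns independent draws into correlated ones
L = np.linalg.cholesky(corr)
n_sim = 50_000
z = rng.standard_normal((n_sim, 2)) @ L.T
factors = means + z * stdevs

revenue = factors[:, 0] * factors[:, 1]
print("empirical correlation:", np.corrcoef(factors.T)[0, 1].round(3))
print("5th percentile of revenue:", np.quantile(revenue, 0.05).round(0))
```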
EN
The aim of the study is to examine the robustness of the estimates and standard errors under different sample structures and sizes. A two-level model with a random intercept, a random slope and fixed effects, estimated using maximum likelihood, was considered. We used a Monte Carlo simulation performed on samples consisting of groups of equal size.
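A minimal sketch of one such Monte Carlo step, under illustrative settings (30 groups of equal size 10 and assumed variance components), generates two-level data with a random intercept and slope and fits the model by maximum likelihood with statsmodels; it is not the study's actual design.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

# Generate two-level data: 30 groups of equal size 10, random intercept and random slope
n_groups, group_size = 30, 10
gamma00, gamma10 = 2.0, 0.5               # fixed intercept and slope
tau0, tau1, sigma = 0.8, 0.3, 1.0         # SDs of random intercept, random slope, residual

rows = []
for g in range(n_groups):
    u0, u1 = rng.normal(0, tau0), rng.normal(0, tau1)
    x = rng.normal(size=group_size)
    y = (gamma00 + u0) + (gamma10 + u1) * x + rng.normal(0, sigma, group_size)
    rows.append(pd.DataFrame({"group": g, "x": x, "y": y}))
data = pd.concat(rows, ignore_index=True)

# Fit the two-level model with a random intercept and slope by maximum likelihood
model = smf.mixedlm("y ~ x", data, groups=data["group"], re_formula="~x")
result = model.fit(reml=False)
print(result.fe_params)                   # fixed-effect estimates
```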
PL
The paper proposes a modification of an adaptive test for comparing the expected values of two populations. In the proposed solution no particular form of the test statistic is selected; instead, the weights appearing in the test statistic are modified on the basis of the drawn sample. The properties of the considered test and of classical tests were compared using computer simulations. The test can be used in quality control procedures: it does not require strict assumptions about the form of the distribution of the diagnostic variable and can therefore be used to detect that a process has gone out of control when the form of that distribution is unknown.
EN
The paper presents a proposal of a modification of the L. Hao and D. Houser adaptive test for comparing the locations of two distributions. The modification is based on a linear combination of three test statistics. In the Hao and Houser test, the test statistic is chosen according to the values of robust asymmetry and shape characteristics. A method of continuously modifying the test statistic is presented. The properties of the proposed procedure are analyzed in a Monte Carlo study. The proposal could be used in quality control monitoring processes.
EN
Detecting correlation between the dependent variable and the predictors is a crucial step in building a statistical regression model. The test of the Pearson correlation coefficient, which has relatively good power, requires the assumption of a normal distribution; in other cases only non-parametric tests can be used. This article presents the possibility and advantages of using permutation tests, together with a discussion of the proposed test statistics. The power of the proposed tests was estimated on the basis of Monte Carlo experiments. The investigations were carried out on real data, a sample of refinery process parameters, where detecting changes in correlation, even for small samples, is very important. It creates an opportunity to react to changes, update statistical models quickly and keep an acceptable quality of prediction.
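A simple version of such a permutation test, using the sample Pearson correlation as the test statistic (only one of the statistics that could be considered), can be sketched as follows on simulated skewed data.

```python
import numpy as np

rng = np.random.default_rng(8)

def perm_corr_test(x, y, n_perm=5000):
    """Two-sided permutation test for zero correlation, using Pearson's r as the statistic."""
    obs = abs(np.corrcoef(x, y)[0, 1])
    exceed = 0
    for _ in range(n_perm):
        r = np.corrcoef(x, rng.permutation(y))[0, 1]
        exceed += abs(r) >= obs
    return obs, (exceed + 1) / (n_perm + 1)

# Small, skewed sample (e.g. a handful of process measurements)
x = rng.exponential(size=15)
y = 0.4 * x + rng.exponential(size=15)

r, p = perm_corr_test(x, y)
print(f"sample r = {r:.3f}, permutation p-value = {p:.4f}")
```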
EN
Randomisation tests (R-tests) are regularly proposed as an alternative method of hypothesis testing when the assumptions of classical statistical methods are violated in data analysis. In this paper, the robustness in terms of the type-I error and the power of the R-test were evaluated and compared with those of the F-test in the analysis of a single-factor repeated measures design. The study took into account normal and non-normal data (skewed: exponential, lognormal, chi-squared, and Weibull distributions), the presence or absence of outliers, and situations in which the sphericity assumption was or was not met, under varied sample sizes and numbers of treatments. The Monte Carlo approach was used in the simulation study. The results showed that when the data were normal, the R-test was approximately as sensitive and robust as the F-test, while being more sensitive than the F-test when data had skewed distributions. The R-test was more sensitive and robust than the F-test in the presence of an outlier. When the sphericity assumption was met, both the R-test and the F-test were approximately equally sensitive, whereas the R-test was more sensitive and robust than the F-test when the sphericity assumption was not met.
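As a hedged illustration of an R-test for this design, the sketch below permutes treatment labels independently within each subject and recomputes the repeated-measures F statistic; the sample size, number of treatments and lognormal data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

def rm_F(data):
    """Repeated-measures F statistic for an n-subjects x k-treatments data matrix."""
    n, k = data.shape
    grand = data.mean()
    ss_treat = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_total = ((data - grand) ** 2).sum()
    ss_error = ss_total - ss_treat - ss_subj
    return (ss_treat / (k - 1)) / (ss_error / ((n - 1) * (k - 1)))

def randomization_test(data, n_perm=2000):
    """P-value from permuting treatment labels independently within each subject."""
    obs = rm_F(data)
    exceed = 0
    for _ in range(n_perm):
        permuted = np.array([rng.permutation(row) for row in data])
        exceed += rm_F(permuted) >= obs
    return obs, (exceed + 1) / (n_perm + 1)

# Skewed (lognormal) data for 12 subjects under 3 treatments, with a shift in treatment 3
data = rng.lognormal(mean=0.0, sigma=0.5, size=(12, 3))
data[:, 2] += 0.4

F, p = randomization_test(data)
print(f"F = {F:.3f}, randomization p-value = {p:.4f}")
```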
EN
In our previous studies, we modified the Enders and Siklos test for threshold error correction to a version allowing an individual threshold variable to be responsible for the asymmetric mechanism of the system. The idea was to learn about the threshold mechanism both in the long and the short run. In this paper, we tested for asymmetry in the adjustment of the error correction mechanism towards the long-run path. The subsamples within regimes differ in size with respect to the threshold value. The novelty lies in the division of both short- and long-run variables according to a threshold variable with a given threshold value (assumed or estimated). We named the test the extended Enders and Siklos test (exE-S). The present study focuses on the power and size of the modified procedure. A simulation study was designed and conducted. The results are favourable for the proposed approach, although they strongly depend on the difference in values between the adjustment parameters in the regimes.
EN
Prospective financial analysis is a key decision tool in an enterprise. The traditional approach confronts the forecasted value of a financial category or a financial ratio with a requirement or a standard. Knowing that a particular category or ratio meets the requirement or standard is a kind of risk information, but knowing the probability with which the requirement or standard is met gives a detailed picture of risk. The aim of the paper is to indicate the possibility of increasing the effectiveness of prospective financial analysis by using a Monte Carlo simulation. The biggest advantage of the presented approach (which is in fact an evolution of the traditional scenario approach to risk analysis) is that it delivers detailed probability distributions of the key financial categories and ratios. Shareholders accepting the results of prospective financial analysis based on the Monte Carlo simulation should accept risk in a more conscious way than in the case of the traditional approach.
EN
The objective of this research is to estimate the model risk, represented as precision, and the accuracy of the Value at Risk (VaR) measure under three different approaches: historical simulation (HS), Monte Carlo (MC), and generalized ARCH (GARCH). Accuracy and precision were estimated under the three approaches for four European banks at the 95% and 99% confidence levels. The percentage of crossings and the Kupiec POF test were used to judge model accuracy, whereas the ratio of the maximum to the minimum VaR estimate and the spread between the maximum and minimum VaR estimates were used to estimate the model risk. This was achieved by changing the input parameters, specifically the estimation time window (125, 250, 500 days). Implications/Recommendations: accuracy alone is not sufficient to evaluate a model; precision is also required. The temporal evolution of the precision metrics showed that the VaR approaches were inconsistent under different market conditions. The article focuses on the accuracy and precision concepts applied to estimating the model risk of Value at Risk (VaR). VaR is the foundation for sophisticated risk metrics, including systemic risk measures such as Marginal Expected Shortfall and Delta Conditional Value at Risk. Thus, understanding the risk associated with the use of VaR is crucial for finance practitioners.
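For reference, a sketch of the Kupiec proportion-of-failures (POF) accuracy check applied to a simulated exceedance series is given below; the hit series and nominal coverage are illustrative, not the banks' data used in the study.

```python
import numpy as np
from scipy import stats
from scipy.special import xlogy   # xlogy(0, 0) = 0, which keeps degenerate counts finite

rng = np.random.default_rng(10)

def kupiec_pof(exceedances, p):
    """Kupiec proportion-of-failures likelihood-ratio test for VaR coverage."""
    T = len(exceedances)
    x = int(np.sum(exceedances))
    pi_hat = x / T
    # log-likelihoods under the nominal exceedance rate p and under the observed rate pi_hat
    ll_null = xlogy(T - x, 1 - p) + xlogy(x, p)
    ll_alt = xlogy(T - x, 1 - pi_hat) + xlogy(x, pi_hat)
    lr = -2 * (ll_null - ll_alt)
    return lr, stats.chi2.sf(lr, df=1)

# Simulated hit series: a nominal 99% VaR model that actually breaks about 2% of the time
exceedances = rng.random(500) < 0.02
lr, pval = kupiec_pof(exceedances, p=0.01)
print(f"LR_POF = {lr:.2f}, p-value = {pval:.4f}")
```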
EN
This paper reports our estimates of Value at Risk obtained from Monte Carlo simulations for which we developed a computer program. Our approach involves obtaining Monte Carlo parameters by fitting real historical data from different periods to probability distributions. We applied the algorithm to the WIG20 and mWIG40 stock indices and performed simulations of Value at Risk at the 95% and 99% confidence levels over six estimation periods ranging from 1 trading day to 250 trading days. The approach was evaluated using the percentage of failures and the Kupiec Proportion of Failures test. Our results indicate that the method is strongly influenced by the choice of the historical and estimation period lengths considered. Overall, we observed that the Monte Carlo computational scheme is a reliable method for quantifying VaR when parametrized well.
EN
Following the dynamic development of VaR estimation methods since the 1990s, much attention in the recent literature has been paid to testing procedures designed to evaluate the quality of VaR models. There has been a wide-ranging discussion of both the statistical properties and the empirical application of the two most popular tests: the Kupiec test of 1995, which considers the proportion of VaR exceedances, and the Christoffersen autocorrelation test of 1998. We focused on the autocorrelation property and compared the Christoffersen test with the Ljung-Box test of 1978 and the proposal of Engle and Manganelli of 2004. The goal of the paper was to explore the design of experiments in the context of evaluating the power of autocorrelation tests. We presented and contrasted simulation experiments proposed in the literature, indicated the influence of their design on the results, and proposed a new scheme for evaluating the power of autocorrelation tests.
PL
Following the dynamic development of VaR estimation methods since the 1990s, an extensive discussion has emerged in the literature on statistical testing in the context of evaluating VaR models. On the one hand, many papers address the statistical properties of the two most popular tests: the Kupiec test of 1995, which examines the proportion of VaR exceedances in a series, and the Christoffersen test of 1998 for the autocorrelation of VaR exceedances. On the other hand, there is a rich literature on applying these tests to empirical time series. This paper focuses on the properties of autocorrelation tests and compares the Christoffersen test with the Ljung-Box test of 1978 and the test of Engle and Manganelli of 2004. The aim of the paper was to review the simulation experiments used to study the power of autocorrelation tests of VaR exceedances with respect to the assumptions of the Monte Carlo method, and to present the authors' own proposal of an experiment.
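For reference, the sketch below implements the Christoffersen (1998) independence test of VaR exceedances from first-order transition counts, applied to a simulated hit series with clustered exceedances; the data and clustering probabilities are illustrative.

```python
import numpy as np
from scipy import stats
from scipy.special import xlogy   # xlogy(0, 0) = 0, which keeps zero transition counts finite

rng = np.random.default_rng(11)

def christoffersen_independence(hits):
    """Christoffersen LR test of independence of VaR exceedances (first-order Markov alternative)."""
    hits = np.asarray(hits, dtype=int)
    prev, curr = hits[:-1], hits[1:]
    n00 = np.sum((prev == 0) & (curr == 0))
    n01 = np.sum((prev == 0) & (curr == 1))
    n10 = np.sum((prev == 1) & (curr == 0))
    n11 = np.sum((prev == 1) & (curr == 1))
    pi01 = n01 / (n00 + n01)            # P(hit today | no hit yesterday)
    pi11 = n11 / (n10 + n11)            # P(hit today | hit yesterday)
    pi = (n01 + n11) / (n00 + n01 + n10 + n11)
    # log-likelihoods under independence and under the Markov alternative
    ll_indep = xlogy(n00 + n10, 1 - pi) + xlogy(n01 + n11, pi)
    ll_markov = (xlogy(n00, 1 - pi01) + xlogy(n01, pi01)
                 + xlogy(n10, 1 - pi11) + xlogy(n11, pi11))
    lr = -2 * (ll_indep - ll_markov)
    return lr, stats.chi2.sf(lr, df=1)

# Simulated hit series with clustered exceedances (a hit is more likely right after a hit)
hits = np.zeros(1000, dtype=int)
for t in range(1, 1000):
    p_hit = 0.30 if hits[t - 1] == 1 else 0.02
    hits[t] = rng.random() < p_hit

lr, pval = christoffersen_independence(hits)
print(f"LR_ind = {lr:.2f}, p-value = {pval:.4f}")
```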
Przegląd Statystyczny | 2016 | vol. 63 | issue 4 | 431-447
EN
The first aim of this paper is to present the theory of the author's proposed modular statistic for 2×2×2 three-way contingency tables and to examine its properties in relation to the known "chi-squared statistics". The second aim is to describe the procedure for generating the contents of these tables using the bar method. The third aim is to propose a measure of the untruthfulness of the null hypothesis and to compare the quality of independence tests by means of their power. Critical values for all the analyzed statistics were determined by Monte Carlo simulation methods.
PL
The first aim of the paper is to present the theory of the author's modular statistic measuring the independence of variables for 2×2×2 three-way contingency tables and to examine its properties in relation to the known "chi-squared statistics". The second aim is to describe the procedure for generating the contents of these tables using the bar method. The third aim is to propose a measure of the untruthfulness of the null hypothesis and to compare the quality of independence tests by means of their power. Critical values for the independence tests were determined by Monte Carlo simulation methods.