
Results found: 20


Search results

Search: in the keywords: sensitivity analysis
EN
Research background: The article deals with implementing VMI (vendor-managed inventory) between the supplier and the customer. To assess whether the system will be implemented, evolutionary game theory is used. The contribution builds on the limitations of the evolutionary game theory approach to modelling VMI policies (Torres et al., 2014) and its later extension (Torres & García-Díaz, 2018). It aims to complement those studies and provide a comprehensive picture of the issue. Purpose of the article: The main objective of the contribution is to answer the question whether the VMI system will be introduced between the supplier and the customer. Methods: In the first phase, the matrix is analysed from the point of view of the meaning of the game and its limit parameters. The limit parameters are set taking into account economic reality. Only those states of the matrix where the result is not obvious are examined. For the purposes of the contribution, we work with a 5-year period. New software capable of calculating evolutionary foci and their stability was created. Sensitivity analysis is carried out for the individual parameters that affect the system's behaviour. Findings & Value added: The value added is a complete description of the system and the complementation of previous studies in this field. The introduction of VMI is confirmed. The results obtained can be used in practical management, so that managers are able to identify what the actual costs are and what the probability of introducing the system is. At the same time, they can identify the parameters they can influence and observe their impact on the shift in the probability of introducing the system.
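As a concrete illustration of the dynamics being analysed, the sketch below iterates two-population replicator dynamics for a 2x2 adoption game between supplier and customer. The payoff matrices, step size and initial adoption shares are illustrative assumptions, not the values or software of the study.

```python
# Minimal sketch of two-population replicator dynamics for a VMI adoption
# game. Payoff matrices are illustrative placeholders.
import numpy as np

# A[i, j]: supplier payoff when supplier plays i and customer plays j
# B[i, j]: customer payoff in the same profile (strategy 0 = adopt VMI)
A = np.array([[5.0, 1.0], [2.0, 2.0]])
B = np.array([[4.0, 1.5], [1.0, 2.0]])

def replicator_step(x, y, dt=0.01):
    """One Euler step; x, y = probability of 'adopt' in each population."""
    fx = A @ np.array([y, 1 - y])        # supplier payoff per strategy
    fy = B.T @ np.array([x, 1 - x])      # customer payoff per strategy
    dx = x * (1 - x) * (fx[0] - fx[1])
    dy = y * (1 - y) * (fy[0] - fy[1])
    return x + dt * dx, y + dt * dy

x, y = 0.3, 0.4                          # initial adoption shares
for _ in range(50_000):
    x, y = replicator_step(x, y)
print(f"long-run adoption shares: supplier={x:.3f}, customer={y:.3f}")
```

Varying the initial shares or payoff entries and re-running is exactly the kind of parameter sensitivity the abstract describes.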
EN
The aim of our paper is to present new results of research work on optimization and simulation for selected logistic problems in the company. The System Dynamics (SD) method and the Vensim simulation language are applied in order to solve specific managerial problems described by Forrester in his model of the supply chain. The historical Customer-Producer-Employment model by Forrester (Forrester, 1961) has not been examined with sensitivity analysis from the "automatic" testing perspective, nor have optimization experiments been conducted. This is surprising, since the model is old and widely known. The opportunities offered by the Vensim language allow us to perform such analyses. The visualization called "confidence bounds" is used to show the behaviour of chosen variables over a period of time. The Monte Carlo method is applied to sample sets of numbers from within bounded domains (a distribution is specified for each searched parameter). The authors conducted numerous experiments in this scope; this paper presents their results and offers conclusions formulated at the end.
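A minimal Python sketch of this kind of "automatic" sensitivity testing, in the spirit of Vensim's confidence-bounds graphs: uncertain parameters of a toy stock-and-flow model are sampled Monte Carlo style and percentile envelopes are reported. The model and parameter ranges are illustrative assumptions, not Forrester's.

```python
# Monte Carlo sensitivity of a toy inventory stock-and-flow model:
# sample uncertain parameters, simulate, report percentile envelopes.
import numpy as np

rng = np.random.default_rng(0)
T, N = 100, 1000                        # weeks, Monte Carlo runs
paths = np.empty((N, T))

for n in range(N):
    adj_time = rng.uniform(4.0, 12.0)   # inventory adjustment time (weeks)
    coverage = rng.uniform(0.5, 1.5)    # desired weeks of demand coverage
    demand = 100.0                      # constant customer demand
    target = coverage * demand
    inv = 50.0
    for t in range(T):
        production = demand + (target - inv) / adj_time
        inv += production - demand      # dt = 1 week
        paths[n, t] = inv

# 50%, 75% and 95% envelopes, as in Vensim's sensitivity graphs
for p in (50, 75, 95):
    lo, hi = np.percentile(paths[:, -1], [(100 - p) / 2, 100 - (100 - p) / 2])
    print(f"{p}% bound for final inventory: [{lo:.1f}, {hi:.1f}]")
```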
EN
In today's dynamic and competitive environment, planning for the effective use of company resources requires an analytical and integrated approach to its essential functions. With such a goal in mind, the corporate planning model, in which the modules of production and marketing are related to the financial module, presents a very efficient solution. It is particularly well suited to the needs of Algerian companies operating in an environment that has undergone a transformation from a planned economy to a market economy where risks and uncertainties are ubiquitous. Furthermore, Algerian companies should take account of the importance of strategic planning and forecasting where, in that context, the corporate planning model provides a powerful tool for decision-making. This work provides a corporate planning model specified for the Algerian National Marble Company. The presented model has been devised and validated from the company data to generate physical and financial short-term forecasts. The empirical results obtained show the usefulness of such a model for managers in terms of providing a precise model of the essential functions of the company, helping to evaluate the consequences of different management scenarios and assisting in the decision-making process. Furthermore, using prospective simulations, the presented model can be used as a tool for forecasting.
EN
In linear programming, sensitivity analysis of the parameters is usually more important than the optimal solution itself. In traditional sensitivity analysis, a range of changes within which the optimal solution remains valid is determined for a coefficient; such changes may concern the coefficients of the objective function or of the functional constraints, such as resource values or technological coefficients. When real-world problems are highly inaccurate due to limited data and limited information, the method of grey systems is used to perform the needed optimisation. Several algorithms for solving grey linear programming have been developed to handle the inaccuracies involved in the model parameters; these methods are complex and require much computational time. In this paper, the sensitivity of a series of grey linear programming problems is analysed by using the definitions and operators of grey numbers. Uncertainties in the parameters are also preserved in the solutions obtained from the sensitivity analysis. To evaluate the efficiency and importance of the developed method, an applied numerical example is solved.
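One simple way to see interval-induced sensitivity is to solve the programme at every endpoint combination of the grey coefficients, as in the hedged sketch below. The data are illustrative, and scipy's HiGHS solver stands in for the dedicated grey-LP operators discussed in the paper.

```python
# Endpoint-based sensitivity for a grey (interval) LP: solve at the lower
# and upper bound of each grey coefficient and report the value interval.
from itertools import product
import numpy as np
from scipy.optimize import linprog

# maximise c.x  s.t.  A x <= b, x >= 0, grey objective coefficients [lo, hi]
c_lo, c_hi = np.array([3.0, 4.0]), np.array([5.0, 6.0])
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([14.0, 18.0])

values = []
for picks in product([0, 1], repeat=2):          # every endpoint combination
    c = np.where(np.array(picks) == 0, c_lo, c_hi)
    res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")
    values.append(-res.fun)                      # back to a maximisation value

print(f"grey optimal value: [{min(values):.2f}, {max(values):.2f}]")
```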
EN
Research background: Composite indicators are commonly used as an approximation tool to measure economic development, the standard of living, competitiveness, fairness, effectiveness and many other phenomena, and are willingly implemented in many different research disciplines. However, it seems that in most cases the variable-weighting procedure is avoided or erroneous, since so-called "weights by belief" are usually applied. As research shows, it can frequently be observed that weights do not equal importance in composite indicators. As a result, biased rankings or groupings of objects are obtained. Purpose of the article: The primary purpose of this article is to optimise and improve the Human Development Index, the most commonly used composite indicator for ranking countries in terms of their socio-economic development. The optimisation is done by re-scaling the current weights so that they express the real impact of every single component taken into consideration in the HDI's calculation process. Methods: In order to achieve the purpose mentioned above, sensitivity analysis tools (mainly the first-order sensitivity index) were used to determine the appropriate weights in the Human Development Index. In the HDI's resilience evaluation process, Monte Carlo simulations and full-Bayesian Gaussian processes were applied. Based on the adjusted weights, a new ranking of countries was established and compared with the initial ranking using, among others, the Kendall tau correlation coefficient. Findings & Value added: Based on the data published by the UNDP for 2017, it has been shown that the Human Development Index is built incorrectly by putting equal weights on all of its components. The weights proposed by the sensitivity analysis better reflect the actual contribution of individual factors to HDI variability. The re-scaled Human Development Index constructed on the basis of the proposed weights allows for better differentiation of countries according to their socio-economic development.
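The first-order sensitivity index used here is S_i = Var(E[Y|X_i]) / Var(Y). Below is a minimal sketch for a toy three-component composite indicator; the independent uniform components and equal weights are illustrative assumptions, not the HDI data.

```python
# Binned estimation of first-order sensitivity indices for a toy
# composite indicator built as a weighted mean of three components.
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
X = rng.uniform(0.0, 1.0, size=(N, 3))      # health, education, income
w = np.array([1 / 3, 1 / 3, 1 / 3])         # equal "weights by belief"
Y = X @ w

def first_order_index(xi, y, bins=50):
    """Binned estimate of Var(E[y|xi]) / Var(y)."""
    edges = np.quantile(xi, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, xi, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.bincount(idx, minlength=bins)
    var_cond = np.average((cond_means - y.mean()) ** 2, weights=counts)
    return var_cond / y.var()

for i, name in enumerate(["health", "education", "income"]):
    print(f"S_{name} = {first_order_index(X[:, i], Y):.3f}")
```

With equal weights and identically distributed components each index comes out near 1/3; unequal variances of real HDI components are what pushes the indices, and hence the re-scaled weights, apart.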
EN
The paper presents the application of high dimensional model representation to characterise the relationship between the structural and reduced-form coefficients of estimated general equilibrium models. The function representation is treated as a state-dependent regression that is estimated non-parametrically, based on a Monte Carlo sample generated from the probability distribution of the structural parameters. The estimation method consists of recursive filtering and smoothing algorithms, derived from the Kalman filter and enhanced with special data re-ordering, to capture strong variability of the parameters in the state-dependent regression. The estimated function decomposition is used to build sensitivity indices. The methodology presented is illustrated with an example from the literature.
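A hedged sketch of the idea: sort a Monte Carlo sample by one parameter and smooth the output to approximate the first-order component E[Y|X_i] - E[Y]. A simple moving average stands in for the paper's Kalman-filter-based state-dependent regression, and the test function is invented.

```python
# First-order HDMR component estimated non-parametrically from a
# Monte Carlo sample via data re-ordering and smoothing.
import numpy as np

rng = np.random.default_rng(2)
N = 20_000
theta = rng.normal(size=(N, 2))               # "structural parameters"
# toy reduced-form coefficient as an unknown function of the parameters
y = np.sin(theta[:, 0]) + 0.3 * theta[:, 1] ** 2 + rng.normal(0, 0.1, N)

def hdmr_component(x, y, window=501):
    order = np.argsort(x)                     # special data re-ordering
    smooth = np.convolve(y[order], np.ones(window) / window, mode="same")
    return x[order], smooth - y.mean()        # f_i on the sorted grid

xs, f1 = hdmr_component(theta[:, 0], y)
S1 = f1.var() / y.var()                       # first-order sensitivity index
print(f"S_1 = {S1:.3f}")
```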
EN
Transport is the second-largest energy-consuming sector after housing in Algeria. In this article, we explore the energy implications of commuting by considering a panel of socio-economic (SE) and built environment (BE) driving factors. The method is based on four steps: (i) The first step is to identify the main and potential drivers from the literature review and to propose a model that summarises the main assumptions that could explain the volume of commuting and the resulting energy consumption. (ii) In the second step, we designed and distributed 700 questionnaires in the municipality of Djelfa and retained 184 valid questionnaires in the final study sample. (iii) In the third step, we developed a method adapted to urban areas to quantify energy consumption as a function of the distance travelled, the type and occupancy density of the transport mode, and the type of fuel. (iv) The fourth step is to check the fit of the hypothetical model with a path-analysis-based approach. The model developed identifies 15 factors, of which five have a direct impact and 10 an indirect impact on the energy consumption of commuting. The model shows that building density and the age of the respondent can reduce the energy consumption of commuting by up to 15% and 12% respectively, whereas the number of cars per household and the round-trip frequency could increase it by up to 38% and 27% respectively. Our results suggest a structuring role of the socio-economic characteristics of households in explaining the energy consumption of commuting.
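A minimal sketch of the kind of calculation in step (iii); the energy-intensity figures are invented placeholders, not the coefficients of the study.

```python
# Weekly per-person commuting energy as a function of distance, mode,
# fuel type and vehicle occupancy. Intensity figures are illustrative.
FUEL_MJ_PER_KM = {
    ("car", "gasoline"): 2.9,
    ("car", "diesel"): 2.5,
    ("car", "lpg"): 3.1,
    ("bus", "diesel"): 12.0,
}

def commute_energy(distance_km, mode, fuel, occupants, trips_per_week):
    """Weekly per-person commuting energy in MJ (round trips)."""
    vehicle_mj = FUEL_MJ_PER_KM[(mode, fuel)] * distance_km * 2  # round trip
    return vehicle_mj / occupants * trips_per_week

print(commute_energy(8.0, "car", "gasoline", occupants=1.2, trips_per_week=5))
```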
EN
A study that would otherwise be eligible is commonly excluded from a meta-analysis when the standard error of its treatment-effect estimator, or the estimate of the variance of the outcomes, is not reported and cannot be recovered from the available information. This is wasteful when the estimate of the treatment effect is reported. We assess the loss of information caused by this practice and explore methods of imputation for the missing variance. The methods are illustrated on two sets of examples, one constructed specifically for illustration and another based on a published systematic review.
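A small sketch contrasting the common exclusion practice with one simple imputation (borrowing the mean of the reported variances) in a fixed-effect pooling. The data and the imputation rule are illustrative, not the paper's preferred method.

```python
# Fixed-effect meta-analysis with one missing variance: exclude vs impute.
import numpy as np

effects = np.array([0.42, 0.30, 0.55, 0.38, 0.47])          # effect estimates
variances = np.array([0.010, 0.020, np.nan, 0.015, 0.012])  # one not reported

def pooled(effects, variances):
    w = 1.0 / variances                       # inverse-variance weights
    est = np.sum(w * effects) / np.sum(w)
    return est, np.sqrt(1.0 / np.sum(w))

reported = ~np.isnan(variances)
# (a) common practice: exclude the study with the missing variance
est_drop, se_drop = pooled(effects[reported], variances[reported])
# (b) impute the mean of the reported variances and keep the study
v_imp = np.where(reported, variances, np.nanmean(variances))
est_imp, se_imp = pooled(effects, v_imp)

print(f"exclude: {est_drop:.3f} (SE {se_drop:.3f})")
print(f"impute:  {est_imp:.3f} (SE {se_imp:.3f})")
```

The shrinking standard error under (b) is the recovered information the abstract refers to; a full analysis would also check how sensitive the pooled estimate is to the imputation rule.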
EN
In this paper the method of the Analytic Hierarchy Process (AHP) is described. At the beginning the general assumptions of the method are characterised and discussed; these are related to assumptions held within General Systems Theory. Then the problems of pairwise comparisons of elements, with the use of a specific scale, as well as the resulting reciprocal matrix are presented. There are many ways of estimating the eigenvectors of this matrix, and these eigenvectors reflect the weights of preferences. Despite the fact that we are able to evaluate the consistency of judgements, the problem of acceptable weights still remains. Therefore, by way of an illustration, a method for the sensitivity analysis of preferences is also discussed in the paper.
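A compact sketch of the core AHP computation: the principal eigenvector of a reciprocal pairwise-comparison matrix as the weight vector, plus Saaty's consistency ratio. The judgement matrix is illustrative.

```python
# AHP weights from the principal eigenvector, with consistency check.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])       # illustrative judgements (Saaty 1-9 scale)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)           # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                          # priority weights

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)  # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index
print(f"weights = {np.round(w, 3)}, CR = {ci / ri:.3f}")  # CR < 0.1 acceptable
```

Perturbing single judgements in A and re-running shows how stable the weight ranking is, which is the sensitivity question the paper raises.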
EN
This paper explores the role of the built environment and socio-economic drivers in shaping the modal share of commuting. For this, we identified through our literature review 67 potential variables categorised into two groups: the built environment and the households' socio-economic characteristics. We considered the city of Djelfa as a case study and used a questionnaire as the data collection tool. Processing of the 700 questionnaires provided to households allowed us to select 184 questionnaires for our analysis. The sensitivity analysis protocol is designed in two stages: (i) an exploratory stage, conducted by principal component analysis and bivariate correlation analysis, and (ii) a confirmatory stage, conducted by a path analysis. The first stage allowed us to hypothesise several causal pathways that could explain, directly or indirectly, the modal share of commuting. The results of the path analysis show that the modal shares of walking, private car and public transit are controlled by 13, 16 and 12 explanatory variables, respectively. Overall, the socio-economic characteristics of households discourage walking and transit use and encourage private-car commuting. On the other hand, the variables identified in this paper related to the built environment discourage walking, but encourage the use of public transit rather than private cars for commuting.
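As a sketch of the confirmatory stage, the snippet below estimates a tiny path model as standardised OLS regressions and decomposes a direct and a mediated (indirect) effect. Variables and data are simulated for illustration, not the 67 variables of the study.

```python
# Toy path analysis: direct effect of income on car share, plus an
# indirect effect mediated by car ownership. Data are simulated.
import numpy as np

rng = np.random.default_rng(3)
n = 184
income = rng.normal(size=n)                           # socio-economic driver
cars = 0.6 * income + rng.normal(scale=0.8, size=n)   # mediator: cars/household
car_share = 0.5 * cars + 0.2 * income + rng.normal(scale=0.7, size=n)

def std_ols(y, X):
    """Standardised OLS coefficients (path coefficients)."""
    Xs = (X - X.mean(0)) / X.std(0)
    ys = (y - y.mean()) / y.std()
    return np.linalg.lstsq(Xs, ys, rcond=None)[0]

a = std_ols(cars, income[:, None])[0]                 # income -> cars
b, c = std_ols(car_share, np.column_stack([cars, income]))
print(f"direct effect of income: {c:.2f}")
print(f"indirect via car ownership: {a * b:.2f}")
```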
EN
Business valuation through DCF is recognized as one of the most popular valuation approaches. DCF valuation models, however, have become extremely complex. Modeling requires plenty of input data to be processed, the process is done in many stages, and the data obtained at each stage may be interrelated. The process is thus not simply a chain of tasks. Modern models work via sophisticated mechanisms of loops, triggered whenever a new piece of information is revealed and the whole model needs updating; technically speaking, in the spreadsheet environment this can only be done with the use of iterations. The valuation model should also be subjected to sensitivity analysis, which is able to quantify the impact of every single assumption on the final company value. The analysis points out the set of critical assumptions that have the major impact on the calculated company value. Apart from quantifying the impact of the assumptions, the analysis runs qualitative checks on them, assessing the robustness of the arguments behind the factors critical for the valuation. Consequently, sensitivity analysis improves the objectivity of the model and mitigates the exposure to possible manipulation of results. Sensitivity analysis thus plays a critical role in the valuation process, and should be considered a standard step in every DCF valuation.
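A minimal sketch of one-at-a-time sensitivity on a simple DCF: perturb each assumption by plus or minus 10% and record the swing in value (the tornado-chart logic). The model structure and base-case figures are illustrative.

```python
# One-at-a-time sensitivity of a simple DCF enterprise value.
import numpy as np

base = {"fcf0": 100.0, "growth": 0.03, "wacc": 0.09, "terminal_g": 0.02}

def dcf_value(p, years=5):
    fcf = p["fcf0"] * (1 + p["growth"]) ** np.arange(1, years + 1)
    disc = (1 + p["wacc"]) ** np.arange(1, years + 1)
    tv = fcf[-1] * (1 + p["terminal_g"]) / (p["wacc"] - p["terminal_g"])
    return np.sum(fcf / disc) + tv / disc[-1]   # explicit period + terminal

v0 = dcf_value(base)
for key in base:                                # perturb one input at a time
    swings = []
    for mult in (0.9, 1.1):
        p = dict(base)
        p[key] = base[key] * mult
        swings.append(dcf_value(p) - v0)
    print(f"{key:10s}: {min(swings):+8.1f} .. {max(swings):+8.1f}")
```

Sorting the rows by swing width identifies the critical assumptions the abstract mentions; the WACC and terminal growth typically dominate.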
EN
Food loss is one of the challenges in the cold chain (CC) and can lead to serious problems for human safety, the environment, and economies around the world. Reducing food loss has recently drawn public attention; previous studies mostly gave attention to food-loss drivers in the retailer-consumer stages of the supply chain. In this study, we focus on identifying food-loss factors (FLFs) across the whole CC and develop an approach based on multi-criteria decision-making methods and fuzzy sets to rank FLFs according to their influence on food loss in the poultry sector. The first phase concerns the identification of FLFs based on the literature as well as experts' opinions in the poultry field. The fuzzy Delphi method was then implemented to reach a consistency level of >75% among all the group members. In the second phase, the fuzzy AHP method was employed to weight the FLFs in order to rank them. For the validation of our contribution, a sensitivity analysis was performed. This research presents a guide for decision makers in the CC to help them devise an efficient strategic plan to reduce food loss during logistics activities.
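A hedged sketch of the fuzzy-AHP weighting phase using Buckley's geometric-mean method with triangular fuzzy judgements and centroid defuzzification; the comparison matrix is illustrative, and the paper's exact fuzzy-AHP variant may differ.

```python
# Buckley-style fuzzy AHP for three illustrative food-loss factors.
import numpy as np

# triangular fuzzy pairwise matrix, entries (l, m, u); reciprocals below
M = [[(1, 1, 1),       (2, 3, 4),     (4, 5, 6)],
     [(1/4, 1/3, 1/2), (1, 1, 1),     (1, 2, 3)],
     [(1/6, 1/5, 1/4), (1/3, 1/2, 1), (1, 1, 1)]]

n = len(M)
# fuzzy geometric mean per row, component-wise on (l, m, u)
g = np.array([[np.prod([M[i][j][k] for j in range(n)]) ** (1 / n)
               for k in range(3)] for i in range(n)])
total = g.sum(axis=0)                 # fuzzy sum (L, M, U)
# fuzzy weight = g_i (x) (sum)^-1, i.e. divide (l, m, u) by (U, M, L)
fuzzy_w = g / total[::-1]
crisp = fuzzy_w.mean(axis=1)          # centroid of a triangular number
print(np.round(crisp / crisp.sum(), 3))   # normalised crisp weights
```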
EN
It is common to address the problem of uncertainty in computable general equilibrium modeling by sensitivity analysis. The relevant studies of the effects of parameter uncertainty usually focus on various elasticity parameters. In this paper we undertake a sensitivity analysis with respect to the parameters derived from calibration to a benchmark data set and describing the structure of the economy. We use a time series of benchmark databases for the years 1996-2005 for Poland to sequentially calibrate a static CGE model, and examine the dispersion of endogenous variables' responses in three distinct simulation experiments. We find some, though not most, of the results to be significantly sensitive to the choice of calibration database (including ambiguities about the direction of response). The dispersion of the results and its sources clearly depend on the shock in question. Uncertainty is also quite diverse between variables. It is thus recommended that a thorough parametric sensitivity analysis be a conventional part of a simulation study. Also, the reliability of results would likely benefit even from simple, trend-based updates of the benchmark data, as the responses of endogenous variables exhibit systematic changes when the model is calibrated to the data for consecutive years.
PL
A typical way of addressing the problem of uncertainty of simulation results from CGE (Computable General Equilibrium) models is sensitivity analysis. Most studies devoted to this issue concentrate on the choice of values of various kinds of elasticities. In this paper we undertake a sensitivity analysis concerning the parameters describing the structure of the economy, obtained through calibration. We calibrate the model using data sets for consecutive years of the period 1996-2005, and then analyse the dispersion of results for three different simulation experiments. The results for some, though not most, of the variables exhibit significant sensitivity to the choice of the database used for calibration (including uncertainty about the direction of the response). The degree of dispersion of the results and its sources depend substantially on the type of simulation scenario analysed. The scale of uncertainty concerning individual variables also varies. It is therefore recommended that a thorough sensitivity analysis be a standard part of any simulation study. Moreover, applying even simple (e.g. trend-based) methods of updating the database would most likely increase the reliability of the results, given that the responses of endogenous variables to the impulses applied in the simulations undergo systematic changes as the model is calibrated to data for consecutive years.
EN
Introduction: This paper presents selected aspects of the impact that the distribution of products in a warehouse has on order picking. This problem is particularly important for medium-sized and large warehouses characterised by considerable rotation of goods. The aim of this study was to assess the impact of the product classification method used on the efficiency of the order-picking process. Method: For each classification method, two cases of picking products were considered, one including the impact of the fact that the products can be piled, and the other that they cannot. Simulation studies were preceded by a sensitivity analysis in order to determine the impact of the criteria on the effectiveness of each of the methods. Results: The best results were obtained after applying a product distribution in the warehouse based on the COI index or on ABC analysis according to the number of units sold. It can be concluded that for large warehouses and for products with low susceptibility to stacking, the method based on the COI index proves to be the most effective. Conclusions: If susceptibility to stacking is irrelevant in the picking process, for medium-sized and large warehouses it is important to distribute products on the basis of the COI index. This method allows better results to be obtained than in the case of free storage places, by an average of 28.72%. For products with low susceptibility to stacking, applying the COI index also proves to be the most effective.
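The COI logic can be sketched in a few lines: compute space demand per pick for each product and slot the lowest-COI items closest to the pick point. The figures are illustrative.

```python
# Slotting by the cube-per-order index (COI): storage space demand divided
# by picking frequency; lowest COI goes nearest the pick point.
products = {
    # sku: (storage_space_m3, picks_per_week)
    "A": (1.2, 240),
    "B": (0.4, 30),
    "C": (2.0, 300),
    "D": (0.8, 15),
}

coi = {sku: space / picks for sku, (space, picks) in products.items()}
for rank, sku in enumerate(sorted(coi, key=coi.get), start=1):
    print(f"slot {rank} (closest first): {sku}  COI={coi[sku]:.4f}")
```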
PL
Introduction: The paper presents selected aspects of the impact that the distribution of products in a warehouse has on the order-picking process. This issue is particularly important in medium-sized and large warehouses characterised by significant rotation of goods. The aim of the study is to assess the impact of the product classification method used on the efficiency of the order-picking process. Methods: For each classification method, two order-picking situations were considered, one in which stacking of goods is allowed and one in which it is not. Simulations were then carried out and evaluated by means of a sensitivity analysis in order to determine the impact of individual criteria on the effectiveness of each method. Results: The best results in distributing products in the warehouse were obtained by basing the method on the COI index and on ABC analysis with respect to the number of product units sold. It can be concluded that for large warehouses and for products with low susceptibility to stacking, the most effective method was the one based on the COI index. Conclusions: Assuming that susceptibility to stacking is not relevant in the picking process, for medium-sized and large warehouses it is important to base the product distribution method on the COI index. This method yields results better on average by 28.72% than distribution based on free storage places. For products with low susceptibility to stacking, the method based on the COI index is also more effective.
EN
As a result of society's ageing and changes in the demographic situation, the demand for long-term social care services at budget institutions is growing in Latvia, which also increases the spending of financial resources by local governments. Given such circumstances, efficient economic processes within these institutions would allow for the rational use of the resources available to local governments. In this paper, the authors compare the technical efficiency of long-term care centres (LTCC) for the elderly in 64 of Latvia's municipalities, based on the evaluation of the relative efficiency of each LTCC. The results were obtained using the data envelopment analysis (DEA) method, in which LTCCs are treated as decision-making units (DMUs). The objective of the study is to identify the technically most efficient DMU (ME DMU) within the distribution of DMUs in terms of human resources, costs and remuneration, as well as to find the inputs that affect the efficiency of less efficient DMUs and the changes in these inputs necessary to achieve ME DMU status. Achieving this goal included an analysis of the related literature, data selection and adjustment according to the objectives of the study, and the application of cluster analysis, DEA and sensitivity analysis, followed by an analysis of the results. Using the selected methods, one technically ME DMU was identified in terms of labour, costs and remuneration in the cluster distribution, and the input that reduces the technical efficiency of DMUs was identified. Reducing this input within DEA can raise the efficiency ratio (ER) of less efficient DMUs towards ME DMU status. The authors conclude that the identified technically ME DMUs, in terms of workforce, costs and remuneration, can serve as input/output benchmarks for lower-efficiency DMUs of similar size seeking to increase their technical efficiency. In turn, within the framework of the sensitivity analysis, the reduction of the input identified as affecting DMU efficiency, which contributes to an increase in ER, can be applied to all lower-efficiency DMUs, given the high proportion of this input relative to the other DEA model inputs. The novelty of the study is the assessment of the technical efficiency of LTCCs for the elderly using administrative data of the sector, as well as the data analysis approach within the scope of the selected method, which has not so far been applied in the area of social care in Latvia.
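A minimal sketch of the underlying DEA computation: the input-oriented CCR efficiency of each DMU obtained from a linear programme. The inputs and output below are illustrative, not the Latvian administrative data.

```python
# Input-oriented CCR DEA: for each DMU, minimise theta subject to a
# composite peer using at most theta * its inputs and producing at least
# its outputs. Data are illustrative.
import numpy as np
from scipy.optimize import linprog

X = np.array([[20.0, 30.0, 25.0, 40.0],   # input 1: staff
              [5.0, 8.0, 6.0, 9.0]])      # input 2: cost
Y = np.array([[60.0, 70.0, 80.0, 75.0]])  # output: residents served

n = X.shape[1]
for o in range(n):
    c = np.r_[1.0, np.zeros(n)]                         # minimise theta
    A_ub = np.vstack([np.c_[-X[:, o], X],               # X lam <= theta * x_o
                      np.c_[np.zeros(Y.shape[0]), -Y]]) # Y lam >= y_o
    b_ub = np.r_[np.zeros(X.shape[0]), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1), method="highs")
    print(f"DMU {o + 1}: efficiency = {res.x[0]:.3f}")
```

Units with efficiency 1 form the frontier; for the rest, theta * inputs gives the input reduction needed to reach it, which is the "necessary change" the abstract describes.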
EN
In the present business environment, rapidly developing technology and the competitive world market pose challenges to the available assets of industries. Hence, industries need to allocate and use the available assets at the optimum level, and industrialists must create a good decision plan to guide their performance in the production sector. The present study therefore applies the Meta-Goal Programming technique to attain several objectives simultaneously in the textile production sector. The importance of this study lies in pursuing different objectives simultaneously, which has received little attention until now. A production scheduling problem in a textile firm is used to illustrate the practicability and mathematical validity of the suggested approach. Analysis of the results obtained demonstrates that the solution met all three meta-goals, with some of the original goals being met partially. An analysis of the sensitivity of the approach to the preference weights was also conducted.
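A hedged sketch of the weighted goal-programming core that Meta-Goal Programming builds on: unwanted deviations from a profit goal and a machine-hours goal are penalised in an LP. Coefficients, goals and weights are invented for illustration.

```python
# Weighted goal programming for a two-product plan with scipy's LP solver.
import numpy as np
from scipy.optimize import linprog

# variables: [x1, x2, d1m, d1p, d2m, d2p]  (m = under-, p = over-achievement)
c = np.array([0, 0, 3.0, 0, 1.0, 0])   # penalise profit shortfall, idle hours
A_eq = np.array([[40.0, 30.0, 1, -1, 0, 0],   # profit + d1m - d1p = 1000
                 [2.0,  1.0,  0, 0, 1, -1]])  # hours  + d2m - d2p = 60
b_eq = np.array([1000.0, 60.0])
A_ub = np.array([[1.0, 1.0, 0, 0, 0, 0]])     # raw material: x1 + x2 <= 35
b_ub = np.array([35.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 6, method="highs")
x1, x2, d1m, d1p, d2m, d2p = res.x
print(f"plan: x1={x1:.1f}, x2={x2:.1f}; "
      f"profit shortfall={d1m:.1f}, idle hours={d2m:.1f}")
```

Re-solving over a grid of penalty weights in c is the kind of preference-weight sensitivity analysis the abstract reports.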
PL
Well-being is a multi-faceted concept encompassing the factors that affect satisfaction with life. The aim of this article is to assess the well-being of the societies of OECD countries in 2013 on the basis of data from the OECD Regional Well-Being database. Nine areas were taken into account: household income, jobs, housing conditions, education, health, environment, safety, civic engagement and access to services. The DEA method was used. The dependence of efficiency on the level of wealth of the countries is presented. The results obtained make it possible to assess the spatial differentiation of well-being in OECD countries and to indicate its causes.
EN
Well-being is a multi-faceted concept encompassing factors affecting satisfaction with life. The aim of this paper is to assess the well-being of the societies of OECD countries in 2013, based on data from the OECD Regional Well-Being database. Nine areas are included: household income, place of work, living conditions, education, health, environment, safety, civic engagement and access to services. The DEA method is applied. The dependence of efficiency on the level of wealth of countries is presented. The results allow assessing the spatial differentiation of well-being in OECD countries and identifying its causes.
EN
One of the serious drawbacks of observational studies is the selection bias caused by the selection process to the treatment group. Propensity Score Matching (PSM), which allows for the reduction of the selection bias when estimating the average treatment effect on the treated (ATT), is a method recommended for the evaluation of projects and programmes co-financed by the European Union. PSM relies on a strong assumption known as the Conditional Independence Assumption (CIA) which implies that selection into the treatment group is based on observable variables, and all variables influencing both the selection process and outcome are observed by the researcher. If this does not hold, the estimated effect may be not so much the result of the treatment as of the lack of balance of an unobserved confounder, which affects both the selection process and the outcome. Rosenbaum’s sensitivity analysis allows researchers to determine how strong the impact of such a potential unobserved confounder on selection into treatment and the outcome must be to undermine conclusions about ATT estimated by PSM. Rosenbaum’s primal and simultaneous approaches are applied in the paper to assess robustness to an unobserved confounder of the net effect of internships for unemployed young people with a maximum age of thirty-five (estimated with PSM) organized by one of the biggest district employment offices in Małopolska.
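A sketch of the primal approach for the Wilcoxon signed-rank statistic: for each Gamma, the upper-bound p-value replaces the fair-coin probability 1/2 with p+ = Gamma/(1+Gamma) in a normal approximation. The matched-pair differences are simulated, not the internship data.

```python
# Rosenbaum-style primal sensitivity bounds for matched-pair data.
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(4)
d = rng.normal(0.3, 1.0, size=150)     # outcome differences, treated - control

ranks = rankdata(np.abs(d))
t_obs = ranks[d > 0].sum()             # Wilcoxon signed-rank statistic

for gamma in (1.0, 1.5, 2.0, 3.0):
    p_plus = gamma / (1 + gamma)       # worst-case sign probability
    mean = p_plus * ranks.sum()
    var = p_plus * (1 - p_plus) * (ranks ** 2).sum()
    p_upper = norm.sf((t_obs - mean) / np.sqrt(var))
    print(f"Gamma={gamma:.1f}: upper-bound p-value = {p_upper:.4f}")
```

Gamma = 1 reproduces the usual randomisation test; the smallest Gamma at which the upper-bound p-value crosses 0.05 summarises how robust the estimated ATT is to hidden bias.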
PL
One of the serious drawbacks of observational studies is the selection bias caused by the selection of units into the treatment group. Propensity Score Matching (PSM), which makes it possible to reduce selection bias when estimating the average treatment effect on the treated (ATT), is increasingly recommended for the evaluation of projects and programmes co-financed by the European Union. PSM relies on a strong assumption, known as the Conditional Independence Assumption (CIA), which implies that selection into the treatment group must be based solely on observed variables and that all variables affecting both treatment assignment and the potential outcomes are observed by the researcher. If this assumption is not satisfied, the estimated effect may be not so much the result of the treatment as the consequence of the lack of balance of an unobserved confounder that affects both the selection process and the outcome variable. Rosenbaum's sensitivity analysis enables researchers to assess how strong the influence of such a potential unobserved variable on the selection process and on the outcome variable would have to be in order to undermine the conclusions about the ATT estimated with PSM. Rosenbaum's primal and simultaneous approaches are applied in the article to assess the robustness to an unobserved confounder of the net effect of internships for unemployed people aged up to 35 (estimated with PSM), organised by one of the largest district employment offices in Małopolska.
EN
The main purpose of this article is to extend the evaluation of the classic Fama-French and Carhart models to global equity indices. We check the robustness of the models' results when they are used for a wide set of equity indices instead of single stocks for a given country. Such a modification enables us to estimate the equity risk premium for a single country; however, it requires several amendments to the methodology proposed for single stocks. Our empirical evidence reveals important differences between the conventional models estimated on single stocks, either international or US-only, and models incorporating whole markets. Our novel approach shows that the divergence between the indices of developed countries and those of emerging markets is still persistent. Additionally, research on weekly data for equity indices provides a rationale for explaining the differences in equity risk premia between variously sorted portfolios.
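The baseline computation being extended is the factor time-series regression. Below is a minimal sketch for one index on simulated weekly data; the factor names (MKT, SMB, HML, WML) are the usual Carhart labels, and all numbers are illustrative.

```python
# Carhart four-factor regression for one equity index via OLS.
import numpy as np

rng = np.random.default_rng(5)
T = 520                                   # ten years of weekly observations
F = rng.normal(0, 0.02, size=(T, 4))      # factors: MKT-RF, SMB, HML, WML
beta_true = np.array([1.1, 0.2, -0.1, 0.05])
r_excess = F @ beta_true + rng.normal(0, 0.01, T)   # index excess returns

X = np.column_stack([np.ones(T), F])      # intercept = alpha
coef, *_ = np.linalg.lstsq(X, r_excess, rcond=None)
alpha, betas = coef[0], coef[1:]
print(f"alpha (weekly) = {alpha:.5f}")
print("betas [MKT, SMB, HML, WML] =", np.round(betas, 3))
```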
EN
The paper presents methods of estimation and evaluation of general equilibrium models, and highlights problematic fields and challenges. After preferences, technology and structural shocks are defined, the model's equations, derived by solving microeconomic optimization problems, are log-linearised and the rational-expectations solution is found. The next important step is the connection of the theoretical variables with their observed counterparts, which allows the likelihood to be constructed. Estimation, verification and numerical convergence play a crucial role in the overall goodness of the model. A general equilibrium model can also be used to construct a hybrid vector autoregression that allows the degree of its misspecification to be tested.
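A minimal sketch of the likelihood step: a Kalman filter over an assumed linear state-space form accumulates the Gaussian log-likelihood of the observables. The matrices and the observed series are illustrative placeholders, not a solved model.

```python
# Kalman-filter log-likelihood for s_t = A s_{t-1} + C e_t, y_t = H s_t + v_t.
import numpy as np

def kalman_loglik(y, A, C, H, R):
    n = A.shape[0]
    s, P = np.zeros(n), np.eye(n)           # initial state and covariance
    Q = C @ C.T
    ll = 0.0
    for yt in y:
        s, P = A @ s, A @ P @ A.T + Q        # predict
        v = yt - H @ s                       # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        ll += -0.5 * (np.log(np.linalg.det(2 * np.pi * S))
                      + v @ np.linalg.inv(S) @ v)
        s, P = s + K @ v, (np.eye(n) - K @ H) @ P   # update
    return ll

A = np.array([[0.9, 0.1], [0.0, 0.7]])
C = 0.1 * np.eye(2)
H = np.array([[1.0, 0.0]])                   # observation equation
R = np.array([[0.01]])
rng = np.random.default_rng(6)
y = rng.normal(0, 0.3, size=(200, 1))        # stand-in for an observed series
print(f"log-likelihood = {kalman_loglik(y, A, C, H, R):.1f}")
```

In estimation this log-likelihood would be evaluated repeatedly over the structural-parameter space, for instance inside a Metropolis-Hastings sampler.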
PL
The paper discusses the basic issues related to solving, estimating, verifying and assessing the numerical stability of empirical general equilibrium models. It also signals the possibility of using them to build hybrid vector autoregression models, which make it possible to assess the degree of correctness of the economic assumptions adopted in the theoretical part of the model and their confirmation by observations. An estimated general equilibrium model is a set of first-order conditions of the optimisation problems of the agents defined in the theoretical part, and of equilibrium conditions, written as a single vector function conditional on the structural parameters, which forms a non-linear, dynamic rational-expectations system that is log-linearised and solved. The stability of the linear solution implies numerous restrictions in the structural parameter space that are difficult to specify and may cause numerical problems during estimation. Estimation of the structural parameters requires connecting data from macroeconomic time series with the endogenous variables defined in the theoretical construction of the model, through an observation equation, which is the basis for constructing the likelihood function. The linear solution of the model is written in state-space form, on the basis of which the likelihood function can be constructed using the Kalman filter, given the unobservable character of some state variables. Estimation of the structural parameters is most often carried out using Bayesian inference techniques, which exploit the complete system of first-order conditions, resource constraints and decision rules. Bayesian methods make it possible to construct a single measure of the model's fit to the empirical data, in the form of the marginal density of the observations, enabling formal comparison of models within a given class or against a vector autoregression. It is also possible to combine knowledge from different specifications. A crucial role in assessing the quality of the model is played by its verification, which consists of checking the correct functioning of the numerical algorithms, in particular the Metropolis-Hastings procedure, and a sensitivity analysis providing some insight into the relationships between the parameters in the theoretical construction. The way general equilibrium models are solved and linearly approximated does not make it possible to establish a direct link between the structural-form parameters and the reduced-form parameters, which determine the economic conclusions drawn from the model. Characterising this relationship therefore requires additional methods, in particular techniques used in sensitivity analysis. A separate issue is the degree of correctness of the model specification, in particular the correct determination of the structural relations in the economy, the adoption of appropriate functional assumptions for consumer preferences and technology, the omission of non-linear relationships, or the correctness of the specification of the stochastic processes. An estimated general equilibrium model is a theoretical construction combining macroeconomic and microeconomic theory in a single system, which means that all quantities describing the economy, and all forecasts, are the result of the theory and the structure of the stochastic processes assumed in the model. For this reason, methods of examining the degree of agreement between the adopted assumptions and the empirical data constitute a broad field of research.
One way of testing this is to build hybrid vector autoregression models, in which the general equilibrium model is used to generate the prior distribution for a vector autoregression estimated on observed data. The degree of inconsistency between the adopted economic assumptions and the empirical data is revealed through specific values of the weighting parameter. The paper concludes by indicating the areas in which problems may potentially arise when estimated general equilibrium models are used in practice.