
Results found: 33


Search results

Search: in the keywords: estimation
EN
Research background: In applied welfare economics, the constant relative inequality aversion function is routinely used as the model of a social decision-maker's or a society's preferences over income distributions. This function is entirely determined by the inequality aversion parameter ε. However, there is no authoritative answer to the question of what range of ε an analyst should select for empirical work. Purpose of the article: The aim of this paper is to elaborate a method of deriving ε from a parametric distribution of disposable incomes. Methods: We assume that households' disposable incomes obey the generalised beta distribution of the second kind, GB2(a, b, p, q). We have proved that, under this assumption, the social welfare function exists if and only if ε belongs to the interval (0, ap+1). The midpoint ε_mid of this interval specifies the inequality aversion of the median social decision-maker. Findings & Value added: The maximum likelihood estimator of ε_mid has been developed, and inequality aversion for Poland over 1998–2015 has been estimated. If inequality is calculated on the basis of disposable incomes, the standard inequality–development relationship might be complemented by inequality aversion. The 'augmented' inequality–development relationship reveals new phenomena; for instance, the stage of economic development might matter when assessing the impact of inequality aversion on income inequality.
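The estimation step described above can be sketched as follows, assuming Python with NumPy/SciPy. The simulated data, starting values and optimiser choice are illustrative, not the paper's: GB2 parameters are fitted by maximum likelihood and ε_mid is the midpoint (ap+1)/2 of the admissible interval.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln

def gb2_logpdf(x, a, b, p, q):
    # log density of GB2(a,b,p,q): a*x**(a*p-1) / (b**(a*p)*B(p,q)*(1+(x/b)**a)**(p+q))
    z = a * (np.log(x) - np.log(b))
    return np.log(a) + p*z - np.log(x) - betaln(p, q) - (p + q)*np.logaddexp(0.0, z)

def neg_loglik(theta, x):
    a, b, p, q = np.exp(theta)       # optimise on the log scale to keep parameters positive
    return -np.sum(gb2_logpdf(x, a, b, p, q))

# simulate "disposable incomes": if B ~ Beta(p, q) then b*(B/(1-B))**(1/a) ~ GB2(a, b, p, q)
rng = np.random.default_rng(0)
a0, b0, p0, q0 = 2.0, 1000.0, 1.2, 1.5
B = rng.beta(p0, q0, size=5000)
inc = b0 * (B/(1 - B))**(1/a0)

res = minimize(neg_loglik, x0=np.log([1.5, 800.0, 1.0, 1.0]), args=(inc,),
               method="Nelder-Mead", options={"maxiter": 4000})
a_hat, b_hat, p_hat, q_hat = np.exp(res.x)
eps_mid = (a_hat*p_hat + 1)/2        # midpoint of the admissible interval (0, ap+1)
```

With the true parameters above, ap = 2.4, so the fitted ε_mid should land near 1.7.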
EN
The paper is devoted to multivariate measures of dependence. In contrast to the classical approach, where pairs of variables are studied, we investigate the dependence of more than two variables. We mainly consider measures based on copulas: multivariate generalizations of the known correlation coefficients such as Spearman's rho, Kendall's tau, Blomquist's beta and Gini's gamma. We present the definitions, constructions and basic properties of such multivariate measures of dependence. The case of more than two dimensions presents additional complications: several different versions of each generalization exist, and the lower bounds of the values of such measures of dependence are close to zero. We also study multivariate tail dependence. The last part of the paper is devoted to the estimation of multivariate versions of Spearman's rho coefficient.
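One of the several d-variate generalizations of Spearman's rho mentioned above can be estimated directly from the empirical copula. The sketch below (assuming Python/NumPy; the specific version and normalising constant are one common choice, not necessarily the one the paper settles on) compares the joint rank distribution with the independence copula:

```python
import numpy as np
from scipy.stats import rankdata

def multivariate_spearman(X):
    """One d-variate generalization of Spearman's rho, built on the
    empirical copula with the independence copula as reference.
    Equals 1 for comonotone data and 0 under independence."""
    n, d = X.shape
    # pseudo-observations U_ij = rank/(n+1), column by column
    U = np.column_stack([rankdata(X[:, j]) for j in range(d)]) / (n + 1)
    h = (d + 1) / (2**d - d - 1)          # normalising constant
    return h * (2**d * np.mean(np.prod(1 - U, axis=1)) - 1)
```

For d = 2 this reduces (up to O(1/n) terms) to the usual Spearman coefficient.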
EN
The paper proposes a new family of continuous distributions called the extended odd half Cauchy-G family. It is based on the T-X construction of Alzaatreh et al. (2013), taking the half Cauchy distribution for T and the exponentiated G(x; ξ) as the distribution of X. Several particular cases are outlined and a number of important statistical characteristics of this family are investigated. Parameter estimation via several methods, including maximum likelihood, is discussed and followed up with simulation experiments aiming to assess their performance. Real-life applications modelling two data sets demonstrate the advantage of the proposed family of distributions over selected existing ones. Finally, a new regression model is proposed and its application to modelling data in the presence of covariates is presented.
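The exact cdf of the proposed family is given in the paper; as an illustration of the underlying T-X idea only, the sketch below assumes the "odd" link W(G) = G^ξ/(1−G^ξ) and a standard half-Cauchy T, which may differ in detail from the paper's extended form:

```python
import numpy as np

def odd_half_cauchy_cdf(x, G, xi=1.0):
    # T-X composition: T ~ half-Cauchy with cdf R(t) = (2/pi)*arctan(t),
    # composed with the "odd" link applied to the exponentiated baseline G(x)**xi
    g = np.clip(G(x), 0.0, 1.0 - 1e-12) ** xi
    return (2.0/np.pi) * np.arctan(g / (1.0 - g))

# example baseline: exponential distribution, G(x) = 1 - exp(-x)
G_exp = lambda x: 1.0 - np.exp(-x)
```

Any valid baseline cdf G can be plugged in, which is what makes T-X a *family generator* rather than a single distribution.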
PL
One of the most popular measures of the asymmetry of a variable's distribution in a population is the coefficient of skewness, obtained by standardizing the third central moment about the mean. This paper considers the use of the well-known two-phase sampling procedure to estimate the coefficient of skewness in a finite population under nonresponse. An estimator of this coefficient is proposed as a function of known unbiased estimators of the population totals of the variable. The properties of the constructed estimator are examined through computer simulations. The experiments use data obtained during the agricultural census in selected communes of the Dąbrowa Tarnowska district.
EN
One of the most popular measures of the asymmetry of a distribution is the coefficient of skewness, computed by standardizing the third central moment about the mean. In this paper the well-known two-phase sampling procedure is applied to estimate the finite population skewness under nonresponse. An estimator of this parameter is constructed as a function of well-known unbiased estimators of population totals. The properties of the proposed estimator are investigated in a simulation study. Data obtained from the agricultural census in boroughs of the Dąbrowa Tarnowska district are used in the simulations.
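The structure of such an estimator is a plug-in: the coefficient of skewness is expressed through the population size and the totals of y, y² and y³, and each total is then replaced by its (two-phase) unbiased estimator. A minimal sketch of the plug-in formula itself, assuming Python/NumPy (the two-phase estimation of the totals is not shown):

```python
import numpy as np

def skewness_from_totals(N, t1, t2, t3):
    """Plug-in coefficient of skewness gamma = mu3 / mu2**1.5 expressed
    through the population size N and the totals t_r = sum(y**r).
    In the two-phase setting the exact totals are replaced by their
    unbiased estimators; the plug-in structure stays the same."""
    m1 = t1 / N                                # mean
    mu2 = t2/N - m1**2                         # second central moment
    mu3 = t3/N - 3*m1*(t2/N) + 2*m1**3         # third central moment
    return mu3 / mu2**1.5
```

On full population data this reproduces the ordinary (biased) moment-based skewness.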
EN
On a daily basis, managers in risk management teams use a number of methods to manage various types of risk. One of the most popular methods of measuring market risk is Value at Risk. Estimation of Value at Risk makes it possible to determine a loss which can occur, or can be exceeded, with a given probability at a given tolerance level. Moreover, this measure expresses the entire risk of a portfolio in a single number. Various methods and probability distributions can be used to estimate Value at Risk. The goal of this paper is the evaluation of Value at Risk estimation methods on the basis of backtesting results. In the empirical part, data for four investment portfolios were used. The portfolios were diversified in terms of the geographic location of the firms taken into consideration.
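A minimal sketch of one estimation method and one backtest, assuming Python/SciPy (the paper evaluates several methods; historical simulation and the Kupiec proportion-of-failures test shown here are standard representatives, not necessarily the paper's full set):

```python
import numpy as np
from scipy.stats import chi2

def var_historical(returns, alpha=0.99):
    # historical-simulation VaR: empirical (1 - alpha) quantile of returns,
    # reported as a positive loss number
    return -np.quantile(returns, 1 - alpha)

def kupiec_pof(returns, var, alpha=0.99):
    """Kupiec proportion-of-failures backtest: a likelihood-ratio test of
    whether the observed breach frequency matches 1 - alpha.
    Returns (LR statistic, p-value); a small p-value rejects the VaR model."""
    n = len(returns)
    x = int(np.sum(returns < -var))          # number of VaR breaches
    p = 1 - alpha
    phat = max(x / n, 1e-12)                 # guard the x = 0 edge case
    loglik = lambda pi: (n - x)*np.log1p(-pi) + x*np.log(pi)
    lr = -2.0 * (loglik(p) - loglik(phat))
    return lr, chi2.sf(lr, df=1)
```

Backtesting then amounts to checking, method by method, whether the breach count is statistically consistent with the chosen tolerance level.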
PL
The classical theory of statistical inference provides methods for estimating unknown distribution parameters, estimating the form of the function defining that distribution, and verifying hypotheses on the basis of simple samples, i.e. samples in which the observations are independent and share the same probability distribution. In general, however, for reasons of cost and research efficiency, we use non-simple, or complex, samples. Observations in such samples are realisations of stochastically dependent random variables with different distributions. In survey sampling we distinguish, among others, the following schemes: dependent sampling (without replacement), sampling with unequal selection probabilities, and stratified, cluster and multi-stage sampling. For example, sampling without replacement eliminates the stochastic independence of observations, stratification differentiates the selection probabilities of sample elements, while multi-stage sampling contributes to the diversity of distributions. The subject of this paper is problems connected with estimation (methods of adapting the central limit theorem to non-simple samples) and the verification of hypotheses on the consistency of distributions for non-simple samples.
EN
The classical theory of statistical inference gives us methods of estimation and hypothesis verification for simple samples, in which observations are stochastically independent and identically distributed. Because of the cost and effectiveness of research, however, non-simple (complex) samples are generally used; observations in such samples are stochastically dependent and have different distributions. The paper presents problems in estimation and in the verification of hypotheses on the consistency of distributions for complex samples.
EN
In the paper we consider a modification of Sharpe's method used in classical portfolio analysis for optimal portfolio building. The key idea of the paper is the modification of the classical approach by application of the errors-in-variables model. We assume that both the independent (market portfolio return) and the dependent (given asset's return) variables are randomly distributed values related to each other by a linear relationship, and we build the model used for parameter estimation. For model evaluation we compared portfolios comprising nine stocks from the Warsaw Stock Exchange, built using the classical Sharpe method and the proposed one.
PL
In classical single-index portfolio analysis, the optimal securities portfolio is constructed using the model proposed by Sharpe. The theory rests on the assumption that the return of a given asset is explained by the return of the market portfolio through a linear relationship. It is known, however, that asset price variability is also affected by other, often hard-to-measure, market factors. In the classical approach, the parameters of the relationship between the return of a given asset and the return of the market portfolio are determined from a simple regression model, where a random disturbance is allowed only in the value of the dependent variable. In the model proposed in this paper, the presence of these factors is accounted for as a disturbance on both random variables entering the classical Sharpe model. It is assumed that both the return of a given asset and the return of the market portfolio are already noise-contaminated values linked by a linear relationship. To illustrate the problem, portfolios consisting of nine companies were compared, built using the classical Sharpe method and its proposed modification. The portfolio underlying the WIG stock index was taken as the market portfolio. The analysis was based on historical quotations from January 2000 to March 2006. The monthly portfolios were rebuilt every month, yielding a vector of portfolio compositions for comparative analysis.
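Allowing noise on both variables leads to an errors-in-variables slope rather than the ordinary least-squares beta. A minimal sketch, assuming Python/NumPy and the Deming-regression form of the errors-in-variables estimator (the paper's exact model may differ in detail):

```python
import numpy as np

def deming_beta(x, y, delta=1.0):
    """Errors-in-variables (Deming) slope, assuming the ratio of the two
    measurement-error variances var(err_y)/var(err_x) equals `delta`.
    As delta -> infinity this collapses to the OLS slope of the
    classical Sharpe model."""
    sxx = np.var(x)
    syy = np.var(y)
    sxy = np.cov(x, y, bias=True)[0, 1]
    d = syy - delta * sxx
    return (d + np.sqrt(d*d + 4.0*delta*sxy*sxy)) / (2.0 * sxy)
```

The practical point is attenuation: when the market return is itself noisy, OLS biases beta toward zero, while the errors-in-variables slope remains consistent.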
EN
Background: The article presents a method of estimating the security level, which indicates how probable it is for a phenomenon to occur. Objectives: The author attempts to answer the question: how do we estimate security? Decision-makers usually need a percentage showing the probability of an incident taking place in the future; this information is needed first, and more descriptive information can be used later. Methods: The research problem concerns the assessment of security using the estimation method. Depicting security in numbers is difficult, so the descriptive method is also usually applied. The estimation method facilitates the assessment: it is helpful since it is done partly by calculation and partly by guessing or approximation. Based on a case study analysing whether a terrorist attack may occur, the author also used tools such as averaging expert predictions, scenario analysis and risk analysis. Results: The article provides a view on forecasting security, which results in a method of estimating the level of security. Conclusions: The author presents an approach which makes it possible to initially estimate the security level of the analysed phenomenon in a relatively short period of time.
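The "averaging expert predictions" step can be sketched as a linear opinion pool. This is a minimal illustration, assuming Python/NumPy; the weights are an assumption (e.g. based on each expert's track record), not something the article prescribes:

```python
import numpy as np

def pooled_probability(expert_probs, weights=None):
    """Linear opinion pool: a weighted average of expert probability
    estimates that an incident will occur. With no weights given,
    all experts count equally."""
    p = np.asarray(expert_probs, dtype=float)
    w = np.ones_like(p) if weights is None else np.asarray(weights, dtype=float)
    return float(np.sum(w * p) / np.sum(w))
```

For example, experts judging an attack 20% and 40% likely pool to 30% with equal weights, and to 35% if the second expert is trusted three times as much.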
EN
The paper deals with a first-order autoregression model whose parameters are estimated with exponential forgetting, a method known and well established in mathematical system theory. The use of exponential forgetting in econometrics, however, is not standard. Under the assumption of slowly time-varying model parameters and model stationarity, this estimation method can lead to a significant improvement in prediction quality. In this paper, we describe the Bayesian approach to such modelling and parameter estimation. The use of the method is demonstrated on a one-step-ahead prediction of the EUR-USD exchange rate.
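The point-estimate core of this approach is recursive least squares with a forgetting factor: old data are geometrically down-weighted so the estimate can track slowly drifting parameters. A scalar AR(1) sketch, assuming Python/NumPy (the illustrative values of λ and the priors are ours; the paper develops the full Bayesian treatment):

```python
import numpy as np

def ar1_forgetting(y, lam=0.995, theta0=0.0, P0=100.0):
    """Recursive least-squares estimate of the AR(1) coefficient with
    exponential forgetting factor lam (0 < lam <= 1); lam = 1 recovers
    ordinary recursive least squares. Returns one-step-ahead
    predictions and the final coefficient estimate."""
    theta, P = theta0, P0
    preds = np.zeros(len(y))
    for t in range(1, len(y)):
        phi = y[t - 1]                        # regressor: previous value
        preds[t] = theta * phi                # prediction before the update
        k = P*phi / (lam + phi*P*phi)         # gain
        theta += k * (y[t] - theta*phi)       # coefficient update
        P = (P - k*phi*P) / lam               # covariance update with forgetting
    return preds, theta
```

The effective memory is roughly 1/(1−λ) observations, which is the knob trading tracking speed against estimation noise.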
EN
A proper estimation of the time needed for user stories is a crucial task both for the IT team and for the customer, especially in agile projects. Although agile practices offer a lot of flexibility and promote a culture of continuous change, there are always clearly defined, timeboxed periods in which an IT company has to commit to delivering working software. Estimating the implementation time of user stories provides clarity and gives management the opportunity to control the project, yet at the same time it can increase pressure on software developers. Thus, incorrectly estimated user stories may lead to quality problems including system malfunction, technical debt, and general user experience issues. The paper describes user story characteristics, reasons for user story estimation inaccuracy, and a model of their potential impact on post-release defects in large IT software ventures, all derived from interviews conducted with practitioners at the Capgemini software development company.
EN
Selected econometric methods of modelling the world's population size based on historical data are presented in the paper. Periodic variables were used in the proposed models, and a logistic-type function was used in the modelling. The purpose of the paper was to obtain a model describing the world's population with the lowest possible maximal relative error and the longest possible period of durability. In this work, 13,244 models from three families of models were analysed; only a small part of such a large number of models satisfies the stability conditions. The method of modelling the world's population size makes it possible to obtain models with maximal relative errors not exceeding 0.5%. Selected models were used to predict the world's population up to 2050, and the obtained results were compared with data published by the Organisation for Economic Co-operation and Development.
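The logistic-type core of such models, and the maximal-relative-error criterion, can be sketched as follows. This is an illustration on synthetic data, assuming Python/SciPy; the paper's models additionally include periodic terms, which are omitted here:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    # logistic-type growth curve: carrying capacity K, growth rate r, midpoint t0
    return K / (1.0 + np.exp(-r * (t - t0)))

# synthetic "population" series in billions (parameters are illustrative)
t = np.arange(1950, 2021)
true = logistic(t, 11.0, 0.035, 1990.0)
rng = np.random.default_rng(0)
obs = true * (1 + 0.002 * rng.normal(size=t.size))

popt, _ = curve_fit(logistic, t, obs, p0=[10.0, 0.03, 1985.0])
# the paper's selection criterion: the maximal relative error over the sample
max_rel_err = np.max(np.abs(logistic(t, *popt) - obs) / obs)
```

A fitted model that passes the error criterion can then be extrapolated, e.g. to 2050, via `logistic(2050, *popt)`.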
EN
The calculation of minimum regulatory capital for operational risk is a challenging task for statisticians working in finance. The aim of this paper is to compare two alternative approaches that are widely used in banking practice. Thorough attention is paid to the Loss Distribution Approach (LDA) and the Single Loss Approximation (SLA). Their applications in the operational risk industry are examined and their outputs, based on simulated samples, are compared. Particular attention is paid to the convergence of the two outputs given the characteristics of the underlying data.
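The two approaches can be contrasted in a few lines, assuming Python/SciPy and an illustrative compound Poisson-lognormal loss model (the frequency and severity parameters below are ours, not calibrated to any bank's data): the LDA computes the capital quantile of the simulated aggregate loss, while the SLA approximates it from a single severity quantile plus a mean correction.

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(0)
lam, alpha = 25.0, 0.999                      # annual loss frequency, confidence level
sev = lognorm(s=2.0, scale=np.exp(10.0))      # heavy-tailed severity distribution

# LDA: Monte Carlo quantile of the compound Poisson-lognormal aggregate loss
n_sim = 200_000
counts = rng.poisson(lam, size=n_sim)
losses = sev.rvs(size=counts.sum(), random_state=rng)
year = np.repeat(np.arange(n_sim), counts)
agg = np.bincount(year, weights=losses, minlength=n_sim)
var_lda = np.quantile(agg, alpha)

# SLA: mean-corrected single-loss approximation
var_sla = sev.ppf(1 - (1 - alpha)/lam) + (lam - 1) * sev.mean()
```

For subexponential severities the two numbers converge at high confidence levels, which is exactly the convergence behaviour the paper examines.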
EN
The main goal of this research was to investigate whether people exhibit algorithm aversion (a tendency to avoid using an imperfect algorithm even if it outperforms human judgment) in the case of estimating students' percentile scores on a standardized math test. We also explored the relationships between numeracy and algorithm aversion and tested two interventions aimed at reducing algorithm aversion. In two studies, we asked participants to estimate the percentiles of 46 real 15-year-old Polish students on a standardized math test. Participants were offered the opportunity to compare their estimates with the forecasts of an algorithm: a statistical model that predicted real percentile scores based on five explanatory variables (gender, repeating a class, the number of pages read before the exam, the frequency of playing online games, and socioeconomic status). Across the two studies, we demonstrated that even though the predictions of the statistical model were closer to the students' percentile scores, participants were less likely to rely on them in making forecasts. We also found that higher statistical numeracy was related to a greater reluctance to use the algorithm. In Study 2, we introduced two interventions to reduce algorithm aversion: depending on the experimental condition, participants either received feedback on the statistical model's predictions or were provided with a detailed description of the statistical model. We found that people, especially those with higher statistical numeracy, avoided using the imperfect algorithm even though it outperformed human judgment. Interestingly, a simple intervention that explained how the statistical model works led to better performance in the estimation task.
EN
The problem of estimating the proportion of objects with a particular attribute in a finite population is considered. The paper shows an example of the application of fraction estimation using a newly proposed sample allocation in a population divided into two strata. The variance of the proportion estimator based on the proposed allocation is compared with the variance of the standard one. The paper presents an application of the sample allocation described in Sieradzki & Zieliński [2017].
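The kind of variance comparison described above can be sketched as follows, assuming Python/NumPy. The paper compares its own allocation (Sieradzki & Zieliński 2017) against a standard one; here, as an illustration only, proportional allocation is compared with the classical Neyman-type allocation for a proportion:

```python
import numpy as np

def var_stratified_proportion(N_h, n_h, P_h):
    """Variance of the stratified estimator of a population proportion
    (with-replacement approximation): sum_h W_h**2 * P_h*(1-P_h)/n_h,
    where W_h = N_h / N."""
    N_h, n_h, P_h = map(np.asarray, (N_h, n_h, P_h))
    W = N_h / N_h.sum()
    return float(np.sum(W**2 * P_h*(1 - P_h) / n_h))

# two strata; allocate a total sample of n = 200 (fractional n_h kept
# for illustration; in practice they would be rounded)
N_h = np.array([6000.0, 4000.0])
P_h = np.array([0.1, 0.5])
n = 200
n_prop = n * N_h / N_h.sum()                     # proportional allocation
S_h = np.sqrt(P_h * (1 - P_h))
n_neyman = n * (N_h*S_h) / (N_h*S_h).sum()       # Neyman-type allocation
v_prop = var_stratified_proportion(N_h, n_prop, P_h)
v_neyman = var_stratified_proportion(N_h, n_neyman, P_h)
```

By construction the Neyman-type allocation never does worse than the proportional one, which is the benchmark any newly proposed allocation must be judged against.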
PL
Classical statistical inference theory provides methods for estimating unknown distribution parameters, estimating the form of the function defining that distribution, and verifying hypotheses on the basis of simple samples, i.e. samples in which the observations are stochastically independent and share the same probability distribution. This article deals with problems connected with estimation, in particular methods of adapting the central limit theorem to non-simple samples, and with the verification of hypotheses on the consistency of distributions for non-simple samples by means of the χ2 test.
EN
The classical theory of statistical inference provides methods for the estimation of unknown population parameters, density estimation and statistical hypothesis testing based on simple random sampling, that is, a sampling scheme in which all individuals are stochastically independent and identically distributed. Problems with estimation, especially with the adaptation of the central limit theorem to non-simple sampling, and the verification of goodness-of-fit hypotheses with the χ2 test, are the subject of this article.
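Under complex sampling the ordinary Pearson χ2 statistic is no longer chi-square distributed, because the observations are dependent. A standard remedy, sketched below under the assumption that an average design effect is available (a simplified first-order Rao-Scott-style correction, shown as an illustration; the article's exact adjustment may differ):

```python
import numpy as np
from scipy.stats import chi2

def corrected_chi2_gof(observed, expected_p, deff):
    """Pearson goodness-of-fit statistic with a first-order design-effect
    correction: under complex sampling X^2 is divided by an (estimated)
    average design effect `deff` before being referred to the chi-square
    distribution. Returns (adjusted statistic, p-value)."""
    observed = np.asarray(observed, dtype=float)
    n = observed.sum()
    expected = n * np.asarray(expected_p, dtype=float)
    x2 = np.sum((observed - expected)**2 / expected)
    x2_adj = x2 / deff                      # deff > 1 shrinks the statistic
    df = len(observed) - 1
    return x2_adj, chi2.sf(x2_adj, df)
```

Without the correction (deff = 1), clustering typically inflates X^2 and the test rejects too often.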
PL
The assessment of the air adversary, carried out continuously by the air force at the planning and execution stages, translates into the decisions made. In the article, the author considers the usefulness of geopolitics as a perspective for conducting such an assessment. Pointing to the informational value of geopolitical thought, he demonstrates its usefulness as a source of a number of important data, not only about the operational environment but also about the actors present in it and their interests. The most important conclusions from these considerations on applying the geopolitical perspective to the assessment of the air adversary are time savings and improved situational awareness simultaneously at all levels of air force command.
EN
Estimation of the adversary air force is made continuously during the planning process and the execution phase and influences the decisions made. In the article, the author considers the usefulness of geopolitics as a perspective for conducting such an estimation. Pointing to the informational value of geopolitical thought, he shows its usefulness as a source of much important data, not only about the environment but also about potential political actors and their goals. The most important conclusion from these considerations is that using the geopolitical perspective to estimate the adversary air force provides an opportunity to save time and to improve situational awareness at all levels of the air force command structure.
EN
The introduction of digital technologies in the work of logistics companies is relevant because it forms the basis for the automation of firms' business processes. The main problems of the logistics activities of enterprises include outdated standards and technologies for managing logistics processes in China; hence there is a need to introduce the Internet of Things in order to improve the quality of enterprises' services. A comparative analysis of the use of the Internet of Things in the logistics operations of Chinese companies is carried out. Positive and negative factors influencing the digitalization of the processes of Chinese logistics companies are identified, and the organizational and economic support for the interaction of stakeholders with Chinese logistics companies using the Internet of Things is improved. The introduction of the Internet of Things is necessary for forming progressive development trajectories for China's logistics companies.
EN
The study presents a method for estimating the financial and economic impact of personal remittances on the economic development of migrants' countries of origin. The relevance of the study stems from the high dependence of such countries on migrants' remittances. The developed method includes estimating the share of GDP, internal consumption, savings, imports and net cash flow in the balance of payments attributable to remittances. The obtained estimates provide evidence of the significant positive impact of remittances on receiving countries, as their GDP, depending on the country, may decrease by as much as 5% when remittances drop by 10%.
EN
The paper presents a surface model of the accounting costs of electricity transmission over the 220 kV and 400 kV networks. In the further stages of the studies, taking into account the results of structural analysis of marginal costs variation, an estimation technique, ordinary (block) kriging, was used to build the (2D) model. The model describes the area and time variation of marginal costs, and has great potential in the electrical power sector, especially in the context of the development of market mechanisms in electric energy trading. The model has made it possible to observe the existing tendencies in cost (directional and time) variation, which is useful for setting electricity transmission tariffs stimulating properly the behaviour of the electric power network users – the electricity suppliers and consumers. This way of stimulating takes into account the electric power network's specificity and operating conditions, including the network losses caused by electricity transmission, the transmission constraints and the broadly understood operational safety of the electric power system. The model of the area variation of electricity transmission marginal costs is also useful in electric power system development planning procedures.