
Results found: 2


Search results

Search: in the keywords: complex measure
EN
When faced with missing data in a statistical survey or administrative sources, imputation is frequently used to fill the gaps and reduce much of the bias that these gaps can introduce into aggregated estimates. This paper presents research on the efficiency of model-based imputation in business statistics, where the explanatory variable is a complex measure constructed by taxonomic methods. The proposed approach selects, from a set of candidate explanatory variables for the imputed information, those that fit best in terms of variation and correlation, and then replaces them with a single complex measure (a meta-feature) that exploits their whole informational potential. This meta-feature is constructed as a function of the median distance of the given objects from the benchmark of development. A simulation study and an empirical study were used to verify the efficiency of the proposed approach. The paper also discusses five related techniques: ratio imputation, regression imputation, regression imputation with iteration, predictive mean matching and the propensity score method. The second study presented in the paper involved simulating missing data in IT business data from the California State University in Los Angeles, USA. The results show that models with a strong dependence on functional-form assumptions can be improved by summarizing the predictor variables with a complex measure rather than using the variables themselves (raw or normalized).
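The abstract does not give the exact construction; purely as an illustrative sketch, a Hellwig-style synthetic measure of this kind could be computed as below, assuming standardized stimulant variables, a benchmark taken as the per-variable maximum, and a median-based scaling constant (the function name `complex_measure` and the factor 2 are assumptions, not the authors' formula).

```python
import numpy as np

def complex_measure(X):
    """Illustrative Hellwig-style synthetic (complex) measure, assumed form.

    X : (n_objects, n_variables) array of stimulants (higher = better);
        destimulant variables should be sign-flipped beforehand.
    """
    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each variable
    benchmark = Z.max(axis=0)                  # "benchmark of development"
    d = np.linalg.norm(Z - benchmark, axis=1)  # distance of each object from it
    d0 = 2.0 * np.median(d)                    # median-based scale (assumption)
    return 1.0 - d / d0                        # near 1 = close to the benchmark
```

Such a meta-feature could then serve as the single explanatory variable in, e.g., ratio or regression imputation of the missing values.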
EN
The paper proposes an original method for assessing the information loss that results from applying Statistical Disclosure Control (SDC) when output data are prepared for publication and release to interested users. SDC tools protect sensitive data from disclosure, both direct and indirect. The article focuses on pseudonymised microdata, i.e. individual records stripped of basic identifiers, used for scientific purposes. Such control usually amounts to suppressing, swapping or perturbing the original data, but these interventions entail a loss of some information. Choosing the optimal SDC method therefore requires minimizing this loss (together with the risk of disclosure of protected data). Traditionally used loss measures are often sensitive to differences in the scale and range of variable values and cannot be applied to ordinal data; many of them also take relationships between variables only weakly into account, which can matter in various analyses. Hence, this paper presents a proposal (rooted in the work of Zdzisław Hellwig) to use a normalized, easily interpretable complex measure (also called a synthetic indicator) for related features, based on a benchmark and an anti-benchmark of development, to assess the information loss caused by selected SDC techniques, and studies its practical utility. The measure is constructed from the distances between the original data and the data after application of the SDC, taking measurement scales into account.
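The abstract outlines the construction only in broad terms; the following is a minimal sketch of one way such a normalized measure could look, assuming record-wise distances between the original and SDC-protected data, rescaled per variable so that 0 corresponds to the benchmark (no distortion) and 1 to the anti-benchmark (maximal distortion). The function name and the range-based scaling are assumptions, not the paper's definition.

```python
import numpy as np

def information_loss(X_orig, X_sdc):
    """Illustrative normalized information-loss measure, assumed form.

    X_orig, X_sdc : (n_records, n_variables) arrays of the original and
    the SDC-protected microdata; ordinal variables coded as numeric ranks.
    """
    rng = X_orig.max(axis=0) - X_orig.min(axis=0)  # per-variable range
    rng = np.where(rng == 0, 1.0, rng)             # guard constant columns
    dist = np.abs(X_orig - X_sdc) / rng            # scale-free distortion
    d = dist.mean(axis=1)       # per-record distortion: 0 = benchmark
                                # (no loss), 1 = anti-benchmark (maximal loss)
    return float(d.mean())      # overall normalized information loss
```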