
Results found: 13


Search results

Search: in the keywords: scientific explanation
PL
The statistical relevance (SR) model of scientific explanation was proposed by Wesley Salmon in 1971 as an interesting alternative to the already existing models introduced by Hempel and supported by many other philosophers of science. The most important difference between the nomological models and the statistical relevance model is that the latter tries not to use the very dubious term 'law of nature'. The first part of the paper gives an overview of Salmon's model and of the main arguments raised against it by various authors. In the main part of the text, all of the arguments meant to undermine the model are presented on an example taken from economic practice: the so-called 'statistical analysis of the market', which is very popular among economists and especially among valuation experts. The main objective of this analysis is to discover all of the factors which influence the market value of a particular product, in other words to explain the market value of the product. The example was deliberately taken from a social science (economics), as one of the theses of the paper is that the SR model can work quite well in physics or chemistry, but it is dubious whether we can really deploy it in sciences which try to describe and explain the various phenomena of human activity and behavior. The final conclusions are as follows. The practical deployment of the model in the social sciences is problematic: the model is too idealistic and therefore does not work properly. Contrary to its initial presumption, the model does not avoid the problem of laws of nature; although a law of nature is not a required element of the explanans, it comes back at the stage of proposing the initial candidates for the relevant variables, since the hypotheses about which variables can and cannot be relevant to the explained phenomenon are constructed mostly according to intuitively understood causal relationships founded on laws of nature. The important postulate of a homogeneous partition is in practice unachievable, which means that the explanation carries an enormous risk of error; the risk is quantifiable and can be estimated, but the estimation depends upon the experience and intuition of the researcher.
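To make the core of the SR model concrete, the sketch below implements its relevance test in Python: a factor C is statistically relevant to an outcome B within a reference class A when P(B | A and C) differs from P(B | A). This is only a minimal illustration of the idea discussed in the abstract; the flat data, the factor names and the choice of outcome are hypothetical placeholders rather than material from the paper, and the further SR requirement of a homogeneous partition (that no statistically relevant subdivision remains) is precisely the step the paper argues is unachievable in practice.

```python
# Minimal sketch of Salmon's statistical-relevance test (SR model), applied to
# a market-valuation toy case. Data, factor names and outcome are hypothetical.

def is_statistically_relevant(records, factor, outcome, eps=1e-9):
    """C is statistically relevant to B within reference class A
    iff P(B | A and C) differs from P(B | A)."""
    if not records:
        return False
    p_b = sum(bool(r[outcome]) for r in records) / len(records)

    with_c = [r for r in records if r[factor]]
    if not with_c:                      # factor never occurs: relevance undefined
        return False
    p_b_given_c = sum(bool(r[outcome]) for r in with_c) / len(with_c)
    return abs(p_b_given_c - p_b) > eps

# Hypothetical reference class A: flats on one local market.
# Outcome B: "sold above the median price"; candidate factors C below.
flats = [
    {"balcony": True,  "renovated": True,  "above_median": True},
    {"balcony": True,  "renovated": False, "above_median": True},
    {"balcony": False, "renovated": True,  "above_median": False},
    {"balcony": False, "renovated": False, "above_median": False},
]

for factor in ("balcony", "renovated"):
    print(factor, is_statistically_relevant(flats, factor, "above_median"))
```

On this toy data the 'balcony' factor shifts the conditional frequency of the outcome while 'renovated' does not; a real market analysis would, as the paper stresses, also have to justify the initial list of candidate factors and the homogeneity of the resulting partition.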
EN
Book review: Daniel C. Dennett, From Bacteria to Bach and Back: The Evolution of Minds, Penguin Random House, UK 2017, pp. 467.
EN
The author describes his intellectual development and academic pursuits, starting from his undergraduate studies at Oxford University in the mid-1950s up to the present day, from the perspective of his attempts to resolve the conflict between a materialistically oriented scientific worldview and the worldview of traditional Christianity. In time, he came to recognize the conflict as a problem of distinguishing between levels of explanation or points of terminating an explanation. To deal with this problem adequately, he adopted as his own the program of natural theology laid down by St. Thomas Aquinas in his Summa Theologiae, where Aquinas provided good arguments in favor of Christian doctrine, taking as his starting point the most general phenomena of experience and using the best secular knowledge of his day. Thus Swinburne's program of natural theology consisted in applying the criteria used in modern natural science and historical inquiry for the probable truth of a suggested explanation, analyzed with the careful rigor of modern philosophy, to show the meaningfulness and probable truth of Christian theology. Scientific explanation explains phenomena in terms of prior states of affairs and natural laws, whereas personal explanation explains phenomena in terms of the powers and purposes of agents. Christian metaphysics accounts for the operation of scientific explanation by explaining why there are states of affairs at all and why the most fundamental natural laws have the character they do, in terms of the power and purposes of God, and in particular his purpose that humans should have a free choice of the kind of persons they are to be. Swinburne extends this model of explanation to show that our historical evidence about the life of Christ makes it very probable that Christ was (and so is) God Incarnate who rose from the dead.
EN
The ontic conception of explanation is predicated on the proposition that “explanation is a relation between real objects in the world” and hence, according to this approach, scientific explanation cannot take place absent such a premise. Although critics have emphasized several drawbacks of the ontic conception, such as its inability to address so-called “abstract explanations”, the debate is not settled, and the ontic view can claim to capture cases of explanation that are non-abstract, such as causal relations between events. However, once the distinction between abstract and non-abstract explanations is eliminated, ontic and epistemic proposals can no longer contend to capture different cases of explanation: either all cases are captured by the ontic view or all are captured by the epistemic view. On closer inspection, it turns out that the ontic view deals with events that fall outside the scientists’ scope of observation and does not accommodate common instances of explanation, such as explanations from false propositions, and hence it cannot establish itself as the dominant philosophical stance with respect to explanation. By contrast, the epistemic conception accounts for almost all episodes of explanation and can be described as a relation between representations, whereby the explanans transmits information to the explanandum; depending on context, this information can take the form of any of the available theories of explanation (law-like, unificatory, causal and non-causal). The range of application of the ontic view is thus severely restricted to trivial cases of explanation that come through direct observation of the events involved, and explanation is mostly to be conceived epistemically.
EN
There are at least two deep and related debates about explanation: about its nature and about its norms. The aim of this special issue of Philosophical Problems in Science/Zagadnienia Filozoficzne w Nauce (ZFN) is to survey whether a consensus is at hand in these debates and to help settle what can be settled. The overarching foci are twofold: (i) the nature of scientific explanation, with special attention to the debate between the ontic and epistemic conceptions of explanation, and (ii) the norms of scientific explanation, with special attention to so-called ‘ontic’ (or better, ‘alethic’) norms like truth and referential success and to epistemic norms like intelligibility and idealized understanding. The issue called for advocates of various conceptions to articulate the current state of these debates, and researchers and scholars from around the globe (including Poland, Canada, Korea, The Netherlands, the United States, Greece, Austria, and Belgium) contributed. The special issue also attempts to provide an opening for new work on the norms of explanation, such as truth or model-based accuracy, information compression, abstraction, and generalization.
PL
The second part of the text deals with the anti-naturalistic argument of F.A. Hayek. To present it comprehensively, however, his theory of mind has to be outlined first. According to Hayek, the way in which we perceive the world is entirely grounded in the biological construction of our neural order and thus, from this perspective, he seems to be a naturalist. He excludes any non-natural properties of our cognition, such as transcendental free will. However, a closer look at the functioning of our biological apparatus of perception reveals certain inherent, internal restrictions. First of all, we notice that the neural order (the biological construction of neurons) is in fact a very complex apparatus for the classification and discrimination of sensory impulses. Impulses may come from reality external to the neural order as well as from the inside. This apparatus of classification and discrimination is not stable but permanently dynamic: an unceasing onslaught of sensations, and the system’s responses to them, creates new classification rules (neural connections) and demolishes those which have been inactive for a longer time. The system of those rules existing at a given time forms a model of reality which imperfectly corresponds to the existing, transcendent reality. The final argument for anti-naturalism elucidated in the text is Hayek’s idea of what explanation is and where its limits lie. This idea can be reduced to the following quotation: “…any apparatus of classification must possess a structure of a higher degree of complexity than is possessed by an object which it classifies.” In other words: if our cognitive system is an “apparatus of classification”, if explanation means modeling, and if a complete explanation requires an explanation of the apparatus itself, then a complete explanation is not possible at all, since the apparatus, which has a certain level of complexity, cannot upgrade this level in order to explain itself. Hayek’s reasoning is generally endorsed; it is emphasized, however, that it rests on very strong assumptions, which are identified and named at the end of the text.
PL
The purpose of the paper is to challenge one of the most important assumptions of the neo-positivists, namely the unity of science. The idea that all of the sciences, both natural and social, should have the same structure and should deploy similar methods is, after Grobler, called naturalism. I try to argue for anti-naturalism. Economics seems to be an interesting example: it has not, however, achieved success comparable to that of the natural sciences. Certain naturalistic explanations for this lack of success are reviewed and criticized in the paper. Firstly, complexity: at the very beginning of this naturalistic argument one encounters the problem of definition, since up to nine different notions of complexity have been proposed and only a few of them are practically quantifiable. Secondly, mathematics: in the natural sciences we use mathematical theories in order to capture the regularities in the investigated phenomena and to include them in the corresponding equations. However, even if we do not have a perfectly corresponding mathematical model, the regularities themselves can still be observed: wherever we lack a good theory expressed in exact mathematical equations, we should at least be able to judge the existence or non-existence of certain regularities on the basis of linear (statistical) or non-linear methods. Those methods, some of them extremely sophisticated, are being extensively applied in economics and in econometrics (the so-called quantitative methods), yet the results are disappointing. The anti-naturalistic argumentation of Grobler is dealt with separately. Grobler names three anti-naturalistic arguments: complexity (as mentioned above), the free will of humans (which the author did not find interesting enough) and, finally, the reasoning called ‘inherent two-way interdependence’. Grobler maintains that we are able to work out a meta-theory which would include both predictions and the possible impact of those predictions on the theory’s object. This proposal is rejected in the paper.
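As an illustration of the kind of linear (statistical) regularity check the abstract alludes to, the sketch below fits a least-squares trend to a series and reports the share of variance it explains. The synthetic series, the NumPy-based implementation and the 0.5 cut-off are assumptions made for this example only and are not taken from the paper.

```python
# Minimal sketch: judge the presence of a linear regularity in a series by the
# R^2 of a least-squares trend. Series and threshold are illustrative only.
import numpy as np

def linear_regularity_score(y):
    """Return R^2 of a least-squares linear trend fitted to the series y."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, y, 1)
    fitted = slope * t + intercept
    ss_res = np.sum((y - fitted) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
trended = 0.5 * np.arange(100) + rng.normal(0, 1, 100)   # clear regularity
noise = rng.normal(0, 1, 100)                            # no regularity

for name, series in [("trended", trended), ("noise", noise)]:
    r2 = linear_regularity_score(series)
    print(name, round(r2, 3), "regular" if r2 > 0.5 else "no linear regularity")
```

A non-linear variant would replace the fitted line with, for instance, a polynomial or kernel fit, but the underlying logic of judging "regularity" by explained variance stays the same.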
EN
Science, in fulfilling its three basic functions (descriptive, explanatory and predictive), aims to provide scientific knowledge that can be verified. Such knowledge helps to understand and explain the empirical phenomena under investigation and the characteristics or behavior of the things, processes and ideas dealt with in a specific field. This cognition, understanding and explanation is possible only when it is carried out in accordance with the methodological procedures accepted in science. This condition can be met only by researchers and scientists who have reliable knowledge and the skills to use it objectively in practice. This article attempts to introduce and characterize the essence of some models of scientific understanding and explanation, pointing to their usefulness and the limitations of their use in the social sciences.
EN
Distinctively mathematical explanations (DMEs) explain natural phenomena primarily by appeal to mathematical facts. One important question is whether there can be an ontic account of DME. An ontic account of DME would treat the explananda and explanantia of DMEs as ontic items (ontic objects, properties, structures, etc.) and the explanatory relation between them as an ontic relation (e.g., Pincock, 2015; Povich, 2021). Here I present a conventionalist account of DME, defend it against objections, and argue that it should be considered ontic. Notably, if indeed it is ontic, the conventionalist account seems to avoid a convincing objection to other ontic accounts (Kuorikoski, 2021).
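To fix ideas, a standard illustration from the DME literature (for example in Lange's discussions, and not taken from this abstract) explains why 23 strawberries cannot be divided evenly among 3 children by appeal to an arithmetical fact alone, with no causal detail of the attempted distribution doing explanatory work:

```latex
% Standard illustrative DME case (assumed example, not from the abstract):
% 23 items admit no partition into 3 equal whole-number shares.
\[
  3 \nmid 23
  \quad\text{i.e.}\quad
  \neg\,\exists k \in \mathbb{N}\; (23 = 3k)
\]
```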
PL
In this review paper, I focus on the biopsychological foundations of geometric cognition. Starting from Kant’s views on mathematics, I attempt to show that contemporary cognitive scientists, like the famous philosopher, recognize the mutual relationships between visuospatial processing and geometric cognition. What I defend is the claim that Tinbergen’s explanatory questions are the most fruitful tool for explaining our “hardwired,” and thus shared with other animals, Euclidean intuitions, which manifest themselves in spatial navigation and shape recognition. I claim, however, that these “hardwired” intuitions cannot capture full-blooded Euclidean geometry, which demands practice with cultural artifacts on various time-scales.
PL
The main purpose of this paper is to investigate and reconstruct the philosophical thought in Marian Smoluchowski’s papers (in his publications as well as in unknown manuscripts). He was an outstanding Polish physicist who lived at the turn of the 19th and 20th centuries. Smoluchowski was particularly interested in theoretical physics, and his achievements in this discipline, some of them very significant, have caused him to be perceived mainly as a physicist. His work on the theory of fluctuations and the kinetic theory of gases, especially on the theory of Brownian motion, is well known to physicists. My attention in this paper is focused on the metascientific problems which dominated his philosophical reflections. His analyses in the field of the philosophy of science (the concepts of hypothesis and theory) ought to be perceived in the light of physics: philosophical reflection remained at the margins of the science he practiced, and physics was always the background to his deliberations. An important limit to these deliberations is set by concentrating on issues typical of the philosophy of science. In Smoluchowski’s case, however, it is difficult to speak of systematic philosophical reflection, and it is difficult to classify his thought within the framework of any given philosophical trend.
EN
The aim of the article is to present the strategy for the rehabilitation of teleology in the conceptions of Robert Spaemann, Reinhard Löw and Hans Jonas. That strategy takes into account a few aspects: (1) the analysis of the general conception of explanation, in which the character of the questions activating the process of explanation is taken into consideration, (2) the analysis of the conception of teleological and causal explanations (according to the Hempel-Oppenheim model), (3) the determination of the pragmatic background of the controversies concerning teleology, (4) the determination of the domain and aims of teleological and causal interpretations.
EN
The text deals with one of the challenges of linguistics, which is to effectively combine description and explanation. It is necessary that linguistic theories are not only capable of adequately describing their object of study within their framework, but that they also have suitable explanatory power. Linguistics centred on explaining the why of the system is called here ‘explanatory’ or ‘non-autonomous’, in contrast to ‘descriptive’ or ‘autonomous’ linguistics, which is focused on the description of the system; the distinction is based on the difference in the objects of study, the goals, and the descriptive and explanatory possibilities of the theories. From the point of view presented here, a comprehensive study of language has three main components: a general theory of what language is; a resulting theory and description, which is a function of this theory, of how language is organised, functions and has evolved in the human brain; and an explanation of the properties of language so found. The explanatory value of a general linguistic theory is a function of various elements, among others the number of primitive elements the theory adopts and the effectiveness of Ockham’s razor, the principle of simplicity. It is also a function of the quality of those elements, which can be drawn not only from within the system but also from outside it, becoming in that case logically prior to the object under study. In science, and in linguistics, one naturally needs two types of approach, two types of linguistics, descriptive/autonomous and explanatory/non-autonomous: one must first describe reality in order to explain it. But it is also certain that, since the aim of science is to explain, in order to reach that higher level of scientificity above pure description, this aim must be realized in different linguistic theories within different research programs, uniting descriptivist and explanatory approaches.