
Results found: 4


Search results

Search: in the keywords: existential risk
1

The Problem with Longtermism

Ethics in Progress | 2023 | vol. 14 | issue 2 | 130-152
EN
Moral circle expansion has been occurring faster than ever before in the last forty years, with moral agency fully extended to all humans regardless of their ethnicity or geographical location, as well as to animals, plants, ecosystems and even artificial intelligence. This process has made even more headway in recent years with the establishment of moral obligations towards future generations. Responsible for this development is the moral theory – and its associated movement – of longtermism, the bible of which is What We Owe the Future (London: Oneworld, 2022) by William MacAskill, whose earlier book Doing Good Better (London: Guardian Faber, 2015) set the cornerstone of the effective altruist movement of which longtermism forms a part. With its novelty comes great excitement, but longtermism and the arguments on its behalf are not yet well thought out, suffering from various problems and entailing various uncomfortable positions on population axiology and the philosophy of history. This essay advances a number of novel criticisms of longtermism; its aim is to identify further avenues for research required by longtermists, and to establish a standard for the future development of the movement if it is ever to be widely considered sound. Some of the issues raised here concern the arguments for the moral value of the future; the quantification of that value with the longtermist ethical calculus – that is, the conjunction of expected value theory with the ‘significance, persistence, contingency’ (SPC) framework; the moral value of making happy people; and our ability to affect the future and the fragility of history. Perhaps the most significant finding of this study is that longtermism currently constitutes a short-term view of the long-term future, and that a properly long-term view reduces to absurdity.
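The ‘longtermist ethical calculus’ referred to above is not spelled out in this listing; a minimal worked rendering, assuming the standard reading of MacAskill’s SPC framework and using illustrative notation rather than the article’s own, is:

\[
EV(a) \;=\; \sum_{i} p_i \cdot S_i \cdot P_i \cdot C_i
\]

where \(p_i\) is the probability that action \(a\) brings about long-term outcome \(i\), \(S_i\) its significance (the average value added while the outcome obtains), \(P_i\) its persistence (how long the outcome lasts), and \(C_i\) its contingency (the degree to which the outcome would not have occurred without the action). On this reading, the abstract’s worry about “our ability to affect the future and the fragility of history” concerns how reliably \(p_i\), \(P_i\) and \(C_i\) can be estimated over very long horizons.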
EN
Methods of improving the state and rate of progress within the domain of philosophy using collective intelligence systems are considered. Through mASI systems, superintelligence, debiasing, and humanity’s current sum of knowledge may be applied to this domain in novel ways. Such systems may also serve to strongly facilitate new forms and degrees of cooperation and understanding between different philosophies and cultures. Integrating these philosophies directly into their own machine intelligence seeds as cornerstones could further serve to reduce existential risk while improving both ethical quality and performance.
EN
The article undertakes the problem of AGI (Artificial General Intelligence) research with reference to Nick Bostrom’s concept of existential risk and Ingmar Persson’s and Julian Savulescu’s proposal of biomedical moral enhancement, from a pedagogical-anthropological perspective. A major focus will be placed on the absence of pedagogical paradigms within the techno-progressive discourse, which results in a very reduced idea of education and human development. In order to prevent future existential risks, the techno-progressive discourse should at least to some extent refer to the qualitative approaches of the humanities. Pedagogical anthropology in particular reflects on the presupposed and therefore frequently unarticulated images of man within the various scientific disciplines, and should hence be recognized as a challenge to the solely quantitative perspective of AGI research and transhumanism. I will argue that instead of forcing man to adapt physically to artificial devices, as the techno-progressive discourses suggest, the most efficient way of avoiding future existential risks concerning the relationship between mankind and highly advanced technology would be, as John Gray Cox proposes, making AGIs adopt crucial human values, which would integrate their activity into the social interactions of the lifeworld (Lebenswelt).