

Results found: 1


Search results

Search in the keywords: algorithmic bias

This article explores how contemporary text-to-image (T2I) systems routinely minimise or “correct” aquiline noses in AI-generated images, a phenomenon the authors term “non-consensual rhinoplasty”. Despite explicit prompts for pronounced nasal features, many models systematically smooth out dorsal humps, with 92% of generated images displaying a non-convex profile. Situating these findings in a broader cultural and historical context, the article examines how entrenched beauty standards and physiognomic biases shape both AI training data and societal perceptions. It highlights how content moderation, algorithmic “beautification,” and dataset limitations further erase natural variation. To address this bias, the article proposes solutions such as community-led awareness campaigns, petitions for greater transparency in AI development, and technical refinements like prompt sliders for nasal prominence. By outlining these strategies, it advocates for AI innovation that prioritises cultural sensitivity and equitable representation.
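The 92% figure implies a prompt-and-tally audit of generated portraits. Purely as illustration, the following is a minimal sketch of how such an audit could be scripted, assuming the Hugging Face diffusers library and a placeholder profile check; the article does not describe its actual measurement pipeline, so the model choice, prompt, and is_convex_profile helper below are all hypothetical.

```python
# Hypothetical audit sketch (not the authors' code): generate images from a
# nose-specific prompt with a public text-to-image model and count how many
# retain a convex (aquiline) nasal profile.

import torch
from diffusers import StableDiffusionPipeline


def is_convex_profile(image) -> bool:
    """Placeholder: a real audit would use facial-landmark detection or human
    annotation to decide whether the nasal bridge shows a dorsal hump."""
    raise NotImplementedError("Substitute a landmark-based or human-rated check.")


def audit_nasal_bias(prompt: str, n_samples: int = 50) -> float:
    """Return the fraction of generated images judged to have a convex profile."""
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # example checkpoint, not from the article
        torch_dtype=torch.float16,
    ).to("cuda")

    convex = 0
    for _ in range(n_samples):
        image = pipe(prompt).images[0]  # one generated sample per iteration
        if is_convex_profile(image):
            convex += 1
    return convex / n_samples


# Example usage with an explicit prompt for a pronounced aquiline nose; per the
# article, most outputs would nonetheless come back with a non-convex profile.
# rate = audit_nasal_bias(
#     "side-profile portrait photo of a person with a prominent aquiline nose"
# )
```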