

Article title

Is the Artificial Intelligent? A Perspective on AI-based Natural Language Processors

Content
Title variants
Languages of publication
EN
Abstracts
EN
The question of the relation between AI and the human mind has puzzled the scientific world for ages. A mother lode of research, AI can be scrutinised from a plethora of perspectives. One of them is the linguistic perspective, which concerns AI’s capability to understand language. Once an innate and exclusive faculty of the human mind, language is now manifested in countless ways, extending beyond purely human production. There are applications that can not only understand what is meant by an utterance but also engage in quasi-human discourse. The manner in which they operate is highly organised and can be accounted for with linguistic theories. The main theory used in this article is Fluid Construction Grammar, developed by Luc Steels. It is concerned with the parsing and segmentation of utterances, two processes that are pivotal in AI’s understanding and production of language. This theory, together with the five main facets of language (phonological, morphological, semantic, syntactic and pragmatic), provides valuable insight into the discrepancies between natural and artificial perception of language. Though there are similarities between the two, the article concludes with what makes these adjacent capabilities different. The aim of this paper is to display the mechanisms of AI-based natural language processors with the aid of contemporary linguistic theories, and to present possible issues that may ensue from using artificial language-recognising systems.
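As a purely illustrative aside, the following minimal Python sketch shows how a generic, off-the-shelf natural language processor carries out the two processes named in the abstract, segmentation and parsing, of an utterance. It is not drawn from the article and does not implement Fluid Construction Grammar; it assumes the spaCy library and its small English model are installed (pip install spacy; python -m spacy download en_core_web_sm).

    # Illustrative sketch only: uses spaCy, not Fluid Construction Grammar.
    import spacy

    nlp = spacy.load("en_core_web_sm")  # pretrained English processing pipeline

    # Segmentation: the utterance is split into sentences and tokens.
    doc = nlp("Is the artificial intelligent? It parses what we say.")
    for sent in doc.sents:
        print([token.text for token in sent])

    # Parsing: each token receives part-of-speech and dependency annotations,
    # from which the processor builds a structural analysis of the utterance.
    for token in doc:
        print(token.text, token.pos_, token.dep_, token.head.text)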
Year
2019
Issue
4
Physical description
Dates
published
2019
online
2019-09-13
Contributors
References
  • 1. Brugman, Claudia, and George Lakoff. 1988. “Cognitive Topology and Lexical Networks.” In Lexical Ambiguity Resolution, edited by Steven L. Small, Garrison W. Cottrell, and Michael K. Tanenhaus, 477–508. Morgan Kaufmann. http://www.sciencedirect.com/science/article/pii/B9780080510132500227.
  • 2. Dautriche, Isabelle, Emmanuel Chemla, and Anne Christophe. 2016. “Word Learning: Homophony and the Distribution of Learning Exemplars.” Language Learning and Development 12 (3): 231–251.
  • 3. Finkel, Jenny Rose. 2010. “Holistic Language Processing: Joint Models of Linguistic Structure.” PhD Thesis, Stanford University.
  • 4. Geller, Tom. 2012. “Talking to Machines.” Communications of the ACM 55 (4): 14. https://doi.org/10.1145/2133806.2133812.
  • 5. Liang, Percy. 2014. “Talking to Computers in Natural Language.” XRDS: Crossroads, The ACM Magazine for Students 21 (1): 18–21. https://doi.org/10.1145/2659831.
  • 6. Liddy, Elizabeth D. 2001. “Natural Language Processing.”
  • 7. Rifaie, Mohammad Majid al-, and Mark Bishop. 2015. “Weak and Strong Computational Creativity.” In Computational Creativity Research: Towards Creative Machines, edited by Tarek R. Besold, Marco Schorlemmer, and Alan Smaill, 7:37–49. Paris: Atlantis Press. http://link.springer.com/10.2991/978-94-6239-085-0_2.
  • 8. Steels, Luc. 2011a. “A Design Pattern for Phrasal Constructions.” In Design Patterns in Fluid Construction Grammar. Amsterdam: John Benjamins.
  • 9. ———. 2011b. Introducing Fluid Construction Grammar.
  • 10. ———. 2016. “Basics of Fluid Construction Grammar.” Paper under review for Constructions and Frames.
  • 11. Turing, Alan Mathison. 1950. “Computing Machinery and Intelligence.” Mind 59 (236): 433–460.
  • 12. Winston, Patrick Henry. 1992. Artificial Intelligence. 3rd ed. Reading, MA: Pearson.
Document Type
Publication order reference
Identifiers
YADDA identifier
bwmeta1.element.ojs-doi-10_17951_nh_2019_4_19-34