
Natural Language Processing (NLP) is a subfield of computer science, artificial intelligence, information engineering, and human-computer interaction that focuses on programming computers to process and analyse large amounts of natural language data. This article surveys the current state of the art in computational linguistics. It begins by briefly reviewing relevant trends in morphology, syntax, lexicology, semantics, stylistics, and pragmatics. It then describes changes, or special accents, within formal Arabic and English syntax. After some evaluative remarks about the chosen approach, it continues with a linguistic description of literary Arabic for analysis purposes, as well as an introduction to a formal description, pointing to some early results. The article closes with perspectives for ongoing research and possible spin-offs, such as a description of Arabic syntax in formalized dependency rules and a subset thereof for information-retrieval purposes.

Sentences built from similar words can have completely different meanings or nuances depending on how the words are placed and structured. This step is fundamental in text analytics: we cannot afford to misinterpret the deeper meaning of a sentence if we want to gather truthful insights. A parser can determine the subject, the action, and the object in a sentence; in the sentence “The company filed a lawsuit,” it should recognise that “the company” is the subject, “filed” is the verb, and “a lawsuit” is the object.
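As a minimal illustration of the idea (a toy, rule-based sketch, not a production parser, which would use a trained dependency model), the subject, verb, and object of a simple declarative sentence can be recovered from part-of-speech tags. The `extract_svo` helper and the miniature tag dictionary below are assumptions for demonstration only:

```python
# Toy rule-based parse: treat everything before the verb as the subject
# phrase and everything after it as the object phrase.
TAGS = {  # hypothetical miniature POS lexicon
    "the": "DET", "a": "DET",
    "company": "NOUN", "lawsuit": "NOUN",
    "filed": "VERB",
}

def extract_svo(sentence):
    tokens = sentence.lower().strip(".").split()
    tags = [TAGS.get(t, "X") for t in tokens]
    verb_idx = tags.index("VERB")          # first verb anchors the split
    subject = " ".join(tokens[:verb_idx])  # words before the verb
    obj = " ".join(tokens[verb_idx + 1:])  # words after the verb
    return subject, tokens[verb_idx], obj

print(extract_svo("The company filed a lawsuit."))
# ('the company', 'filed', 'a lawsuit')
```

A real parser generalises this far beyond simple subject-verb-object word order, handling passives, clauses, and long-distance dependencies.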

What is Text Analysis?

Widely used by knowledge-driven organisations, text analysis is the process of converting large volumes of unstructured text into meaningful content in order to extract useful information from it. The process can be thought of as slicing heaps of unstructured documents into pieces and then interpreting those pieces to identify facts and relationships. The purpose of text analysis is to measure customer opinions, product reviews, and feedback, to provide search facilities, and to support fact-based decision making through sentiment analysis.
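To make the sentiment-analysis use case concrete, here is a deliberately simple lexicon-based sketch; the word lists are illustrative assumptions, and real systems use trained classifiers rather than hand-picked vocabularies:

```python
# Minimal lexicon-based sentiment scoring: count positive and negative
# cue words and compare the totals.
POSITIVE = {"great", "good", "excellent", "love", "helpful"}
NEGATIVE = {"bad", "poor", "terrible", "hate", "slow"}

def sentiment(text):
    words = text.lower().replace(",", " ").replace(".", " ").split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("Great product, excellent support."))  # positive
print(sentiment("Terrible experience, very slow."))    # negative
```

Even this crude counter shows how unstructured review text becomes a measurable signal that can feed a dashboard or a decision.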

Text analysis involves the use of linguistic, statistical, and machine learning techniques to extract information, evaluate and interpret the output, and then structure it into databases and data warehouses for the purpose of deriving patterns and topics of interest. It also involves syntactic analysis, lexical analysis, categorisation and clustering, and tagging/annotation, and it can determine keywords, topics, categories, and entities across millions of documents.
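One standard statistical technique for the keyword-extraction step is TF-IDF, which favours words frequent in one document but rare across the collection. The sketch below is a self-contained illustration; the tiny corpus and stop-word list are assumptions, and production systems would use a library implementation over millions of documents:

```python
import math
from collections import Counter

STOP = {"the", "a", "of", "and", "to", "in"}  # illustrative stop words

def tokenize(doc):
    return [w for w in doc.lower().split() if w not in STOP]

def tfidf_keywords(docs, top_n=2):
    """Return the top_n highest-scoring TF-IDF terms for each document."""
    tokenized = [tokenize(d) for d in docs]
    df = Counter(w for toks in tokenized for w in set(toks))  # document frequency
    n = len(docs)
    keywords = []
    for toks in tokenized:
        tf = Counter(toks)
        scores = {w: tf[w] * math.log(n / df[w]) for w in tf}
        ranked = sorted(scores, key=scores.get, reverse=True)
        keywords.append(ranked[:top_n])
    return keywords

docs = [
    "the court approved the merger of the two firms",
    "the merger boosted revenue and market share",
    "revenue growth slowed in the second quarter",
]
print(tfidf_keywords(docs))
```

Note how a word like “merger,” which appears in two of the three documents, is down-weighted relative to terms unique to a single document: this is what lets TF-IDF surface document-specific topics.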

Chakir Mahjoubi https://lexsense.net

Linguist with expertise in corpus creation, syntactic and semantic annotation, and information architecture. My experience spans content strategy, software localisation, and statistical analysis of experimentally obtained results. I apply NLP techniques to analyse, evaluate, and refine language models from a machine learning perspective.

I consult for, invest in, and advise businesses in language technology. My chief areas of concentration are semantics and syntax. My work has covered a wide range of topics, including taxonomies, translation, and localisation across different sectors. My research in linguistic and cultural studies has also given me the opportunity to draw connections between cultures and to communicate difficult concepts. Contact me directly here on LinkedIn, or send me an email at cmahjoubi@lexsense.net.


Text Structure; Functional Linguistics; Knowledge Tree; Natural Language Processing; Semantics; Syntax; Hands-on Languages and Architectures; Advanced Algorithms, particularly with respect to Artificial Intelligence.
