Decoding AI: An Exploration of Natural Language Processing

Decoding AI is a podcast-style series that guides listeners through how computers perceive, analyze, and work with human language, from everyday text to complex speech signals. It is designed as an accessible yet rigorous companion for students, developers, and language professionals who want a solid, conceptual understanding of language technologies without being overwhelmed by jargon or excessive mathematics.

Core focus
The series explores Natural Language Processing (NLP) as a key branch of AI that studies how computers model and interact with human language, explaining how machines can segment text, recognize structure, capture meaning, and generate coherent responses. It walks listeners from foundational building blocks—tokens, corpora, annotation, part-of-speech tags, and syntactic structure—to more advanced ideas such as language models, semantic representations, discourse, and dialogue management. Episodes often show how these concepts appear in real systems, from search engines and chatbots to translation platforms and content moderation tools.
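As a concrete illustration of those building blocks (not drawn from the series itself), here is a minimal sketch using the open-source spaCy library; "en_core_web_sm" is spaCy's small English pipeline, chosen here only as an example toolkit:

```python
# Illustrative sketch only: the series discusses these concepts; spaCy is
# one common open-source toolkit, not necessarily the one used on the show.
import spacy

# Small English pipeline; install with:
#   pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp("Decoding AI explains how machines process human language.")

# Each token carries the kinds of annotation the episodes introduce:
# surface form, part-of-speech tag, and its role in the syntactic structure.
for token in doc:
    print(f"{token.text:<10} {token.pos_:<6} {token.dep_:<10} head={token.head.text}")
```

Running this prints one line per token, showing how a sentence is segmented and how each piece is tagged and attached to the rest of the parse.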

Format and style
Each installment is organized like a short, focused lesson rather than a loose conversation, with clear learning objectives, concise explanations, and concrete examples drawn from real data or well-known applications. The tone is friendly but precise, aimed at technically curious listeners who may come from linguistics, computer science, translation, or digital content backgrounds and who want to connect theory with hands-on practice. To support progressive learning, episodes frequently build on one another, revisiting terms, contrasting approaches, and suggesting small experiments or further reading that early-stage NLP practitioners can explore on their own.

Topics and themes
Recurring themes include the full pipeline of text and speech processing: normalization, tokenization, tagging, parsing, classification, information extraction, machine translation, summarization, and conversational agents. The series balances linguistic perspectives (grammar, syntax, semantics, pragmatics, terminology and ontology work) with computational methods such as statistical models, neural architectures, and large language models, always emphasizing what problems a method solves and where its limits lie. Ethical and societal dimensions are woven throughout, including bias and fairness in language models, the treatment of under-resourced languages and multilingual settings, data privacy, and the responsible deployment of language technologies in education, business, and public services.
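To make one stage of that pipeline concrete, here is a minimal sketch of statistical text classification using scikit-learn; the tiny labeled corpus is invented purely for illustration and has no connection to the series:

```python
# Illustrative sketch of the classification stage of an NLP pipeline,
# using scikit-learn; the toy training set is invented for this example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled corpus: 1 = positive, 0 = negative.
texts = [
    "great episode, clear and useful",
    "loved the examples and the pacing",
    "confusing and far too long",
    "poor audio, hard to follow",
]
labels = [1, 1, 0, 0]

# Normalization and tokenization happen inside the vectorizer here;
# real systems often make them explicit, separate steps.
model = make_pipeline(TfidfVectorizer(lowercase=True), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["clear, useful and well paced"]))  # expected: [1]
```

Even this toy setup shows the trade-off the episodes return to: a simple statistical model is transparent and cheap to train, but it captures none of the deeper semantics that neural architectures and large language models aim for.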

Presenter and audience
“Decoding AI” is presented by Chakir Mahjoubi, who introduces natural language processing and language science concepts in a structured, incremental way that respects both linguistic depth and engineering realities. The podcast breaks down core components such as grammar, syntax, semantics, terminology, and knowledge representation, making them understandable to listeners with very different levels of technical expertise—from non-technical language professionals and content creators to software engineers and data scientists entering NLP for the first time. By combining conceptual clarity with practical insight, Decoding AI aims to demystify NLP and make modern language technologies more transparent, trustworthy, and approachable for a broad, global audience.

Links:
https://lexsense.podbean.com/
https://www.youtube.com/watch?v=iVSQorHdMxc
https://www.boomplay.com/podcasts/129928
https://www.podchaser.com/podcasts/decoding-ai-6030294