The release of ChatGPT has changed how AI technologies are perceived and what is considered possible in the coming years. In this keynote, we will dive deep into the inner workings of ChatGPT and, more specifically, of (large) language models. We will explore the Transformer, the backbone architecture of current models, and its interesting properties. We will discuss why such language models can be considered more than word-prediction models, and examine their reasoning capabilities and generalization properties. We will conclude the presentation with a discussion of the limitations of large language models.