Model-driven engineering is undergoing a major resurgence, driven by advances in large language models (LLMs) that significantly reduce the effort required to create, evolve, and maintain models from natural-language descriptions. In this talk, I draw on two industry collaborations, one with Kinaxis on optimization modelling and another with Ciena on system design, to examine whether LLM-assisted model construction is mature enough to make modelling more efficient and practical in real-world settings. Through these collaborations, we ask not only whether LLMs can create models of sufficient quality, but also, given that LLMs have been exposed to considerably more code than models during pre-training, whether the cost–benefit still favours explicit model building or whether code can increasingly serve as a pragmatic substitute. Several empirically grounded observations emerge from our studies: (1) iterative model improvement that combines formal checks with LLM-based semantic refinement consistently yields higher-quality models; (2) reasoning LLMs outperform instruction-following ones on tasks involving abstraction and constraint reasoning; and (3) executable models can match code in correctness and executability while offering the inherent advantages of modelling, such as abstraction and understandability.
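The iterate-check-refine loop named in observation (1) can be sketched as follows. This is a minimal illustration only: the `formal_check`, `llm_refine`, and `improve` names are hypothetical, the validator checks two placeholder structural rules, and the refinement step is a stub standing in for an actual LLM call; the abstract does not specify the talk's real tooling.

```python
def formal_check(model: str) -> list[str]:
    """Hypothetical formal validator: returns a list of error messages.

    Here it only checks for two placeholder sections of an
    optimization model; a real validator would parse the model
    and check its syntax and static semantics.
    """
    errors = []
    if "constraint:" not in model:
        errors.append("missing constraint section")
    if "objective:" not in model:
        errors.append("missing objective section")
    return errors


def llm_refine(model: str, errors: list[str]) -> str:
    """Stub standing in for an LLM-based semantic refinement step.

    A real implementation would send the model plus the error
    report to an LLM and return the repaired model text.
    """
    for err in errors:
        if err == "missing constraint section":
            model += "\nconstraint: x >= 0"
        elif err == "missing objective section":
            model += "\nobjective: minimize x"
    return model


def improve(model: str, max_rounds: int = 5) -> tuple[str, bool]:
    """Alternate formal checks and refinement until the checks pass
    or the round budget is exhausted."""
    for _ in range(max_rounds):
        errors = formal_check(model)
        if not errors:
            return model, True
        model = llm_refine(model, errors)
    return model, not formal_check(model)


draft = "model: production planning"
final, ok = improve(draft)
print(ok)
```

The key design point is that the formal checks gate each round, so the LLM is only invoked with a concrete error report rather than being asked to improve the model open-endedly.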