In this video, I test the brand new Ministral 3 models (3B, 8B, and 14B) on the Mac Mini with the M4 Pro chip and 24GB of RAM. I’ll show you exactly how to set them up in LM Studio, how much memory they actually use with large context windows, and whether they’re any good for coding tasks.

00:00 Intro to Local AI on Mac Mini M4
00:27 Ministral 3 Parameters (3B, 8B, 14B)
02:00 Why Use LM Studio
03:49 Downloading Models from the Catalog
05:04 Configuring Context Length & GPU Offload
09:05 Testing Ministral 3B Performance
14:14 Coding with Local LLMs (VS Code Integration)

If you want to run a private, uncensored AI locally on your Mac without a subscription, this guide is for you.
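
If you'd rather script against the model than chat in the app, LM Studio can expose whatever model you have loaded through an OpenAI-compatible local server (it defaults to port 1234). Here's a minimal Python sketch of that, assuming the openai package is installed and that "ministral-3-8b" stands in for whatever identifier LM Studio shows for your loaded model, so substitute your own:

# Query a model served by LM Studio's OpenAI-compatible local server.
from openai import OpenAI

# LM Studio's server defaults to http://localhost:1234/v1;
# it ignores the API key, so any placeholder string works.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# "ministral-3-8b" is an assumed name; use the model identifier LM Studio displays.
response = client.chat.completions.create(
    model="ministral-3-8b",
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
)
print(response.choices[0].message.content)

This same local endpoint is also the kind of thing VS Code coding extensions can point at when you hook them up to a local model instead of a cloud API, which is what the 14:14 chapter covers.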