[C1] What Machines Actually Do (And What They Don’t)
Every time you use Google Maps at 5:30 PM, something remarkable happens — and it has nothing to do with intelligence. The app doesn’t “know” traffic the way a local cabbie knows the city. It has no mental map, no concept of rush hour,…
[ML 1] AI Paradigm Shift: From Rules to Patterns
Every piece of software you’ve ever shipped works the same way. A developer thinks through the logic and writes explicit rules — if the user clicks here, do this; if the input exceeds 100, reject it; if the date passes the deadline, send an email.…
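That rule-writing style is literally a chain of hand-coded conditionals. A minimal sketch, using the "reject inputs over 100" rule from the text (the function name is invented for illustration):

```python
def validate_input(value):
    """Hand-written rule: reject any input exceeding 100."""
    # A developer chose this threshold explicitly; no data was involved.
    if value > 100:
        return "reject"
    return "accept"

print(validate_input(150))  # reject
print(validate_input(42))   # accept
```

The point of the paradigm shift is that a learned model replaces the threshold a developer typed with one extracted from examples.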
[ML 1.a] ML Foundations – Linear Combinations to Logistic Regression
Every machine learning model — from simple house price predictors to neural networks with billions of parameters — starts with the same fundamental building block: the linear combination. Take some inputs, multiply each by a weight, and add them up.…
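The building block described here fits in a few lines. A minimal sketch of a linear combination for a toy house-price predictor (the feature values, weights, and bias are invented numbers, not a trained model):

```python
def linear_combination(inputs, weights, bias=0.0):
    """Return bias + the sum of each input multiplied by its weight."""
    return bias + sum(x * w for x, w in zip(inputs, weights))

# Toy example: features are [square_feet, bedrooms]; weights are made up.
features = [1500.0, 3.0]
weights = [120.0, 10000.0]
price = linear_combination(features, weights, bias=50000.0)
# 1500*120 + 3*10000 + 50000 = 260000.0
print(price)
```

Every layer of a neural network repeats this same multiply-and-sum step, just with many more weights.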
[ML 1.b] Teaching AI Models: Gradient Descent
In the last post, we established the big idea: machine learning is about finding patterns from data instead of writing rules by hand. But we skipped a critical question — how does the machine actually find the patterns? When someone says “we…
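The answer the post's title points to, gradient descent, can be sketched on a one-dimensional toy loss. This is an illustration only; the loss function, learning rate, and step count are arbitrary choices for the demo:

```python
# Minimize the toy loss f(w) = (w - 3)**2, whose minimum sits at w = 3.

def gradient(w):
    # Derivative of (w - 3)**2 with respect to w.
    return 2 * (w - 3)

w = 0.0                # start from an arbitrary guess
learning_rate = 0.1
for _ in range(100):
    w -= learning_rate * gradient(w)   # step downhill

print(round(w, 4))  # converges close to 3.0
```

Real training does exactly this, just over millions of weights and a loss measured on data rather than a formula.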
[ML 2] Making Sense Of Embeddings
When you search on Amazon for “running shoes,” the system doesn’t just look for those exact words – it also shows you “jogging sneakers,” “athletic footwear,” and “marathon trainers.” When…
[ML 2.a] Word2Vec: Start of Dense Embeddings
When you type a search query into Google or ask Spotify to find “chill acoustic covers,” the system doesn’t just look for those exact letters. It understands that “chill” is related to “relaxing” and…
[ML 2.b] Measuring Meaning: Cosine Similarity
In the previous posts, we established that embeddings turn everything into points in space and that Word2Vec showed how to learn those points from context. But we glossed over something critical: how do you actually measure “closeness”? When…
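The standard answer to that question is cosine similarity, which can be written straight from its definition. A minimal sketch (the two-dimensional vectors below are toy values, not real embeddings):

```python
import math

def cosine_similarity(a, b):
    """Dot product of a and b, divided by the product of their lengths."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Vectors pointing the same way score 1.0; perpendicular vectors score 0.0.
print(cosine_similarity([1.0, 2.0], [2.0, 4.0]))  # 1.0 (up to rounding)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```

Because it normalizes by length, cosine similarity compares direction only, which is why two embeddings of different magnitudes can still count as "close" in meaning.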
[ML 2.c] Needle in the Haystack: Embedding Training and Context Rot
You’ve probably experienced this: you paste a 50-page document into ChatGPT or Claude, ask a specific question about something buried on page 37, and the model either ignores it, gives a vague answer, or confidently cites something from page 2…
How Smart Vector Search Works