Teaching AI Models: Gradient Descent
Post 1b/N In the last post, we established the big idea: machine learning is about finding patterns from data instead of writing rules by hand. But we skipped a critical question — how does the machine actually find the patterns? When someone says…
Needle in the Haystack: Embedding Training and Context Rot
Post 2c/N You’ve probably experienced this: you paste a 50-page document into ChatGPT or Claude, ask a specific question about something buried on page 37, and the model either ignores it, gives a vague answer, or confidently cites something from…
Measuring Meaning: Cosine Similarity
Post 2b/N In the previous posts, we established that embeddings turn everything into points in space and that Word2Vec showed how to learn those points from context. But we glossed over something critical: how do you actually measure…
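The measurement the teaser is building toward, cosine similarity, is simple enough to sketch in a few lines. This is a minimal illustration, not code from the post; the vectors are made-up two-dimensional examples:

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|): compare direction, ignore length
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Vectors pointing the same way score ~1.0; orthogonal vectors score 0.0.
print(cosine_similarity([1.0, 2.0], [2.0, 4.0]))  # ~1.0 (same direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0 (orthogonal)
```

Because it normalizes out vector length, cosine similarity compares only direction, which is why it is the default choice for embeddings.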
AI Paradigm Shift: From Rules to Patterns
Post 1/N Every piece of software you’ve ever shipped or have seen shipped works the same way. A developer sits down, thinks through the logic, and writes explicit rules. If the user clicks here, do this. If the input is greater than 100, reject it.…
Seq2Seq Models: Basics behind LLMs
When you use Google Translate to turn a complex English sentence into Spanish, or when you ask Gemini to summarize a long email, the computer isn’t just looking at individual words. It’s following a path. It’s remembering where the sentence started…
Word2Vec: Start of Dense Embeddings
Post 2a/N When you type a search query into Google or ask Spotify to find “chill acoustic covers,” the system doesn’t just look for those exact letters. It understands that “chill” is related to “relaxing” and…
Breaking the “Unbreakable” Encryption – Part 1
If you’ve spent any time in tech, you’ve heard of AES, RSA, and Diffie-Hellman. We treat them like digital duct tape—they just work, they keep our data safe, and we don’t think about them. But there’s a massive “Quantum…
ML Foundations – Linear Combinations to Logistic Regression
Post 1a/N Every machine learning model — from simple house price predictors to neural networks with billions of parameters — starts with the same fundamental building block: the linear combination. Take some inputs, multiply each by a weight, and add…
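The building block the teaser describes, multiply each input by a weight and sum, can be sketched directly. This is an illustrative example with hand-picked numbers, not code from the post; the sigmoid step shows how the same linear combination becomes logistic regression:

```python
import math

def linear_combination(inputs, weights, bias):
    # z = w1*x1 + w2*x2 + ... + b
    return sum(w * x for w, x in zip(weights, inputs)) + bias

def sigmoid(z):
    # Squashes any real z into (0, 1), turning the score into a probability
    return 1.0 / (1.0 + math.exp(-z))

# Two made-up features with hand-picked weights:
z = linear_combination([2.0, 3.0], [0.5, -0.25], 0.1)  # 1.0 - 0.75 + 0.1 = 0.35
print(sigmoid(z))  # a probability a bit above 0.5
```

A house price predictor stops at `z`; logistic regression feeds `z` through the sigmoid to get a classification probability. Everything else in the series builds on this one operation.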
Making Sense Of Embeddings
Post 2/N When you search on Amazon for “running shoes,” the system doesn’t just look for those exact words – it also shows you “jogging sneakers,” “athletic footwear,” and “marathon trainers.”…
How Smart Vector Search Works