Category: Curiosity

  • From Catching a Ball to Catching Time: A Journey Through the Brain’s Perception Engine


    It began with a simple game of catch. A ball arcing through the air, hands stretching forward almost reflexively, eyes tracing the curve, and feet adjusting just enough to be in place at the right time. This ordinary act, repeated across parks, playgrounds, and backyards, hides a remarkable cognitive feat.

    Catching a ball is not just a motor skill—it’s a quiet symphony of perception, prediction, and action. In that instant, the brain isn’t merely reacting; it’s forecasting. It models trajectories, calculates timing, and coordinates motion with a precision that rivals even engineered systems.

    Hand-Eye Coordination: The Brain’s Real-Time Algorithm

    At the core of this ability lies hand-eye coordination—a demonstration of the brain’s internal prediction engine.

    When you see a ball approaching, your eyes gather visual data. Your brain uses this to predict its path, then triggers movement so your hands arrive just in time. This is called a forward model in neuroscience—a mental simulation of how the world behaves in the next few moments.
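    To make the forward-model idea concrete, here is a minimal sketch in Python. It is not how neurons do it: it simply assumes an idealized projectile with no air resistance and uses made-up numbers, predicting where and when the ball will come back down so a hand can be there.

      # A minimal sketch of a "forward model" for catch: given the ball's current
      # state, simulate ahead to predict where the hand needs to be and when.
      # Idealized projectile, no air resistance; all numbers are illustrative.
      def predict_landing(x0, y0, vx, vy, g=9.81):
          """Predict where and when a ball at (x0, y0) with velocity (vx, vy)
          returns to hand height (y = 0)."""
          # Solve y0 + vy*t - 0.5*g*t**2 = 0 for the positive root.
          t_land = (vy + (vy**2 + 2 * g * y0) ** 0.5) / g
          x_land = x0 + vx * t_land
          return x_land, t_land

      # Ball released 1.5 m up, moving 8 m/s forward and 6 m/s upward.
      x, t = predict_landing(x0=0.0, y0=1.5, vx=8.0, vy=6.0)
      print(f"Be at x ≈ {x:.1f} m in about {t:.2f} s")

    The brain, of course, does not solve equations symbolically; the forward model is thought to be learned from experience and continually corrected by feedback from the eyes and body.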

    Unlike machines that often require vast training data, the human brain learns from relatively few examples, combining vision, touch, balance, memory, and past experience in real time.

    Depth Perception: Building the Third Dimension

    The reason we can play catch at all is that we perceive depth. Our two eyes capture slightly different images (a difference known as binocular disparity), and the brain fuses them into a single 3D model.

    But this process is more than just geometry—it’s inference. The brain uses motion cues, lighting, context, and prior experience to refine our sense of space.

    Close one eye and the world becomes noticeably flatter. Depth is not directly perceived; it is constructed through various depth cues.
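    For a rough computational analogue, machine stereo vision recovers distance from the disparity between two views using the relation Z = f * B / d (focal length times baseline, divided by disparity). The sketch below uses hypothetical camera-style numbers purely for illustration; the brain's own construction of depth, as noted above, leans on many more cues than geometry alone.

      # Machine-vision analogue of stereo depth: Z = f * B / d, where f is the
      # focal length (pixels), B the baseline between the two views (metres),
      # and d the disparity (pixels). Numbers below are hypothetical.
      def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
          if disparity_px <= 0:
              raise ValueError("disparity must be positive for a finite depth")
          return focal_length_px * baseline_m / disparity_px

      # Eyes are roughly 6.5 cm apart; 700 px is an arbitrary focal length.
      for d in (40, 20, 10, 5):
          print(f"disparity {d:>2} px -> depth ≈ {depth_from_disparity(700, 0.065, d):.2f} m")

    The pattern to notice is that smaller disparities map to larger distances, which is one reason depth judgments get harder for faraway objects and why closing one eye flattens the scene.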

    Five Senses, One Map: Stitching Reality

    We don’t rely only on vision, but also incorporate information from other senses. Sound offers spatial cues (e.g., how you know where someone is calling from), touch defines boundaries, smell signals proximity, and the vestibular system in the inner ear gives a sense of balance and orientation.

    The brain weaves all this into a unified map of 3D space.
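    One common way to model this weaving in perception research is reliability-weighted cue combination: each sense contributes an estimate, and more reliable cues get more weight. The sketch below is only an illustration with invented numbers, not a claim about the brain's literal algorithm.

      # A minimal sketch of reliability-weighted cue combination: each cue's
      # estimate is weighted by the inverse of its variance. Values are made up.
      def combine_cues(estimates, variances):
          """Reliability-weighted average of independent cue estimates."""
          weights = [1.0 / v for v in variances]
          total = sum(weights)
          fused = sum(w * e for w, e in zip(weights, estimates)) / total
          fused_variance = 1.0 / total  # fused estimate beats any single cue
          return fused, fused_variance

      # Where is the sound source? Vision says 10.0° (precise), hearing says 14.0° (noisy).
      fused, var = combine_cues(estimates=[10.0, 14.0], variances=[1.0, 4.0])
      print(f"Fused direction ≈ {fused:.1f}°, variance ≈ {var:.2f}")

    A nice property of this scheme is that the fused estimate is more reliable (lower variance) than any single cue on its own, which is one reason combining senses pays off.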

    We don’t just perceive space—we build it, moment to moment.

    Why Stop at Three Dimensions?

    This raises a fascinating question: if the brain can model three dimensions so easily, why not more?

    Physics suggests the possibility of extra dimensions (string theory, for instance, requires 10 spacetime dimensions, and M-theory 11). But evolution shaped our brains for a world governed by three. Our tools, limbs, and languages reflect this geometry.

    Just as a creature confined to a flat, two-dimensional world could not imagine a sphere, perhaps we’re bound by our perceptual design.

    Time: The Silent Sixth Sense?

    And then there’s time—our most elusive dimension.

    We don’t “see” time. We sense it—through change, memory, and motion. Time perception, or chronoception, is the brain’s way of experiencing the passage of time, and it’s not a single, unified process. Instead, it’s a distributed function involving multiple brain regions and cognitive processes. 

    What’s clear is that our experience of time is elastic. Fear slows it down, boredom stretches it, joy compresses it. The brain’s “clock” is shaped by attention, emotion, and memory.

    Our clocks and calendars, by contrast, anchor timekeeping to external references: the second, now defined in terms of the oscillations of caesium atoms, and astronomical cycles such as the Earth’s rotation.

    In a way, we don’t just experience time—we construct it.

    Image: A somewhat old meme on how aliens might view our New Year’s celebrations. Source: the internet.

    The Final Catch: Cognition as Window and Wall

    And so we return to the ball in midair.

    In that brief moment, your brain:

    • Gathers depth cues
    • Recalls previous throws
    • Predicts the arc
    • Commands your limbs
    • Syncs it all with an invisible clock

    All of this happens without conscious thought.

    What feels like instinct is actually the result of layered learning, shaped by biology, refined by evolution, and fed by every lived moment.

    Catching a ball is not about sport—it’s a reminder of the brain’s quiet genius. It shows how we:

    • Perceive more than our senses directly deliver,
    • Build our reality from fragments, and
    • Operate within the elegant limits of our design.

    And perhaps most beautifully, it reminds us that cognition is both a window to the world—and a wall that defines its shape.

  • Language, Logic, and the Brain: Why We Read Mistakes and Think in Metaphors


    You might have seen this sentence before: “Aoccdrnig to a rscheearch at Cmabrigde Uinervtisy…”—and remarkably, you can still read it. Even when letters are jumbled, our brains don’t get stuck. They compensate. Predict. Fill in the blanks. Language, it turns out, is not a rigid code. It’s a dance of patterns, expectations, and clever shortcuts. From deciphering typos to understanding metaphors like “time flies,” our brain is constantly negotiating meaning. But what if these quirks—our ability to read misspelled words, our love for figures of speech, or even our habit of saying “Company XYZ” without blinking—aren’t just linguistic fun facts, but a window into how the brain processes, predicts, and communicates?

    Let’s begin with the jumbled letters. The classic example tells us that if the first and last letters of a word are in the right place, the brain can still interpret it correctly. It’s not magic—it’s pattern recognition. The brain doesn’t read each letter in isolation. Instead, it uses expectations, frequency of word use, and context to decode meaning. This process is called top-down processing, where your brain’s prior knowledge shapes your interpretation of incoming information. In this case, you’re not reading a sentence letter-by-letter but rather word-by-word, or even in phrase-sized chunks.
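    A toy sketch makes the point concrete. If we fix a word’s first and last letters and treat the inner letters as an unordered bag, a lookup against a small vocabulary (a crude stand-in for the reader’s prior knowledge) recovers the intended words. The vocabulary and sentence below are invented purely for the demonstration.

      # Toy illustration of the "jumbled letters" effect: keep first and last
      # letters fixed, sort the inner letters, and look the result up in a
      # small vocabulary. A sketch only; real reading also leans on context.
      VOCABULARY = {"what", "matters", "is", "the", "first", "and", "last", "letter"}

      def signature(word):
          """First letter + sorted inner letters + last letter, lowercased."""
          w = word.lower()
          return w if len(w) <= 3 else w[0] + "".join(sorted(w[1:-1])) + w[-1]

      # Map each known word's signature back to the word itself.
      LOOKUP = {signature(w): w for w in VOCABULARY}

      def unscramble(word):
          return LOOKUP.get(signature(word), word)  # unknown words pass through

      scrambled = "Waht mttaers is the fsirt and lsat lteter"
      print(" ".join(unscramble(w) for w in scrambled.split()))
      # -> what matters is the first and last letter

    The sketch also hints at the limits: “salt” and “slat” share the same signature, and real reading resolves such collisions with context, which is exactly the top-down processing described above.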

    This trick of the mind has become so well-known that word processors have tried to mimic it. Tools like Google Docs and Grammarly incorporate algorithms that attempt to do what our brains do effortlessly: recognize imperfect inputs and still extract coherent meaning. But here’s the catch: even the best AI systems today are still far less capable than your brain. The real advantage of the human brain lies not just in how much information it can process, but in how deeply connected that information is. When a child learns what a “dog” is, it’s not just by seeing images of dogs a million times—it’s by associating it with barks heard at night, cartoons watched, a friend’s pet, a memory of being chased. These diverse experiences create rich, interconnected neural pathways. AI models, even when trained on huge datasets, lack that richness of context. They learn patterns, yes—but not through living, sensing, and emotionally experiencing those patterns.

    In a recent episode of The Future with Hannah Fry on Bloomberg Originals, Sergey Levine, Associate Professor at UC Berkeley, highlights the importance of this richness of connections in the learning process. He explains that while a language model might respond to the question “What happens when you drop an object?” with “It falls,” this response lacks true understanding. “Knowing what it means for something to fall—the impact it has on the world—only comes through actual experience,” he notes. Levine further explains that models like ChatGPT can describe gravity based on their training data, but this understanding is essentially “a reflection of a reflection.” In contrast, human learning—through physical interaction with the world—offers direct, firsthand insight: “When you experience something, you understand it at the source.”

    This is also why humans excel at figurative language. Consider the phrase “time flies.” You instantly understand that it’s not about a flying clock—it means time passes quickly. Our brains store idioms and metaphors as familiar, pre-learned concepts. They become shortcuts for meaning. What’s more interesting is how universal this behavior is. In English, it’s “time flies”; in Hindi, one might say samay haath se nikal gaya—“time slipped through the fingers.” Different languages, different cultures, same cognitive blueprint. The human brain has evolved to think not just in facts, but in metaphors. Culture and language may vary, but the underlying cognitive mechanisms remain strikingly similar.

    This is also where placeholders like “Company XYZ,” “Professor X,” or “Chemical X” come in. These are not just convenient linguistic tools—they’re mental cues that signal abstraction. Nearly every language has its own way of doing this. In Hindi, one might use “फलां आदमी” (falaan aadmi) to mean a generic person. In Arabic, the term “fulan” serves a similar purpose. These generic stand-ins may look different, but they serve a similar function: they help us conceptualize hypothetical or unknown entities.

    It is in this nuance that the contrast between human learning and AI becomes clearer. The human brain is not just a storage unit—it’s a meaning-maker. Our neural networks are constantly rewiring in response to diverse stimuli—cultural, environmental, social. The same word can evoke a different response in different people because of their unique mental associations. This is the beauty and complexity of language processing in the brain: it’s influenced by everything we are—where we live, what we’ve seen, what we believe.

    Take this example: You’re reading a tweet. It ends with “Well, that went great. 🙃” You immediately detect the sarcasm, sense the irony, maybe even imagine the tone of voice. In that single moment, your brain is tapping into language, context, emotion, culture, and memory—simultaneously. That’s not just processing—that’s holistic understanding. AI, even at its most advanced, still learns in silos: grammar in one layer, sentiment in another, tone in a third. While it can produce human-like responses, it doesn’t feel or experience them. And that gap matters.

    Language, ultimately, is not just about words—it’s about how our brains stitch together meaning using experience, expectation, and context. The way we process spelling errors, understand metaphors, and employ abstract placeholders all reflect the extraordinary adaptability of human cognition.

    So next time you read a scrambled sentence, casually refer to “Company XYZ,” or instinctively interpret irony in a message, take a moment to appreciate the genius of your own mind. Beneath the words lies a web of perception, memory, and imagination—far more complex than any machine. Our words may differ, our accents may change, but the shared architecture of understanding binds us all. And in that, perhaps, lies the truest magic of language.

  • The Story of the Numbers We All Use


    Every time we count, calculate, or type a number into our phones or laptops, we’re using symbols that feel almost as ancient as time itself: 1, 2, 3, 4… They’re called Arabic numerals today, but their story doesn’t begin in Arabia. Nor does it end there. In fact, it winds through ancient India, travels across the Islamic world, and eventually arrives in Europe, where it quietly replaces the clunky Roman numerals we now associate with monuments and clocks.

    This is the story of numbers—not just as symbols, but as carriers of knowledge across civilizations.

    Where it really began

    The earliest evidence of a positional decimal number system—the idea that the place of a digit determines its value—comes from India. Ancient Indian mathematicians not only developed this positional system but also introduced the concept of zero as a number with its own mathematical identity. This might seem trivial to us today, but at the time, it was nothing short of revolutionary.
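    A small worked example shows why position and zero matter; the digits below are arbitrary. The same digit contributes a different amount depending on where it sits, and the placeholder zero is what keeps the positions apart.

      # Place value in a nutshell: read digits left to right, multiplying the
      # running total by the base at each step. The digits are arbitrary examples.
      def from_digits(digits, base=10):
          value = 0
          for d in digits:
              value = value * base + d
          return value

      print(from_digits([4, 0, 7]))  # 4*100 + 0*10 + 7*1 = 407
      print(from_digits([4, 7]))     # drop the placeholder and you get 47

    Without a symbol for “nothing in this position,” 407 and 47 would be indistinguishable on the page.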

    The oldest known written record of the numeral “0” used as a placeholder is found in the Bakhshali manuscript, a mathematical text dated by radiocarbon methods to as early as the 3rd or 4th century CE. Indian mathematicians from Pingala (who worked on binary-like systems) to Aryabhata and Brahmagupta explored place value, equations, and operations that were deeply reliant on this numerical framework.

    Brahmagupta, in particular, formalized rules for zero and negative numbers—ideas that would go on to influence generations of mathematicians.

    Image: Hindu astronomer, 19th-century illustration. CC-BY-ND.

    How the knowledge traveled

    The number system, along with broader Indian mathematical knowledge, spread to the Islamic world sometime around the 7th to 9th centuries CE. Scholars in Persia and the Arab world translated Sanskrit texts into Arabic, absorbed their ideas, and extended them further.

    One such scholar was the Persian mathematician Al-Khwarizmi, who is often called the “father of algebra.” His treatises introduced the Indian system of numerals to the Arabic-speaking world. In fact, the word “algorithm” comes from the Latinized form of his name, and his writings became the main channel through which Indian mathematics entered Europe.

    Image: Muḥammad ibn Mūsā al-Ḵwārizmī. (He is on a Soviet Union commemorative stamp, issued September 6, 1983. The stamp bears his name and says “1200 years”, referring to the approximate anniversary of his birth). Source: Wikimedia Commons

    When these ideas were translated into Latin in medieval Spain—particularly in cities like Toledo, where Christian, Muslim, and Jewish scholars collaborated—they began influencing European mathematics. By the 12th century, the numerals had reached the West, and the system was popularized by Leonardo Pisano, also known as Fibonacci, whose book Liber Abaci (1202) presented it as the “modus Indorum”—the method of the Indians. Fibonacci himself credited the system to the “Indians,” highlighting its superiority over the Roman numeral system then in use.

    Image: Monument of Leonardo da Pisa (Fibonacci), by Giovanni Paganucci, completed in 1863, in the Camposanto di Pisa, Pisa, Italy. Source: Wikimedia Commons.

    Why we call them Arabic numerals

    Given this journey, why are they called Arabic numerals? The answer is simple: Europeans learned them through Arabic texts. To them, the knowledge had come from the Arab world—even if it had deeper roots in India. It’s a reminder that the names we give to ideas often reflect where we encounter them, not necessarily where they were born.

    It’s also a reminder of something deeper: the way knowledge flows across time and geography. It does not always come with citations or acknowledgments. What survives is the idea, not the backstory.

    The quiet revolution over Roman numerals

    The so-called “Hindu-Arabic” numerals slowly replaced Roman numerals in Europe. The transition wasn’t immediate—resistance to change, especially in something as fundamental as arithmetic, was strong. Roman numerals, while elegant in stone carvings, were unwieldy for calculations. Try multiplying LXIV by XXIII without converting them first.
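    To make the contrast concrete, here is a short sketch of the detour the paragraph alludes to: convert the Roman numerals to positional values, multiply, and convert back. The conversion routines are standard textbook fare, written here only for illustration.

      # Multiplying Roman numerals in practice means converting to positional
      # values, multiplying, and converting back.
      ROMAN_VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

      def roman_to_int(numeral):
          """Convert a Roman numeral to an integer (subtractive notation handled)."""
          total = 0
          for symbol, next_symbol in zip(numeral, numeral[1:] + " "):
              value = ROMAN_VALUES[symbol]
              # A smaller value before a larger one (e.g. IV) is subtracted.
              total += -value if ROMAN_VALUES.get(next_symbol, 0) > value else value
          return total

      def int_to_roman(n):
          pairs = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"), (100, "C"),
                   (90, "XC"), (50, "L"), (40, "XL"), (10, "X"), (9, "IX"),
                   (5, "V"), (4, "IV"), (1, "I")]
          out = []
          for value, symbol in pairs:
              while n >= value:
                  out.append(symbol)
                  n -= value
          return "".join(out)

      a, b = roman_to_int("LXIV"), roman_to_int("XXIII")      # 64 and 23
      print(a, "*", b, "=", a * b, "=", int_to_roman(a * b))  # 1472 = MCDLXXII

    The answer, MCDLXXII (1,472), is easy to reach only because the positional system carries the arithmetic; Roman numerals encode values additively and subtractively, with no place value to carry through a multiplication.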

    In contrast, the Indian system—with its zero, place value, and compact notation—was built for mathematics. Over time, merchants, accountants, and scientists realized its efficiency. By the time of the Renaissance, this “foreign” number system had become the foundation of modern mathematics in Europe.

    Image: Comparison between five different styles of writing Arabic numerals (“European”, “Arabic-Indic”, etc.). Source: Wikimedia Commons.

    A quiet lesson about knowledge

    This history of numbers is simply a story—a reminder of how knowledge has always been global. Ideas are born, shaped, transmitted, forgotten, revived. Borders don’t hold them. Languages don’t confine them. The numerals we use today are not Indian or Arabic or Western—they’re human.

    What we often lose, though, are the stories. As knowledge becomes embedded into daily life, its origins fade. But perhaps it’s worth pausing every now and then—not just to marvel at how far we’ve come, but to acknowledge the many quiet contributors whose names don’t make it into the textbooks.

    So the next time you type a “0” or balance a spreadsheet, remember: that little circle carries a long and winding history—one that connects forests in ancient India to the libraries of Baghdad, and the towers of medieval Europe to the circuits in your device.


    The Guardian has done an excellent job capturing the larger history.

  • The Animal in Us: Questioning the Myth of Human Superiority


    In many cultures and conversations, one phrase stands out when someone acts impulsively, selfishly, or violently: “Don’t behave like an animal.” It’s meant to be a reprimand, a reminder to act with decorum, to exercise restraint, to live by some higher moral code. But what’s embedded in that phrase is something more telling — the assumption that animals are primitive, lesser, and somehow below the moral plane that humans claim to occupy.

    This idea is so deeply rooted that we rarely question it. But perhaps it’s time we did.

    At the heart of this assumption is a belief in human exceptionalism — the idea that we are fundamentally different from, and superior to, other living beings. Our capacity for abstract thought, the development of complex languages, and our ability to shape civilizations all reinforce this idea. But if we look closer, this sense of moral and intellectual superiority begins to blur.

    Much of what we do — our desires, our fears, our social bonds, our instinct for survival — isn’t very different from what drives the behavior of animals. Our social structures mirror hierarchies found in packs, troops, or flocks. Our hunger for belonging is as primal as a bird’s search for a mate or a lion’s protection of its pride. Even the biochemical triggers that influence our decisions — from dopamine surges to stress hormones — are shared across species. The difference, then, is not one of kind, but of degree.

    What we call “instinct” in animals, we often call “emotion,” “impulse,” or “intuition” in ourselves. But these are, at their core, manifestations of the same biological machinery — neurons firing, hormones circulating, environmental signals interpreted and acted upon. Our brains may have evolved more complexity, but they are still made of the same building blocks, governed by the same laws of biology and chemistry.

    Morality, too, is often seen as a uniquely human domain. But this overlooks the rich tapestry of behaviors in the animal world that echo our own moral codes: cooperation, empathy, fairness, even sacrifice. Elephants mourning their dead, primates sharing food with the weak, wolves caring for the injured — these aren’t anomalies. They are reminders that the roots of what we call morality run deep into the evolutionary past.

    So why do we resist this comparison so strongly? Perhaps because acknowledging our animal nature forces us to reckon with a truth we often avoid — that we are not outside or above nature, but inextricably part of it. And that realization can be unsettling. It collapses the pedestal we’ve built for ourselves.

    Interestingly, this human tendency to create hierarchies among life forms is mirrored in how we create hierarchies within our own species. Just as we place animals on a scale of perceived intelligence or usefulness — a dog is noble, a rat is vermin — we have historically created social, racial, and caste-based hierarchies that serve to dehumanize and exclude. Calling someone “animalistic” isn’t just about comparing them to another species — it’s often about stripping them of status, of dignity, of personhood. It’s a tool of marginalization.

    But when we begin to see behavior — all behavior — as a product of context, biology, and survival, the lines between human and animal begin to fade. And perhaps that’s the humbling realization we need. We are not the center of the universe, nor are we the moral compass of the biosphere. We are part of a vast, interconnected system governed by laws far older than us, forces that operate with or without our recognition.

    What we call choice, morality, or culture may simply be nature expressing itself in a more complex form. And that complexity should not make us arrogant. It should make us more responsible, more curious, and more empathetic — toward each other, and toward the creatures we share this world with.

    In the end, the phrase “Don’t behave like an animal” may need a revision. Maybe the real challenge is: Can we learn to respect the animal within us — and in doing so, respect all forms of life around us?