Why 4 Minutes Matter: The Hidden Cost of Imprecise Data in AI

Most of us grow up believing that a day is exactly 24 hours long. It’s tidy, convenient, and feels close enough to reality. But strictly speaking, the Earth completes one rotation on its axis in 23 hours, 56 minutes, and about 4 seconds — what astronomers call a sidereal day. The roughly four extra minutes in the familiar 24-hour solar day come from the Earth’s simultaneous orbit around the Sun. If we ignored this subtlety, our sense of time would slowly drift out of sync with the Sun itself. Noon would stop being “midday.”

Those four minutes are a small detail — but they matter.


The Data Analogy

This is exactly what happens when organisations feed “close enough” data into AI systems. At first, the model might seem fine. Predictions look reasonable. The dashboards tick over. But just like those four missing minutes, tiny inaccuracies and fuzzy definitions build up. Over weeks, months, or years, the system drifts further from reality.

Suddenly, your AI isn’t aligned with the world as it actually is. Recommendations miss the mark. Bias creeps in. Customers lose trust.

The lesson? Precision in data is not pedantry. It’s the difference between alignment and drift.


Why Precision Matters

  • Compounding effect: Small errors accumulate over time. Like four minutes a day becoming hours, days, and months of misalignment.
  • AI is literal: Models take inputs as ground truth. A vague definition or inconsistent label isn’t “good enough.” It’s an anchor point for bad predictions.
  • Trust is fragile: Once stakeholders see AI outputs wobble, confidence in the entire system erodes.
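
The compounding point can be checked against the astronomy example itself. A minimal sketch in plain Python, with day lengths rounded to the nearest second:

```python
# Sidereal vs. solar day: a ~4-minute gap that compounds.
SOLAR_DAY_S = 24 * 3600        # 86,400 seconds in a solar day
SIDEREAL_DAY_S = 86_164        # ~23 h 56 min 4 s, rounded to the second

daily_drift_s = SOLAR_DAY_S - SIDEREAL_DAY_S   # 236 seconds per day

# Cumulative drift if the difference were simply ignored:
for days in (1, 30, 365):
    drift_hours = days * daily_drift_s / 3600
    print(f"{days:>4} days -> {drift_hours:.1f} hours of drift")
```

After 365 days the ignored gap amounts to almost 24 hours, which is why the Earth completes one more sidereal rotation than there are solar days in a year. Small per-step errors in data pipelines compound the same way.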

The Needle Framework: Finding the Signal

Getting data right is about finding the needle in the haystack: the clear, sharp definition hidden among the fuzz. When you sharpen the data — consistent labels, correct units, precise categories — you give AI the equivalent of a sidereal day to lock onto. A stable reference point. A system that stays in sync instead of drifting.
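
To make “consistent labels” concrete, here is a deliberately small sketch in Python. The country labels and the CANON mapping are invented for illustration, not taken from any real dataset:

```python
# Hypothetical example: inconsistent labels fragment a simple aggregate.
from collections import Counter

raw = ["UK", "U.K.", "United Kingdom", "uk", "France", "FR"]

# Without normalisation, two countries show up as six categories:
print(Counter(raw))

# A canonical mapping (illustrative, not exhaustive) restores the signal:
CANON = {"uk": "GB", "u.k.": "GB", "united kingdom": "GB",
         "france": "FR", "fr": "FR"}

clean = [CANON[label.lower()] for label in raw]
print(Counter(clean))  # Counter({'GB': 4, 'FR': 2})
```

The same principle applies to units and category definitions: agree on one canonical form, and apply it before the data ever reaches the model.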


So What?

AI isn’t magic; it’s alignment. And alignment starts with data. Just as astronomers can’t afford to ignore the missing four minutes, companies can’t afford to wave away small inconsistencies. The cost of “close enough” is hidden drift.

The sharper your data, the sharper your AI. And that’s where the real value emerges.


Four minutes matter in astronomy. And they matter in AI. Get your data precise, and your systems won’t just work today — they’ll stay aligned tomorrow.

What AI Sees When It Looks at Your Business (And Why You Should Look First)

Before an investor opens your deck, before a customer compares you to a competitor, before a potential partner decides whether a meeting is worth their time — something has already been there. It has read what you’ve published, compared your model to thousands of others, compressed your entire value proposition into a handful of signals, and formed a view. That view now shapes what comes next, almost always without your involvement, and almost always before you’ve said a word.

Most business owners, when they hear this, assume the risk is that AI might get things badly wrong — a hallucination, a fabricated fact, a botched summary. In practice, that’s not where the real danger lives.

The problem isn’t when AI gets it wrong. It’s when AI gets it almost right

AI doesn’t just read what’s there. It has to resolve what it sees into a coherent version of your business. If your materials leave gaps — things implied but not clearly stated, logic that isn’t fully connected, assumptions that are never made explicit — it doesn’t pause and ask for clarification. It fills those gaps itself, confidently, based on what similar-looking businesses usually look like.

The result gets passed along. It becomes the mental model investors carry into the room, the one you’re now working against rather than building on. And because it sounds right — organised, plausible, no obvious red flags — it’s surprisingly hard to correct once it’s formed. You’re no longer defending your business model. You’re defending it against a version of your business model that someone else finds more believable.

This is the shift that matters: an untested assumption used to be an internal risk. A gap that might surface during due diligence, in a hard conversation with a board member, or in the market eventually. Now it becomes an external perception risk almost immediately — because AI reads your business, draws the inference you left implicit, and distributes that reading to anyone who asks, before you’re ever in the room.

What this looks like when you’re sitting in the meeting

Take a scenario most founders will recognise. A company positions itself around customer retention as its core value driver — it’s in the copy, the investor updates, the product narrative. But no one has been fully explicit about what retention actually means here: what the signal is, what the benchmark is, what the underlying mechanism relies on.

Externally, an AI has to decide what retention means in this context, and it decides based on what retention usually looks like in comparable businesses. So you walk into a conversation where someone has already formed a view. They’re not hostile. They’re not confused. They just have a slightly wrong picture in their head, and neither of you knows it yet. You spend the first twenty minutes of a meeting you needed to go well quietly realising you’re not building on a foundation — you’re correcting one.

The iGaming industry knows this problem intimately. A casino operator spots what looks like a valuable VIP segment: strong spend, consistent behaviour, a pattern that looks like genuine loyalty. They build reward structures and marketing investment around it. But the signal was distorted from the start: multiple accounts that were, in reality, the same person. The dashboard still looked fine. All the numbers still looked actionable. But the strategy was built on a false assumption, which meant wasted spend, misplaced confidence, and the long grind of trying to improve performance using data that was never telling the whole truth. The gap wasn’t obvious from the inside — it never is. But it would have been visible immediately to anyone reading the business from the outside.

The same process that exposes you can protect you

Here’s where most people stop — at the risk. But the more important realisation is that the same mechanism works in your favour, if you use it first.

A few teams have already worked this out. VENDOR.Energy, a deep tech company navigating complex investor due diligence, didn’t leave their interpretation to chance. They built a custom evaluation prompt: a structured set of instructions that tells any AI analysing them what to read first, in what order, and what conceptual framework to apply before drawing conclusions. They’re not waiting to be misread; they’re briefing the reader before it reads.

You don’t need to be in deep tech for this approach to work. The principle is the same for any business: use AI to analyse your own materials before anyone else does. Watch where it makes assumptions. Notice where it fills in gaps you didn’t know you’d left open. Find the assumption it confidently makes that isn’t actually true — and then decide whether to close that gap in your logic, your communications, or both.

The value of doing this isn’t primarily about controlling the message. It’s about seeing your business the way everyone else already is. Founders are too close to what they’ve built to spot the assumptions that are load-bearing but untested. An AI reading your materials has no such familiarity. It just reads what’s there, infers what isn’t, and hands you back a version of your business that might be the most honest outside perspective you’ve ever received.

What actually changes the outcome

Most instincts here run toward a content fix — better copy, a cleaner whitepaper, tighter messaging. Those things matter at the margin. But the more fundamental question is: what does this business actually depend on?

Almost every strategy is held together by a small number of assumptions. Often one that matters more than the rest — the thing that, if wrong, makes the rest of the logic stop holding. The purpose of finding it isn’t to write a better pitch. It’s to stress-test whether the commercial logic is actually sound, without the over-familiarity that comes from having built the thing yourself.

Once it holds under your own scrutiny, communication becomes straightforward. And once it’s communicated clearly, the reading that circulates — through AI tools, through analysts, through anyone who encounters your business before they meet you — is far more likely to be one you’d recognise.

That’s the real opportunity. Not reputation management. Not better positioning. The chance to see your own business more clearly than you have before, fix what doesn’t hold, and walk into every room knowing that the version of you that arrived first is one you shaped.

The only question is who reads your business first

For a long time, controlling the narrative meant being good at telling your story. That still matters. But the version of that control that holds up now, in a world where AI is reading your business before most humans do, comes from having a logic that’s clear enough and tested enough that it doesn’t depend on your presence to be understood correctly.

Your business is already being read, compared, and broken down at speed, by tools being used by the people whose decisions matter most to you. The only question is whether you’ve reviewed your own business first, and whether what you found made you stronger or just more surprised.

AI is a bubble that is not going to burst (but it’s still a bubble)

(written with help from ChatGPT-5)

Or: Why the “bubble” narrative around AI misses something deeper

The question keeps coming up: is artificial intelligence (AI) currently in a speculative bubble that’s destined to burst, or is this something more enduring — a transformational wave disguised in bubble clothing?

Let’s unpack the paradox: on one hand, many of the classic hallmarks of a bubble are present. On the other, there are structural, strategic and systemic factors that suggest this may be a bubble that isn’t just a bubble. Below is a needle-scan breakdown of key indicators.


1. Bubble-Warning Signals

These are the red flags: the parts of the story suggesting that, yes, this does look like a bubble.

  • Hype vs returns: Global investment into AI startups exceeded $50 billion in 2023, yet very few of these companies are profitable or generating recurring revenue (BuiltIn).
  • Valuations stretching history: Nvidia’s market cap crossed $2 trillion in 2024 — the fastest growth in tech history — stoking concerns that AI-driven valuation multiples are disconnected from current fundamentals.
  • Warnings from the top: OpenAI CEO Sam Altman himself has acknowledged, “Yes, it’s a bubble… and that’s OK.”
  • Concentration & fragility: As of mid-2025, the top five AI companies (Nvidia, Microsoft, Alphabet, Meta, and Amazon) control over 85% of the global AI compute infrastructure.
  • Speculative patterning: Startups with no product and mere “AI wrappers” around ChatGPT are raising millions in pre-seed funding, echoing dot-com era exuberance.
  • AI to the rescue? Not so fast. SVB cleverly asks in the title of one of its charts, “Chat, What’s Another Word for Bubble?”

2. But It’s Not Just a Bubble

Now the other side: what suggests AI is more than froth and fear.

  • Infrastructure build-out: $200B+ in projected spending on AI data centers between 2024 and 2027 (McKinsey). These aren’t ephemeral assets; they’re physical, long-term capital investments.
  • Government policy shifts: The EU, U.S., China, and UAE have all declared national AI strategies. The UK launched “Frontier AI Taskforce” with a £100M fund. These are state-level stakes.
  • Societal adoption: ChatGPT reached 100 million users in two months — at the time, the fastest adoption of any consumer app in history. Its underlying technology is now integrated into Office365, Shopify, Duolingo, and dozens of other platforms.
  • Cross-system integration: AI is now used in logistics, drug discovery, customer service, legal contracts, climate modeling, and more. It’s not one vertical; it’s multi-sectoral.
  • Decentralised movement: Projects like SingularityNET, Bittensor, and Fetch.ai aim to provide counterweights to centralized AI monopolies. Though small in market cap, their ideology is sticky and increasingly resonant.
  • Talent pipeline: Top universities report record-breaking enrolment in machine learning & data science tracks. MIT saw a 73% increase in AI-related thesis topics from 2021 to 2024.
  • Fed Chair Jerome Powell argues that, unlike the dotcom boom, AI spending isn’t a bubble: ‘I won’t go into particular names, but they actually have earnings’.

3. Why This Matters

Because how one interprets this moment drives strategy.

  • Treating AI as just a bubble? You risk ignoring long-term infrastructure and missing the strategic layer.
  • Treating AI as hype-free? You risk capital misallocation and being blindsided by volatility.

Instead, the correct lens may be dual-layered: short-term froth, long-term wave.


4. Key Signals That It’s Not Just a Bubble

Here are seven indicators that suggest AI is here for the long haul:

  1. $200B+ in AI infra spend (2024-2027) — Source: McKinsey
  2. 40+ nations with national AI plans — Source: OECD AI Policy Observatory
  3. 100M+ users for ChatGPT within 60 days of launch
  4. AI cited in 60%+ of S&P 500 earnings calls in 2024 (Goldman Sachs)
  5. 5,000+ AI-related job listings on LinkedIn UK in July 2025 alone
  6. AI + Crypto projects growing: Over $4.3B market cap in AI-token sector (CoinGecko, Q2 2025)
  7. Cross-sector resilience: AI use cases now span healthcare, finance, media, law, education, and urban planning

5. Strategy for Navigators

If you’re an investor, policymaker, or DeSci founder:

  • Use caution: Recognize speculative behaviour where it exists.
  • Track fundamentals: Focus on infrastructure, partnerships, developer traction.
  • Scan for decentralisation: Keep eyes on AI x Web3 convergence.
  • Measure what matters: User adoption, SDK integrations, compute dependencies, data partnerships.
  • Diversify bets: Don’t just follow LLMs and chips — track edge AI, tokenised compute, AI governance tools.

6. Final Thought

Yes — there is a bubble vibe to AI right now. The hype is real, the valuations are stretched, and not all will survive.

But it’s not just a bubble.

It’s a complex, layered, evolving ecosystem with speculative peaks but deep structural roots. Infrastructure, strategy, adoption and decentralisation all suggest this is not a passing moment. The wave has momentum.

“It may wobble, it may correct, it may reshape — but the foundations are being laid for a long-term wave, not just a feast followed by famine.”
