In March 2024, Google made a quiet change that should have woken up every PM and developer working on web performance: First Input Delay (FID) stopped existing as a Core Web Vital. In its place came Interaction to Next Paint (INP).
It wasn’t a surprise to anyone paying attention. Google gave a year’s notice. But in practice, it’s common to find teams still mentioning FID in performance meetings or running dashboards configured for a metric Google doesn’t even count anymore.
That’s not just technical lag. That’s effort spent on optimizations that don’t move the ranking needle anymore.
What Actually Changed in Core Web Vitals
Core Web Vitals are three metrics Google uses as user experience signals for rankings. Before March 2024, they were:
- LCP (Largest Contentful Paint): time until the largest visible element loads
- FID (First Input Delay): time between the user’s first interaction and the browser’s response
- CLS (Cumulative Layout Shift): visual stability of the page
Now they are:
- LCP — stays the same
- INP (Interaction to Next Paint) — replaces FID
- CLS — stays the same
The difference between FID and INP looks subtle. It’s not. It completely changes what you need to optimize.
Why FID Was a Weak Metric
FID only measured the user’s first interaction. Click a button? FID measured the delay until the browser started processing that click.
The problem: almost every site passed FID easily. Google's data showed 93% of sites already hit the "good" threshold (under 100ms). A metric nearly everyone passes tells you almost nothing.
Think about an e-commerce site: user loads the page (LCP good), clicks the first product (FID good), but when they try to add to cart, filter by price, or navigate between pages—everything locks up. FID saw nothing.
What INP Measures Differently
INP (Interaction to Next Paint) considers all interactions during the session, not just the first one. And it measures the time until the next visual frame renders—in other words, until the user sees something happen on screen.
The thresholds changed too:
- Good: up to 200ms
- Needs improvement: 200ms to 500ms
- Poor: above 500ms
Looks more forgiving than FID’s 100ms. In practice? Much harder to hit. Because now you need consistent responsiveness across the entire journey, not just on the first click.
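Those bands are easy to encode if you're bucketing RUM data yourself. A minimal sketch (`rateINP` is a hypothetical helper name, not part of any library):

```javascript
// Bucket an INP value (in milliseconds) into Google's rating bands.
// Good: <= 200ms. Needs improvement: <= 500ms. Poor: above 500ms.
function rateINP(ms) {
  if (ms <= 200) return "good";
  if (ms <= 500) return "needs-improvement";
  return "poor";
}

console.log(rateINP(180)); // "good"
console.log(rateINP(350)); // "needs-improvement"
console.log(rateINP(700)); // "poor"
```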
| FID (retired) | INP (current) |
| --- | --- |
| Measured only the first interaction | Measures all interactions |
| 100ms "good" threshold | 200ms "good" threshold |
| 93% of sites already passed | Far fewer sites pass |
| Ignored freezes after the first click | Captures the full experience |
| Easy to optimize point-in-time | Requires consistent performance |
The Real Impact on Brazilian Sites
In the audits I run, the pattern is clear: sites that breezed through FID are failing INP.
The usual suspects:
Heavy JavaScript on secondary interactions. Your analytics script loaded after the first interaction, so FID didn’t see it. But when the user clicks other elements, that script is already competing for resources. INP catches it.
Client-side frameworks misconfigured. React, Vue, Angular—any SPA that doesn’t manage hydration and re-renders well is going to suffer with INP. The first render might be fine, but each interaction after that triggers state reconciliation that blocks the main thread.
Third-party scripts stacked up. Chat widgets, retargeting pixels, A/B testing tools. Each one seems harmless alone. Together, they create a backlog of tasks the browser has to process on every interaction.
Unoptimized animations and transitions. CSS transitions forcing layout recalculation. Carousels not using will-change. Modals blocking the main thread when they open.
How to Diagnose an INP Problem
Before you go optimizing, you need to know if you actually have a problem. And where it is.
- Check the Core Web Vitals report in Search Console — these are real user data
- Use PageSpeed Insights with URLs from high-traffic pages, not just the homepage
- Install the Chrome Web Vitals extension and browse your site like a real user would
- Use Chrome DevTools > Performance to identify long tasks during interactions
- Verify your RUM (Real User Monitoring) is capturing INP, not just FID
Search Console is the most reliable source because it uses field data (real users), not lab data (automated tests). Many sites pass Lighthouse but fail field INP because synthetic tests don’t simulate the diversity of devices and connections in Brazil.
What to Optimize First
Once you’ve confirmed you have an INP problem, prioritization matters. It’s not about doing everything—it’s about doing what moves the needle.
1. Identify which interactions are problematic
Use DevTools to record a real usage session. Look for “long tasks” (anything over 50ms) that happen during clicks and taps. INP is determined by the worst case (or close to it), so you need to find the outliers.
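The "worst case (or close to it)" part can be made concrete. For sessions with many interactions, INP discards roughly one outlier per 50 interactions rather than taking the absolute worst. This is a simplified sketch of that selection logic (`estimateINP` is a hypothetical helper, not the browser's implementation):

```javascript
// Approximate how INP is derived from a session's interaction latencies:
// take the worst interaction, but skip one outlier for every 50 interactions.
function estimateINP(durations) {
  if (durations.length === 0) return 0;
  const sorted = [...durations].sort((a, b) => b - a); // worst first
  const outliersToSkip = Math.floor(sorted.length / 50);
  return sorted[Math.min(outliersToSkip, sorted.length - 1)];
}

// Few interactions: INP is simply the worst one.
console.log(estimateINP([40, 120, 310, 85])); // 310
```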
2. Break up long JavaScript tasks
If a function runs for 300ms, the browser can’t respond to anything during that time. The fix is to break it into smaller chunks using requestIdleCallback, setTimeout, or the browser’s Scheduler API. The user doesn’t need everything done—they just need to see something is happening.
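One common pattern for this, sketched below with hypothetical names (`yieldToMain`, `processInChunks`): run a slice of the work, yield to the main thread, repeat. `scheduler.yield()` is the newer API where supported; `setTimeout(0)` is the widely available fallback.

```javascript
// Yield control back to the main thread so the browser can handle
// pending input and paint between chunks of work.
function yieldToMain() {
  if (typeof scheduler !== "undefined" && scheduler.yield) {
    return scheduler.yield();
  }
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process a large list in small chunks instead of one long task.
async function processInChunks(items, handleItem, chunkSize = 50) {
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      handleItem(item);
    }
    await yieldToMain(); // let interactions through between chunks
  }
}
```

Instead of one 300ms task, the browser sees many short ones, and a click that lands mid-way gets handled at the next yield point.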
3. Cut down third-party script impact
Load non-critical scripts with defer or async. Consider loading some only after the first interaction. Chat widgets, for example, rarely need to be ready in the first second.
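Deferring until first interaction can be as simple as a run-once wrapper. A sketch, assuming a chat widget as the example (`createLazyLoader` and the URL are hypothetical, not a library API):

```javascript
// Wrap a loader so it runs exactly once, no matter which event fires first.
function createLazyLoader(load) {
  let loaded = false;
  return function onFirstInteraction() {
    if (loaded) return;
    loaded = true;
    load();
  };
}

// Browser wiring (sketch):
// const loadChat = createLazyLoader(() => {
//   const s = document.createElement("script");
//   s.src = "https://example.com/chat-widget.js"; // placeholder URL
//   s.async = true;
//   document.head.appendChild(s);
// });
// ["pointerdown", "keydown", "touchstart"].forEach((evt) =>
//   addEventListener(evt, loadChat, { once: true, passive: true })
// );
```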
4. Optimize event handlers
Handlers that do heavy synchronous work are the main culprits. Move expensive logic to web workers when possible. Use debounce on scroll and resize events. Avoid forcing layout synchronization (reading layout properties right after modifying the DOM).
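A trailing-edge debounce is the standard fix for chatty events. A minimal sketch (`updateLayout` in the usage comment is a hypothetical function):

```javascript
// Trailing-edge debounce: the handler runs only after `wait` ms of
// silence, so a burst of scroll/resize events triggers the work once.
function debounce(fn, wait) {
  let timer;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), wait);
  };
}

// Usage (browser): recompute layout at most once per burst of resizes.
// addEventListener("resize", debounce(() => updateLayout(), 150));
```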
5. Audit your hydration strategy
If you’re using SSR with frameworks like Next.js or Nuxt, hydration might be blocking interactions. Techniques like partial hydration, progressive hydration, or island architecture drastically cut INP in SPAs.
The Mistake of Optimizing for Lighthouse Instead of Real Users
It’s common to see teams celebrating a Lighthouse score of 100 while Search Console shows INP in the red.
Lab data (Lighthouse, PageSpeed Insights in lab mode) is useful for development. But Google uses field data (CrUX - Chrome User Experience Report) for rankings. They’re not the same thing.
If you only look at Lighthouse, you’re optimizing for a scenario that doesn’t represent your users.
When INP Shouldn’t Be Your Priority
Performance matters. But context matters more.
If your site is mostly static content (blog, corporate site, documentation), INP is probably already fine. There aren’t many complex interactions to measure. Your effort pays off more with LCP and CLS.
If competitors in your space also have poor INP, the competitive edge might be somewhere else. Google uses Core Web Vitals as a tiebreaker, not a dominant ranking factor. Relevant content still beats a fast page with weak content.
If most of your traffic comes from non-organic channels (paid, social, email), optimizing INP for SEO might not be the right lever.
I’m not saying ignore performance. I’m saying prioritize based on your context, not generic best practices.
What Changes in Your Strategy
Swapping FID for INP isn’t just a metric swap. It’s a mindset shift about what performance means.
With FID, the question was: “does the page load fast?” With INP, it’s: “does the page respond fast across the entire journey?”
That has real implications:
- Code splitting becomes more critical than total bundle size
- Lazy loading needs to be smarter to not cause delays during interactions
- Third-party scripts need continuous auditing, not just at implementation
- Performance tests need to simulate full user journeys, not just page load
- Monitoring needs field metrics, not just synthetic checks
Concrete Next Steps
If you read this far and realized your team still treats FID as gospel, here’s what to do tomorrow:
- Update your dashboards and reports to show INP instead of FID
- Check Search Console to understand your current field INP
- Identify the 3 highest-traffic pages with poor INP
- For each one, run DevTools Performance on a typical interaction
- Prioritize the most impactful long tasks and create specific tickets
Don’t try to fix everything at once. INP is about consistency, not perfection. Moving from “poor” to “needs improvement” already counts. Moving from “needs improvement” to “good” is the goal, but it doesn’t have to happen in the first sprint.
Author
Raphael Pereira
Designer & strategist focused on performance-led digital experiences.