Why This Matters
Web performance in 2025 directly affects your search rankings and user retention. Google’s Core Web Vitals are a ranking factor, and studies keep confirming that even 100ms delays hurt conversion rates. But let’s be honest: the real reason performance matters is simpler than that. Users don’t want to wait. Every second they spend staring at a loading spinner is a second they’re reconsidering whether your site is worth their time.
I spent a while optimizing my portfolio site with Astro 5 and Netlify, and the difference was noticeable immediately. Pages that felt “good enough” now feel instant. This article covers exactly what I did, why it works, and how you can do the same thing. No theoretical hand-waving, just practical techniques that ship.
🎯 What You'll Learn
- ✓ Server Islands for deferred rendering with skeleton fallbacks
- ✓ View Transitions API for SPA navigation without the JavaScript bloat
- ✓ Session based animation skipping for returning visitors
- ✓ Prefetching with the Speculation Rules API
- ✓ Edge caching strategies with Netlify
- ✓ Vite chunking for better browser caching
The Audit: What Was Actually Slow
Before throwing optimizations at the wall, I audited what was actually wrong. The answer surprised me. Some things I expected to be problems weren’t. Some things I thought were fine were actually terrible. Here’s what the audit revealed.
The hero animation played every single time someone visited the homepage. Navigate away, come back, watch the whole entrance animation again. It looked great the first time. By the third navigation back to home, it was annoying.
The view transitions ran at 300ms, which sounds fast on paper but feels sluggish in practice. Your brain knows when something should be instant, and 300ms is not instant.
The blog and projects pages were prerendered, which sounds like the right call for performance, but it created a problem. All the data had to be fetched before any HTML could be sent. Users saw nothing, then suddenly everything. No intermediate state, no progressive loading, just a blank screen followed by a complete page.
Links weren’t being prefetched at all, so every navigation required a full network round trip. And server-generated content had no cache headers, which meant every request hit the origin server even when the content hadn’t changed.
Prerendering sounds optimal for performance, but it can actually hurt perceived performance for content heavy pages. Users see nothing until the entire page is ready, whereas streaming can show the shell immediately while content loads.
Server Islands: Show Something Immediately
Astro 5 introduced Server Islands, and this was the single biggest improvement I made. The concept is simple. Instead of waiting for everything to render on the server before sending HTML, you send the page shell immediately and stream in the expensive parts when they’re ready. Users see a skeleton placeholder instead of a blank screen, and then the real content fades in.
Traditional SSR works like this: the server fetches all your data, renders the complete HTML, then sends it to the browser. The user sees nothing until everything is done. Server Islands flip that model. The page shell and static content render immediately. Dynamic sections render in parallel and stream to the client when ready. The psychological difference is huge. A skeleton that loads in 100ms and fills in over the next 500ms feels faster than a complete page that loads in 400ms, even though the total time is longer.
How Server Islands Work
Here’s what the before and after looks like in practice. The old approach blocked on data fetching:
---
// Before: Everything blocks on data fetching
import BlogListVue from "@/components/blog/BlogListVue.vue";
import { getCollection } from "astro:content";
const allBlogs = await getCollection("blogs"); // Blocks here!
// ... more data processing
---
<Layout>
<BlogListVue client:load posts={allBlogs} />
</Layout>
The new approach renders the shell immediately and defers the expensive part:
---
// After: Shell renders immediately, content streams in
import BlogListIsland from "@/components/blog/BlogListIsland.astro";
import BlogListSkeleton from "@/components/skeletons/BlogListSkeleton.astro";
---
<Layout>
<!-- Server island with skeleton fallback -->
<BlogListIsland server:defer>
<BlogListSkeleton slot="fallback" />
</BlogListIsland>
</Layout>
Building the Island Component
The trick is separating data fetching into a dedicated Astro component. This component does all the async work, and because it’s wrapped in server:defer, the main page doesn’t wait for it. The page shell ships immediately while this component renders in parallel on the server.
---
// src/components/blog/BlogListIsland.astro
import BlogListVue from "@/components/blog/BlogListVue.vue";
import { getCollection } from "astro:content";
// This data fetching happens in parallel, not blocking the main page
const allBlogs = await getCollection("blogs");
const publishedBlogs = allBlogs
.filter((blog) => blog.data.status === "published")
.sort((a, b) =>
new Date(b.data.publishedAt).getTime() -
new Date(a.data.publishedAt).getTime()
)
.map((blog) => ({
id: blog.id,
title: blog.data.title,
// ... map to serializable props
}));
const allTags = [...new Set(publishedBlogs.flatMap((blog) => blog.tags))].sort();
const allCategories = [...new Set(publishedBlogs.map((blog) => blog.category))].sort();
---
<BlogListVue
client:load
posts={publishedBlogs}
allTags={allTags}
allCategories={allCategories}
/>
Props passed to server islands must be serializable. No functions, class instances, or circular references. Astro supports plain objects, numbers, strings, Arrays, Maps, Sets, RegExp, Date, BigInt, URL, and typed arrays.
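To see where that boundary falls, here is a minimal sketch. The `toIslandProps` helper is hypothetical, not part of Astro; it just illustrates which values survive serialization and which must be stripped before being passed to an island:

```javascript
// Hypothetical helper: keep only prop values a server-island serializer can
// encode. Functions and symbols are dropped; Dates, Maps, Sets pass through.
function toIslandProps(obj) {
  const props = {};
  for (const [key, value] of Object.entries(obj)) {
    if (typeof value === "function" || typeof value === "symbol") continue;
    props[key] = value;
  }
  return props;
}

const props = toIslandProps({
  title: "My Post",
  publishedAt: new Date("2025-01-01"), // Date is supported
  render: () => "<p>hi</p>",           // functions are not, so this is dropped
});
// props now holds only { title, publishedAt }
```

In practice this is why the island component above maps collection entries to plain objects before handing them to the Vue component.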
Skeleton Loaders That Don’t Cause Layout Shift
Your skeleton needs to match the actual content layout. If the skeleton is 200px tall and the real content is 600px, you’ll get a jarring jump when the content loads. That’s Cumulative Layout Shift, and Google penalizes it. The fix is simple: design skeletons that occupy the same space as the real thing.
Here’s an example using shadcn’s Skeleton component. Notice how the structure mirrors the actual blog card grid:
---
// src/components/skeletons/BlogListSkeleton.astro
import { Skeleton } from "@/components/ui/skeleton";
---
<div class="space-y-8">
<!-- Search and filters skeleton -->
<div class="flex flex-col lg:flex-row gap-4 lg:items-center lg:justify-between">
<Skeleton class="h-10 w-full max-w-md rounded-lg" />
<div class="flex flex-wrap gap-2">
<Skeleton class="h-8 w-24 rounded-full" />
<Skeleton class="h-8 w-20 rounded-full" />
<Skeleton class="h-8 w-28 rounded-full" />
</div>
</div>
<!-- Blog cards grid -->
<div class="grid gap-6 md:grid-cols-2 lg:grid-cols-3">
{[1, 2, 3, 4, 5, 6].map(() => (
<div class="rounded-xl border border-border/50 bg-card/50">
<Skeleton class="aspect-video w-full" />
<div class="p-5 space-y-3">
<Skeleton class="h-6 w-full" />
<Skeleton class="h-4 w-5/6" />
<div class="flex gap-2">
<Skeleton class="h-5 w-14 rounded-full" />
<Skeleton class="h-5 w-18 rounded-full" />
</div>
</div>
</div>
))}
</div>
</div>
| Approach | Time to First Byte | Time to Interactive | User Experience |
|---|---|---|---|
| Prerender (SSG) | Instant (CDN) | Delayed (all or nothing) | Blank screen, then full page |
| Standard SSR | Slow (server processing) | Delayed | Blank screen while processing |
| Server Islands | Fast (shell only) | Progressive | Skeleton, then content streams in |
Server Islands provide the best perceived performance by showing content progressively
View Transitions That Feel Instant
Astro’s View Transitions API gives you SPA navigation without the JavaScript overhead of a full client side router. The browser handles the animation natively, which means smooth 60fps transitions without shipping a routing library. The catch is that the default configuration feels slow.
Faster Transition Timing
The default slide animation runs at 300ms. That doesn’t sound like much, but your brain notices. The fix is simple: drop it to 150ms for main content and 100ms for peripheral elements like the footer. The difference is immediately noticeable.
---
// src/layouts/Layout.astro
import { ClientRouter, fade } from "astro:transitions";
---
<head>
<!-- Enable client-side routing with view transitions -->
<ClientRouter />
</head>
<body>
<Navbar transition:persist />
<main transition:animate={fade({ duration: "0.15s" })}>
<slot />
</main>
<Footer transition:animate={fade({ duration: "0.1s" })} />
</body>
Persisting Stateful Components
The transition:persist directive tells Astro to keep a component’s DOM and state intact across navigations. This is exactly what you want for navigation bars, audio players, or form inputs. Without it, the navbar would re-render and potentially replay entrance animations on every page change.
<!-- The navbar won't re-render during page transitions -->
<Navbar transition:persist />
Use transition:persist for components that should maintain state. Use transition:name when you want to animate between two different elements that represent the same conceptual thing, like a blog card morphing into a blog header.
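As an illustrative sketch of that morph (the `cover-` naming scheme is made up for this example), give both elements the same transition:name and the browser animates one into the other during navigation:

```astro
<!-- Blog index: each card image gets a unique name -->
<img src={post.cover} alt="" transition:name={`cover-${post.id}`} />

<!-- Blog post page: the header image reuses the same name,
     so navigating morphs the card image into the header image -->
<img src={entry.data.cover} alt="" transition:name={`cover-${entry.id}`} />
```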
Skipping Animations for Returning Visitors
This was one of those changes that made me wonder why I didn’t do it sooner. The hero section has a nice entrance animation. Text fades in, elements slide up, the whole thing looks polished. The problem is that it plays every single time you visit the homepage. Navigate to the blog, come back, watch the full animation again. Navigate to contact, come back, animation again. It goes from impressive to annoying fast.
The fix uses session storage to track whether the animation has played. First visit in a session gets the animation. Every subsequent visit shows the content immediately. When the user closes the tab and comes back later, they get the animation again, which keeps the first impression fresh without being repetitive.
The Pattern
<script setup lang="ts">
import { ref, onMounted, nextTick } from 'vue';
const isReady = ref(false);
const skipAnimation = ref(false);
const HERO_ANIMATED_KEY = 'hero-animated-session';
onMounted(async () => {
// Check if animation has already played this session
const hasAnimated = sessionStorage.getItem(HERO_ANIMATED_KEY);
if (hasAnimated === 'true') {
// Skip animation - show content immediately
skipAnimation.value = true;
isReady.value = true;
return;
}
// First visit - play animation and mark as done
await nextTick();
requestAnimationFrame(() => {
requestAnimationFrame(() => {
isReady.value = true;
sessionStorage.setItem(HERO_ANIMATED_KEY, 'true');
});
});
});
</script>
<template>
<div :class="skipAnimation ? 'hero-instant' : (isReady ? 'hero-animate' : 'hero-initial')">
<!-- Hero content -->
</div>
</template>
<style scoped>
/* Instant state - no animation */
.hero-instant .hero-element {
opacity: 1;
transform: translateY(0);
}
/* Initial state - hidden, ready for animation */
.hero-initial .hero-element {
opacity: 0;
transform: translateY(20px);
}
/* Animate state - trigger staggered animations */
.hero-animate .hero-element {
animation: hero-fade-in-up 0.6s cubic-bezier(0.16, 1, 0.3, 1) forwards;
}
</style>
Why Session Storage Over Local Storage
I went with session storage deliberately. Session storage clears when the browser tab closes. That means users see the animation again in new sessions, which keeps the first impression fresh for people who haven’t visited in a while. Local storage persists forever, which means users might never see the animation again, even after months. That felt wrong for something designed to make a good first impression.
The double requestAnimationFrame call ensures the browser has painted the initial hidden state before the animation class is applied. Without it, the class change can land in the same frame as the first paint, so the entrance animation never visibly runs and the content just pops in.
Prefetching: Load Pages Before Users Click
Astro’s prefetching system, combined with the experimental Speculation Rules API, makes navigation feel instant. The idea is simple: when you can predict what page the user will visit next, start loading it before they click. By the time they actually navigate, the page is already in the browser cache.
Configuration
// astro.config.mjs
import { defineConfig } from 'astro/config';
export default defineConfig({
prefetch: {
prefetchAll: true,
defaultStrategy: 'viewport', // Prefetch when links enter viewport
},
experimental: {
clientPrerender: true, // Enable Speculation Rules API for Chrome
},
});
I went with viewport as the default strategy. That means any link that scrolls into view starts prefetching. For most sites, this is the sweet spot between aggressive prefetching and wasting bandwidth. If you have a massive site with hundreds of links per page, you might want to drop down to hover.
| Strategy | When It Triggers | Best For |
|---|---|---|
| hover | Mouse hovers over link | Most links (default) |
| tap | Just before click | Expensive pages, uncertain navigation |
| viewport | Link enters viewport | Important navigation, infinite scroll |
| load | Page load completes | Critical paths, small sites |
Choose prefetch strategies based on user intent signals and page cost
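With prefetchAll enabled you can still override the strategy per link using the data-astro-prefetch attribute. A few examples (the hrefs are illustrative):

```html
<!-- Cheap, likely destination: prefetch as soon as the page loads -->
<a href="/blog" data-astro-prefetch="load">Blog</a>

<!-- Expensive page, uncertain navigation: wait until just before the click -->
<a href="/dashboard" data-astro-prefetch="tap">Dashboard</a>

<!-- Rarely visited: opt this link out of prefetching entirely -->
<a href="/archive" data-astro-prefetch="false">Archive</a>
```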
The Speculation Rules API
For Chrome users, clientPrerender: true unlocks something even better than prefetching. The Speculation Rules API actually prerenders pages in a hidden background renderer. This goes beyond fetching the HTML. It executes JavaScript, loads images, the whole thing. When the user clicks, they’re not navigating to a new page, they’re just swapping in a page that already exists.
// Programmatic prefetching with eagerness control
import { prefetch } from 'astro:prefetch';
// Critical path - prerender immediately
prefetch('/getting-started', { eagerness: 'immediate' });
// Expensive page - be conservative
prefetch('/data-heavy-dashboard', { eagerness: 'conservative' });
// May not be visited - let browser decide
prefetch('/terms-of-service', { eagerness: 'moderate' });
The Speculation Rules API is Chrome only for now. Astro gracefully falls back to standard prefetching in other browsers. Safari requires proper cache headers for prefetching to work at all.
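Under the hood, what the browser consumes looks roughly like this. This is a hand-written sketch of the Speculation Rules format, not Astro’s exact generated output:

```html
<script type="speculationrules">
{
  "prerender": [
    {
      "where": { "href_matches": "/*" },
      "eagerness": "moderate"
    }
  ]
}
</script>
```

Chrome reads this JSON and decides when to prerender matching links based on the eagerness hint, which is the same knob the programmatic prefetch calls above expose.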
Edge Caching with Netlify
Even with all the rendering optimizations, network latency still matters. If every request hits your origin server, users far from your server region will have slow experiences. Edge caching solves this by serving content from nodes geographically close to users. The tricky part is getting the cache headers right.
The Cache Strategy
Different content types need different caching approaches. Static assets with hashed filenames can be cached forever because if the content changes, the filename changes. Dynamic content needs shorter caches with background revalidation. HTML pages should always check for updates but can use ETags for efficient validation.
# netlify.toml
# Immutable assets (hashed filenames) - cache forever
[[headers]]
for = "/_astro/*"
[headers.values]
Cache-Control = "public, max-age=31536000, immutable"
# Fonts - cache aggressively
[[headers]]
for = "/*.woff2"
[headers.values]
Cache-Control = "public, max-age=31536000, immutable"
# Images - long cache with background revalidation
[[headers]]
for = "/*.webp"
[headers.values]
Cache-Control = "public, max-age=86400, stale-while-revalidate=604800"
# Server islands - short cache, background refresh
[[headers]]
for = "/_server-islands/*"
[headers.values]
Cache-Control = "public, max-age=60, stale-while-revalidate=300"
# HTML pages - always revalidate
[[headers]]
for = "/*.html"
[headers.values]
Cache-Control = "public, max-age=0, must-revalidate"
What These Directives Mean
The immutable directive tells browsers the resource will never change. Don’t bother revalidating, don’t send conditional requests, just use the cached version until it expires. This is perfect for hashed assets because the filename itself is the version. The stale-while-revalidate directive lets browsers serve stale content immediately while fetching a fresh version in the background. Users get instant responses, and the cache stays fresh. The max-age=0, must-revalidate combo says always check for updates, but use ETags or Last-Modified headers for efficient validation instead of downloading the whole thing again.
Server islands fetch via GET with encrypted props in the query string, making them cacheable. Set short max-age with longer stale-while-revalidate to balance freshness with speed.
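The timing math is easier to see in code. A minimal sketch of how a cache classifies a response by its age against those two directives, using the server-island values from the config above:

```javascript
// Classify a cached response by age against max-age and stale-while-revalidate.
function cacheState(ageSeconds, maxAge, swr) {
  if (ageSeconds <= maxAge) return "fresh";       // serve from cache, no request
  if (ageSeconds <= maxAge + swr) return "stale"; // serve stale now, revalidate in background
  return "expired";                               // must revalidate before serving
}

// Server island headers: max-age=60, stale-while-revalidate=300
cacheState(30, 60, 300);  // "fresh"   - within max-age
cacheState(120, 60, 300); // "stale"   - served instantly, refreshed behind the scenes
cacheState(400, 60, 300); // "expired" - past the SWR window, blocking refetch
```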
Vite Build Optimization
Code splitting is one of those things that sounds complicated but has a simple goal: keep frequently changing code separate from rarely changing code. When you update your app logic, you don’t want to invalidate the cached copy of Vue or your icon library. Users should only download what actually changed.
Vite’s manual chunks configuration lets you control exactly how code gets split:
// astro.config.mjs
import { defineConfig } from 'astro/config';
export default defineConfig({
vite: {
build: {
rollupOptions: {
output: {
manualChunks: {
'vue-vendor': ['vue', '@vueuse/core'],
'lucide': ['lucide-vue-next'],
},
},
},
},
},
});
This configuration does two things. It separates Vue’s runtime from your application code. Vue updates are rare, so this chunk stays cached across deployments. It also isolates the icon library. Lucide icons are surprisingly large in aggregate, and keeping them separate prevents cache busting when you change unrelated code. The end result is that most deployments only require users to download your actual application changes, not the entire bundle.
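If the object form gets unwieldy, Rollup also accepts manualChunks as a function that maps module ids to chunk names. A sketch mirroring the config above (the path checks are illustrative):

```javascript
// Function form of Rollup's manualChunks: inspect the module id and return a
// chunk name. Returning undefined lets Rollup chunk the module normally.
function manualChunks(id) {
  if (id.includes("node_modules/lucide-vue-next")) return "lucide";
  if (id.includes("node_modules/vue") || id.includes("node_modules/@vueuse")) {
    return "vue-vendor";
  }
}

manualChunks("/app/node_modules/vue/dist/vue.runtime.esm-bundler.js"); // "vue-vendor"
manualChunks("/app/src/components/Hero.vue");                          // undefined
```

The function form scales better once you have more than a handful of vendor groups, since one id test can cover an entire dependency family.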
The Results
After implementing all of this, I measured the improvements using Lighthouse and real world testing. The numbers were better than I expected:
| Metric | Before | After | Improvement |
|---|---|---|---|
| Time to First Byte (TTFB) | ~400ms | ~120ms | 70% faster |
| First Contentful Paint (FCP) | ~1.2s | ~0.4s | 67% faster |
| Largest Contentful Paint (LCP) | ~2.1s | ~0.9s | 57% faster |
| Cumulative Layout Shift (CLS) | 0.12 | 0.02 | 83% reduction |
| Page Navigation (perceived) | ~500ms | ~100ms | Feels instant |
Real world performance improvements measured on the production site
The most interesting result isn’t in that table. It’s the perceived navigation time. Pages don’t just load faster, they feel instant. That’s the combination of prefetching, fast view transitions, and server islands working together. The skeleton appears immediately, content streams in progressively, and by the time you’re thinking about clicking the next link, it’s already prefetched.
Quick Reference
🎯 Performance Optimization Checklist
- ✓ Use Server Islands for any page with data fetching, show skeletons immediately
- ✓ Reduce view transition durations to 100 to 150ms for snappier feel
- ✓ Implement session based animation skipping for returning visitors
- ✓ Enable viewport prefetching and the Speculation Rules API
- ✓ Configure proper cache headers: immutable for hashed assets, stale-while-revalidate for dynamic content
- ✓ Split vendor chunks to improve long term caching
- ✓ Always design skeletons that match actual content layout to prevent CLS
Wrapping Up
Perceived performance matters more than raw metrics. Users don’t care about your TTFB if they’re staring at a blank screen. The principle behind everything here is the same: show something immediately, then progressively enhance. Server islands show skeletons while content loads. View transitions animate between states instead of flashing. Prefetching loads pages before users click. Cache headers serve content from the edge instead of the origin.
None of these techniques are particularly complicated on their own. The value comes from combining them. Start with server islands on your heaviest pages. Add proper cache headers. Enable prefetching. Reduce your transition durations. Each improvement compounds on the others.
Don’t implement all of this at once. Ship one change, measure the impact with Lighthouse, Web Vitals, and real user monitoring, then let those numbers guide the next one.
Further Reading
If you want to dig deeper into any of these topics, here are the official docs:
- Astro Server Islands Documentation
- Astro View Transitions Guide
- Astro Prefetch Configuration
- Netlify Caching Headers
- Chrome Speculation Rules API
- Web.dev Core Web Vitals
- Streaming HTML for Performance