
To handle file uploads in Next.js 15, send small files (under 4MB) through a Route Handler or Server Action that reads FormData, and offload anything larger to a pre-signed URL the browser PUTs directly to S3, Cloudflare R2, or Tigris. The server never touches the bytes for big files; it only signs the URL, validates the metadata, and records the object key in your database.
That two-track pattern (server-side for small, client-direct for large) is the only thing that scales without surprise bills, surprise timeouts, or surprise truncation. Everything else in this post is a variation on it.
File uploads used to be a backend problem. In a Next.js app, they are a serverless problem, a streaming problem, an egress problem, and a quota problem all at once. The defaults bite hard.
Vercel's serverless functions cap request bodies at 4.5MB on the Hobby and Pro plans. AWS Lambda caps synchronous payloads at 6MB. A user dragging a 30MB MP4 into your form will hit a 413 before your validation code runs. And on Next.js 15.5+, there is a quieter failure mode: the new internal proxy silently truncates binary FormData over 1MB unless you set `proxyClientMaxBodySize` AND `serverActions.bodySizeLimit` in `next.config.js` (the fix officially landed in Next.js 16, per the GitHub discussion). You can ship a feature, watch QA upload a 2MB image, and find a 50-byte file in your bucket the next day.
Fixing all of this is a 2-day job for an engineer who has done it before, and a 2-week job for an engineer who hasn't.
Most Next.js tutorials show this pattern:
```ts
// app/api/upload/route.ts
import fs from 'node:fs/promises'

export async function POST(req: Request) {
  const formData = await req.formData()
  const file = formData.get('file') as File
  const buffer = Buffer.from(await file.arrayBuffer())
  await fs.writeFile(`./uploads/${file.name}`, buffer)
  return Response.json({ ok: true })
}
```
This works on `next dev`. It breaks the moment you deploy:

- `./uploads/` does not persist on Vercel, AWS Lambda, or Cloudflare Workers. Your file is gone the instant the function returns.
- `file.type` comes from the client. The client lied. You just stored an `.exe` named `cat.png`.

The fix is not "make the function bigger." The fix is to stop putting the file through your function.
If the file is under 4MB and you need to do something with it server-side immediately (parse a CSV, generate a thumbnail, send to OpenAI), use a Route Handler or Server Action. If it is larger than 4MB or you don't need to touch the bytes, use a pre-signed URL. Pick one, document it, move on.
```ts
// app/api/upload/route.ts
import { fileTypeFromBuffer } from 'file-type'

export const runtime = 'nodejs'
export const maxDuration = 30

export async function POST(req: Request) {
  const form = await req.formData()
  const file = form.get('file')
  if (!(file instanceof File)) return new Response('bad request', { status: 400 })
  if (file.size > 4 * 1024 * 1024) return new Response('too large', { status: 413 })

  const buf = Buffer.from(await file.arrayBuffer())
  const sniffed = await fileTypeFromBuffer(buf)
  const allowed = ['image/png', 'image/jpeg', 'image/webp']
  if (!sniffed || !allowed.includes(sniffed.mime)) {
    return new Response('unsupported type', { status: 415 })
  }

  // ... store buf in S3 / R2 / Blob ...
  return Response.json({ ok: true })
}
```
The file-type package reads the first few bytes (magic numbers) and tells you what the file actually is, not what the client claimed it was. That single check kills 90% of the abuse vectors. The same library is what Tigris and UploadThing use under the hood for their own validation.
The browser asks your server for a URL. The server signs it (10 lines of code with the AWS SDK). The browser PUTs the bytes directly to the bucket. Your function does maybe 50ms of work and never touches the file body.
```ts
// app/api/sign-upload/route.ts
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3'
import { getSignedUrl } from '@aws-sdk/s3-request-presigner'

const s3 = new S3Client({
  region: 'auto',
  endpoint: process.env.R2_ENDPOINT, // or omit for AWS S3
  credentials: { accessKeyId: process.env.R2_KEY!, secretAccessKey: process.env.R2_SECRET! },
})

export async function POST(req: Request) {
  const { filename, contentType } = await req.json()
  const key = `uploads/${crypto.randomUUID()}-${filename}`
  const url = await getSignedUrl(
    s3,
    new PutObjectCommand({ Bucket: 'my-bucket', Key: key, ContentType: contentType }),
    { expiresIn: 60 }
  )
  return Response.json({ url, key })
}
```
Client side:
```ts
const { url, key } = await fetch('/api/sign-upload', {
  method: 'POST',
  body: JSON.stringify({ filename: file.name, contentType: file.type }),
}).then(r => r.json())

await fetch(url, { method: 'PUT', body: file, headers: { 'Content-Type': file.type } })
```
This pattern works identically across S3, Cloudflare R2, Tigris, MinIO, Backblaze B2, and Wasabi. Swap the endpoint, keep the code. The same idea applies whenever you would otherwise pass user data through your server unnecessarily, similar to the hot-path thinking in REST API design for 2026.
Above ~100MB, a single PUT is fragile. Use multipart. The S3 SDK exposes CreateMultipartUploadCommand, then you sign each part URL (UploadPartCommand), the client PUTs parts in parallel, and finishes with CompleteMultipartUploadCommand. Tigris and R2 both speak the S3 multipart API verbatim.
The library @aws-sdk/lib-storage (Upload class) handles this automatically if you proxy through your server, but for true client-direct uploads you sign the parts yourself or use a managed wrapper like UploadThing or Uppy.
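If you do sign parts yourself, the byte-range bookkeeping is the piece worth getting right first. The sketch below (a hypothetical `planParts` helper, not part of any SDK) shows what the S3 multipart API expects: 1-based part numbers, and every part except the last at least 5MB, so an 8MB part size stays safely above the floor.

```typescript
// Plan byte ranges for an S3 multipart upload. Part numbers are
// 1-based per the S3 API; all parts except the last must be >= 5MB.
const PART_SIZE = 8 * 1024 * 1024

interface PartPlan {
  partNumber: number // goes into UploadPartCommand's PartNumber
  start: number      // inclusive byte offset, for file.slice(start, end)
  end: number        // exclusive byte offset
}

function planParts(fileSize: number): PartPlan[] {
  const parts: PartPlan[] = []
  for (let start = 0, n = 1; start < fileSize; start += PART_SIZE, n++) {
    parts.push({ partNumber: n, start, end: Math.min(start + PART_SIZE, fileSize) })
  }
  return parts
}
```

On the client, each entry becomes a `file.slice(start, end)` PUT to its signed part URL; the ETags that come back are what `CompleteMultipartUploadCommand` needs, in part-number order.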
Native fetch does not expose upload progress. Use XMLHttpRequest with upload.onprogress, or the axios onUploadProgress callback, or the tus protocol for resumability. For most apps, XHR + a progress bar is enough. For heavy use (video editors, CAD tools), tus + uppy is worth the extra week of integration.
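A minimal sketch of the XHR route, assuming the pre-signed URL flow above (the `uploadWithProgress` name and callback shape are illustrative, not a library API):

```typescript
// Whole-number percentage, guarding against an unknown total
// (when Content-Length is missing, e.lengthComputable is false).
function percentDone(loaded: number, total: number): number {
  return total > 0 ? Math.round((loaded / total) * 100) : 0
}

// PUT a file to a pre-signed URL, reporting progress via callback,
// wrapped in a Promise so it composes with async/await.
function uploadWithProgress(url: string, file: File, onProgress: (pct: number) => void): Promise<void> {
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest()
    xhr.open('PUT', url)
    xhr.setRequestHeader('Content-Type', file.type)
    xhr.upload.onprogress = (e) => {
      if (e.lengthComputable) onProgress(percentDone(e.loaded, e.total))
    }
    xhr.onload = () => (xhr.status < 300 ? resolve() : reject(new Error(`upload failed: ${xhr.status}`)))
    xhr.onerror = () => reject(new Error('network error during upload'))
    xhr.send(file)
  })
}
```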
Don't make scanning synchronous. Drop the object in a pending/ prefix, fire an SQS / Inngest / QStash event, scan with ClamAV (or a hosted service like Cloudmersive), and atomically move to clean/ on pass or quarantine/ on fail. The user sees an "uploaded, processing" state. This is the same async-job thinking you'd use for adding rate limiting to an API, where you separate the cheap signed handshake from the expensive backend work.
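The storage half of that pipeline is a copy-then-delete under a new prefix, since S3 has no rename. The hypothetical `promoteKey` helper below shows just the key rewrite; the worker would pair it with the SDK's `CopyObjectCommand` and `DeleteObjectCommand`:

```typescript
// Rewrite a pending/ key to its post-scan prefix. The object itself
// moves via CopyObjectCommand + DeleteObjectCommand in the worker.
function promoteKey(pendingKey: string, verdict: 'clean' | 'quarantine'): string {
  if (!pendingKey.startsWith('pending/')) {
    throw new Error(`not a pending object: ${pendingKey}`)
  }
  return `${verdict}/${pendingKey.slice('pending/'.length)}`
}
```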
Once the file is in storage, serve it through next/image with a loader pointing at your CDN (Cloudflare Images, Imgix, Cloudinary, or Vercel's built-in optimizer). Don't store 12 resized variants in your bucket. Store the original; resize on the fly at the edge. R2 has a Cloudflare Images binding that does this for $5 per million transformations.
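With Cloudflare's resizing, the `next/image` loader is a few lines; a sketch, where `media.example.com` is a placeholder for your own zone:

```typescript
// lib/imageLoader.ts — custom loader for next/image pointing at
// Cloudflare image resizing (the /cdn-cgi/image/ URL format).
// 'media.example.com' is a placeholder zone; swap in your own.
export function cloudflareLoader({ src, width, quality }: { src: string; width: number; quality?: number }): string {
  const params = `width=${width},quality=${quality ?? 75},format=auto`
  return `https://media.example.com/cdn-cgi/image/${params}/${src}`
}
```

Wire it up with `images: { loader: 'custom', loaderFile: './lib/imageLoader.ts' }` in `next.config.js`, and every `<Image>` in the app resizes at the edge instead of from pre-generated variants.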
If you don't want to wire any of the above by hand, three services dominate the Next.js space in 2026.
| Service | Storage | Egress | Free tier | Best for |
|---|---|---|---|---|
| Vercel Blob | $0.023/GB | $0.05/GB outbound | 1GB storage, 10GB bandwidth | Teams already on Vercel Pro |
| Cloudflare R2 | $0.015/GB | $0 | 10GB storage, 1M Class A ops | High-traffic media, downloads |
| Tigris | $0.02/GB | $0 | 5GB storage | S3-compat with global replication |
| UploadThing | bundled | bundled | 2GB | Fastest path to a working dropzone |
The egress column is where bills are made or saved. At 1TB of monthly bandwidth (a modest podcast or video site), Vercel Blob runs about $50/month in egress alone. R2 runs $0. At 10TB you're comparing $500 to $0. R2's tradeoff is that Class A writes cost $4.50 per million and Class B reads cost $0.36 per million, so a workload of millions of tiny objects can flip the math, but for typical user uploads R2 wins on cost by an order of magnitude.
UploadThing is the fastest "I have a dropzone in production by lunchtime" path. Their React component handles auth, signing, progress, and validation in about 20 lines. The tradeoff is that you are renting their infrastructure and pricing model; migrating off later means rewriting the upload flow.
Vercel Blob's killer feature is the @vercel/blob/client SDK, which handles the signed-URL dance for you and bypasses the 4.5MB body limit automatically. If you are already paying for Vercel Pro and your egress is modest, it is the path of least resistance.
- Trusting `file.type` from the browser. The MIME string in FormData is whatever the client sent. Always sniff with `file-type`. Symptom: random `application/octet-stream` or wildly wrong types in your bucket logs.
- Missing bucket CORS. The bucket must allow `PUT`, `Content-Type`, and your origin. Symptom: signed URL works in curl, fails in the browser with a CORS error.
- Setting `serverActions.bodySizeLimit` but not `proxyClientMaxBodySize`. On Next.js 15.5+, your 10MB upload silently arrives as 1MB of garbage. Symptom: tiny corrupt files in production, works fine locally.
- Guessable object keys like `uploads/1.jpg`, `uploads/2.jpg`, etc. Use UUIDs in the key and serve via signed GET URLs or a CDN with token auth.

If your app uploads avatars and PDFs at low volume (say, under 100 uploads a day, all under 5MB), Vercel Blob with the client SDK is a 2-hour job and you should not over-engineer it. Skip pre-signed URLs, skip multipart, skip the queue. Add them when traffic forces you to.
Same for virus scanning. If your uploads are user avatars displayed only to the uploading user, the blast radius of a malicious file is one user. If they are documents shared across an organization or rendered for other users, scanning is non-negotiable. Match the controls to the threat model, not to the checklist.
- `file-type` sniffing on the server. Always.
- `proxyClientMaxBodySize` and `serverActions.bodySizeLimit` in `next.config.js` if you are on Next.js 15.5+.

If your team has never built this before, the rollout typically takes a week and produces three subtle bugs in production over the next month. Every engineer on Cadence is AI-native by default, vetted on Cursor, Claude Code, and Copilot fluency before they unlock bookings, and a senior at $1,500/week will usually ship the full pre-signed URL pipeline (S3 client, validation, CORS, lifecycle rules, progress UI) inside a 48-hour trial. Cadence's median time to first commit across 12,800 vetted engineers is 27 hours, so you'll see the first PR before the trial ends.
Want a second opinion on your upload stack before you ship? Run Ship-or-Skip for a free, honest grade on the architecture, or book a senior Next.js engineer for a 48-hour trial. Replace any week, no notice period.
The same principle that makes uploads sane (offload anything you don't need to touch) is the principle behind every other API design best practice for 2026: your server's job is to coordinate, not to carry. File bytes are coordination's worst nightmare. Sign the URL and get out of the way.
By default, 1MB. You can raise it with serverActions: { bodySizeLimit: '10mb' } in next.config.js, but on Next.js 15.5+ you also need experimental: { proxyClientMaxBodySize: '10mb' } or the proxy will silently truncate binary data. For anything over 4MB, prefer a pre-signed URL upload to S3 or R2 instead of pushing through the action.
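As a sketch (option names per the discussion above; verify them against your exact Next.js version, since the proxy fix landed in 16):

```javascript
// next.config.js
module.exports = {
  experimental: {
    serverActions: { bodySizeLimit: '10mb' }, // raises the Server Action body cap
    proxyClientMaxBodySize: '10mb',           // stops the 15.5+ proxy truncating binary bodies
  },
}
```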
R2 if you serve a lot of bandwidth (R2 has $0 egress, Vercel Blob is $0.05/GB). Vercel Blob if you want the simplest possible integration on a Vercel deployment and your egress is under 100GB/month. UploadThing if you want a working dropzone in an afternoon and don't mind a managed pricing model.
If users only see their own uploads (avatars, personal documents): probably no, the blast radius is one user. If uploads are shared across users or rendered in other browsers (forum attachments, shared workspaces): yes, scan asynchronously after the file lands in a pending/ prefix, then move to clean/ on pass.
Native fetch doesn't expose upload progress. Use XMLHttpRequest with xhr.upload.onprogress, the axios client's onUploadProgress callback, or the tus-js-client library if you also need resumability. For multi-gigabyte uploads (video, CAD), use Uppy with a tus server or the multipart S3 API with parallel part uploads.
Vercel Blob: about $50/month in egress ($0.05/GB times 1024GB) plus storage. Cloudflare R2: $0 egress, plus $15/month for 1TB of storage and pennies in operations. The crossover gets steeper as you grow: at 10TB/month, R2 saves you roughly $500/month against Vercel Blob and roughly $900/month against raw AWS S3.