Cloudflare R2 Presigned URLs: Direct Browser Uploads That Bypass Server Limits
When building FlowSub, I hit a wall: Vercel's serverless functions have a 4.5MB request body limit. Our users upload video files that can be 500MB+. The standard approach — upload to the server, then to storage — simply doesn't work.
The Problem
Serverless platforms like Vercel enforce request body size limits. For Vercel Pro, that's 4.5MB. Video files are typically 10MB–500MB. Even short 30-second clips exceed the limit.
The Solution: Presigned URLs
Cloudflare R2 (S3-compatible storage) supports presigned URLs. The flow:
1. Browser requests an upload URL from our API
2. Server generates a presigned PUT URL scoped to a specific object key
3. Browser uploads directly to R2 using the presigned URL
4. R2 accepts the file — no server body limits involved
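The browser side of this flow can be sketched in a few lines. Note the route path and the `{ url, key }` response shape are illustrative assumptions, not FlowSub's actual API contract:

```typescript
// Hypothetical client-side sketch of the flow above.
// Assumes our API route responds with { url, key } as JSON.
async function uploadDirect(
  file: Blob & { name: string; type: string } // a browser File satisfies this
): Promise<string> {
  // 1. Ask our API for a presigned PUT URL
  const res = await fetch("/api/upload", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ fileName: file.name, fileType: file.type }),
  });
  if (!res.ok) throw new Error(`Could not get upload URL: ${res.status}`);
  const { url, key } = (await res.json()) as { url: string; key: string };

  // 2. PUT the file straight to R2 — the bytes never touch our server
  const put = await fetch(url, {
    method: "PUT",
    headers: { "Content-Type": file.type }, // must match the signed ContentType
    body: file,
  });
  if (!put.ok) throw new Error(`R2 upload failed: ${put.status}`);
  return key; // hand the object key back to the API in the next step
}
```

The `Content-Type` header on the PUT must match the `ContentType` that was signed into the URL, or R2 rejects the request with a signature mismatch.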
Implementation
The upload endpoint (`src/app/api/upload/route.ts`) generates a presigned URL using the AWS SDK:
```typescript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

// R2 is S3-compatible, so the standard AWS client works;
// only the account-specific endpoint changes.
const r2 = new S3Client({
  region: "auto",
  endpoint: `https://${process.env.CF_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});

const url = await getSignedUrl(
  r2,
  new PutObjectCommand({
    Bucket: "flowsub-uploads",
    Key: `uploads/${jobId}/${fileName}`,
    ContentType: fileType,
  }),
  { expiresIn: 3600 } // the signed URL is valid for one hour
);
```
Why This Architecture Works
The server's only job is to sign a request; the file bytes travel from the browser straight to R2. Because each presigned URL is scoped to a single object key and content type and expires after an hour, handing it to the client doesn't expose the bucket. The serverless function only ever sees a small JSON request, so the 4.5MB body limit never comes into play.
The Full Pipeline
1. User selects a file → browser requests presigned URL
2. Browser uploads to R2 directly
3. On completion, browser notifies our API with the object key
4. API triggers transcription: FFmpeg reads the file from its R2 URL and streams the audio to Groq Whisper
5. Result stored back in R2, subtitles served to user
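Step 3 of the pipeline above is a small completion callback. As before, the `/api/complete` route name and payload shape are illustrative assumptions:

```typescript
// Hypothetical sketch: after the PUT to R2 succeeds, the browser reports
// the object key so the server can kick off transcription.
async function notifyUploadComplete(jobId: string, key: string): Promise<void> {
  const res = await fetch("/api/complete", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jobId, key }),
  });
  if (!res.ok) throw new Error(`Failed to start transcription: ${res.status}`);
}
```

This callback is the only moment the server hears about the file at all, and it carries just a key, not the file itself.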
This architecture is what lets FlowSub process 50,000+ jobs per week without breaking a sweat.