
Large File Uploads

Upload files up to 5 GB using a three-step presigned URL flow that bypasses the standard 10 MB request limit.

When to use this

File size            Upload path
≤ 7 MB               Regular POST /upload/new with base64 content
> 7 MB and ≤ 5 GB    The presigned URL flow documented here
> 5 GB               Split into multiple uploads, or use the S3-compatible API with multipart
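In client code, the table above reduces to a simple size check. A minimal sketch (the thresholds come from the table; the function name and return labels are illustrative, not part of the API):

```javascript
// Pick an upload path from the file size, per the table above.
const SMALL_LIMIT_MB = 7;        // ceiling for base64 POST /upload/new
const LARGE_LIMIT_MB = 5 * 1024; // ceiling for the presigned URL flow (5 GB)

function chooseUploadPath(sizeMB) {
  if (sizeMB <= SMALL_LIMIT_MB) return "base64";    // regular POST /upload/new
  if (sizeMB <= LARGE_LIMIT_MB) return "presigned"; // the flow documented here
  return "multipart";                               // split, or S3-compatible API
}

console.log(chooseUploadPath(52.86)); // "presigned"
```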

Why not a single request?

The single-step /upload/new endpoint runs through AWS API Gateway, which caps request payloads at 10 MB — roughly 7.5 MB of raw file once base64 encoding is accounted for. The presigned URL lets your client upload the file binary directly to S3 storage, bypassing that limit entirely.

How It Works

┌─────────┐   1. POST /upload/init   ┌───────────┐
│ Client  │ ─────────────────────→   │    API    │
│         │ ←────────────────────    │  Gateway  │
│         │   uploadId + uploadUrl   └───────────┘
│         │
│         │   2. PUT binary          ┌───────────┐
│         │ ─────────────────────→   │    S3     │
│         │ ←────────────────────    │  staging  │
│         │   200 OK                 └───────────┘
│         │
│         │   3. POST /upload/complete/{uploadId}
│         │ ─────────────────────→   ┌───────────┐
│         │                          │  Lambda   │
│         │                          │  safety + │
│         │                          │  IPFS     │
│         │ ←────────────────────    └───────────┘
│         │   { cid, sizeMB, uris }
└─────────┘
  1. Init — client tells us how big the file is; we return a time-limited presigned S3 URL
  2. Upload — client PUTs the raw binary directly to S3 (no API Gateway involved)
  3. Complete — we read the file from S3, run safety scans, upload to IPFS, and record it in your account

Step by Step

Step 1: Initialize the upload

Tell the API how big your file is, along with any metadata.

bash
curl -X POST https://api.ipfs.ninja/upload/init \
  -H "X-Api-Key: bws_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "sizeMB": 52.86,
    "contentType": "video/mp4",
    "description": "My large video file"
  }'

Response:

json
{
  "uploadId": "a3f1b2c4-d5e6-47f8-8901-2a3b4c5d6e7f",
  "uploadUrl": "https://s3.amazonaws.com/staging/uploads/...?X-Amz-Signature=...",
  "expiresIn": 3600,
  "maxSizeBytes": 5368709120
}

Step 2: Upload the binary to the presigned URL

The uploadUrl is valid for 1 hour. PUT your raw file binary there; no auth header is needed because the URL is pre-signed.

bash
curl -X PUT "<uploadUrl from step 1>" \
  -H "Content-Type: video/mp4" \
  --data-binary @my-video.mp4

Step 3: Complete the upload

Tell us the upload is done. We'll fetch it from S3, safety-scan it, push it to IPFS, and return the CID.

bash
curl -X POST "https://api.ipfs.ninja/upload/complete/a3f1b2c4-d5e6-47f8-8901-2a3b4c5d6e7f" \
  -H "X-Api-Key: bws_your_api_key"

Response:

json
{
  "cid": "bafybeig...",
  "sizeMB": 52.86,
  "fileType": "video",
  "uris": {
    "ipfs": "ipfs://bafybeig...",
    "url": "https://ipfs.ninja/ipfs/bafybeig..."
  }
}

Full JavaScript Example

javascript
import fs from "fs";

const API = "https://api.ipfs.ninja";
const API_KEY = "bws_your_api_key";

async function uploadLargeFile(filePath, description) {
  const stats = fs.statSync(filePath);
  const sizeMB = stats.size / (1024 * 1024);
  const contentType = filePath.endsWith(".mp4") ? "video/mp4" :
                      filePath.endsWith(".pdf") ? "application/pdf" :
                      "application/octet-stream";

  // 1. Init
  const initRes = await fetch(`${API}/upload/init`, {
    method: "POST",
    headers: {
      "X-Api-Key": API_KEY,
      "Content-Type": "application/json"
    },
    body: JSON.stringify({ sizeMB, contentType, description })
  });
  if (!initRes.ok) throw new Error(`Init failed: ${initRes.status}`);
  const { uploadId, uploadUrl } = await initRes.json();
  console.log(`Got upload URL, uploading ${sizeMB.toFixed(1)} MB...`);

  // 2. Upload binary directly to S3
  // (note: this buffers the whole file in memory; consider streaming for multi-GB files)
  const fileBuffer = fs.readFileSync(filePath);
  const putRes = await fetch(uploadUrl, {
    method: "PUT",
    headers: { "Content-Type": contentType },
    body: fileBuffer
  });
  if (!putRes.ok) throw new Error(`Upload failed: ${putRes.status}`);

  // 3. Complete
  const completeRes = await fetch(`${API}/upload/complete/${uploadId}`, {
    method: "POST",
    headers: { "X-Api-Key": API_KEY }
  });
  if (!completeRes.ok) throw new Error(`Complete failed: ${completeRes.status}`);
  const result = await completeRes.json();
  console.log(`Done! CID: ${result.cid}`);
  return result;
}

// Usage
const result = await uploadLargeFile("./my-video.mp4", "My video");
console.log(`View it at: ${result.uris.url}`);

Python Example

python
import requests
import os

API = "https://api.ipfs.ninja"
API_KEY = "bws_your_api_key"

def upload_large_file(path, description=""):
    size_mb = os.path.getsize(path) / (1024 * 1024)
    content_type = "video/mp4" if path.endswith(".mp4") else "application/octet-stream"

    # 1. Init
    init_res = requests.post(
        f"{API}/upload/init",
        headers={"X-Api-Key": API_KEY, "Content-Type": "application/json"},
        json={"sizeMB": size_mb, "contentType": content_type, "description": description}
    )
    init_res.raise_for_status()
    init_data = init_res.json()
    upload_id = init_data["uploadId"]
    upload_url = init_data["uploadUrl"]
    print(f"Uploading {size_mb:.1f} MB...")

    # 2. Upload binary
    with open(path, "rb") as f:
        put_res = requests.put(upload_url, data=f, headers={"Content-Type": content_type})
    put_res.raise_for_status()

    # 3. Complete
    complete_res = requests.post(
        f"{API}/upload/complete/{upload_id}",
        headers={"X-Api-Key": API_KEY}
    )
    complete_res.raise_for_status()
    result = complete_res.json()
    print(f"Done! CID: {result['cid']}")
    return result

result = upload_large_file("./my-video.mp4", "My video")
print(f"View it at: {result['uris']['url']}")

Browser Example

Using fetch in the browser — the PUT request is CORS-enabled so it works from any origin.

javascript
async function uploadLargeFile(file, description = "") {
  // 1. Init (needs your API key — do this server-side in production, or use a signed token)
  const initRes = await fetch("https://api.ipfs.ninja/upload/init", {
    method: "POST",
    headers: {
      "X-Api-Key": "bws_your_api_key",
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      sizeMB: file.size / (1024 * 1024),
      contentType: file.type,
      description
    })
  });
  const { uploadId, uploadUrl } = await initRes.json();

  // 2. Upload with progress
  await new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    xhr.upload.onprogress = (e) => {
      if (e.lengthComputable) {
        const pct = (e.loaded / e.total) * 100;
        console.log(`Upload: ${pct.toFixed(1)}%`);
      }
    };
    xhr.onload = () => xhr.status === 200 ? resolve() : reject(new Error(`Upload failed: ${xhr.status}`));
    xhr.onerror = () => reject(new Error("Upload failed"));
    xhr.open("PUT", uploadUrl);
    xhr.setRequestHeader("Content-Type", file.type);
    xhr.send(file);
  });

  // 3. Complete
  const completeRes = await fetch(`https://api.ipfs.ninja/upload/complete/${uploadId}`, {
    method: "POST",
    headers: { "X-Api-Key": "bws_your_api_key" }
  });
  return completeRes.json();
}

// Usage with a file input
document.getElementById("fileInput").onchange = async (e) => {
  const result = await uploadLargeFile(e.target.files[0], "Uploaded from browser");
  console.log("CID:", result.cid);
};

Large CAR File Imports

The same flow supports CAR files. Just set car: true in step 1:

bash
# Step 1: Init with car flag
curl -X POST https://api.ipfs.ninja/upload/init \
  -H "X-Api-Key: bws_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{"sizeMB": 250, "contentType": "application/vnd.ipld.car", "car": true}'

# Step 2: PUT the CAR binary
curl -X PUT "<uploadUrl>" \
  -H "Content-Type: application/vnd.ipld.car" \
  --data-binary @my-archive.car

# Step 3: Complete (returns fileCount for CARs)
curl -X POST "https://api.ipfs.ninja/upload/complete/<uploadId>" \
  -H "X-Api-Key: bws_your_api_key"
# { "cid": "bafy...", "car": true, "fileCount": 327, ... }

MCP Server

The MCP Server automatically uses this flow for files >6 MB. Just pass the binary — the client handles the init/upload/complete steps transparently.

Limits

Limit                       Value
Max single upload size      5 GB
Presigned URL expiry        1 hour
Staging bucket retention    24 hours (unused uploads auto-expire)
Availability                All plans (Dharma, Bodhi, Nirvana)

Storage limits from your plan still apply.

Troubleshooting

"File too large. Maximum upload size is 5 GB."

The file exceeds 5 GB. For larger files, split into chunks and upload separately, or use the S3-compatible API which supports multipart uploads up to 100 MB per part.
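One way to split client-side is to compute byte ranges and send each range through the flow above as its own upload. A sketch (the 4 GB chunk size and helper name are illustrative, not part of the API):

```javascript
// Byte ranges for splitting a file larger than the 5 GB single-upload cap.
// Each range can be read with fs.createReadStream(path, { start, end })
// and sent through the init/PUT/complete flow as a separate upload.
const CHUNK_BYTES = 4 * 1024 * 1024 * 1024; // 4 GB, comfortably under 5 GB

function* chunkRanges(totalBytes, chunkBytes = CHUNK_BYTES) {
  for (let start = 0; start < totalBytes; start += chunkBytes) {
    yield { start, end: Math.min(start + chunkBytes, totalBytes) - 1 };
  }
}

// A 10 GB file yields three ranges: 4 GB, 4 GB, and 2 GB.
console.log([...chunkRanges(10 * 1024 ** 3)].length); // 3
```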

"upload not found — did you PUT to the uploadUrl?"

You called /upload/complete before actually uploading the binary. Make sure step 2 (PUT to the presigned URL) succeeded before calling complete.

"403 Forbidden" from the S3 URL

The presigned URL expired (1 hour limit). Call /upload/init again to get a fresh URL.
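If an upload can outlive the hour, it's safest to re-init on a 403 and retry the PUT once. A minimal sketch, assuming hypothetical wrappers `initUpload()` for step 1 and `putBinary(url)` for step 2 as in the examples above:

```javascript
// Retry the PUT once with a fresh presigned URL if the first one has expired.
// initUpload() and putBinary(url) are hypothetical wrappers for steps 1 and 2.
async function putWithReinit(initUpload, putBinary) {
  let { uploadId, uploadUrl } = await initUpload();
  let res = await putBinary(uploadUrl);
  if (res.status === 403) {
    // Presigned URL expired — request a fresh one and try again
    ({ uploadId, uploadUrl } = await initUpload());
    res = await putBinary(uploadUrl);
  }
  if (!res.ok) throw new Error(`Upload failed: ${res.status}`);
  return uploadId; // use this uploadId when calling /upload/complete
}
```

Note that a re-init issues a new uploadId, so always complete with the id from the most recent init.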

"SignatureDoesNotMatch" when PUTting

The Content-Type header in your PUT request must exactly match the contentType you sent in step 1.

Upload succeeded but /upload/complete returns 400

The file failed safety scanning (malware, phishing patterns, etc.). The staging file is automatically deleted. Check the error message for details.

"not enough storage"

Your plan's storage limit has been reached. Delete unused files or upgrade at ipfs.ninja/pricing.