Use the AWS SDK to upload, download, and manage files on IPFS Ninja with the same code you use for Amazon S3.
https://s3.ipfs.ninja

The S3 API uses your IPFS Ninja API key for authentication. Your API key serves as both the access key and the secret key.
Your key looks like this:

```
bws_628bba35e9e0079d9ff9c392b1b55a7b
└──────────┘└──────────────────────┘
 prefix (12 chars)   rest of key
```

| AWS Parameter | Value | Example |
|---|---|---|
| accessKeyId | First 12 characters of your API key | bws_628bba35 |
| secretAccessKey | The full API key (all 36 characters) | bws_628bba35e9e0079d9ff9c392b1b55a7b |
| region | Always us-east-1 | us-east-1 |
WARNING
The full API key is only shown once when you create it. If you lose it, delete the key and create a new one from the API Keys page.
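Because the access key is just the first 12 characters of the API key, both credentials can be derived from the one secret. A small sketch (the helper name is illustrative, not part of any SDK):

```js
// Derive S3 credentials from a single IPFS Ninja API key.
// The access key is the key's 12-character prefix; the secret is the full key.
function credentialsFromApiKey(apiKey) {
  return {
    accessKeyId: apiKey.slice(0, 12),  // e.g. "bws_628bba35"
    secretAccessKey: apiKey            // all 36 characters
  };
}

const creds = credentialsFromApiKey("bws_628bba35e9e0079d9ff9c392b1b55a7b");
console.log(creds.accessKeyId); // → "bws_628bba35"
```

This keeps a single secret in your environment instead of two values that must stay in sync.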
```js
import { S3Client, PutObjectCommand, GetObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({
  endpoint: "https://s3.ipfs.ninja",
  credentials: {
    accessKeyId: "bws_628bba35",
    secretAccessKey: "bws_628bba35e9e0079d9ff9c392b1b55a7b"
  },
  region: "us-east-1",
  forcePathStyle: true
});

// Upload a file
const put = await s3.send(new PutObjectCommand({
  Bucket: "my-project",
  Key: "hello.json",
  Body: JSON.stringify({ hello: "IPFS" }),
  ContentType: "application/json"
}));

console.log("CID:", put.Metadata?.cid);
// CID: QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy
```

S3 buckets map to your IPFS Ninja folders. When you upload a file to a bucket, it's stored in the corresponding folder. When you list objects in a bucket, you see the files in that folder.
| S3 Operation | IPFS Ninja Equivalent |
|---|---|
| CreateBucket | Create a new folder |
| ListBuckets | List your folders |
| DeleteBucket | Delete a folder and all files in it |
| PutObject to bucket | Upload file into the folder |
| ListObjectsV2 on bucket | List files in the folder |
```js
import { ListBucketsCommand, CreateBucketCommand, PutObjectCommand } from "@aws-sdk/client-s3";

// Create a bucket (= create a folder)
await s3.send(new CreateBucketCommand({ Bucket: "nft-metadata" }));

// Upload a file into the folder
await s3.send(new PutObjectCommand({
  Bucket: "nft-metadata", // ← folder name
  Key: "token-42.json",   // ← filename within the folder
  Body: JSON.stringify({ name: "My NFT #42" })
}));

// List buckets (= list your folders)
const { Buckets } = await s3.send(new ListBucketsCommand({}));
console.log(Buckets);
// [{ Name: "nft-metadata", CreationDate: "2026-04-13T..." }]
```

TIP
Folders created via the S3 API are the same folders visible in your Dashboard. You can organize files from either the S3 API, the REST API, or the web interface — they all share the same folder system.
INFO
Unlike Amazon S3, IPFS Ninja folders are flat by default. To create nested structures, use the REST API's folder endpoints with parentFolderId. From the S3 API, use key prefixes (e.g. images/photo.png) to organize within a folder.
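Conceptually, a key prefix is just a starts-with filter over flat keys, which is what ListObjectsV2's Prefix parameter applies on the server. An illustrative sketch of that behavior (local filtering only, not the server implementation):

```js
// Keys with slash-separated prefixes, all stored flat in one folder.
const keys = [
  "images/photo.png",
  "images/logo.svg",
  "docs/readme.txt"
];

// ListObjectsV2 with Prefix: "images/" returns only keys starting with it.
const listByPrefix = (allKeys, prefix) =>
  allKeys.filter((k) => k.startsWith(prefix));

console.log(listByPrefix(keys, "images/"));
// → ["images/photo.png", "images/logo.svg"]
```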
Upload a file to IPFS. The file is pinned, safety-scanned, and the CID is returned in the ETag and x-amz-meta-cid headers.
To import a CAR file instead of a regular file, add the x-amz-meta-import: car metadata header. See CAR Import for details.
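In the JS SDK, x-amz-meta-* headers are set through the Metadata map on PutObjectCommand, so a CAR import would look roughly like this. The content type and placeholder body are illustrative assumptions; only the x-amz-meta-import: car header comes from this page:

```js
// Sketch: flag an upload as a CAR import via x-amz-meta-import: car.
// Entries in `Metadata` are sent by the AWS SDK as x-amz-meta-* headers.
const carBody = new Uint8Array(0); // placeholder — read your real .car file here

const carImportParams = {
  Bucket: "my-project",
  Key: "snapshot.car",
  Body: carBody,
  Metadata: { import: "car" } // sent as the x-amz-meta-import: car header
};

// await s3.send(new PutObjectCommand(carImportParams));
console.log(carImportParams.Metadata.import); // → "car"
```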
```js
import { PutObjectCommand } from "@aws-sdk/client-s3";
import fs from "fs";

const result = await s3.send(new PutObjectCommand({
  Bucket: "my-project",
  Key: "photo.png",
  Body: fs.readFileSync("photo.png"),
  ContentType: "image/png"
}));

console.log("CID:", result.ETag);
```

```sh
# curl equivalent
curl -X PUT "https://s3.ipfs.ninja/my-project/photo.png" \
  --data-binary @photo.png \
  -H "Content-Type: image/png" \
  --aws-sigv4 "aws:amz:us-east-1:s3" \
  --user "bws_628bba35:bws_628bba35e9e0079d9ff9c392b1b55a7b"
```

Download a file by its key (filename) or CID.
```js
import { GetObjectCommand } from "@aws-sdk/client-s3";

const result = await s3.send(new GetObjectCommand({
  Bucket: "my-project",
  Key: "photo.png"
}));

const body = await result.Body.transformToByteArray();
console.log("Size:", body.length);
console.log("CID:", result.Metadata?.cid);
```

Get file metadata without downloading the content.
```js
import { HeadObjectCommand } from "@aws-sdk/client-s3";

const head = await s3.send(new HeadObjectCommand({
  Bucket: "my-project",
  Key: "photo.png"
}));

console.log("Size:", head.ContentLength);
console.log("Type:", head.ContentType);
console.log("CID:", head.Metadata?.cid);
```

Unpin a file from IPFS and delete it from your account.
```js
import { DeleteObjectCommand } from "@aws-sdk/client-s3";

await s3.send(new DeleteObjectCommand({
  Bucket: "my-project",
  Key: "photo.png"
}));
```

List files in a bucket with optional prefix filtering and pagination.
```js
import { ListObjectsV2Command } from "@aws-sdk/client-s3";

const list = await s3.send(new ListObjectsV2Command({
  Bucket: "my-project",
  Prefix: "images/",
  MaxKeys: 100
}));

for (const obj of list.Contents ?? []) {
  console.log(obj.Key, obj.Size, obj.ETag); // ETag = CID
}
```

Upload large files (up to 5 GB) using multipart upload. The AWS SDK handles this automatically:
```js
import { Upload } from "@aws-sdk/lib-storage";
import fs from "fs";

const upload = new Upload({
  client: s3,
  params: {
    Bucket: "my-project",
    Key: "large-dataset.tar.gz",
    Body: fs.createReadStream("large-dataset.tar.gz"),
    ContentType: "application/gzip"
  },
  partSize: 10 * 1024 * 1024, // 10 MB per part
});

upload.on("httpUploadProgress", (progress) => {
  console.log(`Uploaded ${progress.loaded} of ${progress.total} bytes`);
});

const result = await upload.done();
console.log("CID:", result.ETag);
```

Or manually control parts:
```js
import {
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand
} from "@aws-sdk/client-s3";

// 1. Start
const { UploadId } = await s3.send(new CreateMultipartUploadCommand({
  Bucket: "my-project",
  Key: "big-file.bin"
}));

// 2. Upload parts
const part1 = await s3.send(new UploadPartCommand({
  Bucket: "my-project",
  Key: "big-file.bin",
  UploadId,
  PartNumber: 1,
  Body: chunk1
}));

// 3. Complete
const result = await s3.send(new CompleteMultipartUploadCommand({
  Bucket: "my-project",
  Key: "big-file.bin",
  UploadId,
  MultipartUpload: {
    Parts: [{ PartNumber: 1, ETag: part1.ETag }]
  }
}));
```

The same operations work from Python with boto3:

```python
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.ipfs.ninja",
    aws_access_key_id="bws_628bba35",
    aws_secret_access_key="bws_628bba35e9e0079d9ff9c392b1b55a7b",
    region_name="us-east-1"
)

# Upload
s3.put_object(
    Bucket="my-project",
    Key="data.json",
    Body=b'{"hello": "IPFS"}',
    ContentType="application/json"
)

# List files
response = s3.list_objects_v2(Bucket="my-project")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])

# Download
result = s3.get_object(Bucket="my-project", Key="data.json")
print(result["Body"].read())
```

And from Go with the AWS SDK for Go v2:

```go
package main

import (
	"context"
	"fmt"
	"strings"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/credentials"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	client := s3.New(s3.Options{
		BaseEndpoint: aws.String("https://s3.ipfs.ninja"),
		Region:       "us-east-1",
		Credentials:  credentials.NewStaticCredentialsProvider("bws_628bba35", "bws_628bba35e9e0...", ""),
		UsePathStyle: true,
	})

	_, err := client.PutObject(context.TODO(), &s3.PutObjectInput{
		Bucket:      aws.String("my-project"),
		Key:         aws.String("hello.txt"),
		Body:        strings.NewReader("Hello, IPFS!"),
		ContentType: aws.String("text/plain"),
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("Uploaded!")
}
```

How IPFS Ninja's S3 API differs from Amazon S3:

| Feature | Amazon S3 | IPFS Ninja S3 |
|---|---|---|
| Storage model | Mutable objects | Content-addressed (immutable CIDs) |
| Overwrite behavior | Replaces object in-place | Creates new CID, old CID still accessible |
| Versioning | Supported | Not supported (use CIDs for versioning) |
| Server-side encryption | Supported | Not supported (content is on IPFS) |
| Lifecycle policies | Supported | Not supported |
| Bucket policies / ACLs | Supported | Use gateway access modes |
| Presigned URLs | Supported | Use signed upload tokens |
| Max object size | 5 TB | 5 GB (multipart), 100 MB (single PUT) |
| Regions | Multi-region | us-east-1 only |
| ETag value | MD5 hash | IPFS CID |
| Extra headers | Standard S3 | x-amz-meta-cid (IPFS CID) |
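The "new CID on overwrite" row follows from content addressing: the identifier is derived from the object's bytes, so different bytes can never share an identifier. A rough illustration using SHA-256 (real CIDs use multihash and CID encoding, so these digests are not actual CIDs):

```js
import { createHash } from "node:crypto";

// Content addressing in miniature: the identifier is a function of the bytes.
const digest = (bytes) => createHash("sha256").update(bytes).digest("hex");

const v1 = digest('{"hello": "IPFS"}');
const v2 = digest('{"hello": "IPFS!"}');

console.log(v1 === v2); // → false: changed content means a changed address
console.log(v1 === digest('{"hello": "IPFS"}')); // → true: same bytes, same address
```

This is why overwriting a key cannot replace the old object in place; the previous content keeps its old CID.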
Replace your S3 client configuration:
```diff
 const s3 = new S3Client({
+  endpoint: "https://s3.ipfs.ninja",
   credentials: {
-    accessKeyId: "AKIA...",
-    secretAccessKey: "wJalrX..."
+    accessKeyId: "bws_628bba35",
+    secretAccessKey: "bws_628bba35e9e0..."
   },
   region: "us-east-1",
+  forcePathStyle: true
 });
```

Your existing PutObject, GetObject, ListObjectsV2, and DeleteObject calls work unchanged.
Replace the endpoint URL:
```diff
 const s3 = new S3Client({
-  endpoint: "https://s3.filebase.com",
+  endpoint: "https://s3.ipfs.ninja",
   credentials: {
-    accessKeyId: "FILEBASE_KEY",
-    secretAccessKey: "FILEBASE_SECRET"
+    accessKeyId: "bws_628bba35",
+    secretAccessKey: "bws_628bba35e9e0..."
   },
   region: "us-east-1",
   forcePathStyle: true
 });
```