
S3 Compatibility

Use the AWS SDK to upload, download, and manage files on IPFS Ninja with the same code you use for Amazon S3.

Endpoint

https://s3.ipfs.ninja

Credentials

The S3 API authenticates with your IPFS Ninja API key. Your API key serves as both the access key and the secret key.

Getting your credentials

  1. Go to Dashboard > API Keys
  2. Click Create API key and enter a name (e.g. "S3 access")
  3. Copy the full key immediately: it is shown only once and cannot be retrieved later

Your key looks like this:

bws_628bba35e9e0079d9ff9c392b1b55a7b
└──────────┘└──────────────────────┘
 prefix (12 chars)    rest of key

Mapping to AWS credentials

AWS parameter      Value                                 Example
accessKeyId        First 12 characters of your API key   bws_628bba35
secretAccessKey    The full API key (all 36 characters)  bws_628bba35e9e0079d9ff9c392b1b55a7b
region             Always us-east-1                      us-east-1
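In code, both credentials can be derived from the single API key. A minimal sketch (the key below is the example value from this page):

```javascript
// Both S3 credentials come from one IPFS Ninja API key:
// the access key is the 12-character prefix, the secret key is the whole key.
const apiKey = "bws_628bba35e9e0079d9ff9c392b1b55a7b";

const credentials = {
  accessKeyId: apiKey.slice(0, 12), // "bws_628bba35"
  secretAccessKey: apiKey,          // all 36 characters
};

console.log(credentials.accessKeyId); // "bws_628bba35"
```

This object can be passed directly as the credentials option of S3Client.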

WARNING

The full API key is shown only once, at creation time. If you lose it, delete the key and create a new one on the API Keys page.

Quick start

javascript
import { S3Client, PutObjectCommand, GetObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({
  endpoint: "https://s3.ipfs.ninja",
  credentials: {
    accessKeyId: "bws_628bba35",
    secretAccessKey: "bws_628bba35e9e0079d9ff9c392b1b55a7b"
  },
  region: "us-east-1",
  forcePathStyle: true
});

// Upload a file
const put = await s3.send(new PutObjectCommand({
  Bucket: "my-project",
  Key: "hello.json",
  Body: JSON.stringify({ hello: "IPFS" }),
  ContentType: "application/json"
}));

console.log("CID:", put.Metadata?.cid);
// CID: QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy

Buckets = Folders

S3 buckets map to your IPFS Ninja folders. When you upload a file to a bucket, it is stored in the corresponding folder. When you list the objects in a bucket, you see the files in that folder.

S3 operation                IPFS Ninja equivalent
CreateBucket                Create a new folder
ListBuckets                 List your folders
DeleteBucket                Delete a folder and all of its files
PutObject into a bucket     Upload a file into the folder
ListObjectsV2 on a bucket   List the files in the folder
javascript
import { ListBucketsCommand, CreateBucketCommand, PutObjectCommand } from "@aws-sdk/client-s3";

// Create a bucket (= create a folder)
await s3.send(new CreateBucketCommand({ Bucket: "nft-metadata" }));

// Upload a file into the folder
await s3.send(new PutObjectCommand({
  Bucket: "nft-metadata",      // ← folder name
  Key: "token-42.json",        // ← filename within the folder
  Body: JSON.stringify({ name: "My NFT #42" })
}));

// List buckets (= list your folders)
const { Buckets } = await s3.send(new ListBucketsCommand({}));
console.log(Buckets);
// [{ Name: "nft-metadata", CreationDate: "2026-04-13T..." }]

TIP

Folders created through the S3 API are the same folders you see in your Dashboard. You can organize files through the S3 API, the REST API, or the web interface; they all share the same folder system.

INFO

Unlike Amazon S3, IPFS Ninja folders are flat by default. To create nested structures, use the REST API folder endpoints with parentFolderId. Through the S3 API, use key prefixes (e.g. images/photo.png) to organize files within a folder.
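As a client-side illustration (not an API call), this sketch shows how "/" prefixes in keys emulate nesting inside a flat folder, mirroring the grouping that ListObjectsV2 with a Prefix would return:

```javascript
// A flat folder can hold keys that look nested; filtering by prefix
// gives the same grouping that ListObjectsV2 with Prefix returns.
const keys = [
  "images/photo.png",
  "images/logo.svg",
  "metadata/token-42.json",
];

const images = keys.filter((key) => key.startsWith("images/"));
console.log(images); // ["images/photo.png", "images/logo.svg"]
```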

Supported operations

PutObject

Uploads a file to IPFS. The file is pinned, security-checked, and its CID is returned in the ETag and x-amz-meta-cid headers.

javascript
import { PutObjectCommand } from "@aws-sdk/client-s3";
import fs from "fs";

const result = await s3.send(new PutObjectCommand({
  Bucket: "my-project",
  Key: "photo.png",
  Body: fs.readFileSync("photo.png"),
  ContentType: "image/png"
}));

console.log("CID:", result.ETag);
bash
# curl equivalent
curl -X PUT "https://s3.ipfs.ninja/my-project/photo.png" \
  --data-binary @photo.png \
  -H "Content-Type: image/png" \
  --aws-sigv4 "aws:amz:us-east-1:s3" \
  --user "bws_628bba35:bws_628bba35e9e0079d9ff9c392b1b55a7b"

GetObject

Downloads a file by its key (filename) or CID.

javascript
import { GetObjectCommand } from "@aws-sdk/client-s3";

const result = await s3.send(new GetObjectCommand({
  Bucket: "my-project",
  Key: "photo.png"
}));

const body = await result.Body.transformToByteArray();
console.log("Size:", body.length);
console.log("CID:", result.Metadata?.cid);

HeadObject

Retrieves a file's metadata without downloading its contents.

javascript
import { HeadObjectCommand } from "@aws-sdk/client-s3";

const head = await s3.send(new HeadObjectCommand({
  Bucket: "my-project",
  Key: "photo.png"
}));

console.log("Size:", head.ContentLength);
console.log("Type:", head.ContentType);
console.log("CID:", head.Metadata?.cid);

DeleteObject

Unpins the file from IPFS and deletes it from your account.

javascript
import { DeleteObjectCommand } from "@aws-sdk/client-s3";

await s3.send(new DeleteObjectCommand({
  Bucket: "my-project",
  Key: "photo.png"
}));

ListObjectsV2

Lists the files in a bucket, with optional prefix filtering and pagination.

javascript
import { ListObjectsV2Command } from "@aws-sdk/client-s3";

const list = await s3.send(new ListObjectsV2Command({
  Bucket: "my-project",
  Prefix: "images/",
  MaxKeys: 100
}));

for (const obj of list.Contents ?? []) {
  console.log(obj.Key, obj.Size, obj.ETag); // ETag = CID
}

Multipart Upload

Upload large files (up to 5 GB) with multipart upload. The AWS SDK handles this automatically:

javascript
import { Upload } from "@aws-sdk/lib-storage";
import fs from "fs";

const upload = new Upload({
  client: s3,
  params: {
    Bucket: "my-project",
    Key: "large-dataset.tar.gz",
    Body: fs.createReadStream("large-dataset.tar.gz"),
    ContentType: "application/gzip"
  },
  partSize: 10 * 1024 * 1024, // 10 MB per part
});

upload.on("httpUploadProgress", (progress) => {
  console.log(`Uploaded ${progress.loaded} of ${progress.total} bytes`);
});

const result = await upload.done();
console.log("CID:", result.ETag);
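As a rough sanity check on the settings above, the number of parts follows directly from the part size (the file size below is a hypothetical example):

```javascript
// ceil(fileSize / partSize) parts are uploaded; all but the last are full-size.
const partSize = 10 * 1024 * 1024;          // 10 MB, as in the example above
const fileSize = 4.5 * 1024 * 1024 * 1024;  // a hypothetical 4.5 GB file
const partCount = Math.ceil(fileSize / partSize);
console.log(partCount); // 461
```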

Or control the parts manually:

javascript
import {
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand
} from "@aws-sdk/client-s3";

// 1. Start
const { UploadId } = await s3.send(new CreateMultipartUploadCommand({
  Bucket: "my-project",
  Key: "big-file.bin"
}));

// 2. Upload parts
const part1 = await s3.send(new UploadPartCommand({
  Bucket: "my-project",
  Key: "big-file.bin",
  UploadId,
  PartNumber: 1,
  Body: chunk1 // chunk1: a Buffer or stream holding this part's bytes
}));

// 3. Complete
const result = await s3.send(new CompleteMultipartUploadCommand({
  Bucket: "my-project",
  Key: "big-file.bin",
  UploadId,
  MultipartUpload: {
    Parts: [{ PartNumber: 1, ETag: part1.ETag }]
  }
}));

Python example

python
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.ipfs.ninja",
    aws_access_key_id="bws_628bba35",
    aws_secret_access_key="bws_628bba35e9e0079d9ff9c392b1b55a7b",
    region_name="us-east-1"
)

# Upload
s3.put_object(
    Bucket="my-project",
    Key="data.json",
    Body=b'{"hello": "IPFS"}',
    ContentType="application/json"
)

# List files
response = s3.list_objects_v2(Bucket="my-project")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])

# Download
result = s3.get_object(Bucket="my-project", Key="data.json")
print(result["Body"].read())

Go example

go
package main

import (
    "context"
    "fmt"
    "strings"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/credentials"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
    client := s3.New(s3.Options{
        BaseEndpoint: aws.String("https://s3.ipfs.ninja"),
        Region:       "us-east-1",
        Credentials:  credentials.NewStaticCredentialsProvider("bws_628bba35", "bws_628bba35e9e0...", ""),
        UsePathStyle: true,
    })

    _, err := client.PutObject(context.TODO(), &s3.PutObjectInput{
        Bucket:      aws.String("my-project"),
        Key:         aws.String("hello.txt"),
        Body:        strings.NewReader("Hello, IPFS!"),
        ContentType: aws.String("text/plain"),
    })
    if err != nil {
        panic(err)
    }
    fmt.Println("Uploaded!")
}

Differences from Amazon S3

Feature                  Amazon S3                     IPFS Ninja S3
Storage model            Mutable objects               Content-addressed (immutable CIDs)
Overwrite behavior       Replaces the object in place  Creates a new CID; the old CID stays reachable
Versioning               Supported                     Not supported (use CIDs for versioning)
Server-side encryption   Supported                     Not supported (content lives on IPFS)
Lifecycle policies       Supported                     Not supported
Bucket policies / ACLs   Supported                     Use gateway access modes
Presigned URLs           Supported                     Use signed upload tokens
Max object size          5 TB                          5 GB (multipart), 100 MB (single PUT)
Regions                  Multi-region                  us-east-1 only
ETag value               MD5 hash                      IPFS CID
Extra headers            Standard S3                   x-amz-meta-cid (IPFS CID)

Migration from Amazon S3

Replace your S3 client configuration:

diff
 const s3 = new S3Client({
+  endpoint: "https://s3.ipfs.ninja",
   credentials: {
-    accessKeyId: "AKIA...",
-    secretAccessKey: "wJalrX..."
+    accessKeyId: "bws_628bba35",
+    secretAccessKey: "bws_628bba35e9e0..."
   },
   region: "us-east-1",
+  forcePathStyle: true
 });

Your existing PutObject, GetObject, ListObjectsV2, and DeleteObject calls work unchanged.

Migration from Filebase

Replace the endpoint URL:

diff
 const s3 = new S3Client({
-  endpoint: "https://s3.filebase.com",
+  endpoint: "https://s3.ipfs.ninja",
   credentials: {
-    accessKeyId: "FILEBASE_KEY",
-    secretAccessKey: "FILEBASE_SECRET"
+    accessKeyId: "bws_628bba35",
+    secretAccessKey: "bws_628bba35e9e0..."
   },
   region: "us-east-1",
   forcePathStyle: true
 });