
S3 compatibility

Use the AWS SDK to upload, download, and manage files on the IPFS Ninja platform with the same code you use with Amazon S3.

Endpoint

https://s3.ipfs.ninja

Credentials

The S3 API authenticates with your IPFS Ninja API key. The key serves as both the access key and the secret key.

How to get credentials

  1. Go to Dashboard > API Keys
  2. Click Create API key and give it a name (e.g., "S3 access")
  3. Copy the full key immediately: it is shown only once and cannot be recovered later

Your key looks like this:

bws_628bba35e9e0079d9ff9c392b1b55a7b
└──────────┘└──────────────────────┘
 prefix (12 chars)    rest of key

Mapping to AWS credentials

| AWS parameter | Value | Example |
| --- | --- | --- |
| accessKeyId | First 12 characters of your API key | bws_628bba35 |
| secretAccessKey | The full API key (all 36 characters) | bws_628bba35e9e0079d9ff9c392b1b55a7b |
| region | Always us-east-1 | us-east-1 |
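Because both values come from the same API key, you can derive the whole credentials object from a single secret. A minimal sketch (the IPFS_NINJA_API_KEY environment variable name is an assumption for illustration):

```javascript
// Derive both S3 credentials from one IPFS Ninja API key.
// IPFS_NINJA_API_KEY is a hypothetical variable name; the fallback is the example key.
const apiKey = process.env.IPFS_NINJA_API_KEY ?? "bws_628bba35e9e0079d9ff9c392b1b55a7b";

const credentials = {
  accessKeyId: apiKey.slice(0, 12),  // "bws_" prefix plus the first 8 hex characters
  secretAccessKey: apiKey            // the full 36-character key
};

console.log(credentials.accessKeyId); // "bws_628bba35" with the example key
```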

WARNING

The full API key is shown only once, when it is created. If you lose it, delete the key and create a new one on the API Keys page.

Quick start

javascript
import { S3Client, PutObjectCommand, GetObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({
  endpoint: "https://s3.ipfs.ninja",
  credentials: {
    accessKeyId: "bws_628bba35",
    secretAccessKey: "bws_628bba35e9e0079d9ff9c392b1b55a7b"
  },
  region: "us-east-1",
  forcePathStyle: true
});

// Upload a file
const put = await s3.send(new PutObjectCommand({
  Bucket: "my-project",
  Key: "hello.json",
  Body: JSON.stringify({ hello: "IPFS" }),
  ContentType: "application/json"
}));

console.log("CID:", put.ETag);
// CID: QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy

Buckets = folders

An S3 bucket corresponds to an IPFS Ninja folder. When you upload a file to a bucket, it is stored in the matching folder. When you list objects inside a bucket, you see that folder's files.

| S3 operation | IPFS Ninja equivalent |
| --- | --- |
| CreateBucket | Create a new folder |
| ListBuckets | List your folders |
| DeleteBucket | Delete a folder and all files in it |
| PutObject into a bucket | Upload a file into the folder |
| ListObjectsV2 inside a bucket | List the files in the folder |
javascript
import { ListBucketsCommand, CreateBucketCommand, PutObjectCommand } from "@aws-sdk/client-s3";

// Create a bucket (= create a folder)
await s3.send(new CreateBucketCommand({ Bucket: "nft-metadata" }));

// Upload a file into the folder
await s3.send(new PutObjectCommand({
  Bucket: "nft-metadata",      // ← folder name
  Key: "token-42.json",        // ← filename within the folder
  Body: JSON.stringify({ name: "My NFT #42" })
}));

// List buckets (= list your folders)
const { Buckets } = await s3.send(new ListBucketsCommand({}));
console.log(Buckets);
// [{ Name: "nft-metadata", CreationDate: "2026-04-13T..." }]

TIP

Folders created through the S3 API are the same folders you see in your Dashboard. You can manage files through the S3 API, the REST API, or the web interface; all three share the same folder system.

INFO

Unlike Amazon S3, IPFS Ninja folders are flat by default. To create nested structures, use the REST API folder endpoint with parentFolderId. From the S3 API, use key prefixes (e.g., images/photo.png) to organize files within a folder.

Supported operations

PutObject

Upload a file to IPFS. The file is pinned, scanned for safety, and its CID is returned in the ETag and x-amz-meta-cid headers.

javascript
import { PutObjectCommand } from "@aws-sdk/client-s3";
import fs from "fs";

const result = await s3.send(new PutObjectCommand({
  Bucket: "my-project",
  Key: "photo.png",
  Body: fs.readFileSync("photo.png"),
  ContentType: "image/png"
}));

console.log("CID:", result.ETag);
bash
# curl equivalent
curl -X PUT "https://s3.ipfs.ninja/my-project/photo.png" \
  --data-binary @photo.png \
  -H "Content-Type: image/png" \
  --aws-sigv4 "aws:amz:us-east-1:s3" \
  --user "bws_628bba35:bws_628bba35e9e0079d9ff9c392b1b55a7b"

GetObject

Download a file by its key (filename) or its CID.

javascript
import { GetObjectCommand } from "@aws-sdk/client-s3";

const result = await s3.send(new GetObjectCommand({
  Bucket: "my-project",
  Key: "photo.png"
}));

const body = await result.Body.transformToByteArray();
console.log("Size:", body.length);
console.log("CID:", result.Metadata?.cid);

HeadObject

Fetch a file's metadata without downloading its content.

javascript
import { HeadObjectCommand } from "@aws-sdk/client-s3";

const head = await s3.send(new HeadObjectCommand({
  Bucket: "my-project",
  Key: "photo.png"
}));

console.log("Size:", head.ContentLength);
console.log("Type:", head.ContentType);
console.log("CID:", head.Metadata?.cid);

DeleteObject

Unpin the file on the IPFS network and delete it from your account.

javascript
import { DeleteObjectCommand } from "@aws-sdk/client-s3";

await s3.send(new DeleteObjectCommand({
  Bucket: "my-project",
  Key: "photo.png"
}));

ListObjectsV2

List the files inside a bucket, with optional prefix filtering and pagination.

javascript
import { ListObjectsV2Command } from "@aws-sdk/client-s3";

const list = await s3.send(new ListObjectsV2Command({
  Bucket: "my-project",
  Prefix: "images/",
  MaxKeys: 100
}));

for (const obj of list.Contents ?? []) {
  console.log(obj.Key, obj.Size, obj.ETag); // ETag = CID
}

Multipart Upload

Upload large files (up to 5 GB) with multipart upload. The AWS SDK handles this automatically:

javascript
import { Upload } from "@aws-sdk/lib-storage";
import fs from "fs";

const upload = new Upload({
  client: s3,
  params: {
    Bucket: "my-project",
    Key: "large-dataset.tar.gz",
    Body: fs.createReadStream("large-dataset.tar.gz"),
    ContentType: "application/gzip"
  },
  partSize: 10 * 1024 * 1024, // 10 MB per part
});

upload.on("httpUploadProgress", (progress) => {
  console.log(`Uploaded ${progress.loaded} of ${progress.total} bytes`);
});

const result = await upload.done();
console.log("CID:", result.ETag);

Or manage the parts manually:

javascript
import {
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand
} from "@aws-sdk/client-s3";

// 1. Start
const { UploadId } = await s3.send(new CreateMultipartUploadCommand({
  Bucket: "my-project",
  Key: "big-file.bin"
}));

// 2. Upload parts (chunk1 is a Buffer or stream holding this part's bytes)
const part1 = await s3.send(new UploadPartCommand({
  Bucket: "my-project",
  Key: "big-file.bin",
  UploadId,
  PartNumber: 1,
  Body: chunk1
}));

// 3. Complete
const result = await s3.send(new CompleteMultipartUploadCommand({
  Bucket: "my-project",
  Key: "big-file.bin",
  UploadId,
  MultipartUpload: {
    Parts: [{ PartNumber: 1, ETag: part1.ETag }]
  }
}));

Python example

python
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.ipfs.ninja",
    aws_access_key_id="bws_628bba35",
    aws_secret_access_key="bws_628bba35e9e0079d9ff9c392b1b55a7b",
    region_name="us-east-1"
)

# Upload
s3.put_object(
    Bucket="my-project",
    Key="data.json",
    Body=b'{"hello": "IPFS"}',
    ContentType="application/json"
)

# List files
response = s3.list_objects_v2(Bucket="my-project")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])

# Download
result = s3.get_object(Bucket="my-project", Key="data.json")
print(result["Body"].read())

Go example

go
package main

import (
    "context"
    "fmt"
    "strings"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/credentials"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
    client := s3.New(s3.Options{
        BaseEndpoint: aws.String("https://s3.ipfs.ninja"),
        Region:       "us-east-1",
        Credentials:  credentials.NewStaticCredentialsProvider("bws_628bba35", "bws_628bba35e9e0...", ""),
        UsePathStyle: true,
    })

    _, err := client.PutObject(context.TODO(), &s3.PutObjectInput{
        Bucket:      aws.String("my-project"),
        Key:         aws.String("hello.txt"),
        Body:        strings.NewReader("Hello, IPFS!"),
        ContentType: aws.String("text/plain"),
    })
    if err != nil {
        panic(err)
    }
    fmt.Println("Uploaded!")
}

Differences from Amazon S3

| Feature | Amazon S3 | IPFS Ninja S3 |
| --- | --- | --- |
| Storage model | Mutable objects | Content-addressed (immutable CIDs) |
| Overwrite behavior | Replaces the object in place | Creates a new CID; the old CID stays reachable |
| Versioning | Supported | Not supported (use CIDs for versioning) |
| Server-side encryption | Supported | Not supported (content lives on the IPFS network) |
| Lifecycle policies | Supported | Not supported |
| Bucket policies / ACLs | Supported | Use gateway access modes |
| Presigned URLs | Supported | Use signed upload tokens |
| Max object size | 5 TB | 5 GB (multipart), 100 MB (single PUT) |
| Regions | Multiple regions | us-east-1 only |
| ETag value | MD5 hash | IPFS CID |
| Extra headers | Standard S3 | x-amz-meta-cid (IPFS CID) |

Migration from Amazon S3

Update your S3 client configuration:

diff
 const s3 = new S3Client({
+  endpoint: "https://s3.ipfs.ninja",
   credentials: {
-    accessKeyId: "AKIA...",
-    secretAccessKey: "wJalrX..."
+    accessKeyId: "bws_628bba35",
+    secretAccessKey: "bws_628bba35e9e0..."
   },
   region: "us-east-1",
+  forcePathStyle: true
 });

Your existing PutObject, GetObject, ListObjectsV2, and DeleteObject calls work unchanged.

Migration from Filebase

Change the endpoint URL:

diff
 const s3 = new S3Client({
-  endpoint: "https://s3.filebase.com",
+  endpoint: "https://s3.ipfs.ninja",
   credentials: {
-    accessKeyId: "FILEBASE_KEY",
-    secretAccessKey: "FILEBASE_SECRET"
+    accessKeyId: "bws_628bba35",
+    secretAccessKey: "bws_628bba35e9e0..."
   },
   region: "us-east-1",
   forcePathStyle: true
 });