File storage (S3)

For production, replace the local MinIO setup with AWS S3 or any S3-compatible service (Cloudflare R2, Backblaze B2, DigitalOcean Spaces, etc.).

Create two S3 buckets

The application uses two buckets:

  • Public bucket — for files with anonymous read access (avatars, logos). Files are served directly via a public URL.
  • Private bucket — for private files. Access requires a signed URL that expires after 1 hour.
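The split between the two buckets can be sketched as follows. This is illustrative only, not the app's actual code; the function name, the virtual-hosted AWS URL shape, and the constant are assumptions for this sketch:

```typescript
// Sketch: how the two buckets differ in how objects are read.
// Public objects resolve to a stable URL; private objects must instead go
// through a presigner (e.g. AWS SDK v3's getSignedUrl) with a 1-hour expiry.

// Virtual-hosted AWS S3 URL; other providers use different URL shapes.
function publicObjectUrl(bucket: string, region: string, key: string): string {
  // Encode each path segment, but keep "/" separators in the key intact.
  const path = key.split("/").map(encodeURIComponent).join("/");
  return `https://${bucket}.s3.${region}.amazonaws.com/${path}`;
}

// Private links expire after 1 hour, per the description above.
const SIGNED_URL_TTL_SECONDS = 60 * 60;
```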

AWS S3

  1. Open the S3 console and create two buckets (e.g. myapp-public and myapp-private).
  2. On the public bucket, go to Permissions > Object Ownership and select "ACLs enabled (Bucket owner preferred)". The app sets a public-read ACL on uploaded objects so they can be served directly via a public URL. New buckets have ACLs disabled by default, and uploads will fail with AccessControlListNotSupported if you skip this step.
  3. On the public bucket, go to Permissions > Block public access and turn off "Block all public access".
  4. On the public bucket, add this bucket policy (replace myapp-public with your bucket name):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::myapp-public/*"
    }
  ]
}
  5. Leave the private bucket with default settings (ACLs disabled, all public access blocked).
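If you provision buckets with a script rather than the console, the bucket policy above can be generated for any bucket name. A minimal sketch (the function name is mine; the policy body mirrors the JSON above):

```typescript
// Build the public-read bucket policy shown above for a given bucket name.
function publicReadPolicy(bucket: string): string {
  return JSON.stringify(
    {
      Version: "2012-10-17",
      Statement: [
        {
          Sid: "PublicReadGetObject",
          Effect: "Allow",
          Principal: "*",
          Action: "s3:GetObject",
          Resource: `arn:aws:s3:::${bucket}/*`,
        },
      ],
    },
    null,
    2
  );
}
```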

Cloudflare R2

  1. In the Cloudflare dashboard, go to R2 Object Storage and create two buckets.
  2. For the public bucket, go to Settings > Public access and enable it via an R2.dev subdomain or a custom domain.

Other S3-compatible services

Create two buckets in your provider's dashboard. Set the public bucket to allow anonymous read access.

Create access credentials

AWS S3

  1. Go to IAM > Users > Create user.
  2. Attach a policy that grants s3:* on both buckets (or use AmazonS3FullAccess for simplicity).
  3. Create an access key and save the Access Key ID and Secret Access Key.

A minimal custom policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::myapp-public",
        "arn:aws:s3:::myapp-public/*",
        "arn:aws:s3:::myapp-private",
        "arn:aws:s3:::myapp-private/*"
      ]
    }
  ]
}
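The same minimal policy can be produced for any pair of buckets when provisioning IAM with a script. A sketch (function name is mine; the output mirrors the JSON above, with both the bucket ARN and its `/*` contents ARN per bucket):

```typescript
// Minimal IAM policy granting s3:* on the given buckets and their contents.
function minimalS3Policy(buckets: string[]) {
  return {
    Version: "2012-10-17",
    Statement: [
      {
        Effect: "Allow",
        Action: "s3:*",
        // Each bucket needs two ARNs: one for bucket-level operations
        // (e.g. ListBucket) and one for object-level operations.
        Resource: buckets.flatMap((b) => [
          `arn:aws:s3:::${b}`,
          `arn:aws:s3:::${b}/*`,
        ]),
      },
    ],
  };
}
```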

Cloudflare R2

Go to R2 > Manage R2 API Tokens and create a token with Object Read & Write permissions on both buckets.

Configure environment variables

Update packages/backend/.env:

# AWS S3
S3_BUCKET_PUBLIC=myapp-public
S3_BUCKET_PRIVATE=myapp-private
S3_ACCESS_KEY_ID=your-access-key-id
S3_SECRET_ACCESS_KEY=your-secret-access-key
S3_REGION=us-east-1
S3_ENDPOINT=

For S3-compatible services, set S3_ENDPOINT:

# Cloudflare R2
S3_ENDPOINT=https://<account-id>.r2.cloudflarestorage.com
 
# Backblaze B2
S3_ENDPOINT=https://s3.<region>.backblazeb2.com
 
# DigitalOcean Spaces
S3_ENDPOINT=https://<region>.digitaloceanspaces.com

Leave S3_ENDPOINT empty for AWS S3.
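The variables above map onto S3 client options roughly as follows. This is a sketch assuming an AWS SDK v3-style `S3Client` constructor, not the app's actual wiring; note that treating an empty S3_ENDPOINT as "use AWS proper" matches the note above, and that many S3-compatible services (e.g. MinIO) also need path-style addressing:

```typescript
// Sketch: turn the environment variables above into S3 client options.
interface S3Env {
  S3_ACCESS_KEY_ID: string;
  S3_SECRET_ACCESS_KEY: string;
  S3_REGION: string;
  S3_ENDPOINT?: string;
}

function s3ClientOptions(env: S3Env) {
  return {
    region: env.S3_REGION,
    credentials: {
      accessKeyId: env.S3_ACCESS_KEY_ID,
      secretAccessKey: env.S3_SECRET_ACCESS_KEY,
    },
    // Only set a custom endpoint for S3-compatible services. forcePathStyle
    // is an assumption here; some providers work without it.
    ...(env.S3_ENDPOINT
      ? { endpoint: env.S3_ENDPOINT, forcePathStyle: true }
      : {}),
  };
}
```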

Configure CORS

Both buckets must allow uploads from your frontend domain. Without CORS, browser uploads will fail.

AWS S3

  1. Open the bucket in the S3 console.
  2. Go to Permissions > Cross-origin resource sharing (CORS).
  3. Add the following configuration (repeat for both buckets):
[
  {
    "AllowedOrigins": ["https://yourdomain.com"],
    "AllowedMethods": ["GET", "PUT", "POST", "DELETE"],
    "AllowedHeaders": ["*"],
    "ExposeHeaders": ["ETag"]
  }
]

Replace https://yourdomain.com with your actual frontend URL. You can add multiple origins (e.g. http://localhost:3010 for local development).
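Since the same rule set is applied to both buckets and may list several origins, it can help to generate it. A sketch (function name is mine; the output matches the configuration above):

```typescript
// Build the CORS configuration shown above for one or more origins,
// e.g. the production domain plus http://localhost:3010 for development.
function corsRules(origins: string[]) {
  return [
    {
      AllowedOrigins: origins,
      AllowedMethods: ["GET", "PUT", "POST", "DELETE"],
      AllowedHeaders: ["*"],
      ExposeHeaders: ["ETag"],
    },
  ];
}
```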

Cloudflare R2

  1. Open the bucket settings.
  2. Go to CORS Policy and add:
    • Allowed origins: https://yourdomain.com
    • Allowed methods: GET, PUT, POST, DELETE
    • Allowed headers: *
    • Expose headers: ETag
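R2's CORS editor also accepts an S3-style JSON policy; the following should be equivalent to the settings above (replace the origin with your frontend URL — verify against your dashboard, as this shape is an assumption):

```json
[
  {
    "AllowedOrigins": ["https://yourdomain.com"],
    "AllowedMethods": ["GET", "PUT", "POST", "DELETE"],
    "AllowedHeaders": ["*"],
    "ExposeHeaders": ["ETag"]
  }
]
```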