Complete step-by-step plan to migrate from the remote Supabase cloud instance to a fully local Supabase stack on this server, keeping all 26 migrations, RLS policies, seed data, and custom functions intact.
| Factor | Remote (Cloud) | Local (This Server) |
|---|---|---|
| Latency | ~80–200 ms per query | <1 ms (loopback) |
| Cost | Paid plan once limits hit | Free (your hardware) |
| Network dependency | Internet required | Works fully offline |
| Data privacy | Data on Supabase infra | Data stays on-premise |
| Migration testing | Risk to live data | Safe reset anytime |
| Studio UI | Cloud dashboard | localhost:54323 |
supabase start spins up a set of Docker containers that replicate the full Supabase cloud stack on your machine. Every container shares a private Docker network; your apps connect via localhost ports.
| Container | Port | Purpose |
|---|---|---|
| supabase/postgres | 54322 | PostgreSQL 15 database |
| supabase/studio | 54323 | Web-based DB admin UI |
| kong (API gateway) | 54321 | REST + Auth + Storage APIs |
| supabase/auth (GoTrue) | 54321/auth | JWT authentication |
| supabase/storage | 54321/storage | File/blob storage |
| supabase/realtime | 54321/realtime | WebSocket subscriptions |
| inbucket | 54324 | Email catcher (dev SMTP) |
Docker and Supabase CLI are already installed. Confirm Docker is running and has enough resources.
# Confirm Docker daemon is running
docker info | grep "Server Version"
# Confirm Supabase CLI version (need 1.x+)
supabase --version
# Check available disk space (local Supabase needs ~4 GB for images)
df -h /var/lib/docker
The supabase/config.toml file is already configured for this project. Verify these key settings are correct before starting:
# supabase/config.toml — key sections to confirm
[api]
schemas = ["public", "lookups", "personnel", "leave", "finance",
"tna", "workflow", "talent", "compliance", "audit"]
port = 54321
[db]
port = 54322
[studio]
port = 54323
enabled = true
[auth]
# Add these lines if not present:
site_url = "http://localhost:3000"
additional_redirect_urls = ["http://localhost:3000/**"]
Also confirm the JWT hook for custom_access_token_hook is wired up. Add to config.toml if missing:
[auth.hook.custom_access_token]
enabled = true
uri = "pg-functions://postgres/public/custom_access_token_hook"
Run from the repo root. This pulls Docker images (first time only, ~4 GB) then starts all containers.
cd /home/waheed/HRSanad/HRSanad
supabase start
When complete, the CLI prints all connection strings and keys. Copy them — you will need them in Step 5.
# Expected output from supabase start:
Started supabase local development setup.
API URL: http://127.0.0.1:54321
GraphQL URL: http://127.0.0.1:54321/graphql/v1
S3 Storage URL: http://127.0.0.1:54321/storage/v1/s3
DB URL: postgresql://postgres:postgres@127.0.0.1:54322/postgres
Studio URL: http://127.0.0.1:54323
Inbucket URL: http://127.0.0.1:54324
anon key: eyJh... ← save this
service_role key: eyJh... ← save this
JWT secret: super-secret-jwt-token-with-at-least-32-characters-long
Push all migrations to the local database. This creates all 10 schemas, 54 tables, RLS policies, indexes, triggers, and the custom JWT hook function.
supabase db push --local
# Alternatively, do a full reset (applies migrations + seed.sql):
supabase db reset
supabase db reset is the safer option — it applies all 26 migrations in order AND loads supabase/seed.sql (242 nationalities + lookup data) in one step.
Verify migrations ran cleanly:
# Connect to local DB and check schemas exist
psql postgresql://postgres:postgres@127.0.0.1:54322/postgres \
-c "\dn"
# Expected: public, lookups, personnel, leave, finance,
# tna, workflow, talent, compliance, audit
Replace the remote Supabase URLs and keys with the local ones printed in Step 3, in both env files:

- apps/api/.env
- apps/web/.env.local
JWT_SECRET for local Supabase is always the literal string shown above (it is the Supabase CLI default). Do not use the remote project's JWT secret here.
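A sketch of the local values for apps/api/.env. The exact variable names beyond SUPABASE_URL, DATABASE_URL, and JWT_SECRET (which appear elsewhere in this guide) are assumptions about this codebase; match them to whatever the app already reads:

```shell
# apps/api/.env — local Supabase values (some variable names assumed)
SUPABASE_URL=http://127.0.0.1:54321
DATABASE_URL=postgresql://postgres:postgres@127.0.0.1:54322/postgres
SUPABASE_ANON_KEY=<anon key from supabase start>
SUPABASE_SERVICE_ROLE_KEY=<service_role key from supabase start>
JWT_SECRET=super-secret-jwt-token-with-at-least-32-characters-long
```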
If you used supabase db reset in Step 4, seed data is already loaded. If you used db push, load it manually:
psql postgresql://postgres:postgres@127.0.0.1:54322/postgres \
-f supabase/seed.sql
# Verify nationalities loaded
psql postgresql://postgres:postgres@127.0.0.1:54322/postgres \
-c "SELECT COUNT(*) FROM lookups.nationalities;"
# Expected: 242
The local database starts empty (no tenants, no users). Create your first tenant and admin account via the Studio UI or SQL.
Option A — Supabase Studio UI (easiest):
Open Studio at http://127.0.0.1:54323, create the admin user under Authentication → Users, and note its UUID; the hr_admin role is assigned when that user is linked in user_profiles (see the SQL below).

Option B — SQL script:
-- Run in Studio SQL editor or psql
-- 1. Create company (tenant)
INSERT INTO public.companies (id, name, trade_name, country_code, currency_code, timezone)
VALUES (
gen_random_uuid(),
'HRSanad Test Co',
'HRSanad',
'AE',
'AED',
'Asia/Dubai'
) RETURNING id; -- note the UUID
-- 2. Admin user is created via Supabase Auth UI above,
-- then linked here:
INSERT INTO public.user_profiles (id, company_id, role, is_active)
VALUES (
'<auth.users UUID from Studio>',
'<company UUID from step 1>',
'hr_admin',
true
);
Start the API and web app. They will now connect to the local Supabase instance.
# From repo root — starts both apps
npm run dev
# Or separately:
npm run dev:api # API on :3001
npm run dev:web # Web on :3000
Quick smoke tests:
# 1. API health check
curl http://localhost:3001/health
# 2. Direct DB connection
psql postgresql://postgres:postgres@127.0.0.1:54322/postgres -c "\dn"
# 3. Supabase REST API
curl http://127.0.0.1:54321/rest/v1/ \
-H "apikey: <anon key>"
# 4. Open Studio
open http://127.0.0.1:54323
Re-generate packages/types/src/database.ts from the local schema to catch any type drift:
supabase gen types typescript --local \
> packages/types/src/database.ts
# Then type-check everything
npm run typecheck
Use these commands in your daily development cycle:
# Start local Supabase (run once per session)
supabase start
# Stop all containers (preserves data)
supabase stop
# Stop AND wipe all data (clean slate)
supabase stop --no-backup
# Reset DB: re-run all migrations + seed (keeps containers running)
supabase db reset
# Apply a new migration file you just created
supabase db push --local
# Check which containers are running
supabase status
# View Postgres logs
supabase logs db
# Create a new empty migration file
supabase migration new my_feature_name
# Find what is using the port
sudo lsof -i :54321
# Or change ports in supabase/config.toml [api] / [db] / [studio]
# then update .env files to match
sudo systemctl start docker
sudo usermod -aG docker $USER # then log out and back in
Migrations run in filename order. If a migration creates a schema that a later one depends on, ensure the order is correct. Run a full reset to replay from scratch:
supabase db reset
The local JWT secret is different from the remote project's secret. Make sure JWT_SECRET in apps/api/.env matches exactly what supabase start printed.
# Print the local JWT secret again at any time:
supabase status | grep "JWT secret"
RLS requires app.current_tenant_id to be set on the session. The API auth middleware handles this automatically, but direct psql sessions need it set manually:
-- In psql or Studio SQL editor:
SET app.current_tenant_id = '<your-company-uuid>';
SELECT * FROM personnel.employees;
Final verification checklist:

- [ ] Docker daemon running (docker info succeeds)
- [ ] supabase start completes without errors
- [ ] supabase db reset runs all 26 migrations cleanly
- [ ] All 10 schemas appear in the \dn output
- [ ] 242 rows in lookups.nationalities
- [ ] apps/api/.env updated with local URL, keys, JWT secret
- [ ] apps/web/.env.local updated with local URL and anon key
- [ ] npm run dev starts both apps without connection errors
- [ ] Web app reachable at http://localhost:3000
- [ ] Studio reachable at http://127.0.0.1:54323
- [ ] Inbucket reachable at http://127.0.0.1:54324

| Service | URL | Notes |
|---|---|---|
| Supabase API | http://127.0.0.1:54321 | Use for SUPABASE_URL in .env |
| PostgreSQL | postgresql://postgres:postgres@127.0.0.1:54322/postgres | Use for DATABASE_URL |
| Studio (UI) | http://127.0.0.1:54323 | No login needed locally |
| Inbucket (Email) | http://127.0.0.1:54324 | Catches all outgoing emails in dev |
| SMTP (for Nodemailer) | host: 127.0.0.1, port: 54325 | No auth needed, any user/pass works |
| HRSanad Web | http://localhost:3000 | Next.js dev server |
| HRSanad API | http://localhost:3001/v1 | Fastify dev server |