A vibe-coded app sitting on localhost is just an experiment. Deployment is where the discipline shows, where AI defaults are weakest, and where most solo projects die before reaching a single user. The model writes code that runs on your laptop because that is the easiest place to verify it works. The moment you push that same code into a real environment, every assumption it baked in starts to crack. Filesystem paths that worked locally are read-only on the host. The database the model assumed was running on localhost:5432 does not exist. Environment variables that were sitting in your shell are missing. Cron jobs do not run because the host kills idle processes. And the SSL certificate that the AI swore was handled is actually expired.
This is the gap between code that runs and an application that lives. Closing it requires unglamorous decisions about hosts, domains, certificates, containers, pipelines, databases, monitoring, and backups. Every one of those decisions has cheap defaults that work for years and expensive defaults that punish you within weeks. The AI does not know which is which on your stack. You have to.
This page walks through the deployment stack that actually matters in 2026 for a solo developer or small team shipping AI-built applications. The stack favors managed services, sensible defaults, and the boring choices that survive scaling. The opinions are blunt and the recommendations have prices attached.
Section 1: Choosing Infrastructure
The first deployment decision is also the most consequential. Pick wrong and you spend the next year fighting your host instead of shipping features. Pick right and you forget the host exists. There are three serious categories of hosting in 2026 for web applications: virtual private servers (VPS), platform-as-a-service (PaaS), and edge runtimes. Each one trades cost, control, and operational burden in a different ratio.
A VPS is a Linux box you rent by the hour. Hetzner sells a 4-core, 8 GB ARM box for around 4 euros per month. DigitalOcean sells a 2-core, 4 GB box for $24 per month. Linode and Vultr sit in similar territory. A VPS gives you root access to a real machine. You can install anything, run anything, and configure anything. The cost is that you are now a system administrator. You patch the kernel. You configure the firewall. You set up the reverse proxy. You renew the certs. You debug the cron jobs. If you enjoy this work or you have unusual requirements, a VPS is the cheapest serious hosting on the planet. If you do not, a VPS is a tax on your shipping velocity.
A PaaS sits on top of cloud infrastructure and abstracts the operational layer away. Vercel, Render, Railway, and Fly.io are the four serious options for general web applications. You connect a Git repository, push code, and the platform builds and runs it. SSL is automatic. Custom domains take five minutes. Branch deploys give you a live preview URL for every pull request. Logs are streamed to a web dashboard. The platform handles the patches, the proxy, the certs, and the cron jobs. You pay 5 to 10 times the price of a comparable VPS, but you get back the time it would have taken to administer it.
An edge runtime runs your code at the network edge, close to the user, in a heavily restricted JavaScript environment. Cloudflare Workers, Vercel Edge Functions, and Deno Deploy are the three serious options. The advantages are global low-latency response times (50 ms cold start, 5 ms warm) and aggressive auto-scaling without ops work. The cost is real: edge runtimes have hard CPU limits (typically 50 ms per request), no persistent filesystem, no long-running processes, no native TCP sockets, and a JavaScript-only or WebAssembly-only execution model. They are excellent for stateless API edges, geo-routing, A/B testing logic, and request rewrites. They are the wrong choice for anything that needs a long compute burst, a stateful connection, or a Node.js library that calls into native bindings.
The VPS option: Hetzner's ARM box at around 4 euros per month gives you 4 cores, 8 GB RAM, and an 80 GB SSD. Cheapest serious hosting on the planet. You manage everything: Linux updates, firewall, reverse proxy, SSL renewals, log rotation, backups. Best when you have unusual needs (custom binaries, GPU workloads, niche databases) or when you have built ops muscle and want full control. Worst when your goal is to ship product and you have no patience for sysadmin work.
The PaaS option: Vercel, Render, and Railway start free and run a small production app for $20 to $50 per month. Connect Git, push code, get a deploy. SSL, domains, branch previews, logs all handled. Best for solo devs and small teams shipping web apps that fit standard runtime models. Worst when you have unusual binary or service requirements the platform does not support, or when scale gets large enough that the per-unit cost dominates.
The edge option: Cloudflare Workers, Vercel Edge, Deno Deploy. Sub-50 ms latency anywhere in the world. Auto-scales without thought. Best for stateless API edges, geo-aware routing, request rewrites, and lightweight JavaScript handlers that do not need persistent state or long compute. Worst for anything that needs filesystem, native libraries, long-running connections, or compute bursts beyond ~50 ms per request.
The honest take for a solo developer shipping a Next.js or Remix application: pick Vercel or Render and do not optimize cost until the bill becomes a real concern. The hours you save by not running a VPS dwarf the cost difference for the first 12 months of any serious project. Hetzner becomes attractive once you have multiple services to consolidate, predictable load, and the operational competence to run a Linux box without leaking your day to it. Edge becomes attractive when you measure latency in milliseconds and you have requests cheap enough to handle in a 50 ms compute budget.
One trap to avoid: do not host on a hyperscaler (AWS, GCP, Azure) for a small project unless you have a specific reason to. The free tiers are designed to upsell you. The pricing is opaque. The egress fees are punishing. The console has 200 services that you do not need. A solo dev who deploys a small app on AWS spends more on architecture diagrams than on shipping. Use a hyperscaler when you have employees who specialize in it or when you have a workload that genuinely needs one of the 200 services.
Section 2: Domain and DNS
A domain name is the cheapest piece of permanent infrastructure you will ever buy. It is also the piece most likely to be configured wrong on launch day. Domains live at registrars. The three honest options in 2026 are Cloudflare, Porkbun, and Namecheap. Cloudflare sells domains at registry cost (no markup), bundles free WHOIS privacy, and integrates with the rest of the Cloudflare stack you are probably going to use anyway. Porkbun is a close second on price and has a slightly friendlier dashboard. Namecheap is fine but has historically marked up renewals after the first year. Avoid GoDaddy. Avoid any registrar that bundles parking pages, SEO upsells, or aggressive renewal pop-ups.
Once you own the domain, you point it somewhere with DNS records. Most apps need a small set of records and no more. An A record points your apex domain (example.com) at an IP address. A CNAME record points a subdomain (www.example.com or api.example.com) at another domain. An MX record points your domain at a mail server, so the world knows where to send email. TXT records hold verification strings and authentication policies (SPF, DKIM, DMARC for email; ownership verification for various services).
The pattern that survives in 2026 for a typical app is: A record for apex pointing at the host or proxy IP, CNAME for www pointing back to apex, MX records pointing at a managed mail provider (Google Workspace, Fastmail, Migadu, or transactional providers like Resend or Postmark for outbound), and a small set of TXT records for SPF, DKIM, DMARC, and ownership verification. That is the entire DNS surface for most projects.
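In zone-file notation, that surface looks roughly like the sketch below. The IP, mail hosts, and token are placeholders; the real MX, SPF, DKIM, and DMARC values come from your mail provider's setup page.

```
; apex and www
example.com.         300  IN  A      203.0.113.10        ; host or proxy IP
www.example.com.     300  IN  CNAME  example.com.

; mail (values supplied by your mail provider)
example.com.         300  IN  MX     10 mx1.mailprovider.example.
example.com.         300  IN  MX     20 mx2.mailprovider.example.
example.com.         300  IN  TXT    "v=spf1 include:spf.mailprovider.example ~all"
_dmarc.example.com.  300  IN  TXT    "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"

; ownership verification for third-party services
example.com.         300  IN  TXT    "verification-token-from-the-service"
```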
Putting Cloudflare in front of your site is the highest-leverage deployment decision you can make. Cloudflare offers free CDN, free DDoS protection, free DNS hosting, and a free TLS edge certificate. It caches static assets globally. It absorbs malicious traffic before it reaches your origin. It hides your origin IP from the public internet, which keeps you out of trouble when someone targets you with a flood. The setup is two changes: point your nameservers at Cloudflare, then enable proxying on the records that should sit behind the proxy. Five minutes of work, real protection, no monthly cost.
One pitfall to flag: when you put Cloudflare in front of your origin, you have two TLS legs. The browser-to-Cloudflare leg is handled automatically with a free Cloudflare-issued cert. The Cloudflare-to-origin leg needs its own certificate or it falls back to plaintext. Cloudflare offers an "origin certificate" that you install on the origin and that Cloudflare trusts at the edge; this is the right answer for VPS deployments. PaaS providers handle this leg for you. Set the encryption mode to "Full (strict)" and never to "Flexible". Flexible mode encrypts the leg the user can see and leaves the leg the attacker can reach in plaintext, which is exactly the wrong tradeoff.
If you skip Cloudflare and run your own DNS at the registrar, that is also fine for a small project. You lose the CDN, the DDoS shield, and the IP hiding, but you keep your stack simpler. The threshold to move to Cloudflare is the moment any user complains about latency or any malicious traffic shows up in your logs.
Section 3: SSL
SSL/TLS in 2026 is solved at the protocol level and almost solved at the operations level. Almost. The default for any new project is Let's Encrypt: a free, automated certificate authority that issues 90-day certs to anyone who can prove control of a domain. Every PaaS handles Let's Encrypt automatically. Cloudflare handles its edge certs automatically. Caddy, the modern reverse proxy, handles Let's Encrypt automatically with a couple of lines of configuration. Nginx handles it via Certbot, with a setup that is well-documented but still has rough edges.
The single rule that matters: certs auto-renew or you have an outage scheduled. A 90-day cert that you installed by hand will quietly expire three months later, long after you have stopped thinking about it. The expiration produces a browser warning that breaks every login, every API call, every webhook, and every CI job that hits your domain. The fix takes ten minutes. The damage can take hours to detect, because many monitoring setups silently ignore certificate errors. Set up auto-renewal on day one or do not deploy.
Caddy is the right default in 2026. Two-line config gets you HTTPS on a domain with auto-renewal handled. Nginx is fine but the config is older, more verbose, and the cert workflow involves Certbot. Traefik is a middle ground if you are running Docker Compose and want auto-discovery.
The setup on a VPS is short. Add A records for the apex and any subdomains you want HTTPS on, then wait for propagation (usually under 5 minutes if your TTL is 300 seconds, much longer if your TTL is still the default 24 hours).
A Caddyfile of three lines per site is enough: domain name, reverse_proxy directive pointing at your app port, and Caddy handles the rest. Caddy talks to Let's Encrypt, completes the ACME challenge, and installs the cert.
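A minimal Caddyfile along those lines, assuming the app listens on port 3000 (both the domain and the port are placeholders):

```
example.com {
    reverse_proxy localhost:3000
}

www.example.com {
    redir https://example.com{uri} permanent
}
```

The first block is the entire HTTPS story: Caddy obtains the certificate, serves it, and renews it without further configuration.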
Caddy renews 30 days before expiry by default. Check the logs after 60 days to confirm the renewal cycle worked. Set a calendar reminder for day 89 of your first cert as an explicit safety net.
Wildcard certs cover *.example.com. They require DNS-01 ACME validation, which means your DNS provider needs an API token Caddy can call. Cloudflare, Route 53, and most managed DNS providers expose this. Without DNS-01, you need a separate cert per subdomain.
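With a DNS plugin compiled in (Caddy plugins are added at build time, for example with xcaddy and the Cloudflare module), the wildcard block is a small extension of the same file. The environment variable name is your choice; the token needs permission to edit DNS on the zone.

```
*.example.com {
    tls {
        dns cloudflare {env.CLOUDFLARE_API_TOKEN}
    }
    reverse_proxy localhost:3000
}
```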
UptimeRobot and Better Stack both check cert expiry as part of HTTPS monitoring. Configure alerts at 14 days and 7 days before expiry. Auto-renewal is the answer; cert expiry monitoring is the safety net behind it.
For Cloudflare deployments, the second TLS leg matters. The Cloudflare-to-origin certificate can come from two places. Option one is a Let's Encrypt cert on the origin, which Cloudflare validates against. Option two is a Cloudflare-issued "Origin Certificate" that is trusted only by Cloudflare and lasts 15 years. Origin certs are easier (no auto-renewal cycle on the origin) but tie you to Cloudflare. Let's Encrypt origin certs are portable but require origin renewal cycles. Either is correct; pick based on whether you want flexibility to remove Cloudflare in the future.
One more pitfall: AI tools occasionally suggest "self-signed certificates" for development or testing. A self-signed cert is fine in localhost development. A self-signed cert in production is a browser warning that nobody trusts, and configuring exceptions in clients is a security anti-pattern that bites you for years. If you see a self-signed cert in a deployment script generated by AI, delete it and use Let's Encrypt.
Section 4: Containers vs Bare-Metal Node Processes
The next decision is how the application process runs on the host. Two camps exist. The container camp ships the app inside a Docker image that includes the runtime, the dependencies, and the configuration. The bare-metal camp installs the runtime on the host (Node 22, Python 3.13, Go 1.24, whatever) and runs the app directly as a process.
Containers win on reproducibility. The same image runs the same way on your laptop, on staging, and in production. Dependencies cannot drift. Operating system differences cannot trip you up. A new developer can run the app with one docker compose up command. The image is the build artifact. You ship one thing and that one thing works.
Bare-metal wins on simplicity. The process is a process. You can attach a debugger directly. You can read the file paths without translating between container and host. You can run a single command (pm2 start, systemd, supervisor) and the app is up. Memory is real memory, not a cgroup. CPU is real CPU, not a quota. Logs are stdout, not docker logs. There are fewer layers between your code and the kernel, which means fewer places to look when something is wrong.
The container tradeoff: reproducible builds. One image, runs anywhere. Easy to ship to a registry, easy to roll back to a previous tag, easy to standardize across staging and production. Cost: a Docker daemon, a build pipeline, a registry, and an extra layer in every debug session. Best when you have multiple services to coordinate, multiple environments to keep consistent, or a team where laptop drift is a real problem.
The bare-metal tradeoff: the process is the artifact. Fewer layers, easier to debug, lower overhead. Direct access to filesystem and network. Process managers (pm2, systemd) handle restarts and logging. Cost: dependency drift between machines, harder to onboard new devs, no quick rollback if you break the runtime. Best when you have a single service, a single host, and the ops experience to keep the runtime locked.
The pragmatic answer for most solo projects: bare-metal in development for the simplicity, containers in staging and production for the reproducibility. This is a real and useful split. Local dev runs the app with npm run dev or go run . on your laptop. Staging and production run the same Docker image. The Dockerfile lives in the repo. The image gets built in CI and deployed to the host. You debug locally without a container, and you ship to production with one. The discipline is real but the productivity wins are real too.
One edge case: PaaS providers handle this for you. Vercel, Render, and Railway all build a container under the hood, but they hide the Dockerfile. You ship a Git repo, they handle the rest. This is fine until you need a binary the platform does not include or a runtime version they do not support. At that point, you write a Dockerfile and ship the container yourself. Render, Railway, and Fly.io support custom Dockerfiles natively. Vercel does not (yet, in 2026, for most plans).
If you go containers, three rules: use multi-stage builds (one stage for building, a slimmer stage for running), pin the base image to a specific digest (not just a tag, which can shift under you), and tag your built images with the Git commit SHA so you can correlate a running container back to source. AI tools often skip these defaults. Multi-stage builds typically cut image size by 5 to 10x. Digest pinning prevents the silent base-image substitution that has bitten teams in the past.
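A sketch of those three rules for a Node service; the digest, registry name, build output path, and entrypoint are placeholders to replace with your own.

```dockerfile
# Pin the base image to a digest, not just a tag.
# Find the digest with: docker pull node:22-slim (copy the sha256 it reports).
FROM node:22-slim@sha256:<digest> AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Slimmer runtime stage: production dependencies only, non-root user.
FROM node:22-slim@sha256:<digest> AS runtime
WORKDIR /app
ENV NODE_ENV=production
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
USER node
CMD ["node", "dist/server.js"]
```

Build and tag with the commit SHA so a running container maps back to source: docker build -t registry.example.com/app:$(git rev-parse --short HEAD) .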
Section 5: CI/CD With AI Assistance
Continuous integration is the pipeline that runs every time you push code. Continuous deployment is the pipeline that ships to production every time CI passes. Together they form the single most underrated piece of solo developer infrastructure. A working CI/CD pipeline turns a deploy from a 30-minute ceremony into a git push. The savings compound across hundreds of deploys.
The standard pipeline in 2026 looks like: a developer pushes to a branch, the CI runs tests and a build, on green the deploy happens, on red the deploy is blocked. GitHub Actions is the dominant CI platform for projects on GitHub (which is most projects). GitLab CI is the equivalent on GitLab. CircleCI exists. The Vercel and Render PaaS platforms include CI as part of the deploy flow, which is enough for many projects. For anything beyond a basic build-and-deploy, GitHub Actions is the right default.
AI tools are good at writing CI configs. They are also good at writing CI configs that leak environment variables, skip security checks, and accidentally publish secrets to a public registry. The CI pipeline is one of the highest-risk surfaces for AI-generated configuration because mistakes get committed and re-run automatically. Always read the CI config the AI generated, line by line, before merging.
The most common AI-generated CI mistakes worth flagging: putting secrets in the YAML directly instead of in encrypted secret stores; running tests against production credentials; using `pull_request_target` triggers (which run with secret access on attacker-controlled code) without understanding the implications; checking out the wrong commit and running tests on stale code; and using third-party Actions from anonymous publishers instead of pinning to a verified version. Each of these has produced real breaches in 2024 and 2025.
Branch deploys (also called preview environments) are the killer feature of modern PaaS platforms and the second-best feature of GitHub Actions plus a custom hook. Every pull request gets its own live URL. The reviewer can click the URL, exercise the change in a real browser, and verify that it does what the diff claims. The cost is paid in deploy infrastructure (each branch costs something) but the engineering velocity gain is huge. Vercel and Render both do this automatically. On a VPS, you can build it with Docker and a path-based reverse proxy.
Triggers: the workflow runs when commits land on any branch and when pull requests are opened or updated. Avoid pull_request_target unless you understand the security model.
Caching: use the dependency cache built into setup-node (or the cache action) so installs reuse previously downloaded packages instead of hitting the registry cold. Cache hits cut a 90-second install down to 10 seconds. The cache key should depend on the lockfile so the cache invalidates when dependencies change.
Parallel jobs cut the wall-clock time. A failing lint job does not need to wait for a slow test job to finish before failing the build.
The build step for Node apps is npm run build. For Docker, it is docker build with cache mounts and an output to a registry. The build artifact is what gets deployed; do not rebuild on the deploy step.
The deploy step only runs on pushes to the main branch and only after tests pass. Use a deployment provider's CLI (vercel deploy, render deploy, flyctl deploy) or push to a Docker registry that triggers a webhook on your host.
After deploy, hit the homepage and a critical API endpoint with curl. If either fails, alert the deployer and consider an automatic rollback.
One discipline rule: the CI runs on every push. The deploy only happens from main. Never let the CI of a feature branch deploy to production. This is a common AI-generated mistake when configuring pipelines because the model sees "on push, deploy" as a simple pattern and does not differentiate between branches.
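A sketch of that shape as a GitHub Actions workflow. The deploy command is a stand-in (shown with the Vercel CLI), example.com and /api/health are placeholders for your domain and a real health route, and the only secret reference goes through the encrypted secrets store.

```yaml
name: ci
on:
  push:
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: npm            # keyed on the lockfile; invalidates when dependencies change
      - run: npm ci
      - run: npm test
      - run: npm run build

  deploy:
    needs: test
    if: github.ref == 'refs/heads/main'   # deploy only from main, only after tests pass
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npx vercel deploy --prod --token=${{ secrets.VERCEL_TOKEN }}
      - name: Smoke test
        run: |
          curl --fail --silent https://example.com/ > /dev/null
          curl --fail --silent https://example.com/api/health > /dev/null
```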
Section 6: Database Hosting Choices
The database is the part of the deployment most likely to lose your data and least likely to be reversible if you make the wrong choice. Pick carefully and you will not touch this decision for years. Pick badly and you will spend a quarter migrating.
For relational data in 2026, the choice space is small and well-understood. Postgres is the default. SQLite is the right answer for a surprising number of small apps. MySQL exists. Almost every other database is a niche tool for a specific workload that you probably do not have.
Managed Postgres is what you pick when you want zero operations work. Supabase, Neon, Railway, and the cloud providers (RDS on AWS, Cloud SQL on GCP) all sell managed Postgres. The pricing varies wildly depending on plan structure and load profile. Supabase has a generous free tier (500 MB storage, 2 GB egress, paused after a week of inactivity) and a $25 per month tier for serious work. Neon has a similar shape with a generous free tier and per-compute-second pricing on their paid plans. Railway charges based on usage, typically $5 to $20 per month for small apps. RDS is the enterprise option, easily 2 to 5x the price of the indie tier providers, and reasonable when you scale.
Self-hosted Postgres on a VPS is cheap but real work. You install Postgres, configure pg_hba, set up users, configure WAL archiving, set up replication if you want it, and run pg_dump on a schedule. You also keep up with security patches. The cost difference between $5 per month for self-hosted on Hetzner and $25 per month for managed Supabase is real, but the time difference is bigger than $20 worth at any reasonable hourly rate. Self-hosted makes sense when you have multiple databases on one host or when you have a workload that does not fit the managed plans.
SQLite plus Litestream is the unsung answer for a surprising number of apps. SQLite runs in your application process, with zero network round-trip per query. Litestream replicates the database to S3 (or any S3-compatible object store) in real time. The combination gives you a database with sub-millisecond reads and a continuous off-site backup, for the cost of object storage (typically pennies per month for small databases). It works for single-instance applications. It does not work when you have multiple application servers writing to the same database. It is the default for apps where read throughput matters more than write concurrency, and where the data fits comfortably on disk.
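A minimal Litestream configuration for that setup; the bucket name and endpoint are placeholders, and the access keys are read from the LITESTREAM_ACCESS_KEY_ID and LITESTREAM_SECRET_ACCESS_KEY environment variables.

```yaml
# /etc/litestream.yml
dbs:
  - path: /data/app.db
    replicas:
      - type: s3
        bucket: my-app-litestream               # hypothetical bucket name
        path: app.db
        endpoint: https://s3.us-west-000.backblazeb2.com
```

Running the app as litestream replicate -exec "node server.js" ties replication to the process lifecycle, so there is no separate service to forget.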
Whichever you pick, the deploy story has a non-negotiable rule: backup before migration. Every schema change ships with a fresh backup taken minutes before. Every managed provider has a snapshot button or API. Every self-hosted Postgres should have pg_dump running before any structural change. The model will write a migration. The model will not always remember to back up first. You have to.
One pitfall: connection pooling. AI tools often write code that opens a fresh connection per request, which works locally and falls over under any real load. Use a pooler. PgBouncer is the standard for self-hosted. Supabase, Neon, and Render include connection poolers as part of their managed offering, but you have to use the pooler-specific connection string, which is different from the direct connection string. Reading the connection string twice on launch day saves an outage on launch day.
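A sketch of the pattern with node-postgres, assuming DATABASE_URL holds the pooler connection string; the pool is created once per process and shared by every request.

```js
// db.js: one shared pool per process, not one connection per request.
import pg from "pg";

export const pool = new pg.Pool({
  connectionString: process.env.DATABASE_URL, // the pooler URL, not the direct one
  max: 10,                                    // cap concurrent connections from this instance
  idleTimeoutMillis: 30_000,
});

// Queries check a connection out of the pool and return it when done.
export function getUser(id) {
  return pool.query("SELECT * FROM users WHERE id = $1", [id]);
}
```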
Section 7: Observability Stack
An application without observability is a black box that you only debug by interviewing users about what they saw on screen. An application with good observability is one where you know what is wrong before the user does and you have the data to fix it. The minimum stack in 2026 has three layers: logs, metrics, and errors. Plus an uptime check on top.
Logs are what your application says about itself. Pino is the standard structured logger for Node. Zap is the standard for Go. The Python ecosystem uses the stdlib logging module with structured formatters. The format is JSON, with one object per line. Each line includes a timestamp, a level, a message, and any contextual fields (request ID, user ID, route, latency). JSON logs are searchable. Plaintext logs are not.
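A minimal Pino setup and the kind of line it emits; the field names here are examples, not a required schema.

```js
// logger.js
import pino from "pino";

export const logger = pino({ level: process.env.LOG_LEVEL || "info" });

// In a request handler:
logger.info(
  { requestId: "req_8f3a", userId: "usr_42", route: "/api/orders", latencyMs: 38 },
  "order created"
);
// => {"level":30,"time":1767225600000,"pid":123,"hostname":"web-1","requestId":"req_8f3a",
//     "userId":"usr_42","route":"/api/orders","latencyMs":38,"msg":"order created"}
```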
Logs need a place to live where you can search them. The cheap option is to write them to disk and search with grep. This works until your fleet is more than one machine or your retention crosses a few weeks. The right option is to ship them to a hosted log search system. Better Stack (formerly Logtail), Papertrail, Datadog Logs, and Grafana Loki are the serious choices. Better Stack has a generous free tier (1 GB per month) and is a sensible default for a solo project. Datadog is the enterprise answer that you will pay enterprise prices for.
Metrics are numerical measurements over time: request rate, error rate, latency, memory, CPU. The default open-source stack is Prometheus for collection plus Grafana for visualization. The default managed stack is Datadog or New Relic or Honeycomb. For solo projects, the right answer is often "ignore metrics for now" because the time investment in setting up Prometheus is not worth it until you have something to graph. Error rate and latency get you 90% of the value, and they show up in your error tracker and your hosting platform's dashboard for free.
Errors are the single most leveraged piece of observability you will set up. Sentry is the default. The free tier handles 5,000 errors per month, which is more than enough for a small app, and the integration is one line of code in most frameworks (Sentry.init with your DSN). Sentry captures the stack trace, the request context, the user context (if you set it), and the breadcrumb trail leading up to the error. It deduplicates errors so a thousand instances of the same crash become one issue. It alerts you on new errors via email, Slack, or webhook. Setting up Sentry takes 30 minutes. The first time you debug a production crash without leaving your laptop, you will wonder why you put it off.
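The setup for a Node service is roughly this; the DSN lives in an environment variable, never in the repo.

```js
// instrument.js: import this before the rest of the app so Sentry hooks errors early.
import * as Sentry from "@sentry/node";

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  environment: process.env.NODE_ENV || "development",
});
```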
Uptime monitoring is the last layer. UptimeRobot, Better Stack, and Pingdom poll your domain every 1 to 5 minutes and alert you when it stops responding. UptimeRobot has a free tier with 50 monitors at 5-minute intervals. Better Stack has a free tier with 10 monitors at 30-second intervals plus incident management features. Both are fine. The right answer is whichever you set up today rather than next month.
The minimum viable observability for a launch in 2026: Sentry plus an uptime monitor. 30 minutes to set up. Catches the vast majority of post-deploy issues. Costs zero on the free tiers. The full stack (logs, metrics, errors, uptime) takes a day to set up and is the right place to land within a month of launch. Skip none of it once you have real users.
One pitfall AI tools introduce: logging sensitive data. The model will happily log the request body, which often includes passwords, tokens, or PII. Strip these before logging. Pino has a redact option that replaces specified fields with "[Redacted]" before serialization. Use it. The audit trail of "we logged the user's password to a third-party log service" is a regulatory problem and a customer-trust problem.
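A sketch of the redact option; the paths listed are examples of fields that commonly carry credentials, so adjust them to your own request shape.

```js
import pino from "pino";

export const logger = pino({
  redact: {
    paths: ["password", "token", "req.headers.authorization", "user.email"],
    censor: "[Redacted]", // what the field value becomes before serialization
  },
});
```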
Section 8: Backups (The Boring Step Everyone Skips)
Backups are the unglamorous discipline that separates the apps that survive a disaster from the ones that die in one. The dirty secret is that almost every app has backups. Almost no app has tested backups. The two are not the same.
The threat model has three categories. Hardware failure: a disk dies, a host gets reclaimed, a region goes down. Human error: someone runs DROP TABLE on the wrong database, deletes the wrong row, runs a migration with a typo. Adversarial action: an attacker gets database credentials and exfiltrates or deletes data. Each one has a different recovery story and each one needs a different backup strategy to handle.
Snapshots are the first layer. Every managed Postgres provider (Supabase, Neon, Railway, RDS) takes daily snapshots automatically and retains them for 7 to 30 days depending on plan. The snapshot is taken at the storage layer, so it is fast, atomic, and consistent. Restore is typically a UI click. Snapshots cover hardware failure cleanly. They cover human error within the retention window. They do not cover an attacker who has access to your provider account and can delete the snapshots.
Off-site backups are the second layer. A snapshot stored in the same provider account as the database is a snapshot that disappears with the account. A real backup lives somewhere else: a different provider, different geography, different access credentials. The pattern that works is a nightly pg_dump piped to an S3-compatible object store under a different account. Backblaze B2 sells object storage at $0.005 per GB per month, with no egress fees if you stay under the egress allowance. A 10 GB nightly backup costs about $0.05 per month. The cost is trivial. The setup is one cron entry and a script. The protection is real.
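A sketch of that script, assuming the aws CLI is configured with credentials for the backup account; the bucket name and endpoint are placeholders.

```bash
#!/usr/bin/env bash
# nightly-backup.sh: timestamped pg_dump shipped to an off-site S3-compatible bucket.
set -euo pipefail

STAMP=$(date -u +%Y%m%dT%H%M%SZ)   # timestamp in the filename, so backups never overwrite
FILE="backup-${STAMP}.sql.gz"

pg_dump "$DATABASE_URL" | gzip > "/tmp/${FILE}"
aws s3 cp "/tmp/${FILE}" "s3://my-app-db-backups/${FILE}" \
  --endpoint-url "https://s3.us-west-000.backblazeb2.com"
rm "/tmp/${FILE}"
```

The matching crontab entry is one line: 15 3 * * * /opt/scripts/nightly-backup.sh >> /var/log/nightly-backup.log 2>&1.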
The third layer is restore drills. A backup that has never been restored is not a backup; it is an artifact you hope is a backup. Real restore drills run on a schedule (monthly is reasonable for small apps) and confirm three things: the backup file exists, the backup file is valid, and a fresh database can be built from it. The drill should produce output you can review. If the drill fails, fix it before you need it.
The only test of a backup is restoring it. Backups that exist but have never been restored have a non-trivial failure rate (corrupted files, expired credentials, incomplete dumps, missing schemas). Set a calendar reminder to run a restore drill on a fresh test database every month. The first drill will surface gaps. Fix them. The second drill will be smoother. By the third, you will trust your backups, which is the entire point of the exercise.
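A drill along these lines, assuming a local Postgres to restore into; the bucket and endpoint are the same placeholders as in the backup script.

```bash
#!/usr/bin/env bash
# restore-drill.sh: prove the newest backup can actually be restored.
set -euo pipefail

BUCKET="s3://my-app-db-backups"
ENDPOINT="https://s3.us-west-000.backblazeb2.com"

# Newest object by name; timestamped filenames sort chronologically.
LATEST=$(aws s3 ls "${BUCKET}/" --endpoint-url "$ENDPOINT" | awk '{print $4}' | sort | tail -n 1)
aws s3 cp "${BUCKET}/${LATEST}" /tmp/drill.sql.gz --endpoint-url "$ENDPOINT"

dropdb --if-exists restore_drill
createdb restore_drill
gunzip -c /tmp/drill.sql.gz | psql --quiet restore_drill

# The smoke query proves the schema and data are reachable, not just that psql exited cleanly.
psql restore_drill -c "SELECT COUNT(*) FROM users;"
```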
The full backup discipline for a solo project in 2026 looks like: managed snapshots from the database provider (covers most disasters cheaply), nightly pg_dump shipped to an off-site object store under a different account (covers provider account loss), monthly restore drill on a fresh database (proves the backup works), and quarterly review of retention and access (catches drift in the policy). The full setup is two hours of work and a few dollars per month. The downside of skipping it is permanent data loss.
For SQLite plus Litestream, the backup is the replication. Litestream pushes WAL pages to S3 in real time, which means your recovery point is seconds, not the previous day. Restore drills still apply. Run litestream restore on a fresh disk monthly and verify the database is intact.
One AI-generated trap to flag: the model sometimes writes a backup script that overwrites the previous backup file rather than appending a timestamp. The result is a single backup that gets corrupted if the corruption happened before the last run, with no earlier copy to fall back on. Always include a timestamp in the backup filename. Always retain at least 30 days of historical backups. Storage is cheap; lost data is not.
Section 9: The Deploy-Day Checklist
Launch day is when everything you set up, or skipped, gets tested under real conditions for the first time. The teams that ship reliably have a checklist they run through before flipping the switch. The teams that have midnight outages skipped the checklist. Here is the version that catches the failures most likely to matter.
DNS: run dig on the domain and any subdomains from at least two networks (your laptop and a remote box). Confirm A, CNAME, and MX records resolve to the expected values. If the TTL was high, propagation can take hours; lower it to 300 seconds at least 24 hours before launch so changes propagate fast if you need them to.
Certificates: hit the site in a browser and confirm the green padlock. Check the cert expiry date. Confirm the renewal mechanism (Caddy logs, Certbot logs, or the PaaS dashboard) shows a future renewal scheduled. A cert expiring in 30 days with no renewal scheduled is a 30-day timer to an outage.
Database migration: run the migration in a staging environment first. Take a fresh snapshot of production immediately before running the migration in production. After the migration, run a smoke query (SELECT COUNT(*) FROM users) to confirm the schema is intact and the data is reachable.
Environment variables: run a diff between the .env.example in your repo and the actual environment variables on the production host or PaaS. Every variable in the example file should be set with a real value in production. Secrets (API keys, database URLs, JWT signing keys) should never be in the source repo, in git history, or in CI logs.
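One way to run that diff from a shell on the host (for a PaaS, compare against the platform's env listing instead); it prints names declared in .env.example but missing from the live environment.

```bash
# Variables present in .env.example but not set in the current environment.
comm -23 \
  <(grep -oE '^[A-Za-z_][A-Za-z0-9_]*' .env.example | sort -u) \
  <(env | cut -d= -f1 | sort -u)
```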
Error tracking: trigger a deliberate error in production (a route that throws, a log line at error level) and confirm Sentry or your error tracker captures it within 60 seconds. If it does not arrive, the integration is broken and you are flying blind on launch.
Uptime monitoring: confirm UptimeRobot or your monitor of choice has a check on the domain, the alert recipient is correct, and a test alert reaches you. Test alerts (most providers have a button) confirm the alert path before launch.
Critical paths: click through the three or four most important user paths in production with a real account. Sign up. Log in. Submit a form. Hit a paid endpoint. Watch the logs and the error tracker for anything unexpected. Anything that breaks at this stage is a thousand times cheaper to fix than after public launch.
Rollback plan: write down (in a paragraph) how to roll back. Which command. Which deploy ID. Which database snapshot. Where to look first if something goes wrong. Every solo dev with a rollback plan ships better than every solo dev without one.
The pattern is the same across every project; only the details vary by stack. Run the checklist. Skip nothing. The checklist catches the failures the AI was confident did not exist.
One additional rule worth flagging: never deploy on Friday afternoon. The probability of an issue is the same. The probability of someone being available to fix it is much lower. The probability of you spending Saturday on it is much higher. Deploy mid-week, mid-day, when the support muscle is awake and the patch path is short. AI tools are happy to deploy at midnight on a Friday. You should not be.
Closing
Deployment is not glamorous and AI cannot save you from getting it wrong. The teams that ship reliably are the ones with boring, predictable deploy pipelines. The solo devs who lose nights to outages are the ones who deployed with the AI's first config and never looked back. Every layer in this page (host, DNS, SSL, container, CI, database, observability, backup, checklist) has a lazy default the AI will produce on autopilot and a correct default that takes a few minutes longer. The few minutes are the entire job.
The good news: the correct defaults in 2026 are cheap, well-documented, and largely automatable. A solo dev who picks Vercel or Render, points DNS through Cloudflare, sits behind Let's Encrypt, runs CI on GitHub Actions, hosts Postgres on Supabase or Neon, wires Sentry plus UptimeRobot, ships nightly off-site backups, and runs a launch-day checklist has a deployment that survives almost everything that small projects encounter. The total monthly cost for the full stack is under $50 for early projects. The total setup time is under a day. The savings on outages, data loss, and credential leaks are immeasurable.
The bad news: AI tools confidently generate every one of these layers wrong by default. They suggest hyperscaler defaults that are too expensive. They suggest self-signed certs that break in browsers. They suggest CI configs that leak secrets. They suggest backups that overwrite themselves. They suggest deploy scripts that skip the smoke test. None of these are catastrophic on their own. Stacked together, they produce the kind of slow, grinding outage that kills solo projects in their first six months.
The discipline is to read every line of generated config before merging. To verify the AI's recommended host against your actual workload. To test the backup before you need it. To run the checklist before you flip the switch. The model writes faster than you can. The model also gets the deploy wrong faster than you can. Your job in this stage of the project is to slow down at the moments where speed produces silent failure. That is the entire job.
Ship boring. Ship predictable. Ship with backups. The deployment that nobody notices is the one that is working. The deployment that everybody notices is the one that did not.
