Up to this point, the series has focused on organizational foundations: accounts, identity, security, state, and DNS. The original business driver was simpler and more concrete: launch and operate a customer-facing website on AWS without building technical debt on day one.
This post covers that missing piece: the hosting model built around AWS Amplify, with cross-account deployment, controlled DNS changes, and repeatable operations.
## Customer Context
This architecture was delivered for a customer workload, not an internal sandbox. That changes the bar:
- Availability matters because it is public-facing.
- DNS and certificate mistakes have immediate business impact.
- Changes need a clear approval path and rollback plan.
- Access has to work for both CI/CD and human operators, without long-lived keys.
## Where Website Hosting Fits
The website stack sits on top of the previous posts:
| Layer | Responsibility |
|---|---|
| aws-identity-management | Creates the website deployment role and SSO access path |
| website | Owns Amplify app, branches, runtime config, and build/deploy workflow |
| aws-dns-management | Owns hosted zone records, alias targets, and certificate validation records |
The website repo is new - Part 1 introduced the three infrastructure repositories (aws-bootstrap, aws-identity-management, aws-dns-management). This fourth repo is the first workload-specific one, owning the application infrastructure rather than shared platform concerns.
This separation is intentional. The website repo should not be able to modify unrelated DNS zones, and DNS changes should not require broad access to application infrastructure.
## Deployment Model
The delivery flow is a two-hop trust model:
- CI authenticates in the management account using OIDC.
- CI assumes a scoped website deployment role in the website production account.
- OpenTofu applies Amplify resources.
- DNS records are updated in the DNS account using a separate scoped role.
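The second hop hinges on the trust policy of the website deployment role. A minimal sketch, assuming a GitHub OIDC CI role in the management account; the account ID, role names, and variable are illustrative placeholders, not the repo's exact values:

```hcl
# Illustrative trust policy for the website deployment role.
# Account IDs, role names, and the external ID variable are placeholders.
data "aws_iam_policy_document" "website_deploy_trust" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole"]

    # Only the management-account CI role may assume this role.
    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::<management-account-id>:role/GitHubOidcCiRole"]
    }

    # Require the shared external ID on every assume-role call.
    condition {
      test     = "StringEquals"
      variable = "sts:ExternalId"
      values   = [var.website_deployment_external_id]
    }
  }
}

resource "aws_iam_role" "website_deployment" {
  name               = "WebsiteDeploymentRole"
  assume_role_policy = data.aws_iam_policy_document.website_deploy_trust.json
}
```

The external ID condition means that holding the management-account role alone is not enough; the caller must also present a value that only the CI pipeline is configured with.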
The core provider pattern in the website repo looks like this:
```hcl
provider "aws" {
  region = var.aws_region

  assume_role {
    role_arn    = "arn:aws:iam::<website-account-id>:role/WebsiteDeploymentRole"
    external_id = var.website_deployment_external_id
  }
}
```

The same pattern appears in DNS with a different role and external ID. Keeping those relationships separate reduces blast radius and makes access reviews clearer.
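The DNS side of that pattern can be sketched as an aliased provider; the role name and variable here are illustrative, not the repo's exact values:

```hcl
# Illustrative second provider for the DNS account.
# Role name and variable names are placeholders.
provider "aws" {
  alias  = "dns"
  region = var.aws_region

  assume_role {
    role_arn    = "arn:aws:iam::<dns-account-id>:role/DnsDeploymentRole"
    external_id = var.dns_deployment_external_id
  }
}
```

Resources that belong in the DNS account then opt in explicitly with `provider = aws.dns`, so a review of the plan shows exactly which changes cross the account boundary.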
## Amplify Configuration That Matters
The implemented setup uses Amplify for Next.js SSR hosting with explicit branch and domain configuration:
- `platform = "WEB_COMPUTE"` for the SSR runtime.
- A production branch (for example `release`) with auto-build enabled.
- Domain association for both apex and `www`.
- Redirect rules to enforce a canonical URL.
Representative structure:
```hcl
resource "aws_amplify_app" "website" {
  name     = "website"
  platform = "WEB_COMPUTE"
  # build_spec, environment variables, redirects
}

resource "aws_amplify_branch" "release" {
  app_id            = aws_amplify_app.website.id
  branch_name       = "release"
  stage             = "PRODUCTION"
  enable_auto_build = true
}

resource "aws_amplify_domain_association" "website" {
  app_id      = aws_amplify_app.website.id
  domain_name = var.domain_name
  # apex + www sub_domain mappings
}
```

## Runtime Secrets and IAM Roles
Amplify SSR workloads still need runtime access controls.
In this setup:
- SSM Parameter Store is used for runtime secret values.
- Amplify service/compute roles get scoped `ssm:Get*` access for the app parameter paths.
- CloudWatch logging permissions are explicitly attached to support operational debugging.
The important pattern is path scoping, not wildcard everything. The role can read only the parameter namespace the app needs.
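A hedged sketch of what that path scoping can look like; the `/website/*` parameter prefix is an assumed namespace for illustration, not the repo's actual path:

```hcl
data "aws_caller_identity" "current" {}

# Illustrative path-scoped policy for the Amplify compute role.
data "aws_iam_policy_document" "app_runtime" {
  statement {
    effect = "Allow"
    actions = [
      "ssm:GetParameter",
      "ssm:GetParameters",
      "ssm:GetParametersByPath",
    ]

    # Read access stops at the app's own parameter namespace.
    resources = [
      "arn:aws:ssm:${var.aws_region}:${data.aws_caller_identity.current.account_id}:parameter/website/*",
    ]
  }
}
```

Adding a new secret under the same prefix then needs no IAM change, while secrets for other workloads stay invisible to this role.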
## Domain and Certificate Handshake
Domain setup is where multi-account designs usually become messy. The working pattern here is:
| Step | Account | Outcome |
|---|---|---|
| Create Amplify domain association | Website account | Generates target domains and validation records |
| Publish alias/CNAME/validation records | DNS account | Points traffic and satisfies certificate checks |
| Verify domain association state | Website account | Confirms branch mappings are active |
This is why DNS and website repos are separate but coordinated. The handoff values come from Amplify outputs, then get applied as DNS records through the DNS pipeline.
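To illustrate the handoff, a sketch assuming the Amplify `certificate_verification_dns_record` output (a single `"<name> CNAME <value>"` string) is split and published as a Route 53 record. In the actual setup this value crosses repos through the DNS pipeline rather than living in one config, and the zone lookup and `aws.dns` provider alias are placeholders:

```hcl
data "aws_route53_zone" "site" {
  provider = aws.dns
  name     = var.domain_name
}

locals {
  # Assumes Amplify emits one space-separated string:
  # "<record name> CNAME <record value>".
  cert_validation = split(" ", aws_amplify_domain_association.website.certificate_verification_dns_record)
}

# Publish the certificate validation record in the DNS account's zone.
resource "aws_route53_record" "amplify_cert_validation" {
  provider = aws.dns

  zone_id = data.aws_route53_zone.site.zone_id
  name    = local.cert_validation[0]
  type    = "CNAME"
  ttl     = 300
  records = [local.cert_validation[2]]
}
```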
## CI/CD Runbook
Infrastructure changes for hosting are handled through a dedicated workflow:
- Trigger on changes under `website/tf`.
- Plan/apply via the shared OpenTofu pipeline.
- Manual approval gate before apply.
- Amplify auto-build handles application deploy after infra updates.
This gives a clean audit path: who approved, what changed, and what deployed.
## Common Failure Modes
| Issue | Typical cause | Fix |
|---|---|---|
| Domain stuck in pending verification | Missing/incorrect validation record | Re-check Amplify output values and DNS record targets |
| Deploy role assumption fails | Trust policy/external ID mismatch | Validate both sides of trust relationship and CI secret values |
| Runtime feature fails after deploy | Missing SSM parameter or role permission | Verify parameter path and attached IAM policy scope |
| Drift between repos | Website and DNS applied out of sequence | Follow a documented handoff runbook for domain changes |
## The Complete Picture
This is the final post in the series. Looking back at what was built across all eleven parts:
We started with a single AWS account and ended with a structured multi-account setup: three accounts with clear boundaries, three infrastructure repositories with specific responsibilities, OIDC-based CI/CD with no long-lived credentials, centralised audit logging, and organisational guardrails enforced through SCPs. The website hosting layer in this post is the workload all of that infrastructure exists to serve.
The account structure was not built for its own sake. It was built to ship a customer website where deployments are predictable, access is scoped, failure domains are isolated, and every change is auditable.
If your team is currently running everything in one account, you don't need to do all of this at once. Start with Part 1 and work through the series incrementally - each post builds on the last, and each step makes the next one easier.