# Creating staging instances on Amazon
To save costs, we don't keep our staging system online all the time. Instead, we make it reasonably easy to spin it up through scripts.
But the scripts aren't all written. This is what it takes to set up a staging environment.
You'll need the `overview-manage` script on your desktop, described at the bottom of Deploying from scratch to Amazon. It's just a wrapper that runs commands on the `manage` instance over ssh.
You'll also need the prerequisites from Deploying from scratch to Amazon, so you can spin the instances up. (TODO: add spin-up functionality to overview-manage?)
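The wrapper itself is described in Deploying from scratch to Amazon, not here. Purely as an illustration of what "a wrapper to ssh the manage instance" means, a minimal sketch might look like this (the user and hostname below are placeholders, not the real values):

```sh
#!/bin/sh
# Hypothetical sketch of an overview-manage wrapper: forward every argument
# to the overview-manage command on the manage instance over ssh.
# "ubuntu" and "manage.example.com" are placeholders; use the real user and
# address from Deploying from scratch to Amazon.
exec ssh -t ubuntu@manage.example.com overview-manage "$@"
```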
Rather than create a new database, we can create one from our latest backup. As a side effect, this verifies that our backups always work.
- `cd aws-overview-tools/instances`
- `AWS_KEYPAIR_NAME=manage ./create_instance.rb database_staging us-east-1a`
- `overview-manage add-instance staging.database.10.x.x.x`
- `overview-manage ssh staging database`
- On the EC2 Management Console, go to "Snapshots", pick the most recent "[Database] Daily backup" and click "Create volume". Make it "Standard" and the same size as the original. (On EC2, it can't be any smaller.) Go to "Volumes", find the new volume (its status will be "Available" or "Loading...", the latter meaning it can be mounted but will initially be a bit slow), and "Attach" it to the `database-staging` instance under `/dev/sdf`.
- Give the volume a Tag: Name `database-staging`.
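If you'd rather script those console steps, the stock AWS CLI can do the same thing. This is only a sketch: the snapshot, volume, and instance IDs below are placeholders you'd look up first (e.g. with `aws ec2 describe-snapshots` and `aws ec2 describe-instances`):

```sh
# Create a Standard volume from the most recent "[Database] Daily backup"
# snapshot (placeholder ID), in the same availability zone as the instance.
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 \
  --availability-zone us-east-1a --volume-type standard

# Attach the new volume (placeholder ID) to the database-staging instance
# (placeholder ID) as /dev/sdf.
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
  --instance-id i-0123456789abcdef0 --device /dev/sdf

# Tag it so it's easy to find now and to delete later.
aws ec2 create-tags --resources vol-0123456789abcdef0 \
  --tags Key=Name,Value=database-staging
```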
- In the `database-staging` instance: `sudo /etc/init.d/postgresql stop && sudo rm -rf /var/lib/postgresql/* && echo '/dev/xvdf1 /var/lib/postgresql auto defaults 0 0' | sudo tee -a /etc/fstab && sudo mount /var/lib/postgresql && sudo /etc/init.d/postgresql start`
- Still in the instance: `sudo -u postgres psql overview -c "ALTER USER overview PASSWORD 'overview'"` (so if there's a security hole in the staging server, it won't expose our database password).
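Optionally, sanity-check the restore before moving on. These commands only assume the mount point and credentials set up above:

```sh
# Confirm the backup volume is mounted where PostgreSQL expects its data.
df -h /var/lib/postgresql

# Confirm the overview role accepts the new, staging-only password.
PGPASSWORD=overview psql -h localhost -U overview overview -c 'SELECT 1'
```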
- `AWS_KEYPAIR_NAME=manage ./create_instance.rb worker_staging us-east-1a`
- `overview-manage add-instance staging.worker.10.x.x.x`
- `AWS_KEYPAIR_NAME=manage ./create_instance.rb searchindex_staging us-east-1a`
- `overview-manage add-instance staging.searchindex.10.x.x.x`
- On the EC2 Management Console, go to "Snapshots", pick the most recent searchindex "Daily backup" and click "Create volume". Make it "Standard" and the same size as the original. Go to "Volumes", find the new volume (its status will be "Available" or "Loading...", the latter meaning it can be mounted but will initially be a bit slow), and "Attach" it to the `searchindex-staging` instance under `/dev/sdf`.
- `overview-manage ssh staging searchindex`, then `sudo mkdir /etc/elasticsearch`, then `sudo mkdir /var/lib/elasticsearch && echo '/dev/xvdf1 /var/lib/elasticsearch auto defaults 0 0' | sudo tee -a /etc/fstab && sudo mount /var/lib/elasticsearch`
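As with the database volume, it's worth a quick check that the restored search-index data landed where elasticsearch will look for it (paths as above):

```sh
# The restored volume should be mounted on the elasticsearch data directory...
df -h /var/lib/elasticsearch

# ...and the index data from the snapshot should be visible inside it.
ls /var/lib/elasticsearch/data
```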
- On the EC2 Management Console, go to "Snapshots", pick the most recent apollo "Daily backup" and click "Create volume". Make it "Standard" and the same size as the original. Go to "Volumes", find the new volume (its status will be "Available" or "Loading...", the latter meaning it can be mounted but will initially be a bit slow), and "Attach" it to the `worker-staging` instance under `/dev/sdg`.
- `overview-manage ssh staging worker`, then `sudo mkdir /etc/apollo`, then `sudo mkdir /var/lib/apollo && echo '/dev/xvdg /var/lib/apollo auto defaults 0 0' | sudo tee -a /etc/fstab && sudo mount /var/lib/apollo`, then `sudo mv /var/lib/elasticsearch/data/overview-search-index /var/lib/elasticsearch/data/overview-search-index-staging`
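The same quick check works for the apollo volume (path as above):

```sh
# The apollo data restored from the snapshot should now be mounted here.
df -h /var/lib/apollo
ls /var/lib/apollo
```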
- `AWS_KEYPAIR_NAME=manage ./create_instance.rb web_staging us-east-1a`
- `overview-manage add-instance staging.web.10.x.x.x`
- `overview-manage ssh staging web`, and then log out.
- Run `overview-manage deploy-config staging`
- `overview-manage ssh staging database` and `sudo /etc/init.d/postgresql restart` to pick up the new configuration.
- Run `overview-manage deploy staging [TAG]`
- Attach the correct AWS "Elastic IP" to the `staging` instance (a CLI sketch follows this list).
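The Elastic IP step can also be scripted with the AWS CLI if you prefer. The IDs below are placeholders; look them up with `aws ec2 describe-addresses` and `aws ec2 describe-instances`:

```sh
# Associate the staging Elastic IP (placeholder allocation ID) with the
# staging instance (placeholder instance ID).
aws ec2 associate-address --instance-id i-0123456789abcdef0 \
  --allocation-id eipalloc-0123456789abcdef0
```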
And test away, at http://staging.overviewproject.org
To deploy a specific branch instead of a tag: `overview-manage deploy staging origin/<branch-name>`
Staging differs from production in a few ways:

- The `staging` database has a different password.
- The cookie domain is different. We don't have any cookies on `overviewproject.org`: we only have them on `www.overviewproject.org` and `staging.overviewproject.org`. That ensures no overlaps.
- Only on production do we redirect from `http(?:s)://overviewproject.org` to `https://www.overviewproject.org`.
In other respects, staging mirrors production:

- The network topology is the same.
- The configuration files are near-identical. The only differences are specified in `/opt/overview/config/manage/config.yml` and `secrets.yml` on the `manage` instance.
- The `staging` database is a snapshot of the `production` one.
- The SSL key is the same.
Our users' sensitive data is on the `database-staging` instance, so if that database is hacked, we expose real data. Our only comfort is that the hacker can't edit the live database (it's on a different computer, with a different password, in a different security group).
Shut down the staging instances when they aren't in use.
- Run `for instance in $(overview-manage status | grep staging | cut -f2,3,4 | sed -e 's/[[:blank:]]/\//g'); do overview-manage remove-instance $instance; done` (that runs `overview-manage remove-instance` for every `staging` instance).
- On the EC2 console, terminate all instances whose tags end in `-staging`.
- Delete the `database-staging`, `searchindex-staging`, and `apollo-staging` volumes. (You may "force detach" first, if your instance is taking a while to shut down. But if you're doing "force detach", make sure you aren't detaching the production volume!)
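If you script the volume cleanup, triple-check the IDs. The ones below are placeholders, and nothing here guards against picking a production volume by mistake:

```sh
# Look up each volume by its Name tag (database-staging, searchindex-staging
# or apollo-staging) before touching anything.
aws ec2 describe-volumes --filters Name=tag:Name,Values=database-staging

# Placeholder volume ID; substitute the ID returned above.
aws ec2 detach-volume --volume-id vol-0123456789abcdef0 --force
aws ec2 delete-volume --volume-id vol-0123456789abcdef0
```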
Be extra careful not to terminate non-staging instances. That would lead to downtime (though we wouldn't lose data).
Be extra-extra careful not to delete the non-staging volume. That would lead to up to 24 hours' worth of data loss, since we'd need to recover from a snapshot. (Incidentally, the instructions on this page double as backup-recovery instructions.)