AWS Migration

We have decided to move from our old hosting provider, e2e, to AWS for the following reasons:

  1. The opportunity to reduce costs significantly by moving to the cloud
  2. Flexibility to expand or reduce capacity according to our needs
  3. The opportunity to automate server provisioning and management

Changes from old setup

From the sysadmin/devops perspective, there has been a significant change: we have completely automated our server provisioning and application deployment with Terraform and Ansible. Visit the infra-setup repo for more details.

From the developer perspective, though most things stay the same, there are a few changes:

  1. How SSH access works
  2. Removal of individual accounts, with access consolidated under the ubuntu user

The first point changed because we no longer need to rely on an obscured SSH port to protect against DDoS attacks; AWS takes care of that, so we have reverted to port 22. You should be able to access most servers in the following manner; if not, please ask on the devops channel:

ssh ubuntu@klp.org.in
ssh ubuntu@ems.klp.org.in
ssh ubuntu@dev.klp.org.in
ssh ubuntu@tada.klp.org.in
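
If you log into these hosts often, an entry in your local ~/.ssh/config can save some typing. A minimal sketch; the host aliases here are illustrative, not an existing convention:

# ~/.ssh/config -- illustrative aliases; extend the same pattern for the other hosts
Host klp
    HostName klp.org.in
    User ubuntu

Host dev.klp
    HostName dev.klp.org.in
    User ubuntu

With this in place, ssh klp is equivalent to ssh ubuntu@klp.org.in.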

The second change was done for a couple of reasons:

  • We can no longer use servers as file storage, and developers having to log into prod servers is an anti-pattern; we have addressed this by providing staging servers and easy ways to access the database for running scripts. Automation comes with the assumption that a server can be taken down and recreated in no time, and non-restorable content on a server stops us from doing that easily. It is highly recommended to create a directory with your name on the server and download/upload the scripts you need to S3 buckets, as shown in the AWS Backup Notes section below. If you absolutely need to keep a folder of scripts on the server, please ask for help in the devops channel to hook your folder up to the S3 backup cron scripts that run every morning (see the sketch after this list).
  • Creating individual accounts on the server adds administration overhead: granting access to servers and removing it when developers leave the team. Consolidating all access in one account reduces this overhead and leads to more secure access policies.
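
For reference, a backup cron entry of this kind might look like the following. This is a minimal sketch, assuming a per-developer folder under /home/ubuntu/homedirs/ and the devbackup.klp.org.in bucket described below; the actual paths and schedule used by the real cron scripts may differ:

# /etc/cron.d/devbackup-brijesh -- illustrative only; ask in the devops channel for the real setup
# Sync the developer's folder to S3 every morning at 06:00
# (use the full path to aws if cron's PATH does not include it)
0 6 * * * ubuntu aws s3 sync /home/ubuntu/homedirs/brijesh/ s3://devbackup.klp.org.in/dev/home/brijesh/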

AWS Backup Notes

Most of the servers have backups enabled; check /etc/cron.d/ on each server for the relevant details. However, these backups are limited to the deployment root folder, the database, images, etc.
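
To see exactly what is backed up on a given server, inspect the cron files directly (file names vary per server, so the wildcard below is an assumption):

ls /etc/cron.d/
cat /etc/cron.d/*backup*    # assumes the backup jobs have "backup" in their file names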

All other assets produced by developers working on the servers are saved in the devbackup.klp.org.in bucket on S3 (accessible from all servers). However, it is each developer's responsibility to back up their own assets to the S3 bucket. We use awscli both for the automated backups and for manually adding and syncing folders to the bucket. Here are a few examples to help you understand how it works:

  • List all the buckets:

aws s3 ls s3://

  • List all the developer backups (the slash at the end is important):

aws s3 ls s3://devbackup.klp.org.in/

  • Upload a file to s3:

aws s3 cp backup.tgz s3://devbackup.klp.org.in/home/brijesh/

  • Delete a file from s3:

aws s3 rm s3://devbackup.klp.org.in/home/brijesh/backup.tgz

  • Sync (up) a whole directory to your home directory backup:

aws s3 sync /home/ubuntu/homedirs/brijesh/ s3://devbackup.klp.org.in/dev/home/brijesh/

  • Sync (down) a directory from s3 to your server directory:

aws s3 sync s3://devbackup.klp.org.in/dev/home/brijesh/ /home/ubuntu/homedirs/brijesh/
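
Before running a sync in either direction, it can be worth previewing what will change. awscli's --dryrun flag displays the operations that would be performed without actually running them:

aws s3 sync s3://devbackup.klp.org.in/dev/home/brijesh/ /home/ubuntu/homedirs/brijesh/ --dryrun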

Last modified on 09/15/17 13:23:56