AWS Infrastructure Automation and Migration Project

This project, named CloudShift, was a hands-on exercise in designing, setting up, migrating, and automating cloud services. The goal was to build a scalable, automated infrastructure by migrating our client's services from managed hosting to AWS Cloud.


Project CloudShift Overview

Our objectives were clear:

  1. Migrate client services to AWS for better scalability.

  2. Set up Infrastructure-as-Code (IaC) using Terraform and Jenkins.

  3. Orchestrate containers with Kubernetes.

  4. Establish an efficient CI/CD pipeline for fast, automated deployments.


AWS Migration and Network Setup

One of my key responsibilities was setting up the AWS infrastructure. Using Terraform, I created the VPCs, subnets, and security groups, including the rules that allowed inbound web traffic. Automating the network setup this way gave us consistent environments across regions.

Technologies Used:

  • AWS VPC

  • Terraform

  • Security Groups

Here’s an example of the Terraform code I wrote, starting with the VPC:

provider "aws" {
  region = "ap-south-1"
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
  tags = {
    Name    = "main-vpc"
    Project = "CloudShift"
  }
}
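
The VPC was only the starting point; the subnets and security groups mentioned above were defined in the same configuration. A minimal sketch of what those resources can look like is shown below (the CIDR range, availability zone, and names are illustrative placeholders, not the exact values from the project):

resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24" # placeholder range
  availability_zone       = "ap-south-1a"
  map_public_ip_on_launch = true
  tags = {
    Name    = "public-subnet-1a"
    Project = "CloudShift"
  }
}

resource "aws_security_group" "web" {
  name   = "web-sg"
  vpc_id = aws_vpc.main.id

  # Allow inbound web traffic, as described above
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow all outbound traffic
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}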

Data Migration to AWS S3 and RDS

We handled a large-scale data migration from the client's managed hosting environment to AWS, using Python scripts to move data into AWS S3 and RDS. The migration was automated and designed so that no data was lost in the process.

Technologies Used:

  • AWS S3

  • AWS RDS

  • Python

Here’s a sample script I used to upload data to S3:

import boto3
import os

s3 = boto3.client('s3')
bucket_name = 'my-migration-bucket'

def upload_to_s3(local_directory, bucket, s3_directory):
    # Walk the local tree and mirror its layout under the given S3 prefix
    for root, dirs, files in os.walk(local_directory):
        for filename in files:
            local_path = os.path.join(root, filename)
            relative_path = os.path.relpath(local_path, local_directory)
            # Build the object key with forward slashes so it works regardless of OS
            s3_path = '/'.join([s3_directory, relative_path.replace(os.sep, '/')])
            s3.upload_file(local_path, bucket, s3_path)

upload_to_s3('/path/to/local/data', bucket_name, 'migrated-data')
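
The database side of the migration followed the same pattern. Here is a minimal sketch of how rows exported to CSV can be loaded into a PostgreSQL RDS instance; it assumes the psycopg2 client library, and the endpoint, credentials, table, and column names are placeholders rather than the project's actual values:

import csv
import psycopg2

# Placeholder connection details; in practice these would come from environment
# variables or a secrets store rather than being hard-coded.
conn = psycopg2.connect(
    host="my-db.abc123xyz.ap-south-1.rds.amazonaws.com",
    dbname="appdb",
    user="admin",
    password="change-me",
)

def load_csv_to_rds(csv_path, table):
    with open(csv_path, newline='') as f, conn.cursor() as cur:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        for row in reader:
            # Hypothetical column list, purely for illustration
            cur.execute(
                f"INSERT INTO {table} (id, name, created_at) VALUES (%s, %s, %s)",
                row,
            )
    conn.commit()

load_csv_to_rds('/path/to/export/users.csv', 'users')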

Infrastructure Automation with Terraform and Jenkins

We automated the infrastructure deployment using Jenkins and Terraform. Each time code was committed to the GitHub repo, Jenkins would trigger an automated deployment of the AWS resources.

Technologies Used:

  • Jenkins

  • Terraform

  • GitHub

Here’s a sample Jenkinsfile for the deployment pipeline:

pipeline {
  agent any

  environment {
    AWS_ACCESS_KEY_ID = credentials('AWS_ACCESS_KEY_ID')
    AWS_SECRET_ACCESS_KEY = credentials('AWS_SECRET_ACCESS_KEY')
  }

  stages {
    stage('Checkout') {
      steps { git 'https://github.com/accenture/cloudshift-infra.git' }
    }
    stage('Terraform Init') { steps { sh 'terraform init' } }
    stage('Terraform Plan') { steps { sh 'terraform plan -out=tfplan' } }
    stage('Approval') { steps { input message: 'Do you want to apply this plan?', ok: 'Apply' } }
    stage('Terraform Apply') { steps { sh 'terraform apply -auto-approve tfplan' } }
  }

  post { always { cleanWs() } }
}
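
One detail this pipeline depends on: because build agents are disposable, Terraform state has to live somewhere shared rather than on the build node. A common way to handle this is an S3 backend with DynamoDB state locking; the original scripts don't show exactly how state was stored, so the snippet below is only a sketch with placeholder bucket and table names:

terraform {
  backend "s3" {
    bucket         = "cloudshift-terraform-state" # placeholder bucket name
    key            = "infra/terraform.tfstate"
    region         = "ap-south-1"
    dynamodb_table = "terraform-locks"            # placeholder lock table
    encrypt        = true
  }
}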

Configuration Management Using Ansible

I used Ansible to manage the configuration of our EC2 instances, writing playbooks that automated the installation and configuration of services such as Nginx. This kept the setup uniform across all servers.

Technologies Used:

  • Ansible

  • EC2 Instances

Here’s a snippet of the Ansible Playbook I used:

---
- name: Configure Web Server
  hosts: web_servers
  become: yes
  tasks:
    - name: Install Nginx
      apt:
        name: nginx
        state: present
    - name: Start Nginx Service
      service:
        name: nginx
        state: started
        enabled: yes
    - name: Copy Web Content
      copy:
        src: /path/to/web/content
        dest: /var/www/html
        owner: www-data
        group: www-data
        mode: '0644'
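
The playbook targets a web_servers group from the Ansible inventory. A minimal INI-style inventory defining that group could look like the following (hostnames, IPs, SSH user, and key path are placeholders):

[web_servers]
web-01 ansible_host=10.0.1.10
web-02 ansible_host=10.0.1.11

[web_servers:vars]
ansible_user=ubuntu
ansible_ssh_private_key_file=~/.ssh/cloudshift.pem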

Kubernetes Orchestration on AWS EKS

For container orchestration, I set up and managed Kubernetes clusters on AWS EKS. This helped scale our deployments and manage containerized applications effectively.

Technologies Used:

  • Kubernetes

  • AWS EKS

Here’s an example Kubernetes YAML configuration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: 12345.dkr.ecr.ap-south-1.amazonaws.com/myapp:latest
          ports:
            - containerPort: 80
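
A Deployment on its own isn't reachable from outside the cluster; a Service exposes it. The project's Service manifests aren't shown here, so the snippet below is an illustrative sketch of a LoadBalancer Service matching the labels above:

apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer # on EKS this provisions an AWS load balancer
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 80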

CI/CD Pipeline and Automation

I maintained a comprehensive CI/CD pipeline using Jenkins, integrated with Nexus, SonarQube, and Ansible. The pipeline automated everything from code builds and testing to deploying Docker containers onto the Kubernetes clusters on AWS EKS; a trimmed-down sketch of it follows the list below.

Technologies Used:

  • Jenkins

  • Nexus

  • SonarQube
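
Here is a trimmed-down sketch of what that application pipeline looked like. It assumes a Maven-based application (not stated in the write-up above), and the registry address, SonarQube server name, image name, and build commands are assumptions rather than the exact values from the project:

pipeline {
  agent any

  environment {
    ECR_REGISTRY = '12345.dkr.ecr.ap-south-1.amazonaws.com' // placeholder registry
    ECR_REPO     = "${ECR_REGISTRY}/myapp"                  // placeholder image name
  }

  stages {
    stage('Build & Unit Tests') {
      steps { sh 'mvn clean verify' }
    }
    stage('SonarQube Analysis') {
      steps {
        // Assumes a SonarQube server configured in Jenkins under this name
        withSonarQubeEnv('sonarqube') { sh 'mvn sonar:sonar' }
      }
    }
    stage('Publish Artifacts to Nexus') {
      // Assumes the Maven distributionManagement section points at Nexus
      steps { sh 'mvn deploy -DskipTests' }
    }
    stage('Build & Push Docker Image') {
      steps {
        sh "docker build -t ${ECR_REPO}:${BUILD_NUMBER} ."
        sh "aws ecr get-login-password --region ap-south-1 | docker login --username AWS --password-stdin ${ECR_REGISTRY}"
        sh "docker push ${ECR_REPO}:${BUILD_NUMBER}"
      }
    }
    stage('Deploy to EKS') {
      steps {
        sh "kubectl set image deployment/myapp myapp=${ECR_REPO}:${BUILD_NUMBER}"
      }
    }
  }
}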


Challenges and Solutions

  1. Legacy Application Compatibility: Many of the client's applications weren’t cloud-native. I worked with the development team to containerize them with Docker and refactor them to run in a cloud environment.

  2. Data Migration Complexity: Moving large amounts of data without downtime was challenging. We solved this with a staged migration, using AWS DMS for the databases and custom scripts for file data.

  3. Security Concerns: We addressed security concerns by setting up VPCs, IAM roles, and security groups.


Key Results

  • Successfully migrated all of the client's services to AWS Cloud.

  • Set up a robust Infrastructure-as-Code pipeline with Terraform and Jenkins.

  • Reduced deployment times significantly through automation.

  • Gained flexibility in scaling services and reduced operational costs.


This project gave me invaluable hands-on experience with AWS services, Kubernetes, and infrastructure automation. It deepened my understanding of DevOps best practices and improved my skills in cloud architecture. I’m incredibly proud of what we accomplished and the impact it had on the client's business.
