December 23, 2017

Introduction to AWS & Cloud Migration @ BIT Sathy

Guest Lecture on Introduction to AWS & Cloud Migration @ BIT Sathy for Information Technology & Computer Science Department Professors.

Here are the topics covered during the lecture; I hope the professors enjoyed the session :) 
  • What is cloud computing?
  • What is cloud migration?
  • Different types of Cloud Offerings available in the market
  • Different forms of cloud computing
  • World Leaders in Public Cloud & how AWS stands out!
  • Amazon Web Services vs On-Premise
  • Introduction to Amazon Web Services (AWS)
  • Details on 5 core AWS services that are most widely used during migration
  • Introduction to AWS Tooling
  • A successful use case on Cloud Migration -> Data Centre to AWS.

Refer to the link for the certificate of appreciation from the college for the Guest Lecture.

Special Award for Contribution to Alma Mater 2017

Appreciation goes a long way, and recognition from your Alma Mater 17 years after graduation is something close to your heart.
I was delighted to receive the Special Award from Bannari Amman Institute of Technology for my contribution towards technical guidance & lectures at the college, as well as to the Bengaluru-based BIT alumni as Secretary of the BIT Alumni Chapter, Bengaluru.

December 13, 2017

Install CodeDeploy agent on EC2 Instance

Code snippet to install the CodeDeploy agent

When you provision an EC2 instance for deployment by AWS CodeDeploy, that instance must have the CodeDeploy agent installed for the deployment to proceed.

Option 1:

            yum update -y
            yum install -y ruby
            yum install -y wget

            cd /home/ec2-user

            # Replace us-east-1 with your region's CodeDeploy bucket
            wget https://aws-codedeploy-us-east-1.s3.us-east-1.amazonaws.com/latest/install

            chmod +x ./install
            ./install auto

            service codedeploy-agent status
            service codedeploy-agent start

Option 2:

            yum update -y
            yum install -y ruby aws-cli
            cd /home/ec2-user
            aws s3 cp s3://aws-codedeploy-us-east-1/latest/install . --region us-east-1
            chmod +x ./install
            ./install auto

Note: Use sudo if elevated privileges are required.

Overview of Sample Web Application Architecture

The Sample Web Application depicted below includes Web Servers, App Servers and Database Servers.
- There are two Availability Zones (AZs) in the Sample Web Application, in order to provide redundancy and therefore high availability
- A subnet is simply a range of IP addresses within a VPC
- Network ACLs (Network Access Control Lists) are applied at the subnet level
- Each AZ has one private subnet and one public subnet
- All subnets within a VPC are designed to talk to each other freely
- Only public subnets are accessible from the internet
- Servers in the private subnet can only make outbound calls to the Internet via the NAT server. No inbound traffic is accepted.
- The NAT has only one purpose here -> it allows instances in private subnets to call out to the Internet to download updates. Traffic from the Internet is not permitted to make inbound connections
- Traffic is further restricted via security groups
- The NAT Instance is the older approach; we now have an alternative called the NAT Gateway, which was introduced in late 2015
- Basically a NAT Instance is an EC2 instance with certain configurations, where you have to establish an ASG to scale up or down and enable fault tolerance, whereas with a NAT Gateway both elasticity and failover are handled by AWS.
- AWS Internet Gateway - an Internet Gateway is a horizontally scaled, redundant and highly available VPC component that allows communication between instances in your VPC and the Internet.
- Amazon Route 53 (Route 53) is a scalable and highly available Domain Name System (DNS) service

When the user accesses the website, either from a computer or a mobile device, the request goes to Route 53, passes through the Internet Gateway & Elastic Load Balancer, and then hits the Web Servers in the public subnet. The Application Servers and Database Servers are placed in the private subnet, which can be accessed only by the Web Servers. Servers in the private subnet can make only outbound calls to the Internet, where they get their software updates, and this happens through the VPC NAT Gateway.
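The outbound-only path described above can be sketched with the AWS CLI; the subnet and route-table IDs below are placeholders for illustration, not values from the actual environment:

```shell
# Allocate an Elastic IP and create a NAT Gateway in the PUBLIC subnet.
# subnet-public-1 / rtb-private-1 are placeholder IDs.
ALLOC_ID=$(aws ec2 allocate-address --domain vpc \
  --query 'AllocationId' --output text)

NAT_ID=$(aws ec2 create-nat-gateway \
  --subnet-id subnet-public-1 \
  --allocation-id "$ALLOC_ID" \
  --query 'NatGateway.NatGatewayId' --output text)

aws ec2 wait nat-gateway-available --nat-gateway-ids "$NAT_ID"

# Route all internet-bound traffic from the PRIVATE subnet's route table
# through the NAT Gateway: outbound calls work, inbound is never accepted.
aws ec2 create-route \
  --route-table-id rtb-private-1 \
  --destination-cidr-block 0.0.0.0/0 \
  --nat-gateway-id "$NAT_ID"
```

Since the NAT Gateway sits in the public subnet, its own route to the Internet goes via the Internet Gateway, while private instances see only the NAT.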

What is an ELB?
- ELB stands for Elastic Load Balancing.
- Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances & multiple Availability Zones.
- ELB enables you to achieve greater levels of fault tolerance in your applications by ensuring that only healthy Amazon EC2 instances receive traffic.
- ELB automatically scales its request-handling capacity to meet the demands of application traffic.
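As a sketch of the health-check behaviour, here is how a load balancer and a health-checked target group could be created with the `elbv2` CLI; all names, IDs and the `/health` path are illustrative assumptions:

```shell
# Create an Application Load Balancer spanning two public subnets (two AZs).
aws elbv2 create-load-balancer \
  --name sample-web-alb \
  --subnets subnet-public-1 subnet-public-2 \
  --security-groups sg-web

# Target group with a health check: only instances passing the check
# at /health continue to receive traffic from the load balancer.
aws elbv2 create-target-group \
  --name sample-web-tg \
  --protocol HTTP --port 80 \
  --vpc-id vpc-sample \
  --health-check-path /health \
  --healthy-threshold-count 2 \
  --unhealthy-threshold-count 3
```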

November 23, 2017

NAT Gateway vs NAT Instances

Some around-the-web reading on NAT Gateways vs NAT Instances.

o NAT Instance – the old approach; the NAT Gateway is relatively new, introduced in late 2015
o NAT Instance is an EC2 instance
  • Create an EC2 instance, put it behind the default web SG, and launch the instance.
  • Disable the “Source & Destination Check” on the NAT Instance so it can forward traffic.
  • Attach the instance to the private security group (or the default one), then edit the Main Route Table to send internet-bound traffic to the newly created NAT Instance by selecting its instance Id (not the IGW).
o NAT Gateway
  • The NAT Gateway is a service where AWS takes care of scaling the underlying resources up and down based on need
  • Most customers use NAT Gateways in production, as failover is handled internally

NAT Instances
  • When creating a NAT instance, disable the Source/Destination check on the instance
  • A NAT instance must be in a public subnet
  • There must be a route out of the private subnet to the NAT instance in order for this to work
  • The amount of traffic a NAT instance supports depends on the instance size. If you are bottlenecking, increase the instance size
  • You can create high availability using Auto Scaling Groups, multiple subnets in different AZs and a script to automate failover. This is extremely painful but can be done. Customers always complained about this pain point, and hence NAT Gateways were created.
  • NAT Instances are always behind an SG.
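The NAT Instance checklist above boils down to roughly two CLI calls; the instance and route-table IDs are placeholders:

```shell
# 1. Disable the Source/Destination check -- a NAT instance forwards
#    traffic that is neither sourced from nor destined to itself.
aws ec2 modify-instance-attribute \
  --instance-id i-0nat0example \
  --no-source-dest-check

# 2. Route the private subnet's internet-bound traffic to the NAT
#    instance (note: --instance-id here, not an Internet Gateway ID).
aws ec2 create-route \
  --route-table-id rtb-private-1 \
  --destination-cidr-block 0.0.0.0/0 \
  --instance-id i-0nat0example
```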

NAT Gateways
  • Relatively new service
  • Preferred by enterprises
  • Scales automatically up to 10 Gbps
  • No need to patch
  • Not associated with SGs
  • Automatically assigned a public IP
  • Remember to update your route tables
  • No need to disable Source/Destination checks.

November 3, 2017

Tips for `AWS Certified Solution Architect - Associate` Preparation

Self Evaluation:

Take the Diagnostic Test (60 questions, 80 mins) in the Whizlabs practice tests to gauge your strengths and weaknesses before you begin your preparation.

If you have very limited knowledge of AWS, move on to #1 under Course Material.

Course Material

1. AWS Certified Solutions Architect - Associate 2017 from A Cloud Guru - should be around $10.
Tip 1: This course starts from zero and covers up to 80% of the syllabus, provided you read all the FAQs & white papers as advised by the instructor. Good to start with this and then go for Linux Academy.
Tip 2: Complete all labs & repeat the VPC lab a couple of times.
Tip 3: The mobile app is also good if you want to listen on the move.
2. AWS Certified Solutions Architect - Associate Badge (Optional)
3. AWS Certified Solutions Architect - Associate from Linux Academy
Tip 1: Complete all labs from LA and read all the whitepapers referred to in the downloads section. Subnetting & EC2 troubleshooting are well explained there.
Tip 2: The mobile app is also good if you want to listen on the move, especially for the flash cards & final quiz.
4. Linux Academy - The Orion paper is a good reference material.

Practice Tests

  1. Linux Academy Chapter Quiz - Final Quiz
  2. Whizlabs practice tests -> cost around INR 899/-, and around 20 questions came from these, so it is worth practising all 7 or 8 papers they have.



October 28, 2017

Migration Story: Agile to FDD in light of AWS @ AWS Community Day 2017

AWS User Group Bengaluru had organized the first ever AWS Community Day in Bengaluru on 28th Oct 2017 which was an all-day event running two parallel tracks - proven use cases / success stories and workshops.
I also had an opportunity to present one of our company's most successful use cases, "Migration Story: Agile to FDD in light of AWS".

While moving from Agile to Feature Driven Development (FDD) with geographically distributed development centres, it is customary to have a dedicated light environment per feature and robust automation to build & deploy at your own will!
The migration story covered the move from Agile to FDD at a cloud-based supply chain leader, GTNexus an Infor Company, and unfolds how AWS services came as a blessing, providing a highly scalable, elastic & cost-effective solution to facilitate on-demand miniature development environments and an independent Build & Deployment framework.


  • How GTNexus used Agile Scrum and the paradigm shift in the branching strategy when moved from scrum to Feature Driven Development
  • Development Cycle & the Timeline
  • High-level architecture of a full-fledged test environment hosted in DataCenter and the need for on-demand, scalable, miniature development environment for feature based testing in AWS.
  • A highly elastic Build & Deployment framework to cater to the on-demand build & deployment needs of FTEs
    At the time of the talk, we had ~500 on-demand FTEs across 4 regions: Mumbai, US West (Oregon), Europe (Frankfurt) & US East (N. Virginia).
  • How useful the move to AWS was in terms of Scalability, Elasticity, Cost Efficiency and Security

Miniature FTE in AWS

In pursuit of a light environment for Feature Based Testing, we re-designed the QA Environment into a single Windows VM image and a single Linux VM image with the application services installed, each housing the OS-specific data services (SQL Server on Windows; DB2, Riak/KVS & MemSQL on Linux).

After the light environment is set up in the private cloud / Data Center, the next step is to set up this light Feature Test Environment in AWS. The Windows & Linux VM images are exported from the Data Center & imported into AWS. A Feature Test Environment in AWS comprises a VPC, an Internet Gateway, an internet-facing subnet, and two EC2 instances replicated from the Data Center VMs. These two EC2 instances are stored as pre-fabricated AMIs for creating on-demand FTEs using CloudFormation templates.
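The export/import step can be sketched with the EC2 VM Import facility; the bucket and file names below are made up for illustration:

```shell
# Upload the exported VM image (e.g. an OVA) to S3, then import it as an AMI.
aws s3 cp linux-fte.ova s3://fte-vm-images/linux-fte.ova

aws ec2 import-image \
  --description "Linux FTE base image" \
  --disk-containers "Format=ova,UserBucket={S3Bucket=fte-vm-images,S3Key=linux-fte.ova}"

# Poll until the import task completes; the resulting AMI can then be
# copied to each target region for region-local FTE creation.
aws ec2 describe-import-image-tasks --query 'ImportImageTasks[].Status'
```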

Elasticity in FTEs

With a FrontEnd application developed using the AWS SDK for Python, the engineer, aka the Feature Owner, is free to create a new FTE by providing the following inputs:
  • Who owns this FTE?
  • Where is the user’s office located?
  • Which branch should the FTE be based off?
  • What is the feature development Id in Jira?
The application then creates the CloudFormation template on the fly, which in turn creates the Windows & Linux instances from the region-specific AMIs in the low-latency region closest to the user’s office location, and creates RecordSets.
The feature owner can then trigger the build & deployment against the chosen branch to get their changes into the new FTE.
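Under the hood, the stack creation performed by the FrontEnd could look roughly like this; the stack name, template file and parameter keys are assumptions for illustration, not the actual ones used:

```shell
# Launch an on-demand FTE stack; parameters mirror the inputs listed above.
aws cloudformation create-stack \
  --stack-name fte-JIRA-1234 \
  --template-body file://fte-template.json \
  --region ap-south-1 \
  --parameters \
      ParameterKey=Owner,ParameterValue=engineer@example.com \
      ParameterKey=Branch,ParameterValue=feature/JIRA-1234 \
      ParameterKey=FeatureId,ParameterValue=JIRA-1234

# Tear the environment down once the feature branch is promoted.
aws cloudformation delete-stack --stack-name fte-JIRA-1234
```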

With the FrontEnd application, engineers are free to create FTEs at their own will, trigger build & deployment as needed, and run code validation suites like JUnit tests & SQL static code checks.
When the feature development is complete, the feature owner promotes the feature branch to the integration branch and deletes the FTE environment.

And that's how elasticity is in place for Feature Test Environments.

Elasticity in Build & Deploy Infra

Yes, the build & deployment framework for FTEs is also scalable & elastic in nature; it is implemented using Jenkins + the EC2 plugin. With this plugin, if Jenkins notices that the build or deployment cluster is overloaded, it will start instances using the EC2 API and automatically connect them as Jenkins slaves. When the load goes down, the excess EC2 instances are terminated. This setup allows you to maintain a small in-house cluster. The FrontEnd application for creating FTEs has an interface to Jenkins which helps trigger build & deployment at the click of a button.

The Build & Deployment infrastructure consists of three pre-fabricated AMIs:
  1. Build
  2. Deployment 
  3. NAS box which acts as source code and artifact repository

If there are any failures during deployment, it can be re-triggered independently via Jenkins as well.

Slide Deck for the Tech Talk can be found here.