March 17, 2019

SAML 2.0 - AD Integration with WebApplication

For any enterprise, it is nearly impossible to maintain a separate set of accounts for all of its employees and users, so AWS offers easy ways to authenticate users managed outside of AWS (i.e., non-IAM users) using Identity Federation. This authentication can happen in three ways: Web Identity Federation, SAML and custom identity providers.
In this article, we will focus on SAML.

What is SAML?

Security Assertion Markup Language (SAML) is an XML-based, open-standard data format for exchanging authentication and authorization data between an identity provider and a service provider. SAML primarily addresses web browser single sign-on (SSO).
The SAML specification defines three roles:
  • User
  • Service provider (SP)
  • Identity provider (IdP)
Security tokens contain assertions, which are passed from the IdP to the SP. In SAML the assertion is an XML document, Base64-encoded for transport and ideally signed for security. (The analogous artifact in OpenID Connect federation is the JSON Web Token, a URL-safe Base64-encoded format.)
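The Base64 handling can be sketched as follows. This is purely illustrative: the assertion XML here is a made-up minimal fragment, not a real IdP response, and a real SP must also verify the XML signature.

```python
import base64

# A minimal, made-up SAML assertion fragment (real assertions are
# much larger and carry a digital signature).
assertion_xml = (
    '<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">'
    '<saml:Subject><saml:NameID>user@example.com</saml:NameID></saml:Subject>'
    '</saml:Assertion>'
)

# The IdP Base64-encodes the signed XML before posting it to the SP.
encoded = base64.b64encode(assertion_xml.encode("utf-8")).decode("ascii")

# The SP decodes it back to XML (and must then verify the signature).
decoded = base64.b64decode(encoded).decode("utf-8")
assert decoded == assertion_xml
```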

SAML Basics

Sample Scenario

  1. Let's assume:
    • User: an employee of ABC Enterprise
    • Service Provider (SP): a web application hosted in AWS
    • Identity Provider (IdP): AD set up in ABC Enterprise
  2. User browses the web application from the service provider (SP)
  3. Web application redirects for the SSO
  4. SSO page takes the AD credentials, validates at the Identity Provider (IdP)
  5. The IdP returns the identity as a signed SAML assertion
  6. The SP validates the assertion it received from the IdP; this validation is done by a special URL for the application (for AWS resources, a special AWS sign-in URL takes care of this)
  7. The SP then grants the user access to its content management web application.

SAML (AD) Configuration

  • Login to AD FS Server and Launch ADFS Management console
    • Edit Federation Service Properties and note down Federation Service Identifier
    • Go to Certificates -> Token Signing -> Details -> Copy the Thumbprint details
      You will need this information to configure the SAML provider details on the Service Provider / application configuration side.
  • Create "Relying Party Trust" on your ADFS Server -> Create Amazon Web Services and configure
    • All that you are defining here is how AD should trust Amazon Web Services
  • Fill in the Federation Metadata URL ->
    • This URL is common across AWS, you can download them and use it
  • Configure Trust Identifier
    • If you are using Cognito User Pool, this should be your Cognito User Pool Identifier. If you are using a different application, this will be a different URL.
  • Add an EndPoint
    • This is the URL that the SAML provider calls back with the assertion.
    • In case of Cognito, this is your Cognito Authentication Domain, which is unique per region

Service Provider Configuration

  • Federate with your SAML Provider
    • Create a SAML IdP in the "Federation" section -> Go to AWS Console -> IAM -> Identity Federation -> Create Provider
      • Choose Provider Type as SAML 
      • Enter the provider name 
      • Provide the XML metadata file (or its URL)
  • Enable your IdP / SAML Provider
    • Specify the Callback / Sign-On URL, which tells the SAML provider where to return after you are authenticated
    • Optionally specify the Sign-Out URL as well
    • Configure the certificate thumbprint if applicable

March 16, 2019

Amazon Kinesis Workflow

Let's build a sample Amazon Kinesis workflow with the Amazon Kinesis Agent on an EC2 instance, an Amazon Kinesis stream, Amazon Kinesis Data Firehose and an Amazon S3 bucket.

Read Amazon Kinesis Data Streams key-concepts before proceeding further.

Amazon Kinesis Workflow

Step by Step Instructions:

  • Go to AWS console; choose a region say - N.Virginia
  • Create Kinesis data stream as data_stream with 1 shard and leave rest as default
  • Create a Kinesis Data Firehose delivery stream named data_delivery with 
    • source as Kinesis data stream - data_stream, created in previous step
    • destination as S3 bucket - kinesis-destination
  • Create an EC2 instance and setup JDK, JAVA_HOME and follow the instructions in Amazon Kinesis Agent Setup guide.
    • create a log file /opt/applogs/web.log and add some text to it
    • configure the /etc/aws-kinesis/agent.json file with the Kinesis endpoint, Firehose endpoint, the file to be streamed and the Kinesis stream name 
      • kinesis.endpoint -
      • firehose.endpoint -
      • filepattern - /opt/applogs*
      • kinesisstream - data_stream
  • sudo service aws-kinesis-agent <start / stop> starts or stops the Kinesis streaming agent.
    Once you start the agent, you will see the data flowing through the Kinesis stream to Kinesis Data Firehose and finally getting stored in the destination bucket
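For reference, the agent.json entries sketched above might look like the following; the regional endpoints and the log path are assumptions you must adjust for your setup:

```json
{
  "kinesis.endpoint": "kinesis.us-east-1.amazonaws.com",
  "firehose.endpoint": "firehose.us-east-1.amazonaws.com",
  "flows": [
    {
      "filePattern": "/opt/applogs/web.log*",
      "kinesisStream": "data_stream"
    }
  ]
}
```

Because the flow targets the Kinesis stream (not Firehose directly), the delivery stream data_delivery picks records up from data_stream on its own.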
    Let's clean up the resources created as part of this workflow:
  • Delete S3 Bucket - kinesis-destination
  • Delete the Kinesis Data Firehose delivery stream named data_delivery 
  • Delete Role - firehose_delivery_role
  • Delete Kinesis data stream - data_stream
  • Delete EC2 instance & the corresponding role, if you have created them as part of this workflow

March 14, 2019

Amazon SNS & SQS Simplified

An overview of Amazon SNS & SQS with an introduction, plus creating, configuring & testing a sample workflow using Amazon SNS, SQS & S3 through the AWS Console.

Amazon SNS

Amazon Simple Notification Service (SNS) is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications. Amazon SNS provides topics for high-throughput, push-based, many-to-many messaging. Using Amazon SNS topics, your publisher systems can fan out messages to a large number of subscriber endpoints for parallel processing, including Amazon SQS queues, AWS Lambda functions, and HTTP/S webhooks. Additionally, SNS can be used to fan out notifications to end users using mobile push, SMS, and email. 

Amazon SQS

Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message oriented middleware, and empowers developers to focus on differentiating work. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available. Get started with SQS in minutes using the AWS console, Command Line Interface or SDK of your choice, and three simple commands.
SQS offers two types of message queues. Standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery. SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent.

What is Dead Letter Queue?
A dead-letter queue is a queue that other (source) queues can target for messages that can't be processed (consumed) successfully. In this tutorial you learn how to create an Amazon SQS source queue and to configure a second queue as a dead-letter queue for it. For more information, see Amazon SQS Dead-Letter Queues.
When you designate a queue to be a source queue, a dead-letter queue is not created automatically. You must first create a normal standard or FIFO queue before designating it a dead-letter queue.
The dead-letter queue of a FIFO queue must also be a FIFO queue. Similarly, the dead-letter queue of a standard queue must also be a standard queue.
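In API terms, attaching a dead-letter queue comes down to setting a RedrivePolicy attribute on the source queue. A minimal sketch of that attribute's JSON (the account id and ARN are illustrative placeholders):

```python
import json

# The source queue's RedrivePolicy names the DLQ and how many receives
# are allowed before a message is moved there.
redrive_policy = {
    "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:ContentRedriveQ",
    "maxReceiveCount": 1,  # move to the DLQ after the first failed receive
}

# SQS expects the policy as a JSON string inside the queue attributes,
# e.g. via sqs.set_queue_attributes(QueueUrl=..., Attributes=attributes).
attributes = {"RedrivePolicy": json.dumps(redrive_policy)}
print(attributes["RedrivePolicy"])
```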

 Simple Workflow - Amazon SNS & SQS:

Assume a scenario where you place an order for a notebook: SNS sends an email to the designated email address about the order request and the action to be taken, and also adds the message to the orders content queue in SQS.

Setup SNS Topic

  • Login to the AWS Console & set up an SNS topic named Orders and two subscriptions, for Email & Amazon SQS
  • Create a subscription using the Email protocol, configure your email id and verify the email address by clicking the subscription link received in your email
  • Create a subscription using the Amazon SQS protocol, with the ARN of the ContentQ queue that you will create under Setup SQS Queues

Setup SQS Queues

  • Create a standard queue named ContentQ; leave everything at default except the message retention period, which you change to 1 day (the range is 1 minute to 14 days).
    • Go to the Permissions tab of the newly created queue and enable Allow Everybody for All SQS Actions (SQS:*)
    • The redrive policy will be empty at this point in time
  • Create a standard queue named ContentRedriveQ; leave everything at default except the message retention period, which you change to 5 days.
    • No change to permissions, and the redrive policy will be empty
  • Go to ContentQ, enable Dead Letter Queue, select ContentRedriveQ and leave Maximum Receives as 1.
  • Testing SQS 
    • Select ContentQ -> Queue Actions -> Send a Message -> Type a message and Click Send
    • Select ContentQ -> Queue Actions -> View / Delete Messages -> Click Start polling for messages. This action reads the message from the queue but does not send a confirmation of successful processing, so the message falls into the failed / unprocessed category and lands in the dead-letter queue. You will now be able to see this message under ContentRedriveQ, which is mapped as the dead-letter queue of ContentQ.

Testing SNS & SQS Flow

  • Go to Amazon SNS -> Topics -> Orders -> Publish Message -> Enter a Subject & Message Body, leave the rest at default and click `Publish Message`.
  • This message will be delivered to your configured email via the Email protocol subscription to the Orders topic, and to the SQS queue ContentQ via the SQS protocol subscription.
  • Select ContentQ -> Queue Actions -> View / Delete Messages -> Click Start polling for messages. As before, the message is read without a successful-processing confirmation, so it falls into the dead-letter queue and shows up under ContentRedriveQ.

Enhanced Workflow: Amazon S3, SNS & SQS Events:

Now let's enhance the above workflow a bit by triggering a message from an S3 upload (PUT) event. You will see how well the messages fan out from the SNS topic to the email & SQS subscribers.
Message fan-out is nothing but broadcasting messages from one source to many destinations.

Setup S3 Event

  • Create an S3 bucket named ordersbucket, go to Properties -> Events -> Add Notification -> Enter the details for the new event:
    Event Name: ForSNSNotification, Events: PUT, Send to: SNS Topic, SNS: Orders, and try to save.
    You will get the following error because your S3 bucket does not have permission to publish to the SNS topic:
    Unable to validate the following destination configurations. Permissions on the destination topic do not allow S3 to publish notifications from this bucket. (arn:aws:sns:us-east-1:<account id>:Orders)
  • To grant the S3 bucket's events permission to publish to the SNS topic, go to SNS Topic -> Orders -> Edit Access Policy and replace the condition

    "Condition": {
      "StringEquals": {
        "AWS:SourceOwner": "<account id>"
      }
    }

    with

    "Condition": {
      "ArnLike": {
        "aws:SourceArn": "arn:aws:s3:::ordersbucket"
      }
    }

  • Now go back & click Save for S3; it should work.
  • Saving this configuration itself triggers an S3 test event, which sends the below email to your configured address and a message to ContentQ

{
  "Type" : "Notification",
  "MessageId" : "f0c66789-543d-5527-9bfd-328a83cbd237",
  "TopicArn" : "arn:aws:sns:us-east-1:544638597657:Orders",
  "Subject" : "Amazon S3 Notification",
  "Message" : "{\"Service\":\"Amazon S3\",\"Event\":\"s3:TestEvent\",\"Time\":\"2019-03-14T04:16:54.038Z\",\"Bucket\":\"ordersbucket\",\"RequestId\":\"37475229D2AF3B2E\",\"HostId\":\"csuC6wL1zh8now9VybrB8LUju2Nc1z1sF1C1TN0HU15tHR6KvI2x4lIBqrM9pMXiz5wgSQkgyYczk=\"}",
  "Timestamp" : "2019-03-14T04:16:54.145Z",
  "SignatureVersion" : "1",
  "Signature" : "LBIZeIodCe6Y1Irx8ZBLifqrEPbUw+tEFAwVygDszoMDyKZSrMD5kwKsJ0kZzjuaXvOeYhdITIuwgWMNnrRJpLhH9EtlhbHV0g/GT/pDaNZb52JV6vRB8zO0de8DC2AVDgQ7TyxS7Vx6TuqBPuxsRX0mdD1H+UPxc3+1ory7UAXggcT0h7zKVQkT7BrT+9dJs8+RfUQ/1YODYNZCR0qJMHQIqUDbx4KR0KtuobZ+wTwT60hJgUYcM/13VL7cZgckMNGuYv8qNJc4hEwb591V8C5nnvx7JEksLJkP91PfQJsCzoGaGvh+UhDWmjVI6fHMZNo+zmyMe8I0sEw==",
  "SigningCertURL" : "",
  "UnsubscribeURL" : "<accountid>:Orders:925ec4b2-1333-4b69-b82b-569c1eb7dd94"
}

Testing S3, SNS & SQS Flow

  • To test the whole flow, upload a file into ordersbucket, which triggers the S3 PUT event -> SNS Topic Orders, and watch the rest happen seamlessly
  • This message will be delivered to your configured email with the subject Amazon S3 Notification via the Email protocol subscription to the Orders topic, and to the SQS queue ContentQ via the SQS protocol subscription.

  • Select ContentQ -> Queue Actions -> View / Delete Messages -> Click Start polling for messages. As before, the message is read without a successful-processing confirmation, so it falls into the dead-letter queue and shows up under ContentRedriveQ.


Let's clean up the resources that we have created as part of this demo:
  • Delete S3 Bucket named ordersbucket
  • Delete SNS Topic Subscriptions that you created for email protocol & SQS
  • Delete SNS Topic Orders
  • Delete SQS queues ContentRedriveQ & ContentQ
    • You will get a warning that there are messages in the queue; you can either delete anyway, or purge the messages and then come back for deletion.

        March 9, 2019

        Thank you, BIT Sathy! Alumnae Excellence Award was an overwhelming honour!

        I never thought I would have a more memorable day than my graduation day until the International Women's Day 2019 Celebration at Bannari Amman Institute of Technology, Sathyamangalam, TamilNadu.
        A heart-warming verbal invitation for Women's Day a month ago by Dr. A. Bharathi, HOD IT Department, unfolded into a lovely invitation a week ago, and later I was pleasantly surprised by an overwhelming honour from our beloved Chairman Thiru. S. V. Balasubramaniam, Trustee Mr. M.P. Vijayakumar, Principal Dr. C. Palanisamy, Dean Mr. Thangaraj & HOD Dr. A. Bharathi. My hearty congratulations to fellow awardees - KaligaSelvi Lenin & S P Srivalli.

        Let me transform that buoyant experience into an ever-green blog to express my thanks to the BIT family!

        My mom always had an unfulfilled wish, since she couldn't make it to my engineering graduation day for various reasons, and I made sure that she accompanied me this time. I am sure she would have felt happier than ever for being part of this memorable ceremony. My wholehearted appreciation to that vacation-less clan - especially my mom and mom-in-law - for all their round-the-clock toil to send us out to enjoy the world around, and most importantly for standing by me to continue my career under various circumstances.

        We got to have an hour of conversation with the Chairman & Trustee about the exponential growth that BIT has gone through over the years. Starting with a few classrooms and 100+ students in 1996, it has grown manifold to house close to 7,000 students with world-class facilities. The Skill Development Program, Women Development Cell, Special Interest Groups and support for entrepreneurial thirst in alumnae are some of the highly commendable facilities at BIT. I was completely amazed by the five-storied library with a whole lot of national & international journals. The list goes on & on, so let me shift the focus to the International Women's Day Celebration 2019 at BIT :)


        BIT Entrance was decked-up with Rangoli and floral decorations along with cheerful students and staff to welcome on this special occasion.

        Welcome Address

        Principal Dr. C. Palanisamy extended the Welcome Address on the 2019 International Women's Day Celebration at BIT.

        Annual Report

        Annual report on the women empowerment activities carried out through-out the year by Women's Development Cell was read by Ms. A Swetha, Student Coordinator WDC.

        Presidential Address

        Chairman Thiru S V Balasubramaniam spoke elegantly about how important it is to empower and encourage women to come forward to achieve great heights. It was good to know that 37% of the college staff are women, and they are working towards 50% in the coming years. He remembered several women achievers in various fields and spoke about their greatness.

        Special Address

        Trustee Mr. M.P. Vijayakumar brought out the hardships that women go through to balance being a working mother & a caring daughter-in-law.

        Awarding Ceremony

        Bhuvaneswari Subramani

        Kaliga Selvi Lenin

        S P Srivalli


         Award Acceptance Speech

        Bhuvaneswari Subramani 
        On 2019 Women's Day, receiving an honorary award from BIT,

        • who lit the lamp of consciousness in me;
        • who enabled me to embrace external competencies;
        • who is the sole reason for what I am,
        brings eternal bliss, self-contentment & pride.
        I also spoke about how I became an accidental engineer, my experience as a working mom and a few suggestions for the budding engineers.

        Kaliga Selvi Lenin 
        Kaliga expressed her sincere thanks for being invited for such a prestigious award and spoke about how students can stay abreast of trending technologies like robotics, water irrigation & conservation, and IoT. As an IoT specialist, she emphasised the tremendous transformation expected in the next few years as a result of IoT enablement in every field.

        S P Srivalli
        Srivalli expressed her overwhelming joy in receiving such a wonderful award from her Alma Mater, which was certainly close to her heart, and kept the students engaged talking about women empowerment. She also shared her views on how the alumnae can give back to the institution.

        Cultural Program

        Any function in college would be incomplete without a cultural program, and of course the International Women's Day Celebration at BIT was no exception. We had two mesmerising Bharatanatyam (Indian classical dance) performances by students that stole the show.

        Vote of Thanks

        To conclude the wonderful event, Dr. A. Bharathi, Coordinator WDC proposed the Vote of Thanks
        and the event ended with National Anthem.

        Visit to Agri Engineer's Farm

        The Department of Agriculture Engineering, a relatively young department in BIT, offers a 4-year BE Agriculture Engineering course. To impart knowledge on crop cultivation aspects from seed to harvest, the department has a 10-acre farm along with well-equipped soil science, agricultural meteorology and crop husbandry labs. We got a chance to tour the farm, where students have been cultivating organically and at the same time with modern technology.
        Dear BIT,
        I am totally impressed, fascinated and thrilled by the way the agriculture engineering students have been farming. If at all you would offer me a diploma course (with no age bar), I would love to take one up and switch to organic farming now, which I anyway wanted to do post retirement :) .

        Warm Send Off

        BIT is known for its hospitality, and we were hosted with a yummy lunch of all home-grown vegetables before we bid farewell to our Chairman, Trustee and staff. It was such a memorable day & wonderful thoughts to carry along, but I am not sure whether I did justice describing it in a single blog post.
        A few words (originally in Tamil) about the veggie pack that was sent along:
        Heartfelt thanks for the broad beans, bitter gourd, cabbage, beetroot, radish, ridge gourd, tomatoes, capsicum and lovely mushrooms - all cultivated organically by the agriculture engineering students, as if raised in my mother's backyard garden - neatly bundled up like ceremonial gifts, and for walking us to the gate to send us off.

        We women should cherish the blessing of being a woman every moment and most importantly never forget to thank the men who have been behind our success, be it your father, brother, spouse, your mentors, classmates, colleagues or friends.

        Going by the theme of the year, Let's #BalanceForBetter!!!

        March 7, 2019

        Guest Lecture on `Cloud Computing & Amazon Web Services`

        Guest Lecture at Bannari Amman Institute of Technology, Sathyamangalam, my Alma Mater, on Cloud Computing & Amazon Web Services to 3rd year B.Tech - Information Technology Students.

        March 2, 2019

        Serverless Series - Demystifying Serverless Architecture

        In the blog series Demystifying Serverless Architecture, we will cover the evolution of serverless architecture, with an introduction to the AWS Serverless Platform, followed by a walkthrough of sample use cases to build a serverless web application using Amazon S3, Amazon API Gateway, AWS Lambda & Amazon CloudWatch.

        Serverless Blog Series 

        Reference Links

        Serverless Series - Building a Serverless WebApp

        As part of the Serverless WebApp demo, we will build 2 use cases, plus 1 left as a DIY. All of these use cases involve Amazon API Gateway, Amazon S3, Amazon CloudWatch & AWS Lambda. Familiarise yourself a bit with these services before you dive into the sessions below. Source code for the Serverless WebApp can be found on GitHub.

        High-level steps to setup Serverless WebApplication using API Gateway & Lambda

        1. Create the Lambda function with content from the code, create a new role with the `Simple Micro Services` permission and add an API Gateway trigger
        2. Create & configure an API Gateway GET method to invoke it
        3. Create an S3 bucket, make its objects public, and convert it to a static website
        4. Upload error.html & index.html to the S3 bucket created in #3 and make these objects public
        5. Launch the S3 static website URL to see that the web application is working
        6. Copy the code to replace the existing Lambda function
        7. Add `AmazonEC2FullAccess` to the Lambda execution role created in #1
        8. Create one or more EC2 instances with the tag name `AutoOff` and the value `yes` (case sensitive)
        9. Launch the S3 static website URL to verify that running EC2 instances with the tag AutoOff=yes are shut down
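        The Lambda side of steps 1-2 can be sketched as a minimal handler returning an API-Gateway-style response. This is an illustrative placeholder, not the actual demo code from GitHub:

```python
import json

def lambda_handler(event, context):
    """Minimal handler for an API Gateway GET method (proxy integration)."""
    return {
        "statusCode": 200,
        "headers": {
            "Content-Type": "application/json",
            # allow the S3 static website to call this API cross-origin
            "Access-Control-Allow-Origin": "*",
        },
        "body": json.dumps({"message": "Hello from the Serverless WebApp demo"}),
    }

# Local smoke test: API Gateway passes the HTTP request as `event`.
response = lambda_handler({"httpMethod": "GET"}, None)
assert response["statusCode"] == 200
```

        With Lambda proxy integration, API Gateway turns this dictionary directly into the HTTP response that the static website receives.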

        MindMap covering Use Case 1 & Use Case 2 - view this before proceeding.

        Serverless WebApp Build - Use Case 1
        Serverless WebApp Build - Use Case 2
        Serverless WebApp Build - Use Case 3 - DIY

        Serverless Series - Serverless WebApp - Use Case 3

        Here is the network diagram to build Serverless WebApp - Use Case 3. It is just an increment of Serverless WebApp - Use Case 2 with Route 53 integration. So DIY :)

        Network Diagram

        Serverless Series - Serverless WebApp - Use Case 2

        This blog post will take you through the details to build Serverless WebApp - Use Case 2, which gets the running EC2 instances with the tag AutoOff=yes through a Lambda function, stops those instances, displays the details in a webpage and adds them to CloudWatch Logs.

        Network Diagram

        Output  Webpage

        Steps to AutoOff EC2

        • Copy the code to replace the existing Lambda function
        • Add `AmazonEC2FullAccess` to the LambdaExecutionRole created earlier
        • Create one or more EC2 instances with the tag name `AutoOff` and the value `yes` (case sensitive)
        • Launch the S3 static website URL to verify that running EC2 instances with the tag AutoOff=yes are shut down.
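        The core of the AutoOff function is just filtering describe_instances output by tag and state. Here is a sketch of that logic against the standard boto3 response shape; the function name and sample data are made up for illustration, and the actual stop call is left as a comment because it needs AWS credentials:

```python
def instances_to_stop(describe_response):
    """Return IDs of running instances tagged AutoOff=yes (case sensitive)."""
    ids = []
    for reservation in describe_response["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if instance["State"]["Name"] == "running" and tags.get("AutoOff") == "yes":
                ids.append(instance["InstanceId"])
    return ids

# In the real Lambda you would then call:
#   ec2 = boto3.client("ec2"); ec2.stop_instances(InstanceIds=ids)

sample = {"Reservations": [{"Instances": [
    {"InstanceId": "i-0abc", "State": {"Name": "running"},
     "Tags": [{"Key": "AutoOff", "Value": "yes"}]},
    {"InstanceId": "i-0def", "State": {"Name": "running"},
     "Tags": [{"Key": "AutoOff", "Value": "no"}]},
]}]}
assert instances_to_stop(sample) == ["i-0abc"]
```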

        Let's clean up the resources created as part of this demo:
        • CloudWatch console -> Delete the log group /aws/lambda/ServerlessWebAppDemo under CloudWatch Logs
        • S3 console -> Delete the S3 bucket serverlesswebappdemo
        • Lambda console -> Delete the Lambda function ServerlessWebAppDemo
        • API Gateway console -> LambdaMicroservice -> Resources -> Delete ServerlessWebAppDemo
        • IAM console -> Roles -> Delete the role LambdaExecutionRole

        Serverless Series - Serverless Services used in Demo

        Let's have a closer look at the 4 serverless services that will be used in today's demo.

        Amazon S3

        S3 is massive-scale storage and one of the backbones of AWS. In fact, this was the first service launched by AWS.
        How much data can I store?
        The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 terabytes. The largest object that can be uploaded in a single PUT is 5 gigabytes. For objects larger than 100 megabytes, customers should consider using the Multipart Upload capability.
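        A quick back-of-the-envelope check of these limits: a maximum-size 5 TB object uploaded in maximum-size 5 GB parts needs 1,024 parts, comfortably under S3's 10,000-part multipart limit.

```python
import math

GB = 1024 ** 3
object_size = 5 * 1024 * GB   # 5 TB, the S3 per-object maximum
part_size = 5 * GB            # 5 GB, the largest single PUT / part

parts = math.ceil(object_size / part_size)
print(parts)  # 1024
assert parts <= 10_000  # S3 multipart upload allows at most 10,000 parts
```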

        What storage classes does Amazon S3 offer?
        Amazon S3 offers a range of highly durable storage classes designed for different use cases:
        • S3 Standard for general-purpose storage of frequently accessed data
        • S3 Intelligent-Tiering storage class is designed to optimize costs by automatically moving data to the most cost-effective access tier, without performance impact or operational overhead. 
        • S3 Standard - Infrequent Access (S3 Standard-IA) for long-lived but less frequently accessed data
        • S3 One Zone-IA is for data that is accessed less frequently, but requires rapid access when needed. Unlike other S3 Storage Classes which store data in a minimum of three Availability Zones (AZs), S3 One Zone-IA stores data in a single AZ and costs 20% less than S3 Standard-IA. 
        • S3 Glacier  is a secure, durable, and low-cost storage class for data archiving.
        • S3 Glacier Deep Archive is Amazon S3’s lowest-cost storage class and supports long-term retention and digital preservation for data that won’t be regularly accessed. It is designed for customers — particularly those in highly-regulated industries, such as the Financial Services, Healthcare, and Public Sectors — that retain data sets for 7-10 years or longer to meet regulatory compliance requirements. 

        S3 Storage Classes can be configured at the object level and a single bucket can contain objects stored in S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA. For further reading on S3 Storage Classes, refer

         Amazon CloudWatch

        CloudWatch sits under Management Tools in the AWS Console and can monitor any AWS service – ALB, EBS, EC2, ELB, S3, etc.
        • Metrics - For example, basic host-level metrics like CPU utilization, disk ops, etc. are monitored out of the box, whereas OS-level metrics like memory need you to write custom metrics. Documentation is available for the same.
        • Dashboards – Creates awesome dashboards to see what is happening with your AWS Environment. 
        • Alarms – Allows you to set Alarms that notify you when particular thresholds are hit. 
        • Events – CloudWatch events helps you to respond to state changes in your AWS resources  
        • Logs – CloudWatch Logs helps you to aggregate, monitor, and store logs. 

        Amazon API Gateway
        Amazon API Gateway is an AWS service that enables you to create, publish, maintain, monitor, and secure your own APIs at any scale. You can create robust, secure, and scalable APIs that access AWS or other web services, as well as data stored in the AWS Cloud
        • Support for stateful (WebSocket) and stateless (REST) APIs
        • Integration with AWS services such as AWS Lambda, Amazon Kinesis, and Amazon DynamoDB
        • There are two kinds of developers who use API Gateway: API developers and app developers.
          • An API developer creates and deploys an API to enable the required functionality in API Gateway. The API developer must be an IAM user in the AWS account that owns the API.
          • As an API developer, you can create and manage an API using the API Gateway console, CLI, an SDK or a CloudFormation template
          • An app developer builds a functioning application that calls AWS services by invoking a WebSocket or REST API created by an API developer in API Gateway.
          • The app developer is the customer of the API developer. The app developer does not need to have an AWS account, provided the API is designed that way.
        Together with AWS Lambda, API Gateway forms the app-facing part of the AWS serverless infrastructure

        AWS Lambda

        AWS Lambda is a compute service that lets you run code without provisioning or managing servers. AWS Lambda executes your code only when needed and scales automatically, from a few requests per day to thousands per second. You pay only for the compute time you consume - there is no charge when your code is not running. With AWS Lambda, you can run code for virtually any type of application or backend service - all with zero administration. AWS Lambda runs your code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, code monitoring and logging. 
        • All you need to do is supply your code in one of the languages that AWS Lambda supports (currently Node.js, Java, C# and Python). 
        • You can use AWS Lambda  
          • (1) to run your code in response to events (Event Driven), such as changes to data in an Amazon S3 bucket or an Amazon DynamoDB table;  
          • (2) to run your code in response to HTTP requests using Amazon API Gateway; or invoke your code using API calls made using AWS SDKs.  
        • Lambda functions are independent but one lambda function can trigger other lambda functions 
        • Lambda scales automatically and also can perform actions globally like copying files from S3 in one region to other region.

        Pricing at high-level:

        • S3 - new AWS customers receive 5 GB of Amazon S3 Standard storage, 20,000 GET requests, 2,000 PUT requests, 15 GB of data transfer in, and 15 GB of data transfer out each month for one year.
          • Beyond the free tier, S3 Standard is roughly $0.023 per GB-month for the first 50 TB
        • API Gateway 
          • Rest API 
            • Pay only for the API calls you receive and the amount of data transferred out
            • For better performance and faster API execution, you can optionally provision a dedicated cache for each stage of your APIs. After you specify the size of the cache you require, you will be charged the following hourly rates for each stage’s cache, without any long-term commitments.
            • Free Tier - 1M API CALLS RECEIVED | 1M MESSAGES | 750,000 CONNECTION MINUTES
          • WebSocket APIs
            • Pay only for messages sent and received and the total number of connection minutes. You may send and receive messages up to 128 kilobytes (KB) in size. Messages are metered in 32 KB increments. So, a 33 KB message is metered as two messages.
            • For WebSocket APIs, the API Gateway free tier includes one million messages (sent or received) and 750,000 connection minutes for up to 12 months.
        • AWS Lambda
          • Lambda counts a request each time it starts executing in response to an event notification or invoke call, including test invokes from the console. You are charged for the total number of requests across all your functions.  
        • Amazon CloudWatch
          • Basic monitoring metrics (at 5-minute frequency) are free, whereas detailed monitoring metrics (at 1-minute frequency) are charged
          • Example: EC2 detailed monitoring is priced at $2.10 per instance per month and goes down to $0.14 per instance at the lowest-priced tier.
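        As a sanity check on these numbers, here is the arithmetic for the one concrete rate quoted above; the instance count is an arbitrary example:

```python
# Back-of-the-envelope cost check using the EC2 detailed-monitoring
# rate quoted above ($2.10 per instance per month at the top tier).
DETAILED_MONITORING_RATE = 2.10  # USD per instance per month

instances = 3
monthly_cost = instances * DETAILED_MONITORING_RATE
print(f"${monthly_cost:.2f} per month")  # $6.30 per month
```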

        For the Sample that we are going to try today, trust me that it won’t incur any charge if you clean-up diligently :).