“Bridging Theory and Practice: The Power of AWS Cloud Bootcamp in Fostering Continued Education and Practical Experience”

This article was originally published on LinkedIn.

Introduction

Practical skills are more valuable than theoretical knowledge. Credentials (degrees, certifications, and titles) don't equate to skill or the ability to solve a problem. That is my belief after decades as a self-taught technical professional.

Having acquired the AWS Certified Solutions Architect – Associate certification in 2022, I was pleased with the accomplishment, but I understood that practical experience typically outweighs credentials.

One of the reasons I could acquire the certification with limited study was that although I hadn’t used AWS for complex workloads in the cloud, I could lean on lived experiences and a strong fundamental understanding of how the elements of the technology all fit together.

Since the level of effort required to acquire the certification didn't provide as much education as I was seeking, I set out to gain practical experience with AWS. One of the techniques I decided on was working through AWS labs. I researched, analyzed, and tested labs from multiple sources around the web. Some were good; others were poorly documented or outdated.

During this time I discovered an interesting Reddit post that mentioned a Cloud Project Bootcamp starting in early 2023. I did some research on the instructor – Andrew Brown – looked into the curriculum, and decided that it appeared to fill the gap between certification knowledge and the practical knowledge gained through hands-on problem-solving.

AWS Cloud Project Bootcamp Objective

My understanding of the Bootcamp's goal was to help students who have acquired associate-level knowledge or a certification and realize they need a cloud project on their resume to progress their career goals, or who simply want hands-on experience working on a complex project.

Topics Covered

  • Architecture – designing a system by translating requirements into required microservices and illustrating visually with diagrams.
  • Billing – understanding metered pricing, tracking spend, forecasting spend, reporting, and alerting with Cost Explorer, CloudWatch Billing Alarms, and AWS Budgets.
  • Containerization – making applications portable across environments by bundling configurations and dependencies into a single unit called an image.
  • Distributed Tracing – a method for tracking requests as they travel across the services of a microservice-based application.
  • Decentralized Authentication – an identity management service that is decoupled from the application.
  • Postgres and RDS (Relational Database Service) – a managed cloud service that simplifies the setup, operation, and scaling of several popular relational databases.
  • DynamoDB – Amazon’s proprietary NoSQL (non-tabular) database managed service.
  • Deploying “Serverless” Containers – Fargate (Amazon’s serverless container offering) enables companies to run containers without having to manage servers.
  • Solving Cross-Origin Resource Sharing (CORS) with a Load Balancer and Custom Domain – CORS governs interactions between resources from different origins. Load balancers distribute network or application traffic across a number of servers. Custom domains map human-readable names to IP addresses through the AWS Route 53 DNS and registrar service.
  • Serverless Image Processing – upload, store, and manipulate images with AWS Lambda and Sharp.js.
  • Continuous Integration / Continuous Deployment (CI/CD) with CodePipeline (CP), CodeBuild (CB), and CodeDeploy (CD) – CP is used to model and automate the steps required to release your software. CB is a managed continuous integration service that compiles source code, runs tests, and creates software packages that are ready for deployment. CD is a deployment service that automates code deployments to AWS or on-premises servers.
  • CloudFormation – provides a common language to describe and provision all the infrastructure resources in your cloud environment as code.

Outcome

I took a set of requirements for a fictional startup company to build a three-tier web application. I then leveraged the codebase of an existing on-premises (local) custom micro-blogging platform and integrated several microservices to meet those requirements.

Security best practices were discussed, evaluated, and leveraged throughout the entire implementation of this project.

I set up and configured Gitpod and GitHub Codespaces. The bulk of the integrations documented here were developed and tested in these environments prior to deployment to the AWS Cloud.

I documented conceptual and logical diagrams of the architecture based on the requirements that were provided.

I studied and explored the AWS metered-billing pricing model and tools such as Cost Explorer, Budgets, and CloudWatch Billing Alarms. I then leveraged these tools to forecast potential spending for the aforementioned microservices and set up notifications using billing alarms and budgets.
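The arithmetic behind that forecasting and alerting is simple to illustrate. A minimal sketch, with hypothetical spend figures and an assumed 80% alert threshold (the function names are mine, not an AWS API):

```python
# Sketch of a simple month-end spend forecast, similar in spirit to what
# Cost Explorer's forecasting does at its most basic. All figures hypothetical.

def forecast_monthly_spend(spend_to_date, days_elapsed, days_in_month):
    """Linear projection of month-end spend from the daily average so far."""
    daily_average = spend_to_date / days_elapsed
    return daily_average * days_in_month

def should_alarm(forecast, budget_limit, threshold=0.8):
    """Mimic a budget alert that fires when the forecast crosses 80% of the limit."""
    return forecast >= budget_limit * threshold

forecast = forecast_monthly_spend(spend_to_date=12.50, days_elapsed=10, days_in_month=30)
print(forecast)                                    # 37.5
print(should_alarm(forecast, budget_limit=40.0))   # True (37.5 >= 32.0)
```

A CloudWatch billing alarm and an AWS Budgets alert both reduce to this kind of comparison, evaluated continuously on your actual and forecasted spend.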

I containerized the front-end (presentation tier) and back-end (logic tier), as well as the data tier locally, and stored the container images in the Elastic Container Registry (ECR). I then deployed the application containers using the Elastic Container Service (ECS) and Fargate.

I set up and configured several distributed tracing tools, including AWS X-Ray, Rollbar, and Honeycomb, and enabled CloudWatch Logs for troubleshooting and monitoring.
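What those tools instrument can be boiled down to a span: a named, timed record of one unit of work with attached attributes. A toy stdlib-only sketch (the real integrations used the Honeycomb/X-Ray SDKs; the span name and attribute here are illustrative):

```python
import time
import uuid
from contextlib import contextmanager

# Toy illustration of what a tracing span records: an id, a name, a duration,
# and attributes. Real instrumentation (Honeycomb, X-Ray) exports these to a
# backend instead of a local list.
spans = []

@contextmanager
def span(name, **attrs):
    record = {"id": uuid.uuid4().hex[:8], "name": name, "attrs": attrs}
    start = time.perf_counter()
    try:
        yield record
    finally:
        record["duration_ms"] = (time.perf_counter() - start) * 1000
        spans.append(record)

with span("home-activities", user="kyle"):
    time.sleep(0.01)  # stand-in for a database query

print(spans[0]["name"], spans[0]["attrs"])
```

Nesting spans and propagating their ids across service calls is what turns this into *distributed* tracing.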

I integrated AWS Cognito for decentralized, JWT-based (JSON Web Token) authentication and removed the cookie-based authentication from the codebase.
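A Cognito JWT is three base64url segments (header.payload.signature). The sketch below shows only the stdlib step of decoding the claims from a fabricated token; a real integration must also verify the signature against Cognito's published JWKS before trusting any claim:

```python
import base64
import json

def decode_jwt_claims(token):
    """Decode the claims (payload) segment of a JWT.
    NOTE: this does NOT verify the signature -- a real Cognito integration
    must validate the token against the user pool's JWKS before trusting it."""
    payload_b64 = token.split(".")[1]
    # Restore the base64url padding that JWT encoding strips
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a fake (unsigned) token just to exercise the decoder
claims = {"sub": "user-123", "cognito:username": "kyle", "token_use": "access"}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
fake_token = f"header.{payload}.signature"

print(decode_jwt_claims(fake_token)["cognito:username"])  # kyle
```

The back-end performs this decode-and-verify step on every request instead of reading a session cookie, which is what makes the authentication decentralized.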

I integrated an AWS RDS Postgres database to hold all non-messaging data for the application. Non-messaging data includes elements such as activity posts, reply counts, and user IDs.
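To show the relational side of that split, here is a minimal schema sketch. Table and column names are my assumptions, not the project's actual schema, and SQLite stands in for RDS Postgres so the example runs locally:

```python
import sqlite3

# Illustrative relational schema for the non-messaging data.
# SQLite is used here only as a local stand-in for RDS Postgres.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (
  uuid   TEXT PRIMARY KEY,
  handle TEXT NOT NULL UNIQUE
);
CREATE TABLE activities (
  uuid          TEXT PRIMARY KEY,
  user_uuid     TEXT NOT NULL REFERENCES users(uuid),
  message       TEXT NOT NULL,
  replies_count INTEGER DEFAULT 0,
  created_at    TEXT DEFAULT CURRENT_TIMESTAMP
);
""")
conn.execute("INSERT INTO users VALUES ('u-1', 'kyle')")
conn.execute(
    "INSERT INTO activities (uuid, user_uuid, message) VALUES ('a-1', 'u-1', 'hello world')"
)
# Joins like this are why relational storage suits the non-messaging data
row = conn.execute(
    "SELECT u.handle, a.message FROM activities a JOIN users u ON u.uuid = a.user_uuid"
).fetchone()
print(row)  # ('kyle', 'hello world')
```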

I integrated the AWS DynamoDB NoSQL database for all application messaging patterns, including creating, listing, and updating messages within the application.
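The messaging side hinges on key design rather than joins. A common single-table pattern, sketched below, groups a conversation's messages under one partition key and orders them by a timestamp sort key; the key names and layout here are illustrative assumptions, not the bootcamp's exact schema:

```python
# Single-table DynamoDB key design for messaging: the partition key groups
# items by conversation, the sort key orders messages chronologically.
# Key names and values are illustrative assumptions.

def message_item(conversation_id, created_at, sender, text):
    return {
        "pk": f"MSG#{conversation_id}",  # all messages in a conversation share a partition
        "sk": created_at,                # ISO-8601 timestamps sort chronologically as strings
        "sender": sender,
        "message": text,
    }

items = [
    message_item("conv-1", "2023-03-01T10:00:00Z", "kyle", "hi"),
    message_item("conv-1", "2023-03-01T10:05:00Z", "andrew", "hello"),
]

# "List a conversation" maps to a single Query on pk; results arrive ordered by sk
print(all(i["pk"] == "MSG#conv-1" for i in items))
print(items == sorted(items, key=lambda i: i["sk"]))
```

Designing keys around the access patterns up front is the core discipline DynamoDB demands, in contrast to the relational model above.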

I leveraged an AWS Application Load Balancer so that the application would be performant and highly available. This led to a series of cross-origin resource sharing issues, which I resolved by reviewing logs and troubleshooting. I also leveraged Route 53 for its custom domain functionality to point users to crudder.net.
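The fix for those CORS issues comes down to the back-end returning the right response headers for an allow-listed origin. A minimal sketch of that check (the allow-list and header set are illustrative, not the project's exact configuration):

```python
# Minimal sketch of the CORS check a backend needs once the front-end is
# served from a custom domain behind a load balancer.
ALLOWED_ORIGINS = {"https://crudder.net"}

def cors_headers(request_origin):
    """Return CORS response headers only when the Origin header is allow-listed."""
    if request_origin not in ALLOWED_ORIGINS:
        return {}  # no CORS headers: the browser blocks the cross-origin response
    return {
        "Access-Control-Allow-Origin": request_origin,
        "Access-Control-Allow-Methods": "GET,POST,PUT,OPTIONS",
        "Access-Control-Allow-Headers": "Authorization,Content-Type",
    }

print(cors_headers("https://crudder.net")["Access-Control-Allow-Origin"])
print(cors_headers("https://evil.example"))  # {}
```

Most CORS debugging ends up being a mismatch between the origin the browser sends and the origin the server echoes back, which is why log review was the path to the fix.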

The application requirements included an option for end users to upload a profile image, which is then manipulated for efficient storage. I used API Gateway to facilitate communication between AWS Lambda for authorization checks, Sharp.js for thumbnail size conversion, S3 for storage, and the application interface for uploading and displaying the avatar.
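The resizing step itself (done in the project with Sharp.js inside a Node Lambda) reduces to fit-inside arithmetic that preserves aspect ratio. A Python sketch of just that calculation, with an assumed 256px maximum side:

```python
def thumbnail_size(width, height, max_side=256):
    """Scale dimensions to fit within a max_side square, preserving aspect
    ratio and never upscaling -- the 'fit inside' behavior a resize library
    such as Sharp.js provides."""
    scale = min(max_side / width, max_side / height, 1.0)  # 1.0 caps upscaling
    return round(width * scale), round(height * scale)

print(thumbnail_size(1024, 768))  # (256, 192)
print(thumbnail_size(100, 50))    # (100, 50) -- already small enough
```

The Lambda applies this to the uploaded original and writes the result to S3, so the application only ever serves the small derivative.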

I set up a Continuous Integration / Continuous Deployment pipeline using AWS CodePipeline to automate code builds with CodeBuild and deployments with CodeDeploy whenever future pull requests for code changes are made and accepted on the codebase repository in GitHub.
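The shape of that pipeline is a three-stage source-build-deploy flow. Sketched as a plain data structure (the pipeline and provider names are illustrative, not the project's actual configuration):

```python
import json

# Shape of a CodePipeline declaration: source -> build -> deploy.
# Names and providers are illustrative assumptions.
pipeline = {
    "name": "crudder-pipeline",
    "stages": [
        {"name": "Source", "provider": "CodeStarSourceConnection"},  # GitHub hookup
        {"name": "Build", "provider": "CodeBuild"},                  # compile, test, package
        {"name": "Deploy", "provider": "CodeDeploy"},                # roll out to ECS
    ],
}
print(json.dumps([s["name"] for s in pipeline["stages"]]))
```

A merged pull request triggers the Source stage, and each subsequent stage runs only if the previous one succeeds.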

Finally, I installed the AWS Cloud Development Kit (CDK) and built several CloudFormation templates to automate all of the implementations referenced up to this point.
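A CloudFormation template is ultimately a declarative JSON (or YAML) document. A minimal sketch of that shape, built as a plain dict; the resource and its properties are illustrative, not taken from the project's actual templates:

```python
import json

# Minimal CloudFormation template shape, as a plain dict. The single
# illustrative resource is an ECS cluster; names are assumptions.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Sketch: ECS cluster for the micro-blogging app",
    "Resources": {
        "Cluster": {
            "Type": "AWS::ECS::Cluster",
            "Properties": {"ClusterName": "CrudderCluster"},
        }
    },
    "Outputs": {
        "ClusterName": {"Value": {"Ref": "Cluster"}},
    },
}
print(json.dumps(template, indent=2)[:80])
```

Tools like the CDK generate exactly this kind of document from higher-level code, which is what makes the whole stack reproducible from the repository.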


By Kyle

My name is Kyle M. Brown and I am passionate about solving business problems with new technologies.
