<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Joseph Mbatchou Cloud Platform]]></title><description><![CDATA[My name is Joseph Mbatchou, and I am grateful for the opportunity to introduce myself to you. I have been in the Tech industry for about 8 years, and I am curre]]></description><link>https://platform.joebahocloud.com</link><generator>RSS for Node</generator><lastBuildDate>Sat, 11 Apr 2026 17:16:45 GMT</lastBuildDate><atom:link href="https://platform.joebahocloud.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[🚀 Automating Scalable Infrastructure with Terraform & Ansible Dynamic Inventory]]></title><description><![CDATA[🚀 Automating Scalable Infrastructure with Terraform & Ansible Dynamic Inventory
🚀 Overview:
Modern infrastructure demands automation, scalability, and repeatability. Manual provisioning of cloud resources and configuration management quickly become...]]></description><link>https://platform.joebahocloud.com/automating-scalable-infrastructure-with-terraform-and-ansible-dynamic-inventory</link><guid isPermaLink="true">https://platform.joebahocloud.com/automating-scalable-infrastructure-with-terraform-and-ansible-dynamic-inventory</guid><category><![CDATA[automating-scalable-infrastructure-with-terraform-and-ansible-dynamic-inventory]]></category><dc:creator><![CDATA[Joseph Mbatchou]]></dc:creator><pubDate>Fri, 06 Feb 2026 22:20:26 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770412597465/58f6e8cc-dba4-4de0-95be-cc1e96bab9df.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-automating-scalable-infrastructure-with-terraform-amp-ansible-dynamic-inventory">🚀 Automating Scalable Infrastructure with Terraform &amp; Ansible Dynamic Inventory</h1>
<h2 id="heading-overview">🚀 Overview:</h2>
<p>Modern infrastructure demands <strong>automation, scalability, and repeatability</strong>. Manual provisioning of cloud resources and configuration management quickly becomes error-prone and unmaintainable as environments grow.</p>
<p>This project demonstrates how to build a <strong>fully automated, production-grade infrastructure pipeline</strong> using <strong>Terraform</strong> for infrastructure provisioning and <strong>Ansible with dynamic AWS inventory</strong> for configuration management.</p>
<p>The solution provisions multiple EC2 instances across environments (dev, stage, prod), dynamically discovers them using AWS APIs, and configures them automatically — all without hardcoded IP addresses or manual intervention.</p>
<h2 id="heading-problem-statement">🔧 Problem Statement</h2>
<p>Traditional infrastructure workflows often suffer from:</p>
<ul>
<li><p>Hardcoded server IPs in configuration files</p>
</li>
<li><p>Manual SSH access and host inventory management</p>
</li>
<li><p>Inconsistent environments across dev, stage, and prod</p>
</li>
<li><p>Infrastructure drift due to manual changes</p>
</li>
<li><p>Difficulty scaling across availability zones and environments</p>
</li>
</ul>
<p>Additionally, many teams struggle with:</p>
<ul>
<li><p>Incorrect subnet placement</p>
</li>
<li><p>SSH connectivity failures due to networking misconfigurations</p>
</li>
<li><p>Poor separation between provisioning and configuration</p>
</li>
</ul>
<p>This project solves these problems by combining <strong>Infrastructure as Code (IaC)</strong> with <strong>dynamic configuration orchestration</strong>.</p>
<h2 id="heading-techonology-stack">💽 Technology Stack</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Category</td><td>Tools</td></tr>
</thead>
<tbody>
<tr>
<td>Infrastructure Provisioning</td><td>Terraform</td></tr>
<tr>
<td>Configuration Management</td><td>Ansible</td></tr>
<tr>
<td>Dynamic Inventory</td><td>Ansible AWS EC2 Plugin</td></tr>
<tr>
<td>Cloud Provider</td><td>AWS</td></tr>
<tr>
<td>Operating System</td><td>Ubuntu 22.04 LTS</td></tr>
<tr>
<td>Security</td><td>AWS Security Groups, IAM</td></tr>
<tr>
<td>Scripting</td><td>Bash</td></tr>
<tr>
<td>Version Control</td><td>Git &amp; GitHub</td></tr>
</tbody>
</table>
</div><h2 id="heading-architecture-diagram">📌 Architecture Diagram</h2>
<pre>
┌──────────────────────────┐
│ Local Machine            │
│ (Deploy Scripts)         │
└─────────────┬────────────┘
              ▼
┌──────────────────────────┐
│ Terraform (IaC)          │
│ - VPC &amp; Subnets (existing)
│ - EC2 Controller         │
│ - EC2 Application Nodes  │
│ - Security Groups        │
│ - SSH Key Pair           │
└─────────────┬────────────┘
              ▼
┌──────────────────────────┐
│ AWS EC2 Instances        │
│ - Dev / Stage / Prod     │
│ - Auto-assigned Public IP│
└─────────────┬────────────┘
              ▼
┌──────────────────────────┐
│ Ansible Dynamic Inventory│
│ - AWS API Discovery      │
│ - Tag-based Grouping     │
└─────────────┬────────────┘
              ▼
┌──────────────────────────┐
│ Ansible Playbooks        │
│ - Configure environments │
│ - Apply roles            │
└──────────────────────────┘
</pre>
<h3 id="heading-key-architectural-decisions">Key Architectural Decisions</h3>
<ul>
<li><p><strong>Dynamic inventory</strong> eliminates static host files</p>
</li>
<li><p><strong>Tag-based grouping</strong> separates dev/stage/prod automatically</p>
</li>
<li><p><strong>Terraform-generated SSH keypair</strong> ensures secure access</p>
</li>
<li><p><strong>Public subnet detection via route tables</strong> prevents silent networking failures</p>
</li>
<li><p><strong>Preflight validation</strong> ensures infrastructure readiness before configuration</p>
</li>
</ul>
<h2 id="heading-project-requirements">🌟 Project Requirements</h2>
<ul>
<li><p>AWS account with permissions for EC2, VPC, IAM</p>
</li>
<li><p>Terraform ≥ 1.5</p>
</li>
<li><p>Ansible ≥ 2.15</p>
</li>
<li><p>AWS CLI configured (<code>aws configure</code>)</p>
</li>
<li><p>Bash shell</p>
</li>
<li><p>Git</p>
</li>
</ul>
<h2 id="heading-table-of-contents">📋 Table of Contents</h2>
<p>I - <strong>Terraform Configuration files</strong></p>
<p><a href="#heading-step-1-provider-configuration">Step 1: Provider Configuration</a></p>
<p><a href="#heading-step-2-variables-configuration">Step 2: Variables Configuration</a></p>
<p><a href="#heading-step-3-main-configuration">Step 3: Main Configuration</a></p>
<p><a href="#heading-step-4-iam-role-configuration">Step 4: IAM Role Configuration</a></p>
<p><a href="#heading-step-5-output-configuration">Step 5: Output Configuration</a></p>
<p><a href="#heading-step-6-user-data">Step 6: User-data</a></p>
<p>II - <strong>Ansible files</strong></p>
<p><a href="#heading-step-7-inventory-file">Step 7: Inventory file</a></p>
<p><a href="#heading-step-8-ansible-configuration">Step 8: Ansible configuration file</a></p>
<p><a href="#heading-step-9-playbook-file">Step 9: Playbook file</a></p>
<p><a href="#heading-step-10-requirements-file">Step 10: Requirements file</a></p>
<p><a href="#heading-step-11-handlers-file">Step 11: Handlers</a></p>
<p><a href="#heading-step-12-tasks-file">Step 12: Tasks</a></p>
<p><a href="#heading-step-13-template-files">Step 13: Template files</a></p>
<p>III - <strong>Scripting files</strong></p>
<p><a href="#heading-step-14-deploy-file">Step 14: Deploy file</a></p>
<p><a href="#heading-step-15-destroy-file">Step 15: Destroy file</a></p>
<p>IV - <strong>Instructions of Deployment</strong></p>
<p><a href="#heading-step-16-clone-repository">Step 16: Clone Repository</a></p>
<p><a href="#heading-step-17-run-deployment">Step 17: Run Deployment</a></p>
<p><a href="#heading-step-18-destroy-infrastructure">Step 18: Destroy Infrastructure</a></p>
<h2 id="heading-terraform-configuration-files">✨Terraform Configuration files</h2>
<p>You need to write several configuration files, each defining part of the resources.</p>
<h5 id="heading-step-1-provider-configuration">Step 1: <strong><em>Provider Configuration</em></strong></h5>
<p>Here we declare our cloud provider and specify the region where resources will be launched.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/Terra-Ansible-Dynamic/blob/main/terra-config/providers.tf">provider Configuration</a></li>
</ul>
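<p>As a sketch, a provider file for this kind of project might look like the following (the version pins and region variable are illustrative — the exact values are in the linked file):</p>
<pre><code class="lang-hcl"># providers.tf (sketch) — pin the AWS provider and set the region
terraform {
  required_version = ">= 1.5"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = var.aws_region # e.g. "us-east-1"
}
</code></pre>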
<h5 id="heading-step-2-variables-configuration">Step 2: <strong><em>Variables Configuration</em></strong></h5>
<p>This is where we declare all variables and their values. Variables are the elements that can vary or change; they let us reuse values throughout the code without repeating ourselves and help make the configuration dynamic.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/Terra-Ansible-Dynamic/blob/main/terra-config/variables.tf">variables Configuration</a></li>
</ul>
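<p>For illustration, such a variables file could look like this (the variable names and defaults are hypothetical — the real ones are in the linked file):</p>
<pre><code class="lang-hcl"># variables.tf (sketch) — values reused across the configuration
variable "aws_region" {
  description = "AWS region to deploy into"
  type        = string
  default     = "us-east-1"
}

variable "environments" {
  description = "Environments to provision"
  type        = list(string)
  default     = ["dev", "stage", "prod"]
}

variable "instance_type" {
  description = "EC2 instance type for the nodes"
  type        = string
  default     = "t3.micro"
}
</code></pre>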
<h5 id="heading-step-3-main-configuration">Step 3: <strong><em>Main Configuration</em></strong></h5>
<p>This is the foundation of the project, where the core resources that everything else runs on are defined. It includes the SSH key pair, the security groups, and the EC2 instances.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/Terra-Ansible-Dynamic/blob/main/terra-config/main.tf">Main Configuration</a></li>
</ul>
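<p>A simplified sketch of the kind of resources defined here — a Terraform-generated key pair and one tagged node per environment (resource names and values are illustrative, not the repository's exact code):</p>
<pre><code class="lang-hcl"># main.tf (excerpt, sketch) — key pair and one node per environment
resource "tls_private_key" "ssh" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "deployer" {
  key_name   = "terra-ansible-key"
  public_key = tls_private_key.ssh.public_key_openssh
}

resource "aws_instance" "node" {
  for_each                    = toset(var.environments)
  ami                         = data.aws_ami.ubuntu.id
  instance_type               = var.instance_type
  key_name                    = aws_key_pair.deployer.key_name
  associate_public_ip_address = true
  user_data                   = file("userdata-node.sh")

  tags = {
    Name        = "node-${each.key}"
    Environment = each.key # used later by the dynamic inventory
  }
}
</code></pre>
<p>The <code>Environment</code> tag is what lets Ansible group hosts into dev/stage/prod automatically without any static host file.</p>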
<h5 id="heading-step-4-iam-role-configuration">Step 4: <strong><em>IAM Role Configuration</em></strong></h5>
<p>We define temporary access for the controller via an IAM role. It is a simple way of allowing the controller to discover and reach the other instances without long-lived credentials.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/Terra-Ansible-Dynamic/blob/main/terra-config/iam-role.tf">IAM Role Configuration</a></li>
</ul>
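<p>As an illustration, such a role typically has an EC2 trust policy, a read-only EC2 policy (so the dynamic inventory can describe instances), and an instance profile that attaches it to the controller — all names below are hypothetical:</p>
<pre><code class="lang-hcl"># iam-role.tf (sketch) — let the controller query EC2 for inventory
resource "aws_iam_role" "controller" {
  name = "controller-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "ec2_read" {
  role       = aws_iam_role.controller.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess"
}

resource "aws_iam_instance_profile" "controller" {
  name = "controller-profile"
  role = aws_iam_role.controller.name
}
</code></pre>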
<h5 id="heading-step-5-output-configuration">Step 5: <strong><em>Output Configuration</em></strong></h5>
<p>Also known as output values, outputs are a convenient way to print useful information about your infrastructure on the CLI. Here they show the public IPs of all the instances (controller and nodes).</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/Terra-Ansible-Dynamic/blob/main/terra-config/outputs.tf">Output Configuration</a></li>
</ul>
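<p>A sketch of what such outputs can look like (resource names are illustrative):</p>
<pre><code class="lang-hcl"># outputs.tf (sketch) — print the public IPs after apply
output "controller_public_ip" {
  description = "Public IP of the Ansible controller"
  value       = aws_instance.controller.public_ip
}

output "node_public_ips" {
  description = "Public IPs of the application nodes, keyed by environment"
  value       = { for k, n in aws_instance.node : k => n.public_ip }
}
</code></pre>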
<h5 id="heading-step-6-user-data">Step 6: <strong><em>User-data</em></strong></h5>
<p>User data is <strong><mark>a script or custom data that you provide to an EC2 instance to be executed automatically on its first launch</mark></strong>. It's used to automate the initial setup of an instance, such as installing software, configuring settings, or running commands needed to make the instance ready for use. In our case we have two user-data scripts: one for the controller, where Ansible and all its dependencies are installed, and one for the nodes, where Python 3 is installed.</p>
<ul>
<li><p><a target="_blank" href="https://github.com/Joebaho/Terra-Ansible-Dynamic/blob/main/terra-config/userdata-controller.sh">Userdata-controller</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/Joebaho/Terra-Ansible-Dynamic/blob/main/terra-config/userdata-node.sh">Userdata-node</a></p>
</li>
</ul>
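<p>As a sketch, the controller's script might look like this on Ubuntu (package names are the usual Ubuntu ones — check the linked files for the exact commands used in the project):</p>
<pre><code class="lang-bash">#!/bin/bash
# userdata-controller.sh (sketch) — runs once at first boot of the controller
apt-get update -y
# boto3 is required by Ansible's aws_ec2 dynamic inventory plugin
apt-get install -y ansible python3-boto3
</code></pre>
<p>The node script is even smaller: an <code>apt-get update</code> followed by <code>apt-get install -y python3</code>, since Ansible modules only need a Python interpreter on the managed hosts.</p>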
<h2 id="heading-ansible-files">✨Ansible files</h2>
<p>You need to write the Ansible files that configure the provisioned instances.</p>
<h5 id="heading-step-7-inventory-file">Step 7 : <strong><em>Inventory file</em></strong></h5>
<p>The Ansible AWS EC2 inventory plugin dynamically discovers EC2 instances directly from AWS using API calls instead of static host files. Instances are grouped automatically based on tags, regions, or environments, ensuring inventories stay accurate as infrastructure scales or changes.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/Terra-Ansible-Dynamic/blob/main/ansible/inventory/aws_ec2.yml">Inventory : AWS plugin</a></li>
</ul>
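<p>A minimal sketch of such an inventory file — the region and tag key are illustrative, the real file is linked above:</p>
<pre><code class="lang-yaml"># inventory/aws_ec2.yml (sketch) — discover hosts from the AWS API
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1
filters:
  instance-state-name: running
keyed_groups:
  # builds groups like env_dev / env_stage / env_prod from the Environment tag
  - key: tags.Environment
    prefix: env
hostnames:
  - public-ip-address
</code></pre>
<p>Run <code>ansible-inventory --graph</code> to verify the discovered groups before running any playbook.</p>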
<h5 id="heading-step-8-ansible-configuration">Step 8: <strong><em>Ansible Configuration</em></strong></h5>
<p>The <code>ansible.cfg</code> file defines global Ansible behavior such as inventory location, SSH settings, privilege escalation, and retry behavior. It standardizes execution across environments and prevents inconsistencies caused by user-specific defaults.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/Terra-Ansible-Dynamic/blob/main/ansible/ansible.cfg">ansible.cfg</a></li>
</ul>
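<p>A typical configuration for this setup might look like the following sketch (the key path and user are illustrative):</p>
<pre><code class="lang-ini"># ansible.cfg (sketch)
[defaults]
inventory           = inventory/aws_ec2.yml
remote_user         = ubuntu
private_key_file    = ~/.ssh/terra-ansible-key.pem
host_key_checking   = False
retry_files_enabled = False

[privilege_escalation]
become = True
</code></pre>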
<h5 id="heading-step-9-playbook-file">Step 9: <strong><em>Playbook file</em></strong></h5>
<p>An Ansible playbook is a declarative YAML file that defines which tasks run on which hosts and in what order. It orchestrates configuration, package installation, and service management in a repeatable and idempotent manner.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/Terra-Ansible-Dynamic/blob/main/ansible/playbook.yml">playbook.yml</a></li>
</ul>
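<p>A sketch of such a playbook, targeting the dynamically discovered groups and applying the webserver role (the group names assume tag-based grouping and are illustrative):</p>
<pre><code class="lang-yaml"># playbook.yml (sketch)
- name: Configure all application nodes
  hosts: env_dev:env_stage:env_prod
  become: true
  roles:
    - webserver
</code></pre>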
<h5 id="heading-step-10-requirements-file">Step 10: <strong><em>Requirements file</em></strong></h5>
<p>The <code>requirements.yml</code> file defines external Ansible roles or collections required by the project. This ensures consistent dependencies across environments and allows easy setup using a single installation command.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/Terra-Ansible-Dynamic/blob/main/ansible/requirements.yml">requirements.yml</a></li>
</ul>
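<p>For example, a requirements file declaring the AWS collection might look like this (the version pin is illustrative):</p>
<pre><code class="lang-yaml"># requirements.yml (sketch)
collections:
  - name: amazon.aws
    version: ">=6.0.0"
</code></pre>
<p>Everything is then installed with the single command <code>ansible-galaxy collection install -r requirements.yml</code>.</p>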
<h5 id="heading-step-11-handlers-file">Step 11: <strong><em>Handlers file</em></strong></h5>
<p>Handlers are special Ansible tasks that run only when triggered by a change in another task. They are commonly used for operations like restarting services, ensuring actions occur only when necessary and avoiding unnecessary disruptions.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/Terra-Ansible-Dynamic/blob/main/ansible/roles/webserver/handlers/main.yml">main.yml</a></li>
</ul>
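<p>A sketch of a typical handler for this role (the service name assumes NGINX is the web server):</p>
<pre><code class="lang-yaml"># roles/webserver/handlers/main.yml (sketch)
- name: restart nginx
  ansible.builtin.service:
    name: nginx
    state: restarted
</code></pre>
<p>Any task that declares <code>notify: restart nginx</code> triggers this handler — but only when that task actually reports a change.</p>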
<h5 id="heading-step-12-tasks-file">Step 12: <strong><em>Tasks file</em></strong></h5>
<p>The tasks file lists the actions the webserver role performs on each node: installing the web server, deploying the environment-specific template, and ensuring the service is running.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/Terra-Ansible-Dynamic/blob/main/ansible/roles/webserver/tasks/main.yml">main.yml</a></li>
</ul>
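<p>As a sketch, the role's tasks might install the web server and render the environment-specific page (the <code>env_name</code> variable is hypothetical):</p>
<pre><code class="lang-yaml"># roles/webserver/tasks/main.yml (sketch)
- name: Install NGINX
  ansible.builtin.apt:
    name: nginx
    state: present
    update_cache: true

- name: Deploy environment-specific index page
  ansible.builtin.template:
    src: "{{ env_name }}.html"
    dest: /var/www/html/index.html
  notify: restart nginx

- name: Ensure NGINX is running
  ansible.builtin.service:
    name: nginx
    state: started
    enabled: true
</code></pre>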
<h5 id="heading-step-13-template-files">Step 13: <strong><em>Template files</em></strong></h5>
<p>The <code>index.html</code> template is a Jinja2-based HTML file dynamically rendered by Ansible during deployment. It allows environment-specific values (such as hostnames or environment names) to be injected into the web content automatically. The role deploys one environment-specific <code>index.html</code> page per environment:</p>
<ul>
<li><p><a target="_blank" href="https://github.com/Joebaho/Terra-Ansible-Dynamic/blob/main/ansible/roles/webserver/templates/prod.html">prod.html</a> → Portfolio UI with achievements, skills, certifications, socials, and image.</p>
</li>
<li><p><a target="_blank" href="https://github.com/Joebaho/Terra-Ansible-Dynamic/blob/main/ansible/roles/webserver/templates/dev.html">dev.html</a> → Minimal UI with a developer theme.</p>
</li>
<li><p><a target="_blank" href="https://github.com/Joebaho/Terra-Ansible-Dynamic/blob/main/ansible/roles/webserver/templates/stage.html">stage.html</a> → Modern preview UI.</p>
</li>
</ul>
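<p>As an illustration, such a template can interpolate Ansible variables and host facts directly into the page (<code>env_name</code> is a hypothetical variable; the facts shown are standard Ansible facts):</p>
<pre><code class="lang-html">&lt;!-- templates/dev.html (sketch) — rendered by Ansible's template module --&gt;
&lt;html&gt;
  &lt;body&gt;
    &lt;h1&gt;Environment: {{ env_name }}&lt;/h1&gt;
    &lt;p&gt;Served by {{ ansible_hostname }} ({{ ansible_default_ipv4.address }})&lt;/p&gt;
  &lt;/body&gt;
&lt;/html&gt;
</code></pre>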
<h2 id="heading-scripting-files">✨ Scripting files</h2>
<p>You need to write bash scripts that create and destroy the resources. Each script runs the full process step by step: the Terraform commands followed by the Ansible management tasks.</p>
<h5 id="heading-step-14-deploy-file">Step 14: <strong><em>Deploy file</em></strong></h5>
<p>Here we initialize the folder, validate the configuration, plan the resources to be created, and apply the plan that launches them. We then SSH into the controller and continue with the deployment of the index webpage to all six instances across the three environments.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/Terra-Ansible-Dynamic/blob/main/deploy.sh">Deploy file</a></li>
</ul>
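<p>A condensed sketch of what such a deploy script does — the real script is linked above; the directory names, output name, and SSH details here are illustrative:</p>
<pre><code class="lang-bash">#!/bin/bash
# deploy.sh (sketch) — provision with Terraform, then configure with Ansible
set -euo pipefail

cd terra-config
terraform init
terraform fmt -check
terraform validate
terraform plan -out=tfplan
terraform apply tfplan

# Preflight: wait until the controller accepts SSH before handing over
CONTROLLER_IP=$(terraform output -raw controller_public_ip)
until ssh -o StrictHostKeyChecking=no ubuntu@"$CONTROLLER_IP" true; do
  sleep 5
done

# Run the playbook from the controller against the dynamic inventory
ssh ubuntu@"$CONTROLLER_IP" "cd ansible &amp;&amp; ansible-playbook playbook.yml"
</code></pre>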
<h5 id="heading-step-15-destroy-file">Step 15: <strong><em>Destroy file</em></strong></h5>
<p>This script runs the Terraform commands that destroy all the resources created, along with their dependencies and the Ansible-managed configuration.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/Terra-Ansible-Dynamic/blob/main/destroy.sh">Destroy file</a></li>
</ul>
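<p>A sketch of the teardown script (illustrative — the real script is linked above):</p>
<pre><code class="lang-bash">#!/bin/bash
# destroy.sh (sketch) — tear everything down
set -euo pipefail

cd terra-config
terraform destroy -auto-approve

# Optional: clean up local artifacts left behind by the deploy
rm -f tfplan
</code></pre>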
<h2 id="heading-instructions-of-deployment">💼 Instructions of Deployment</h2>
<p>Follow these steps to deploy the architecture:</p>
<h5 id="heading-step-16-clone-repository">Step 16: <strong><em>Clone Repository:</em></strong></h5>
<p>Clone the repository to your local machine using <code>git clone</code>, then enter the folder:</p>
<blockquote>
<p>git clone <a target="_blank" href="https://github.com/Joebaho/Terra-Ansible-Dynamic.git">https://github.com/Joebaho/Terra-Ansible-Dynamic.git</a></p>
<p>cd Terra-Ansible-Dynamic</p>
</blockquote>
<h5 id="heading-step-17-run-deployment">Step 17: <strong><em>Run Deployment</em></strong></h5>
<p>For a one-time deployment of the infrastructure and configuration of the application, run the two commands below. They launch the whole process: Terraform provisions the resources, and Ansible applies the configuration and manages the webpages in each environment.</p>
<blockquote>
<p>chmod +x deploy.sh</p>
<p>./deploy.sh</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770413205134/b8c199e4-d2e4-4415-a619-bd8347cfcc6b.png" alt class="image--center mx-auto" /></p>
<p>The process executes each Terraform command and displays the following steps. You should see output like this:</p>
<p><code>terraform init</code> initializes the folder; <code>terraform fmt</code> and <code>terraform validate</code> check for syntax errors and validate the configuration files.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770414884872/1bca9879-a01a-4c4e-b851-39322dec51f9.png" alt class="image--center mx-auto" /></p>
<p><code>terraform apply</code> confirms all resources created, and <code>terraform output</code> shows the output information for the user.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770414964835/9afab2fe-8e80-4c33-a468-77f48e3a4f26.png" alt class="image--center mx-auto" /></p>
<p>Once the infrastructure is deployed, Ansible takes over: first, the AWS EC2 plugin collects the public IPs of all the instances.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770415178899/e2ad670e-6e8c-4195-8787-92414e2ce7e0.png" alt class="image--center mx-auto" /></p>
<p>Then Ansible runs the playbook:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770415224726/55081b70-b7a8-4575-975c-a237ce1b3c41.png" alt class="image--center mx-auto" /></p>
<p>The play continues through each stage or environment:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770415305805/18e3fb35-a642-4274-afa2-2976536c14fe.png" alt class="image--center mx-auto" /></p>
<p>When the playbook completes, you will see the confirmation:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770415357702/dfd50937-f24f-46dc-b52c-cce1e41c2191.png" alt class="image--center mx-auto" /></p>
<p>Going back to the console, we can see all the instances created:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770415434414/4ddcb137-1bf4-4966-9b49-fdcb892e4585.png" alt class="image--center mx-auto" /></p>
<p>Pick any public IP and paste it into a new browser window, and the web page will display. Since we have three environments, we get three webpages.</p>
<p>Prod webpage</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770415616716/b5f1be28-b636-414b-9fa1-3960427e1d30.png" alt class="image--center mx-auto" /></p>
<p>Dev Webpage</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770415649322/7c3ee5d4-f867-49ed-8855-43ce412daae9.png" alt class="image--center mx-auto" /></p>
<p>Stage webpage</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770415665928/cd2edc32-dad8-4c29-a488-676578e13254.png" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-18-destroy-infrastructure">Step 18: <strong><em>Destroy infrastructure</em></strong></h5>
<p>When you are done with the infrastructure and ready to tear it down, run the commands below. First make the file executable, then execute it.</p>
<blockquote>
<p>chmod +x destroy.sh</p>
<p>./destroy.sh</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770413434851/477aafce-2aed-401f-82eb-13479a4569bf.png" alt class="image--center mx-auto" /></p>
<p>After running the command, the process destroys all the infrastructure and configuration. You will get the image below as confirmation:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770414745046/e600a881-fb93-47d6-9d28-30c77b9cff3b.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-learning-outcomes"><strong>📌 Learning Outcomes</strong></h2>
<p>Through this project, you will learn how to:</p>
<ul>
<li><p>Build <strong>production-ready Terraform modules</strong></p>
</li>
<li><p>Avoid common AWS networking pitfalls (public vs private subnets)</p>
</li>
<li><p>Use <strong>Ansible dynamic inventory with AWS</strong></p>
</li>
<li><p>Eliminate static inventory files</p>
</li>
<li><p>Secure SSH access using Terraform-managed keys</p>
</li>
<li><p>Debug real-world infrastructure issues (timeouts, subnet routing)</p>
</li>
<li><p>Structure deploy/destroy workflows professionally</p>
</li>
<li><p>Create <strong>reusable DevOps portfolio projects</strong></p>
</li>
</ul>
<hr />
<h2 id="heading-resources"><strong>🔗 Resources</strong></h2>
<ul>
<li><p>Terraform AWS Provider Documentation<br /><a target="_blank" href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs">https://registry.terraform.io/providers/hashicorp/aws/latest/docs</a></p>
</li>
<li><p>Ansible Dynamic Inventory (AWS EC2)<br /><a target="_blank" href="https://docs.ansible.com/ansible/latest/plugins/inventory/aws_ec2.html">https://docs.ansible.com/ansible/latest/plugins/inventory/aws_ec2.html</a></p>
</li>
<li><p>AWS VPC Routing Concepts<br /><a target="_blank" href="https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html">https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html</a></p>
</li>
<li><p>Infrastructure as Code Best Practices<br /><a target="_blank" href="https://www.hashicorp.com/resources/infrastructure-as-code">https://www.hashicorp.com/resources/infrastructure-as-code</a></p>
</li>
</ul>
<h2 id="heading-contributinghttpsregistryterraformioprovidershashicorpawslatestdocs">🤝 Contributing</h2>
<p>Your perspective is valuable! Whether you see potential for improvement or appreciate what's already here, your contributions are welcomed and appreciated. Thank you for considering joining us in making this project even better. Feel free to follow me for updates on this project and others, and to explore opportunities for collaboration. Together, we can create something amazing!</p>
<p>Contributions are welcome 🚀</p>
<p>If you’d like to improve this project:</p>
<ol>
<li><p>Fork the repository</p>
</li>
<li><p>Create a feature branch</p>
</li>
<li><p>Submit a pull request</p>
</li>
</ol>
<p>Ideas for contributions:</p>
<ul>
<li><p>Add private subnet + NAT architecture</p>
</li>
<li><p>Introduce Ansible roles</p>
</li>
<li><p>Add CI/CD validation</p>
</li>
<li><p>Extend to multi-region deployments</p>
</li>
</ul>
<h2 id="heading-license">📄 License</h2>
<p>This project is licensed under the JoebahoCloud License.</p>
]]></content:encoded></item><item><title><![CDATA[Web Application Deployment on AWS Using Terraform, NGINX and Bash Scripting]]></title><description><![CDATA[Web Application Deployment on AWS Using Terraform, NGINX and Bash Scripting
🚀 Overview:
The building of the infrastructure and deploy of a web page on AWS using Terraform aims to create a scalable and resilient infrastructure that leverages the powe...]]></description><link>https://platform.joebahocloud.com/web-application-deployment-on-aws-using-terraform-nginx-and-bash-scripting</link><guid isPermaLink="true">https://platform.joebahocloud.com/web-application-deployment-on-aws-using-terraform-nginx-and-bash-scripting</guid><category><![CDATA[Web Application Deployment on AWS Using Terraform, NGINX and Bash Scripting]]></category><dc:creator><![CDATA[Joseph Mbatchou]]></dc:creator><pubDate>Thu, 27 Nov 2025 04:08:27 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1764216354491/7b0a6ef9-39f0-476f-84e6-bcea32795a74.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-web-application-deployment-on-aws-using-terraform-nginx-and-bash-scripting">Web Application Deployment on AWS Using Terraform, NGINX and Bash Scripting</h1>
<h2 id="heading-overview">🚀 Overview:</h2>
<p>This project builds infrastructure and deploys a web page on AWS using Terraform, aiming for a scalable and resilient setup that leverages the Amazon Web Services (AWS) cloud platform. It uses Terraform, an Infrastructure as Code (IaC) tool, to provision and manage the infrastructure components, enabling automation, repeatability, and scalability. The project demonstrates how to <strong>provision, deploy, and destroy infrastructure on AWS using Terraform, and how to automate the deployment with bash scripting</strong>. A <strong>deployment script (<code>deploy.sh</code>)</strong> and a <strong>destroy script (<code>destroy.sh</code>)</strong> are used for the automation. It's a hands-on DevOps project showing Infrastructure as Code (IaC) and automation.</p>
<h2 id="heading-problem-statement">🔧 Problem Statement</h2>
<p>Terraform is an IaC software tool that provides a consistent command line interface (CLI) workflow to manage hundreds of cloud services, codifying cloud APIs into declarative configuration files. In this case we need to create the foundation networking (VPC, subnets, route tables, IGW, NAT Gateway, and so on). Terraform uses the configuration files to provision the infrastructure the application runs on, creating all the required AWS elements without touching the console. This automates the setup, ensures consistency, and reduces human error. The bash scripts, on the other hand, facilitate the rollout by automating the whole process.</p>
<h2 id="heading-techonology-stack">💽 Technology Stack</h2>
<p>The architecture consists of the following AWS components:</p>
<ul>
<li><p><strong>AWS VPC</strong></p>
</li>
<li><p><strong>Subnets</strong></p>
</li>
<li><p><strong>Route table</strong></p>
</li>
<li><p><strong>NACL</strong></p>
</li>
<li><p><strong>Internet Gateway (IGW)</strong></p>
</li>
<li><p><strong>Elastic Load Balancer (ELB)</strong></p>
</li>
</ul>
<h2 id="heading-architecture-diagram">📌 Architecture Diagram</h2>
<pre>
┌──────────────────────────┐
│ Developers               │
│ Push Code to GitHub      │
└─────────────┬────────────┘
              ▼
┌──────────────────────────┐
│ Terraform Configuration  │
│ - main.tf                │
│ - providers.tf           │
│ - variables.tf           │
└─────────────┬────────────┘
              ▼
┌──────────────────────────┐
│ Bash Scripting           │
│ - deploy.sh              │
│ - destroy.sh             │
└─────────────┬────────────┘
              ▼
┌──────────────────────────┐
│ NGINX                    │
│ - Install and configure  │
│ - Copy index.html        │
└─────────────┬────────────┘
              ▼
┌──────────────────────────┐
│ Elastic Load Balancer    │
│ - Distributes traffic    │
│ - Performs health checks │
└─────────────┬────────────┘
              ▼
┌──────────────────────────┐
│ End Users                │
│ Access via ELB DNS name  │
└──────────────────────────┘
</pre>
<h2 id="heading-project-requirements">🌟 Project Requirements</h2>
<p>Before you get started, make sure you have the following prerequisites in place:</p>
<ul>
<li><p><a target="_blank" href="https://www.terraform.io/">Terraform</a> installed on your local machine.</p>
</li>
<li><p>AWS IAM credentials configured on your machine. In this case we will use VS Code as the editor.</p>
</li>
<li><p>Git installed on your local machine, for cloning the repository, and a <a target="_blank" href="https://www.github.com/">GitHub</a> account set up.</p>
</li>
</ul>
<p>You should also be familiar with the Terraform workflow:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715055468795/5726d060-0df2-404e-a0b1-3ce1889018ac.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-table-of-contents">📋 Table of Contents</h2>
<p>I - <strong>Terraform Configuration files</strong></p>
<p><a href="#heading-step-1-provider-configuration">Step 1: Provider Configuration</a></p>
<p><a href="#heading-step-2-variables-configuration">Step 2: Variables Configuration</a></p>
<p><a href="#heading-step-3-main-configuration">Step 3: Main Configuration</a></p>
<p><a href="#heading-step-4-output-configuration">Step 4: Output Configuration</a></p>
<p><a href="#heading-step-5-user-data">Step 5: User-data</a></p>
<p>II - <strong>Scripting files</strong></p>
<p><a href="#heading-step-6-deploy-file">Step 6: Deploy file</a></p>
<p><a href="#heading-step-7-destroy-file">Step 7: Destroy file</a></p>
<p>III - <strong>Instructions of Deployment</strong></p>
<p><a href="#heading-step-8-clone-repository">Step 8: Clone Repository</a></p>
<p><a href="#heading-step-9-run-deployment">Step 9: Run Deployment</a></p>
<p><a href="#heading-step-10-destroy-infrastructure">Step 10: Destroy Infrastructure</a></p>
<h2 id="heading-terraform-configuration-files">✨Terraform Configuration files</h2>
<p>You need to write several configuration files that define the resources.</p>
<h5 id="heading-step-1-provider-configuration">Step 1: <strong><em>Provider Configuration</em></strong></h5>
<p>Here we declare our cloud provider and specify the region where resources will be launched.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/Flipkart-Deploy/blob/main/providers.tf">provider Configuration</a></li>
</ul>
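<p>As a sketch of what the linked file contains (the version pin and region variable here are assumptions, not the repo's exact contents), a minimal <code>providers.tf</code> could look like this:</p>

```hcl
# providers.tf: sketch; pins the AWS provider and sets the region.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = var.aws_region # e.g. "us-west-2"
}
```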
<h5 id="heading-step-2-variables-configuration">Step 2: <strong><em>Variables Configuration</em></strong></h5>
<p>This is where we declare all variables and their values. It includes:</p>
<ul>
<li><p><strong>Variables</strong>: elements whose values can vary or change. They let us reuse values throughout the code without repeating ourselves and help make the code dynamic.</p>
</li>
<li><p><strong>Values</strong>: the values assigned to each variable.</p>
</li>
</ul>
<p>We have</p>
<ul>
<li><p><a target="_blank" href="https://github.com/Joebaho/Flipkart-Deploy/blob/main/variables.tf">variables Configuration</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/Joebaho/Flipkart-Deploy/blob/main/terraform.tfvars">value Configuration</a></p>
</li>
</ul>
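<p>For illustration (the variable names and defaults below are assumptions, not the repo's exact contents), the pair of files typically looks like this:</p>

```hcl
# variables.tf: declares the variables with types and defaults.
variable "aws_region" {
  description = "Region to deploy into"
  type        = string
  default     = "us-west-2"
}

variable "instance_type" {
  description = "EC2 instance type for the web servers"
  type        = string
  default     = "t2.micro"
}

# terraform.tfvars: assigns concrete values, overriding the defaults:
# aws_region    = "us-west-2"
# instance_type = "t2.micro"
```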
<h5 id="heading-step-3-main-configuration">Step 3: <strong><em>Main Configuration</em></strong></h5>
<p>This is where you build the foundation and networking layer where all resources will be launched. It includes the VPC, subnets, IGW, route tables, security groups, EC2 instances, target groups, and the Elastic Load Balancer.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/Flipkart-Deploy/blob/main/main.tf">Main Configuration</a></li>
</ul>
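<p>An abbreviated sketch of the kind of resources the main file declares (the names, CIDRs, and variables here are illustrative assumptions):</p>

```hcl
# main.tf: abbreviated sketch of the networking and compute layer.
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
}

resource "aws_instance" "web" {
  ami           = var.ami_id           # assumed variable
  instance_type = var.instance_type
  subnet_id     = aws_subnet.public.id
  user_data     = file("user-data.sh") # runs at first boot
}
```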
<h5 id="heading-step-4-output-configuration">Step 4: <strong><em>Output Configuration</em></strong></h5>
<p>Also known as output values: a convenient way to get useful information about your infrastructure printed on the CLI. Here it shows the DNS name of the load balancer and the public IPs of the instances.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/Flipkart-Deploy/blob/main/outputs.tf">Output Configuration</a></li>
</ul>
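<p>A sketch of what such outputs look like (the resource names <code>aws_lb.web</code> and <code>aws_instance.web</code> are assumptions):</p>

```hcl
# outputs.tf: prints the ELB DNS name and instance public IPs.
output "elb_dns_name" {
  description = "Paste this into a browser to reach the site"
  value       = aws_lb.web.dns_name
}

output "instance_public_ips" {
  value = aws_instance.web[*].public_ip # assumes the instances use count
}
```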
<h5 id="heading-step-5-user-data">Step 5: <strong><em>User-data</em></strong></h5>
<p>User data is <strong><mark>a script or custom data that you provide to an EC2 instance to be executed automatically on its first launch</mark></strong>. It's used to automate the initial setup of an instance, such as installing software, configuring settings, or running commands needed to make the instance ready for use. In our case, the user data will update the server, install Nginx, and copy the index.html file to the corresponding folder.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/Flipkart-Deploy/blob/main/user-data.sh">User-data</a></li>
</ul>
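<p>A minimal user-data script along these lines (the package manager and the <code>/tmp/index.html</code> path are assumptions for a yum-based AMI); the sketch below only writes the file and syntax-checks it locally, since the real script runs on the instance at boot:</p>

```shell
# Write a sketch of user-data.sh and verify it parses.
cat > user-data.sh <<'EOF'
#!/bin/bash
yum update -y                                        # refresh packages
yum install -y nginx                                 # install the web server
cp /tmp/index.html /usr/share/nginx/html/index.html  # deploy the page (assumed path)
systemctl enable --now nginx                         # start now and on boot
EOF
bash -n user-data.sh && echo "user-data.sh syntax OK"
```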
<h2 id="heading-bash-scripting-files">✨Bash Scripting files</h2>
<p>You need to write bash script files for creating and destroying the resources. Each file contains, step by step, all of the Terraform commands needed to execute the process.</p>
<h5 id="heading-step-6-deploy-file">Step 6: <strong><em>Deploy file</em></strong></h5>
<p>Here we script the initialization of the folder, the validation of the configuration, the plan of the resources to be created, and the apply step that launches them.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/Flipkart-Deploy/blob/main/deploy.sh">Deploy file</a></li>
</ul>
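<p>In spirit, a deploy script simply chains the Terraform workflow commands. The sketch below (an assumption, not the repo's exact script) writes the file and verifies it parses, since actually running it requires AWS credentials:</p>

```shell
# Create a sketch of deploy.sh and verify it parses.
cat > deploy.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail        # abort on the first failing command
terraform init           # initialize the working directory
terraform fmt -recursive # normalize formatting
terraform validate       # check the configuration is valid
terraform plan -out=tfplan
terraform apply tfplan   # apply the saved plan without re-prompting
terraform output         # print the ELB DNS name and instance IPs
EOF
chmod +x deploy.sh
bash -n deploy.sh && echo "deploy.sh syntax OK"
```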
<h5 id="heading-step-7-destroy-file">Step 7: <strong><em>Destroy file</em></strong></h5>
<p>This is where we declare the terraform command to destroy all the resources created.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/Flipkart-Deploy/blob/main/destroy.sh">Destroy file</a></li>
</ul>
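<p>The destroy script is the mirror image; again a sketch that is written and syntax-checked rather than executed:</p>

```shell
# Create a sketch of destroy.sh and verify it parses.
cat > destroy.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
terraform destroy -auto-approve # tear down all managed resources without prompting
EOF
chmod +x destroy.sh
bash -n destroy.sh && echo "destroy.sh syntax OK"
```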
<h2 id="heading-instructions-of-deployment">💼 Deployment Instructions</h2>
<p>Follow these steps to deploy the architecture:</p>
<h5 id="heading-step-8-clone-repository">Step 8: <strong><em>Clone Repository:</em></strong></h5>
<p>Clone the repository onto your local machine using the command "git clone", then enter the folder:</p>
<blockquote>
<p>git clone <a target="_blank" href="https://github.com/Joebaho/Flipkart-Deploy.git">https://github.com/Joebaho/Flipkart-Deploy.git</a></p>
<p>cd Flipkart-Deploy</p>
</blockquote>
<h5 id="heading-step-9-run-deployment">Step 9: <strong><em>Run Deployment</em></strong></h5>
<p>Make the script executable and run it; it initializes Terraform in the cloned folder and applies the configuration:</p>
<blockquote>
<p>chmod +x deploy.sh</p>
<p>./deploy.sh</p>
</blockquote>
<p>The script executes each Terraform command in turn. You should see output like the images below.</p>
<p>Terraform init: initializes the folder</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1764212187396/05b6ef33-73f7-4bd0-8fb1-3b70153d8e4e.png" alt class="image--center mx-auto" /></p>
<p>Terraform fmt &amp; validate: checks for syntax errors and validates the configuration files</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1764212232658/ff0bd9af-c0d8-4d66-9141-746ecd17a100.png" alt class="image--center mx-auto" /></p>
<p>Terraform plan: confirms the number of resources that will be created</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1764212266934/75887368-fcfe-4913-a606-1f67c5d343c8.png" alt class="image--center mx-auto" /></p>
<p>Terraform apply: creates the resources and confirms completion</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1764212291088/342c26f7-6c10-40be-b1d8-2cba159a51e7.png" alt class="image--center mx-auto" /></p>
<p>Terraform output: shows the output information to be used by the user.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1764212338167/cbf75280-a22b-49d7-b632-034c3b2dcc5f.png" alt class="image--center mx-auto" /></p>
<p>Copy the ELB DNS name from the output section, paste it into a new browser window, and the web page will be displayed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1764212445028/0e37e738-648b-4d11-99b9-ffc1c27d7359.png" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-10-destroy-infrastructure">Step 10: <strong><em>Destroy infrastructure</em></strong></h5>
<p>When you are done with the infrastructure and ready to tear it down, run the following commands: first make the file executable, then execute it.</p>
<blockquote>
<p>chmod +x destroy.sh</p>
<p>./destroy.sh</p>
</blockquote>
<p>After running the command you will see output like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1764212705679/f6070d34-e952-4080-8774-3e7cdcd3f73b.png" alt class="image--center mx-auto" /></p>
<p>You will need to confirm the destruction of the deployment. Then all 16 resources created earlier will be destroyed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1764212781108/6ada8696-a786-496d-ae16-3fa6a6a223f2.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-learning-outcomes"><strong>📌 Learning Outcomes</strong></h2>
<ul>
<li><p>Understand <strong>Terraform basics</strong> (providers, resources, state management)</p>
</li>
<li><p>Automate deployments with <strong>Shell scripting</strong></p>
</li>
<li><p>Hands-on AWS infrastructure provisioning</p>
</li>
</ul>
<hr />
<h2 id="heading-resources"><strong>🔗 Resources</strong></h2>
<ul>
<li><p><a target="_blank" href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs">Terraform AWS Provider Docs</a></p>
</li>
<li><p><a target="_blank" href="https://developer.hashicorp.com/terraform/cli">Terraform CLI Docs</a></p>
</li>
</ul>
<h2 id="heading-contributing">🤝 Contributing</h2>
<p>Your perspective is valuable! Whether you see potential for improvement or appreciate what's already here, your contributions are welcomed and appreciated. Thank you for considering joining us in making this project even better. Feel free to follow me for updates on this project and others, and to explore opportunities for collaboration. Together, we can create something amazing!</p>
<h2 id="heading-license">📄 License</h2>
<p>This project is licensed under the JoebahoCloud License</p>
]]></content:encoded></item><item><title><![CDATA[Peering and testing  Connections of two separated VPCS on AWS using Terraform]]></title><description><![CDATA[Peering and testing Connections of two separated VPCS on AWS using Terraform
🚀 Overview:
VPC peering across two AWS regions is a network connection that allows Virtual Private Clouds (VPCs) in different geographic locations to communicate securely a...]]></description><link>https://platform.joebahocloud.com/peering-and-testing-connections-of-two-separated-vpcs-on-aws-using-terraform</link><guid isPermaLink="true">https://platform.joebahocloud.com/peering-and-testing-connections-of-two-separated-vpcs-on-aws-using-terraform</guid><dc:creator><![CDATA[Joseph Mbatchou]]></dc:creator><pubDate>Mon, 30 Dec 2024 06:28:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1735463501842/2b6e5324-3638-42e0-a562-cb08bea3c5a7.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-peering-and-testing-connections-of-two-separated-vpcs-on-aws-using-terraform"><strong>Peering and testing Connections of two separated VPCS on AWS using Terraform</strong></h1>
<h2 id="heading-overview">🚀 Overview:</h2>
<p>VPC peering across two AWS regions is a network connection that allows Virtual Private Clouds (VPCs) in different geographic locations to communicate securely and directly with each other, enabling seamless data transfer and resource access while maintaining the isolation and security of each VPC. This inter-region connectivity facilitates distributed application architectures, disaster recovery setups, and data replication scenarios, enhancing the versatility and global reach of AWS infrastructure for businesses and organizations. This project uses Terraform, an Infrastructure as Code (IaC) tool, to provision and manage the infrastructure components, enabling automation, repeatability, and scalability. The primary objective is to design and deploy two Virtual Private Clouds on AWS in two separate regions, “us-west-2” and “us-east-1”, create a VPC peering connection to link them, and finally launch an EC2 instance in each private subnet and test connectivity between them.</p>
<h2 id="heading-problem-statement">🔧 Problem Statement</h2>
<p>Terraform is an IaC software tool that provides a consistent command line interface (CLI) workflow to manage hundreds of cloud services. Terraform codifies cloud APIs into declarative configuration files. In this specific case you need to create VPC peering across two AWS regions: a network connection that allows Virtual Private Clouds (VPCs) in different geographic locations to communicate securely and directly with each other, enabling seamless data transfer and resource access while maintaining the isolation and security of each VPC. This inter-region connectivity facilitates distributed application architectures, disaster recovery setups, and data replication scenarios, enhancing the versatility and global reach of AWS infrastructure for businesses and organizations. Terraform will provision all of the required AWS elements, sparing us the console, and will automate the setup, ensuring consistency and reducing human error.</p>
<h2 id="heading-techonology-stack">💽 Technology Stack</h2>
<p>The architecture consists of the following services tiers:</p>
<ul>
<li><p><strong>VPC</strong>: AWS VPC</p>
</li>
<li><p><strong>Subnets</strong>: AWS Subnets</p>
</li>
<li><p><strong>Route table</strong>: AWS route table</p>
</li>
<li><p><strong>NACL</strong>: AWS NACL</p>
</li>
<li><p><strong>Internet Gateway</strong>: AWS IGW</p>
</li>
<li><p><strong>NatGateway</strong> : AWS NATGATEWAY</p>
</li>
<li><p><strong>SSM Role:</strong> AWS IAM</p>
</li>
<li><p><strong>EC2 Instance</strong>: AWS EC2</p>
</li>
<li><p><strong>Peering Connection:</strong> AWS VPC Peering</p>
</li>
</ul>
<h2 id="heading-architecture-diagram">📌 Architecture Diagram</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735460267277/be084e67-e85e-40f6-89ba-fe80c599c2d6.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-project-requirements">🌟 Project Requirements</h2>
<p>Before you get started, make sure you have the following prerequisites in place:</p>
<ul>
<li><p><a target="_blank" href="https://www.terraform.io/"><strong>Terraform</strong></a> installed on your local machine.</p>
</li>
<li><p>AWS IAM credentials configured on your local machine; we will use VS Code as the editor.</p>
</li>
<li><p>Git installed on your local machine and Github account set up <a target="_blank" href="https://www.github.com/"><strong>Github</strong></a></p>
</li>
<li><p>Git for cloning the repository.</p>
</li>
<li><p>AWS VPC services</p>
</li>
</ul>
<p>You must know the goal of the peering connection:</p>
<p>To achieve VPC peering across regions, first create the necessary VPCs in each region, ensuring they have unique CIDR blocks. Next, create VPC peering connections in both regions, accepting the peer requests. Update the route tables in each VPC to include routes for the peer VPC's CIDR block, pointing to the peering connection. Finally, configure security groups and network ACLs to allow the required traffic between the peered VPCs. This setup enables seamless and secure communication between resources in the US East and US West regions while maintaining network isolation. We will skip these manual steps and use Terraform instead: all of them have already been written as code, which now only needs to be run and deployed.</p>
<h2 id="heading-table-of-contents">📋 Table of Contents</h2>
<p>I - <strong>Terraform Configuration Files</strong></p>
<p><a href="#heading-step-1-provider-configuration"><strong>Step 1: Provider Configuration</strong></a></p>
<p><a href="#heading-step-2-variables-configuration"><strong>Step 2: Variables Configuration</strong></a></p>
<p><a href="#heading-step-3-vpcs-configuration"><strong>Step 3: VPCs Configuration</strong></a></p>
<p><a href="#heading-step-4-main-configuration"><strong>Step 4: Main Configuration</strong></a></p>
<p><a href="#heading-step-5-output-configuration"><strong>Step 5: Output Configuration</strong></a></p>
<p>II - <strong>Deployment Instructions</strong></p>
<p><a href="#heading-step-1-clone-repository"><strong>Step 1: Clone Repository</strong></a></p>
<p><a href="#heading-step-2-initialize-folder"><strong>Step 2: Initialize Folder</strong></a></p>
<p><a href="#heading-step-3-format-files"><strong>Step 3: Format Files</strong></a></p>
<p><a href="#heading-step-4-validate-files"><strong>Step 4: Validate Files</strong></a></p>
<p><a href="#heading-step-5-plan"><strong>Step 5: Plan</strong></a></p>
<p><a href="#heading-step-6-apply"><strong>Step 6: Apply</strong></a></p>
<p><a href="#heading-step-7-review-of-resources"><strong>Step 7: Review of Resources</strong></a></p>
<p><a href="#heading-step-8-testing-connectivity"><strong>Step 8: Testing Connectivity</strong></a></p>
<p><a href="#heading-step-9-destroy"><strong>Step 9: Destroy</strong></a></p>
<h2 id="heading-terraform-configuration-files">✨Terraform Configuration files</h2>
<p>You need to write several configuration files that define the resources.</p>
<h5 id="heading-step-1-provider-configuration">Step 1: <strong><em>Provider Configuration</em></strong></h5>
<p>Here we declare our cloud provider and specify the regions where we will be launching resources: us-east-1 and us-west-2, to be precise.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/VPC-PEERING/blob/main/providers.tf">providers Configuration</a></li>
</ul>
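<p>The key idea in a two-region setup is provider aliasing: two <code>provider "aws"</code> blocks, told apart by an alias, let one configuration target both regions. A sketch (the alias names are assumptions):</p>

```hcl
# providers.tf: one provider block per region, distinguished by alias.
provider "aws" {
  alias  = "east"
  region = "us-east-1"
}

provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

# Each resource then selects its region explicitly, e.g.:
# resource "aws_vpc" "east" {
#   provider   = aws.east
#   cidr_block = "10.0.0.0/16"
# }
```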
<h5 id="heading-step-2-variables-configuration">Step 2: <strong><em>Variables Configuration</em></strong></h5>
<p>This is where we declare all variables and their values. It includes:</p>
<p><strong>Variables</strong>: elements whose values can vary or change. They let us reuse values throughout our code without repeating ourselves and help make the code dynamic. We declare things such as CIDR blocks, port numbers, the key name, the instance type, counts, and VPC and subnet names.</p>
<p><strong>Values</strong>: the default value of each variable.</p>
<p>We have</p>
<ul>
<li><p><a target="_blank" href="https://github.com/Joebaho/VPC-PEERING/blob/main/variables.tf"><strong>variables Configuration</strong></a></p>
</li>
<li><p><a target="_blank" href="https://github.com/Joebaho/VPC-PEERING/blob/main/terraform.tfvars"><strong>value Configuration</strong></a></p>
</li>
</ul>
<h5 id="heading-step-3-vpcs-configuration">Step 3: <strong><em>VPCs Configuration</em></strong></h5>
<p>This is where you build the foundation and networking layer where all the resources will be launched. It includes the VPC, subnets, IGW, NAT Gateway, EIP, NACL, and route tables.</p>
<ul>
<li><p><a target="_blank" href="https://github.com/Joebaho/VPC-PEERING/blob/main/vpc_east.tf">vpc-east</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/Joebaho/VPC-PEERING/blob/main/vpc_west.tf">vpc-west</a></p>
</li>
</ul>
<p>We have here</p>
<ul>
<li><p><strong>VPC</strong>: Virtual Private Cloud the main and private environment where all resources will be launch</p>
</li>
<li><p><strong>Subnets</strong>: a segmented portion of a virtual private cloud (VPC) that allows you to partition your network resources. Subnets are used to organize and manage your cloud resources more effectively by providing isolation and control over network traffic. Each VPC will have public and private subnets.</p>
</li>
<li><p><strong>Internet Gateway</strong>: plays a crucial role in enabling internet connectivity for resources within a VPC, allowing instances to access services, applications, and data hosted on the public internet while providing scalability, redundancy, and security features. One per VPC provides internet access.</p>
</li>
<li><p><strong>Route Tables</strong>: a fundamental networking component that controls the routing of network traffic within a Virtual Private Cloud (VPC). Route tables define the rules for directing traffic from one subnet to another or to external networks, such as the internet or on-premises networks. Since we are peering two VPCs, we need routes for local traffic, internet-bound traffic, and the private link between the two environments.</p>
</li>
<li><p><strong>NACL</strong>: Network Access Control Lists (NACLs) are a security layer in AWS that act as a firewall for controlling traffic in and out of one or more subnets within a Virtual Private Cloud (VPC).</p>
</li>
<li><p><strong>Security Groups</strong>: a security group acts as a virtual firewall for controlling inbound and outbound traffic to AWS resources, such as EC2 instances, RDS databases, and other services within a Virtual Private Cloud (VPC). Security groups allow you to define rules that specify the type of traffic allowed or denied based on protocols, ports, and IP addresses.</p>
</li>
</ul>
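<p>Condensing the components above into code, each per-region file declares roughly the following (the CIDRs are illustrative; the two VPCs must not overlap, or the peering routes cannot be installed):</p>

```hcl
# vpc_east.tf: abbreviated sketch; vpc_west.tf mirrors it with
# provider aws.west and a non-overlapping CIDR such as 10.1.0.0/16.
resource "aws_vpc" "east" {
  provider   = aws.east
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "east_private" {
  provider   = aws.east
  vpc_id     = aws_vpc.east.id
  cidr_block = "10.0.2.0/24"
}
```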
<h5 id="heading-step-4-main-configuration">Step 4: <strong><em>Main Configuration</em></strong></h5>
<p>This is where we declare the peering itself, the heart of the project: the file that links, attaches, and connects the two VPCs.</p>
<p>. <a target="_blank" href="https://github.com/Joebaho/VPC-PEERING/blob/main/main.tf">main</a></p>
<p>The SSM file defines the IAM role: since we will connect to the instances through their private IPs, and we need to do so securely, the easiest way is SSM connect.</p>
<p><strong>.</strong> <a target="_blank" href="https://github.com/Joebaho/VPC-PEERING/blob/main/ssm-role.tf">ssm-role</a></p>
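<p>Sketched in HCL, an SSM role is just an EC2-assumable role with the AWS-managed <code>AmazonSSMManagedInstanceCore</code> policy attached, wrapped in an instance profile (the resource names here are assumptions):</p>

```hcl
# ssm-role.tf: lets Session Manager reach instances that have no
# public IP and no SSH key.
resource "aws_iam_role" "ssm" {
  name = "ec2-ssm-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "ssm_core" {
  role       = aws_iam_role.ssm.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}

resource "aws_iam_instance_profile" "ssm" {
  name = "ec2-ssm-profile"
  role = aws_iam_role.ssm.name
}
```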
<p>Both instances will be launched in the private subnet of each VPC. Here are the files that launch the instances:</p>
<p>. <a target="_blank" href="https://github.com/Joebaho/VPC-PEERING/blob/main/ec2-east.tf">ec2-east</a></p>
<p>. <a target="_blank" href="https://github.com/Joebaho/VPC-PEERING/blob/main/ec2-west.tf">ec2-west</a></p>
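<p>The peering itself boils down to three pieces: a requester-side connection with <code>peer_region</code>, an accepter-side auto-accept, and a route in each VPC toward the peer's CIDR. A sketch (resource names such as <code>aws_route_table.west_private</code> are assumptions):</p>

```hcl
# Requester side (us-west-2) opens the cross-region connection.
resource "aws_vpc_peering_connection" "west_to_east" {
  provider    = aws.west
  vpc_id      = aws_vpc.west.id
  peer_vpc_id = aws_vpc.east.id
  peer_region = "us-east-1"
}

# Accepter side (us-east-1) approves it automatically; this replaces
# the manual confirmation step.
resource "aws_vpc_peering_connection_accepter" "east" {
  provider                  = aws.east
  vpc_peering_connection_id = aws_vpc_peering_connection.west_to_east.id
  auto_accept               = true
}

# Each VPC's route table needs a route to the peer's CIDR block.
resource "aws_route" "west_to_east" {
  provider                  = aws.west
  route_table_id            = aws_route_table.west_private.id # assumed name
  destination_cidr_block    = aws_vpc.east.cidr_block
  vpc_peering_connection_id = aws_vpc_peering_connection.west_to_east.id
}
```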
<h5 id="heading-step-5-output-configuration">Step 5: <strong><em>Output Configuration</em></strong></h5>
<p>Also known as output values: a convenient way to get useful information about your infrastructure printed on the CLI, such as the ARN, name, or ID of a resource. In this case we output both VPC IDs and both EC2 private IPs; we will use those private IPs to test connectivity.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/VPC-PEERING/blob/main/outputs.tf"><strong>Output Configuration</strong></a></li>
</ul>
<h2 id="heading-instructions-of-deployment">💼 Deployment Instructions</h2>
<p>Follow these steps to deploy the architecture:</p>
<h5 id="heading-step-1-clone-repository">Step 1: <strong><em>Clone Repository:</em></strong></h5>
<p>Clone the repository onto your local machine using the command "git clone":</p>
<blockquote>
<p><strong><em>git clone</em></strong> <a target="_blank" href="https://github.com/Joebaho/VPC-PEERING">https://github.com/Joebaho/VPC-PEERING</a></p>
</blockquote>
<h5 id="heading-step-2-initialize-folder">Step 2: <strong><em>Initialize Folder</em></strong></h5>
<p>Initialize the folder containing the cloned configuration files by typing the following command:</p>
<blockquote>
<p><strong><em>terraform init</em></strong></p>
</blockquote>
<p>You must see this image</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735528451950/cbfe81d7-5034-4f8e-afc8-74b792508202.png" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-3-format-files">Step 3: <strong><em>Format Files</em></strong></h5>
<p>Format the files to the canonical style and review the changes with the command:</p>
<blockquote>
<p><strong><em>terraform fmt</em></strong></p>
</blockquote>
<h5 id="heading-step-4-validate-files">Step 4: <strong><em>Validate Files</em></strong></h5>
<p>Ensure that all files are syntactically valid and ready to go with the command:</p>
<blockquote>
<p><strong><em>terraform validate</em></strong></p>
</blockquote>
<p>If everything is good you will have something like this</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735528480013/1fbc684b-108e-45db-868c-8f7e5a571d7b.png" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-5-plan">Step 5: <strong><em>Plan</em></strong></h5>
<p>Create an execution plan describing how the desired state will be achieved. It checks and confirms the number of resources that will be created. Use the command:</p>
<blockquote>
<p><strong><em>terraform plan</em></strong></p>
</blockquote>
<p>The list of all resources to be created will appear, and you can see the properties (arguments and attributes) of each resource.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735528768226/269180d0-1759-499a-a7d9-c1ae3d886aa4.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735528778554/b7735ed6-9197-40e5-8559-f31df8723239.png" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-6-apply">Step 6: <strong><em>Apply</em></strong></h5>
<p>Bring all desired-state resources to life: launch and create every resource listed in the configuration files. The command to perform the task is:</p>
<blockquote>
<p><strong><em>terraform apply -auto-approve</em></strong></p>
</blockquote>
<p>Now the creation will start, and you will be able to see which resource is being created and how long each one takes.</p>
<p>At the end you will receive a message summarizing the resources added, changed, and destroyed.</p>
<p>Here are the outputs :</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735530865923/562e3607-f2b2-45da-baae-ddbe7a99b626.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735530874962/90291247-b5c4-421a-b1be-e551628b533f.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735530883276/be5022e0-deef-4aed-aeb6-353d81fa8850.png" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-7-review-of-resources">Step 7: <strong><em>Review of resources</em></strong></h5>
<p>Go back to the console and review the deployed resources one by one. You will have:</p>
<ul>
<li><strong>VPC-EAST</strong></li>
</ul>
<p>In the VPC section, we can see the VPC, subnets (public &amp; private), route tables (public &amp; private), Internet Gateway, and NAT Gateway deployed in the us-east-1 region.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735532161346/9f61a374-8a08-4953-84c3-7b4e5d8a8587.png" alt class="image--center mx-auto" /></p>
<p>This shows the routes for connectivity. Traffic destined for the CIDR block of the other VPC is targeted at the VPC peering connection; local traffic uses the CIDR block of the requester VPC; and internet-bound traffic goes through the Internet Gateway or the NAT Gateway.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735532180799/67d00606-81f0-4907-bc67-efe47f8757a2.png" alt class="image--center mx-auto" /></p>
<ul>
<li><strong>VPC_WEST</strong></li>
</ul>
<p>We can see here the VPC, subnets (public &amp; private), route tables (public &amp; private), Internet Gateway, and NAT Gateway deployed in the us-west-2 region.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735532197717/76797599-94db-4703-9e07-a00e1535c1c3.png" alt class="image--center mx-auto" /></p>
<p>This shows the routes for connectivity. Traffic destined for the CIDR block of the other VPC is targeted at the VPC peering connection; local traffic uses the CIDR block of the requester VPC; and internet-bound traffic goes through the Internet Gateway or the NAT Gateway.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735532207730/404462b1-7e34-44e5-82c5-c929d23d2ef3.png" alt class="image--center mx-auto" /></p>
<ul>
<li><strong>Peering Connection east</strong></li>
</ul>
<p>VPC peering connects two VPCs privately. The <strong>requester VPC-west</strong> initiates the connection, and the <strong>accepter</strong> VPC-east approves it. Both configure route tables for traffic flow between the VPCs. Done manually, this process requires the accepter to confirm acceptance, but here we automated that in the code.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735532645602/08e0b45d-623b-4b3c-b12a-01431346e69e.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735532655502/35b686cb-e08f-4d00-924d-4f8924a51474.png" alt class="image--center mx-auto" /></p>
<ul>
<li><strong>Peering Connection west</strong></li>
</ul>
<p>VPC peering connects two VPCs privately. The <strong>requester VPC-west</strong> initiates the connection, and the <strong>accepter</strong> VPC-east approves it.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735532669812/8ae9e4ef-eda4-44b9-a2d3-e484238b8e41.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735532680872/ef2d45ca-8d27-437c-b73d-4c7e10bb27a4.png" alt class="image--center mx-auto" /></p>
<ul>
<li><strong>EC2-WEST</strong></li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735532988714/1d1a3aad-6e21-4bd6-9981-143181d4a4fa.png" alt class="image--center mx-auto" /></p>
<ul>
<li><strong>EC2-EAST</strong></li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735533359586/f574e794-9207-4d5e-837c-ab426f14dfc9.png" alt class="image--center mx-auto" /></p>
<ul>
<li><strong>Server-west connect</strong></li>
</ul>
<p>After selecting the instance, we can now connect to it. Since we attached the SSM role, we select the instance and hit “Connect”, which lands us directly in the server.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735533012629/d7697e18-3ea7-430b-9827-11ea024cc38c.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735533095999/5ca9d121-e92b-4343-b68f-6bf0c0151ff3.png" alt class="image--center mx-auto" /></p>
<ul>
<li><strong>Server-east connect</strong></li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735533402024/11fc985c-e108-4a05-ad68-a850976eabce.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735533413425/9d835b30-8635-4929-a002-bf471a6d662d.png" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-8-testing-connectivity">Step 8: <strong><em>Testing Connectivity</em></strong></h5>
<p>Testing uses the ping command to check for a response from the server. Grab the private IP of the ec2-east server and ping it from the ec2-west server, and vice versa. As you can see in both screenshots, each ping returns a response, which means the connectivity is working end to end.</p>
<ul>
<li><strong>Ping server west</strong></li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735533745148/d5c41e99-ce9c-42eb-b9a6-76f77fc99f44.png" alt class="image--center mx-auto" /></p>
<ul>
<li><strong>Ping server east</strong></li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735533755037/1023d171-637a-469f-9f30-f70e73ab3d8b.png" alt class="image--center mx-auto" /></p>
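<p>The connectivity check above can be sketched as a short shell snippet. The target address below is a placeholder (the loopback address) so the sketch runs anywhere; substitute the ec2-east private IP taken from the EC2 console.</p>

```shell
# Placeholder target; substitute the ec2-east private IP from the EC2 console.
TARGET_IP="127.0.0.1"

# Send two echo requests with a 2-second timeout and report the outcome.
if ping -c 2 -W 2 "$TARGET_IP" > /dev/null 2>&1; then
  echo "connectivity to $TARGET_IP is working"
else
  echo "no response from $TARGET_IP"
fi
```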
<h5 id="heading-step-9-destroy">Step 9: <strong><em>Destroy</em></strong></h5>
<p>Destroy the Terraform-managed infrastructure, shutting down all the resources that were created. This is done with the "terraform destroy" command:</p>
<blockquote>
<p><strong><em>terraform destroy -auto-approve</em></strong></p>
</blockquote>
<p>After typing the command, the deletion process starts, and at the end you will receive a confirmation message showing that all resources have been destroyed. The 53 resources created for this project are all destroyed and the instances terminated. See the screenshot below:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735534199247/17371a53-97ce-49e4-a0a0-b4c0c307ac27.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-contributing">🤝 Contributing</h2>
<p>Your perspective is valuable! Whether you see potential for improvement or appreciate what's already here, your contributions are welcomed and appreciated. Thank you for considering joining us in making this project even better. Feel free to follow me for updates on this project and others, and to explore opportunities for collaboration. Together, we can create something amazing!</p>
<h2 id="heading-license">📄 License</h2>
<p>This project is licensed under the JoebahoCloud License.</p>
]]></content:encoded></item><item><title><![CDATA[Build A Quadratic Equation Solver Application Using Docker, Terraform, Bashscript And Cloudformation]]></title><description><![CDATA[**Add Subtitle**
Building a quadratic equation solver application using Docker, Terraform, Bash scripting and Cloudformation.
🚀 Overview:
This project aims to develop and deploy a scalable web application that solves quadratic equations, incorporati...]]></description><link>https://platform.joebahocloud.com/build-a-quadratic-equation-solver-application-using-docker-terraform-bashscript-and-cloudformation</link><guid isPermaLink="true">https://platform.joebahocloud.com/build-a-quadratic-equation-solver-application-using-docker-terraform-bashscript-and-cloudformation</guid><dc:creator><![CDATA[Joseph Mbatchou]]></dc:creator><pubDate>Sat, 16 Nov 2024 02:37:51 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1731699615396/95a9812d-5432-42a2-a1bd-8319cac74b9f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>**<br />Add Subtitle**</p>
<h1 id="heading-building-a-quadratic-equation-solver-application-using-docker-terraform-bash-scripting-and-cloudformation"><strong>Building a quadratic equation solver application using Docker, Terraform, Bash scripting and Cloudformation.</strong></h1>
<h2 id="heading-overview"><strong>🚀 Overview:</strong></h2>
<p>This project aims to develop and deploy a scalable web application that solves quadratic equations, incorporating modern DevOps practices and cloud infrastructure automation. The application will provide users with a simple interface to input coefficients (a, b, c) of a quadratic equation (ax² + bx + c = 0) and receive calculated roots in real-time.</p>
<p>The technical implementation leverages containerization through Docker for consistent deployment environments, Infrastructure as Code (IaC) using both Terraform and AWS CloudFormation for cloud resource management, and Bash scripting for automation and operational tasks. This multi-tool approach demonstrates the integration of various DevOps technologies while solving a practical mathematical problem.</p>
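<p>To make the math concrete, here is a minimal shell sketch of the quadratic formula itself. This is an illustration only, not the project's Python application, and the <code>solve</code> helper name is ours:</p>

```shell
# Compute the real roots of ax^2 + bx + c = 0 with the quadratic formula.
solve() {
  awk -v a="$1" -v b="$2" -v c="$3" 'BEGIN {
    d = b * b - 4 * a * c                 # discriminant
    if (d < 0) { print "no real roots"; exit }
    printf "%.2f %.2f\n", (-b + sqrt(d)) / (2 * a), (-b - sqrt(d)) / (2 * a)
  }'
}

solve 1 -3 2   # x^2 - 3x + 2 = 0 -> prints "2.00 1.00"
```

<p>The deployed application performs the same computation server-side and renders the result in the browser.</p>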
<h2 id="heading-problem-statement"><strong>🔧 Problem Statement</strong></h2>
<p>Educational institutions and students often need quick access to quadratic equation solutions, but existing calculators are typically standalone applications or simple websites that lack scalability and modern deployment practices. Additionally, organizations face several challenges when deploying and maintaining such applications:</p>
<ol>
<li><p><strong>Consistency Issues:</strong> Different development and production environments lead to deployment failures and inconsistent behavior.</p>
</li>
<li><p><strong>Manual Infrastructure Setup:</strong> Traditional deployment methods require manual configuration, making it time-consuming and error-prone to set up new environments.</p>
</li>
<li><p><strong>Limited Scalability:</strong> Most existing solutions cannot easily scale to handle varying loads of user requests.</p>
</li>
<li><p><strong>Complex Maintenance:</strong> Without proper automation and infrastructure management, maintaining and updating the application becomes increasingly difficult.</p>
</li>
</ol>
<p>This project addresses these challenges by:</p>
<ul>
<li><p>Containerizing the application to ensure consistency across environments</p>
</li>
<li><p>Automating infrastructure provisioning using both Terraform and CloudFormation</p>
</li>
<li><p>Implementing efficient deployment pipelines using Bash scripting</p>
</li>
<li><p>Creating a repeatable and maintainable cloud infrastructure setup</p>
</li>
</ul>
<p>The solution will demonstrate how modern DevOps tools and practices can be applied to create a reliable, scalable, and maintainable application, while serving as a practical example for educational purposes in both mathematics and cloud computing.</p>
<h2 id="heading-techonology-stack"><strong>💽 Technology Stack</strong></h2>
<p>The project uses the following technologies:</p>
<ul>
<li><p><strong>CloudFormation</strong>: provisions the EC2 host server from a template.</p>
</li>
<li><p><strong>EC2</strong>: the virtual machine on which the application is built and run.</p>
</li>
<li><p><strong>Docker</strong>: containerizes the quadratic solver application.</p>
</li>
<li><p><strong>Terraform</strong>: builds the Docker image and runs the container as code.</p>
</li>
<li><p><strong>Bash Scripting</strong>: automates pushing the image to Docker Hub.</p>
</li>
</ul>
<h2 id="heading-architecture-diagram"><strong>📌 Architecture Diagram</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731724536410/cc200d6f-1974-40ec-926a-262656a94500.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-project-requirements"><strong>🌟 Project Requirements</strong></h2>
<p>Before you get started, make sure you have the following prerequisites in place:</p>
<ul>
<li><p>AWS IAM credentials configured in your development environment. In this case we will use VS Code.</p>
</li>
<li><p>Git installed on your local machine for cloning the repository, and a GitHub account set up: <a target="_blank" href="https://www.github.com/"><strong>GitHub</strong></a></p>
</li>
<li><p>Understanding of CloudFormation stacks.</p>
</li>
<li><p>Familiarity with Docker and Bash commands.</p>
</li>
<li><p>Knowledge of the Terraform workflow and commands.</p>
</li>
</ul>
<h2 id="heading-table-of-contents"><strong>📋 Table of Contents</strong></h2>
<p>I - <strong>CloudFormation Templates</strong></p>
<p><a target="_blank" href="#heading-step-1-provide-the-template"><strong>Step 1: Provide the template</strong></a></p>
<p><a target="_blank" href="#heading-step-2-launch-the-ec2-server-process"><strong>Step 2: Launch the Host server</strong></a></p>
<p><a target="_blank" href="#heading-step-3-connect-to-the-launched-ec2-server"><strong>Step 3: Connect to the launched server</strong></a></p>
<p>II - <strong>Terraform Configuration files</strong></p>
<p><a target="_blank" href="#heading-step-1-provider-configuration"><strong>Step 1: Provider Configuration</strong></a></p>
<p><a target="_blank" href="#heading-step-2-variables-configuration"><strong>Step 2: Variables Configuration</strong></a></p>
<p><a target="_blank" href="#heading-step-3-main-configuration"><strong>Step 3: Main Configuration</strong></a></p>
<p>III - <strong>Python Application</strong></p>
<p><a target="_blank" href="#heading-step-1-provide-the-file-apppy"><strong>Step 1: Provide file app.py</strong></a></p>
<p><a target="_blank" href="#heading-step-2-web-page-file"><strong>Step 2: Web page file</strong></a></p>
<p>IV - <strong>Dockerfile</strong></p>
<p><a target="_blank" href="#heading-step-1-writing-dockerfile"><strong>Step 1: Writing Dockerfile</strong></a></p>
<p>V - <strong>Bash Scripting</strong></p>
<p><a target="_blank" href="#heading-step-1-writing-the-bash-scripting-file"><strong>Step 1: Writing the Bash script file</strong></a></p>
<p>VI - <strong>Instructions of Deployment</strong></p>
<p><a target="_blank" href="#heading-step-1-copy-all-files-in-the-server"><strong>Step 1: Copy all files in the EC2 server</strong></a></p>
<p><a target="_blank" href="#heading-step-2-initialize-folder"><strong>Step 2: Initialize Folder</strong></a></p>
<p><a target="_blank" href="#heading-step-3-format-files"><strong>Step 3: Format Files</strong></a></p>
<p><a target="_blank" href="#heading-step-4-validate-files"><strong>Step 4: Validate Files</strong></a></p>
<p><a target="_blank" href="#heading-step-5-plan"><strong>Step 5: Plan</strong></a></p>
<p><a target="_blank" href="#heading-step-6-apply"><strong>Step 6: Apply</strong></a></p>
<p><a target="_blank" href="#heading-step-7-bash-scripting-execution"><strong>Step 7: Bash Scripting execution</strong></a></p>
<p><a target="_blank" href="#heading-step-8-review-of-resources"><strong>Step 8: Review of Resources</strong></a></p>
<p><a target="_blank" href="#heading-step-9-destroy"><strong>Step 9: Destroy</strong></a></p>
<h2 id="heading-cloudformation"><strong>✨CloudFormation</strong></h2>
<p>You need to write the template that generates the EC2 server. We will add User-data that installs Docker and Terraform on the server. This automates the instance launch and avoids logging in to perform the installation manually. The User-data is the piece of code that contains all the instructions for the Docker and Terraform installation.</p>
<h5 id="heading-step-1-provide-the-template">Step 1: <strong><em>Provide the template</em></strong></h5>
<p>Here we declare the contents of the server we will be working with. It is an Amazon Linux 2 server with “t2.micro” as the instance type, launched in an existing VPC. Below is the link to the template.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/TF-DK-QUADRATIC-EQUATION--SOLVER/blob/main/Linux-Docker-terraform.yml"><strong>CloudFormation template</strong></a></li>
</ul>
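<p>For reference, the User-data portion of such a template often looks like the sketch below for Amazon Linux 2. This is an illustration of the idea, not the linked template's actual contents; verify the commands against the template itself:</p>

```shell
#!/bin/bash
# Sketch only: install Docker and Terraform on Amazon Linux 2 at first boot.
yum update -y

# Docker from the Amazon Linux Extras repository
amazon-linux-extras install docker -y
systemctl enable --now docker
usermod -aG docker ec2-user

# Terraform from the HashiCorp yum repository
yum install -y yum-utils
yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo
yum install -y terraform
```

<p>Because this runs as User-data, both tools are ready as soon as you first connect to the instance.</p>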
<h5 id="heading-step-2-launch-the-ec2-server-process">Step 2: <strong><em>Launch the EC2 server process</em></strong></h5>
<p>Here is the process of launching the server using CloudFormation. Log in to your account via the console and navigate to the CloudFormation service, then click “Create Stack”.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731648791175/fcfbf6db-a312-4491-a5b0-417004ac463c.png" alt class="image--center mx-auto" /></p>
<p>Then select the type and location of the template file. This tells CloudFormation where to find the file so it can upload it to the stack.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731648810502/943175e5-170b-40db-b8ba-8a06344918a5.png" alt class="image--center mx-auto" /></p>
<p>Provide information such as the “Stack name”, “Instance_type”, “key_name”, and the SSH source IP address, which is your personal public IP. Then select an existing public subnet from the list, along with the VPC.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731648826486/6e8d6fcd-e19d-41e6-83d8-c28ac5e1ae67.png" alt class="image--center mx-auto" /></p>
<p>Check the acknowledgement option at the bottom of the page, hit “Next”, and on the following page hit “Submit”.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731648836028/f70327d1-8858-4b57-8c23-bef5de7ff430.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731648853986/6f5a3300-3957-4f45-9749-ec054c549a1b.png" alt class="image--center mx-auto" /></p>
<p>The resource-creation process launches, and resources come up one after another based on what was declared in the template.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731648863366/251d6498-f5af-4595-a159-fdb3778efa0c.png" alt class="image--center mx-auto" /></p>
<p>Keep refreshing with the circular arrow to watch the progress of the creation.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731648882440/e65d5cca-c59b-4459-910d-fdd4d1dc6d43.png" alt class="image--center mx-auto" /></p>
<p>When it completes, you will get the status message “CREATE_COMPLETE”.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731648897910/98c641c1-22c4-47de-ab18-07e81eb101c7.png" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-3-connect-to-the-launched-ec2-server">Step 3: <strong><em>Connect to the launched EC2 server</em></strong></h5>
<p>In the console, open another tab and navigate to EC2. Select the running instance and click “Connect”.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731648925896/f74b74bb-3b22-4066-9894-2d1983c598d3.png" alt class="image--center mx-auto" /></p>
<p>The following window will appear. On your command line, change into the folder that contains your private key pair, then copy each command and run it.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731648937060/4c38b13e-343b-4504-a30d-a7d98a46d19e.png" alt class="image--center mx-auto" /></p>
<p>The two commands will prompt for confirmation and then connect you securely to your EC2 server.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731648947667/42ec283e-f90e-4894-a68a-51749a1cb9dc.png" alt class="image--center mx-auto" /></p>
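<p>The two commands typically take the shape below. The key file name and host are placeholders, not values from this project; use the exact commands shown in the console's “Connect” tab:</p>

```shell
# Restrict the key file's permissions (SSH refuses keys that are world-readable).
chmod 400 my-keypair.pem

# Connect as the default Amazon Linux user; replace the host with your
# instance's public DNS name from the console.
ssh -i my-keypair.pem ec2-user@ec2-0-0-0-0.compute-1.amazonaws.com
```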
<h2 id="heading-terraform-configuration-files"><strong>✨Terraform Configuration files</strong></h2>
<p>You need to write the files that build the Docker image and run a container based on that image. To perform this task you will need a set of Terraform files: providers, variables, and main. Let's look at each of those files.</p>
<h5 id="heading-step-1-provider-configuration">Step 1: <strong><em>Provider Configuration</em></strong></h5>
<p>Here we declare our cloud provider and specify the region where we will be launching resources.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/TF-DK-QUADRATIC-EQUATION--SOLVER/blob/main/providers.tf"><strong>provider Configuration</strong></a></li>
</ul>
<h5 id="heading-step-2-variables-configuration">Step 2: <strong><em>Variables Configuration</em></strong></h5>
<p>This is where we declare all variables and their values. It includes:</p>
<ul>
<li><p><strong>Variables</strong>: elements whose values can change. They let us reuse values throughout the code without repeating ourselves and make the code dynamic. Here we have only the Docker username and password.</p>
</li>
<li><p><strong>Values</strong>: the values assigned to each variable.</p>
</li>
</ul>
<p>We have:</p>
<ul>
<li><p><a target="_blank" href="https://github.com/Joebaho/TF-DK-QUADRATIC-EQUATION--SOLVER/blob/main/variables.tf"><strong>variables Configuration</strong></a></p>
</li>
<li><p><a target="_blank" href="https://github.com/Joebaho/TF-DK-QUADRATIC-EQUATION--SOLVER/blob/main/terraform.tfvars"><strong>value Configuration</strong></a></p>
</li>
</ul>
<h5 id="heading-step-3-main-configuration">Step 3: <strong><em>Main Configuration</em></strong></h5>
<p>This is where you build the Docker image and run the container with that image created.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/TF-DK-QUADRATIC-EQUATION--SOLVER/blob/main/main.tf"><strong>Main Configuration</strong></a></li>
</ul>
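<p>As a rough illustration of what such a main configuration can look like with Terraform's Docker provider, here is a sketch. The resource names, image name, and provider version are assumptions; the linked main configuration is the authoritative version:</p>

```hcl
terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "~> 3.0"
    }
  }
}

provider "docker" {}

# Build the application image from the Dockerfile in this folder.
resource "docker_image" "quadratic" {
  name = "quadratic_solver_image:latest"
  build {
    context = "."
  }
}

# Run a container from that image and publish the app on port 5000.
resource "docker_container" "quadratic" {
  name  = "quadratic-solver"
  image = docker_image.quadratic.image_id
  ports {
    internal = 5000
    external = 5000
  }
}
```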
<h2 id="heading-python-application"><strong>✨Python Application</strong></h2>
<p>You need to write the application itself in the Python programming language. The file contains all the instructions for performing the calculation based on the coefficients entered by the user. We will also add the layout of the web page in a file named index.html.</p>
<h5 id="heading-step-1-provide-the-file-apppy">Step 1: <strong><em>Provide the file app.py</em></strong></h5>
<p>Here is the content of the file we will be working with.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/TF-DK-QUADRATIC-EQUATION--SOLVER/blob/main/app.py"><strong>Python app file</strong></a></li>
</ul>
<h5 id="heading-step-2-web-page-file">Step 2: <strong><em>Web page file</em></strong></h5>
<p>Here is the content of the web page file, which provides the layout of the page. This file must live inside a folder named “templates”.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/TF-DK-QUADRATIC-EQUATION--SOLVER/blob/main/templates/index.html"><strong>index.html</strong></a></li>
</ul>
<h2 id="heading-dockerfile"><strong>✨Dockerfile</strong></h2>
<p>You need to write the step-by-step instructions that Docker follows to produce the image from which the container running the application is created.</p>
<h5 id="heading-step-1-writing-dockerfile">Step 1: <strong><em>Writing Dockerfile</em></strong></h5>
<p>Here is the content of the file we will be working with.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/TF-DK-QUADRATIC-EQUATION--SOLVER/blob/main/Dockerfile">Dockerfile</a></li>
</ul>
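<p>As a rough sketch, a Dockerfile for a small Python web application of this kind often follows the pattern below. The base image, file names, and port here are assumptions; the linked Dockerfile is the authoritative version:</p>

```dockerfile
# Sketch only — see the linked Dockerfile for the project's actual instructions.
FROM python:3.9-slim
WORKDIR /app

# Install dependencies first so Docker can cache this layer.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and the templates folder.
COPY app.py .
COPY templates/ templates/

EXPOSE 5000
CMD ["python", "app.py"]
```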
<h2 id="heading-bash-scripting"><strong>✨Bash Scripting</strong></h2>
<p>You need to write the script that pushes the Docker image you created to the Docker Hub repository.</p>
<h5 id="heading-step-1-writing-the-bash-scripting-file">Step 1: <strong><em>Writing the Bash Scripting file</em></strong></h5>
<p>Here is the content of the file we will be working with.</p>
<ul>
<li><p><a target="_blank" href="https://github.com/Joebaho/TF-DK-QUADRATIC-EQUATION--SOLVER/blob/main/image_push.sh">image_push.sh</a></p>
<p>  You will get back on this later on in the execution or deployment of the application.</p>
</li>
</ul>
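<p>As a preview, a push script of this kind usually performs a login, a tag, and a push. The sketch below is an assumption about its general shape — the username, image, and tag values are placeholders, not the linked script's actual contents:</p>

```shell
#!/bin/bash
set -e

# Placeholders — the real script (linked above) defines its own values.
DOCKER_USER="your-dockerhub-username"
IMAGE="quadratic_solver_image"
TAG="latest"

# Authenticate, tag the local image for Docker Hub, then push it.
docker login -u "$DOCKER_USER"
docker tag "${IMAGE}:${TAG}" "${DOCKER_USER}/${IMAGE}:${TAG}"
docker push "${DOCKER_USER}/${IMAGE}:${TAG}"
```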
<h2 id="heading-instructions-of-deployment"><strong>💼 Instructions of Deployment</strong></h2>
<p>Follow these steps to deploy the architecture:</p>
<h5 id="heading-step-1-copy-all-files-in-the-server">Step 1: <strong><em>Copy all files in the server:</em></strong></h5>
<p>Create a folder that will contain all these files. Call it “Terraform-Docker”:</p>
<blockquote>
<p><strong><em>mkdir Terraform-Docker</em></strong></p>
<p><strong><em>cd Terraform-Docker</em></strong></p>
</blockquote>
<p>Use the “<strong><em>vi</em></strong>” command to create each file and paste its contents in directly.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731650251773/68e88ee3-64f4-4f16-9bd9-aa1638816117.png" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-2-initialize-folder">Step 2: <strong><em>Initialize Folder</em></strong></h5>
<p>Once you are done copying the files, initialize the folder containing the configuration files by typing the following command:</p>
<blockquote>
<p><strong><em>terraform init</em></strong></p>
</blockquote>
<p>You should see output like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731650374051/73628999-13b9-44fd-9764-ba123fa82101.png" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-3-format-files">Step 3: <strong><em>Format Files</em></strong></h5>
<p>Format the files to the canonical Terraform style and confirm the formatting with the command:</p>
<blockquote>
<p><strong><em>terraform fmt</em></strong></p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731650446665/6fdd5ae7-514b-4b98-a825-135b3ef90077.png" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-4-validate-files">Step 4: <strong><em>Validate Files</em></strong></h5>
<p>Ensure that all files are syntactically valid and ready to go with the command:</p>
<blockquote>
<p><strong><em>terraform validate</em></strong></p>
</blockquote>
<p>If everything is good you will see something like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731650523290/032798dc-4cda-4c67-bc3c-e841d33c3f93.png" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-5-plan">Step 5: <strong><em>Plan</em></strong></h5>
<p>Create an execution plan showing how Terraform will reach the desired state. It checks and confirms the number of resources that will be created. Use the command:</p>
<blockquote>
<p><strong><em>terraform plan</em></strong></p>
</blockquote>
<p>The list of all resources staged for creation will appear, and you can see the properties (arguments and attributes) of each resource. Here we have only two resources: the image and the container.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731650688515/823962ed-ad41-4546-9587-1943258edce6.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731650697047/5534f535-2fd4-4869-9880-b951746b5e49.png" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-6-apply">Step 6: <strong><em>Apply</em></strong></h5>
<p>Bring the desired-state resources to life. This launches and creates all resources listed in the configuration files. The command to perform the task is:</p>
<blockquote>
<p><strong><em>terraform apply -auto-approve</em></strong></p>
</blockquote>
<p>The creation now starts, and you can see which resource is being created and how long it is taking.</p>
<p>At the end you will receive a summary showing how many resources were added, changed, and destroyed.</p>
<p>Here is the output :</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731650867082/e12fc163-7c46-4347-9e74-f63f433daf60.png" alt class="image--center mx-auto" /></p>
<p>We now need to go back to the EC2 instance in the console to grab its public IP.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731651121811/26fa4aac-59a0-43f0-a8c6-8a08c0bb724c.png" alt class="image--center mx-auto" /></p>
<p>Copy that IP into the browser, add a colon and port 5000, and hit Enter to reach the web page. The application runs in a container exposed on port “5000”, so the address to paste in the browser is “public_ip:5000”.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731651199927/f93883e1-fda0-4516-964e-87c5dc4ecf3a.png" alt class="image--center mx-auto" /></p>
<p>After you enter some coefficients you will get the result:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731651246547/ce547759-9ae6-4c32-911e-69155237fbd5.png" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-7-bash-scripting-execution">Step 7: <strong><em>Bash Scripting execution</em></strong></h5>
<p>Once the image is created and the container is running, you need to push the image to Docker Hub. To do that, use the Bash script you wrote earlier and follow these two steps:</p>
<p>- Make the script executable by changing its permissions. This gives the file the privilege to run without error:</p>
<blockquote>
<p><strong><em>chmod +x image_push.sh</em></strong></p>
</blockquote>
<p>- The push begins once you run the script, and you will see it complete successfully:</p>
<blockquote>
<p><strong><em>./image_push.sh</em></strong></p>
</blockquote>
<p>You will see the following output:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731651806429/2f0d3b37-f752-4eca-bc09-3a1eabda4f22.png" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-8-review-of-resources">Step 8: <strong><em>Review of resources</em></strong></h5>
<ul>
<li><p>After the image is pushed to Docker Hub, you can go to the platform to see the repository. There you can see the built image.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731652241997/82fd0b63-8529-4757-a295-19fffa59b39c.png" alt class="image--center mx-auto" /></p>
<p>  You can pull it to test by using the command:</p>
<blockquote>
<p><strong><em>docker pull joebaho2/quadratic_solver_image:latest</em></strong></p>
</blockquote>
</li>
</ul>
<h5 id="heading-step-9-destroy">Step 9: <strong><em>Destroy</em></strong></h5>
<p>Destroy the Terraform-managed infrastructure, shutting down all the resources that were created. This is done with the "terraform destroy" command:</p>
<blockquote>
<p><strong><em>terraform destroy -auto-approve</em></strong></p>
</blockquote>
<p>At the end you will receive a prompt message showing all resources have been destroyed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731652497905/6a33bc82-a66c-48be-a2d4-13a1cc5068fb.png" alt class="image--center mx-auto" /></p>
<p>You must also go back to CloudFormation to delete the stack:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731652812667/c528d68d-29ce-4d2b-b718-5adde1c53a63.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731652820454/566a11e8-9bcc-48c6-8d1a-c031f1a55cce.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-contributing"><strong>🤝 Contributing</strong></h2>
<p>Your perspective is valuable! Whether you see potential for improvement or appreciate what's already here, your contributions are welcomed and appreciated. Thank you for considering joining us in making this project even better. Feel free to follow me for updates on this project and others, and to explore opportunities for collaboration. Together, we can create something amazing!</p>
<h2 id="heading-license"><strong>📄 License</strong></h2>
<p>This project is licensed under the JoebahoCloud License.</p>
]]></content:encoded></item><item><title><![CDATA[Deploying an Application on AWS ECS with ECR and Docker]]></title><description><![CDATA[Deploying an Application on AWS ECS with ECR and Docker
🚀 Overview:
This project involves deploying an application using Amazon Elastic Container Service (ECS), Amazon Elastic Container Registry (ECR), and Docker. This combination allows for efficie...]]></description><link>https://platform.joebahocloud.com/deploying-an-application-on-aws-ecs-with-ecr-and-docker</link><guid isPermaLink="true">https://platform.joebahocloud.com/deploying-an-application-on-aws-ecs-with-ecr-and-docker</guid><dc:creator><![CDATA[Joseph Mbatchou]]></dc:creator><pubDate>Thu, 29 Aug 2024 16:57:30 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1724950546369/8ebbbdc5-ec28-4318-b5dd-b433f4071d3d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-deploying-an-application-on-aws-ecs-with-ecr-and-docker"><strong>Deploying an Application on AWS ECS with ECR and Docker</strong></h1>
<h2 id="heading-overview">🚀 Overview:</h2>
<p>This project involves deploying an application using Amazon Elastic Container Service (ECS), Amazon Elastic Container Registry (ECR), and Docker. This combination allows for efficient containerization, storage, and orchestration of applications in the AWS cloud environment.</p>
<h2 id="heading-problem-statement">🔧 Problem Statement</h2>
<p>The project goal is to implement a robust, scalable, and efficient deployment solution using AWS Elastic Container Service (ECS), Elastic Container Registry (ECR), and Docker. This solution should automate the deployment process, ensure environment consistency, improve resource utilization, enhance security, and provide better monitoring and scaling capabilities.</p>
<p>By successfully implementing this project, we expect to significantly reduce deployment times, minimize downtime, improve application performance, optimize resource usage, and ultimately deliver a better experience to a rapidly growing user base.</p>
<h2 id="heading-architecture-diagram">📌 Architecture Diagram</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724950495108/a5099e37-7e3c-4fd1-ad2f-b178806ff9ea.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-techonology-stack">💽 Technology Stack</h2>
<p>The architecture consists of the following components:</p>
<ul>
<li><p><strong>Private Repository</strong> : AWS ECR</p>
</li>
<li><p><strong>Container</strong>: AWS ECS &amp; Docker</p>
</li>
<li><p><strong>Terminal</strong>: AWS CLI</p>
</li>
<li><p><strong>Policies and User</strong>: AWS IAM</p>
</li>
<li><p><strong>Virtual Machine</strong>: AWS EC2</p>
</li>
</ul>
<h2 id="heading-project-requirements">🌟 Project Requirements</h2>
<p>Before you get started, make sure you have the following prerequisites in place:</p>
<ul>
<li><p>IAM user with an access key and secret access key.</p>
</li>
<li><p>AWS IAM credentials configured in your development environment. In this case we will use VS Code.</p>
</li>
<li><p>An EC2 virtual machine with Docker already installed.</p>
</li>
<li><p>Dockerfile and index.html written.</p>
</li>
</ul>
<h2 id="heading-table-of-contents">📋 Table of Contents</h2>
<p>I - <strong>IAM configuration</strong></p>
<p><a target="_blank" href="#heading-step-1-create-iam-policy-for-ecr-access"><strong>Step 1:</strong></a> <strong>Create IAM Policy for ECR Access</strong></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-foundation-terraform-on-aws/#-variables-configuration"><strong>Step 2:</strong></a> <strong>Attach policy to IAM user</strong></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-foundation-terraform-on-aws/#-VPC-configuration"><strong>Step 3:</strong></a> <strong>Configure your AWS Credentials on a running Ubuntu EC2</strong></p>
<p>II - <strong>Create an ECR Repository</strong></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-foundation-terraform-on-aws/#-Clone-Repository"><strong>Step 1:</strong></a> <strong>Navigate to Amazon ECR</strong></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-foundation-terraform-on-aws/#-Initialize-Folder"><strong>Step 2:</strong></a> <strong>Create a new Repository</strong></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-foundation-terraform-on-aws/#-Format-Files"><strong>Step 3:</strong></a> <strong>Configure setting</strong></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-foundation-terraform-on-aws/#-Validate-Files"><strong>Step 4:</strong></a> <strong>Repository created</strong></p>
<p><strong>III - Push Docker Image to ECR</strong></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-foundation-terraform-on-aws/#-Apply"><strong>Step 1:</strong></a> <strong>Push commands for my-ecr-repo</strong></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-foundation-terraform-on-aws/#-Review-Of-Resources"><strong>Step 2:</strong></a> <strong>Creation Image and push to repository</strong></p>
<p><strong>IV - Create ECS</strong></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-foundation-terraform-on-aws/#-Clone-Repository"><strong>Step 1:</strong></a> <strong>Navigate to Amazon ECS</strong></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-foundation-terraform-on-aws/#-Initialize-Folder"><strong>Step 2:</strong></a> <strong>Create a new ESC cluster</strong></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-foundation-terraform-on-aws/#-Format-Files"><strong>Step 3:</strong></a> <strong>Create Task Definition</strong></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-foundation-terraform-on-aws/#-Validate-Files"><strong>Step 4:</strong></a> <strong>Create a Container</strong></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-foundation-terraform-on-aws/#-Format-Files"><strong>Step 5:</strong></a> <strong>Create ECS Service</strong></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-foundation-terraform-on-aws/#-Validate-Files"><strong>Step 6:</strong></a> <strong>Create a Container</strong></p>
<p><strong>V - Access the web page</strong></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-foundation-terraform-on-aws/#-Clone-Repository"><strong>Step 1:</strong></a> <strong>Navigate to Configuration</strong></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-foundation-terraform-on-aws/#-Initialize-Folder"><strong>Step 2:</strong></a> <strong>Check the result</strong></p>
<h2 id="heading-iam-configuration">✨IAM CONFIGURATION</h2>
<p>This is where we set up permissions and authentication so our user can access the repository.</p>
<h5 id="heading-step-1-create-iam-policy-for-ecr-access">Step 1: <strong>Create IAM Policy for ECR Access</strong></h5>
<p>First, create an IAM policy that allows necessary permissions for Amazon ECR.</p>
<p>Go to the AWS Console and search for the <strong>IAM</strong> service. In the <strong>IAM dashboard,</strong> click on <strong>Policies</strong>, then click on <strong>Create policy</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724941760815/a7cdaa77-b890-4e1f-93b3-88e9a4b10f17.png" alt class="image--center mx-auto" /></p>
<p>Click on <strong>JSON</strong>, then use the following JSON code for the IAM user policy to grant the Amazon ECR permissions needed for creating repositories and pushing images.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724941921017/6da58f93-c0a2-4fa6-a265-14186f7aabfa.png" alt class="image--center mx-auto" /></p>
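<p>For reference, a policy of that shape can also be written to a file for use with the AWS CLI. This is a sketch with a plausible action list — compare it against the screenshot above, and scope <code>Resource</code> down for production use:</p>

```shell
# Write a sample ECR access policy to a local file.
# The action list here is illustrative, not copied from the screenshot.
cat > ecr-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:CreateRepository",
        "ecr:DescribeRepositories",
        "ecr:BatchCheckLayerAvailability",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload",
        "ecr:PutImage",
        "ecr:ListImages"
      ],
      "Resource": "*"
    }
  ]
}
EOF
```

The policy could then be created with <code>aws iam create-policy --policy-name AWS-ECR-Task_Policy --policy-document file://ecr-policy.json</code> instead of clicking through the console.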
<p>Click on <strong>Next</strong> and enter the <strong>name</strong> for your policy. In this case we will call it <strong>AWS-ECR-Task_Policy.</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724942213353/7b7641e9-bc62-47fe-8ee4-8034017f962b.png" alt class="image--center mx-auto" /></p>
<p>Click on Create policy</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724942267149/e1bd6e16-945b-4f1c-8e90-813ecc5ec2a1.png" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-2-attach-policy-to-iam-user">Step 2: <strong>Attach policy to IAM user</strong></h5>
<p>This is where we attach the new policy to the user created earlier. Or feel free to create your own user.</p>
<p>Go to the <strong>IAM Management Console</strong>, navigate to <strong>Users</strong>, and find the user. Under <strong>Set permissions</strong>, select <strong>Attach policies directly</strong> and select the policy you created.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724942840552/4ceec802-5fb5-4c89-8dcd-ad12475b57fe.png" alt class="image--center mx-auto" /></p>
<p>Click on <strong>Add permissions</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724942956568/2b2f836e-9df7-44d2-9383-538f704180d3.png" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-3-configure-your-aws-credentials-on-a-running-ubuntu-ec2">Step 3: <strong>Configure your AWS Credentials on a running Ubuntu EC2</strong></h5>
<p>To perform this action, as stated in the requirements, you must have an Ubuntu EC2 instance running on AWS with Docker installed on it. Check my other project to see how to do that.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724945794791/ba7410c7-1277-4231-a96c-a4415295a9ce.png" alt class="image--center mx-auto" /></p>
<p>Configure AWS credentials using the <strong>aws configure</strong> command.</p>
<p>Provide your <strong>AWS Access Key ID</strong>, <strong>Secret Access Key</strong>, <strong>AWS Region</strong>, and output format as <strong>JSON</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724943277683/27f74d55-8bb9-415d-a554-3279efd88618.png" alt class="image--center mx-auto" /></p>
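<p>Under the hood, <code>aws configure</code> simply writes two INI-style files under <code>~/.aws/</code>. The sketch below reproduces that layout in a demo directory with placeholder key values, so you can see exactly what the command creates:</p>

```shell
# Recreate the files `aws configure` writes (real location: ~/.aws/).
# The key values below are placeholders, not real credentials.
mkdir -p aws-demo
cat > aws-demo/credentials <<'EOF'
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
EOF
cat > aws-demo/config <<'EOF'
[default]
region = us-east-1
output = json
EOF
```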
<h2 id="heading-create-an-ecr-repository">💼 Create an ECR Repository</h2>
<p>Follow these steps to create an Elastic Container Registry (ECR) repository:</p>
<h5 id="heading-step-1-navigate-to-amazon-ecr">Step 1: <strong><em>Navigate</em> to Amazon ECR</strong></h5>
<p>Use the AWS services search bar and search for ECR</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724943588545/ce0aa2f3-eb55-4701-a562-92da9f000b55.png" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-2-create-a-new-repository">Step 2: <strong><em>Create a new Repository</em></strong></h5>
<p>In the Amazon ECR console, click on <strong>Create</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724943724488/a460d90f-f236-43f0-8385-0ef28a3792d4.png" alt class="image--center mx-auto" /></p>
<p>You will be prompted to choose between a <strong>public or private</strong> repository. For this project we will go with a private repository; then click <strong>Create repository</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724943877800/89d6574b-e846-4ed8-a1d5-9a89d7fe53ef.png" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-3-configure-setting">Step 3: <strong><em>Configure settings</em></strong></h5>
<p>Under <strong>General settings</strong>, give a <strong>repository name (my-ecr-repo)</strong>, then choose the <strong>image tag mutability</strong>. Under <strong>Encryption settings</strong>, keep the standard <strong>AES-256</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724944142111/b5bee8fc-ea59-48a1-b70a-00871e93bc88.png" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-4-repository-created">Step 4: <strong><em>Repository created</em></strong></h5>
<p>The repository has been created successfully</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724944342546/b73e115b-851b-4b02-8623-103920c60305.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-push-docker-image-to-ecr">💼 Push Docker image to ECR</h2>
<p>Here we create the Docker image and then push it to the ECR repository. To do that, the Dockerfile and index.html files must first be created and saved on the virtual machine.</p>
<p>Dockerfile</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724944794207/24b60eae-f045-4f9b-92af-ff9434ccd952.png" alt class="image--center mx-auto" /></p>
<p>index.html</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724944812753/ca23ce98-90f8-4e43-a913-9ccf8c631923.png" alt class="image--center mx-auto" /></p>
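<p>The two files in the screenshots are not reproduced in text; a minimal pair along these lines would work (an assumption — a stock Apache <code>httpd</code> base image serving one page; your actual files may differ):</p>

```shell
# Minimal Dockerfile: Apache httpd serving a single static page.
cat > Dockerfile <<'EOF'
FROM httpd:latest
COPY index.html /usr/local/apache2/htdocs/index.html
EXPOSE 80
EOF

# Minimal page for it to serve.
cat > index.html <<'EOF'
<!DOCTYPE html>
<html>
  <body><h1>Hello from ECS!</h1></body>
</html>
EOF
```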
<h5 id="heading-step-1-push-commands-for-my-ecr-repo">Step 1: <strong><em>Push commands for my-ecr-repo</em></strong></h5>
<p>To push go to the <strong>Amazon ECR</strong>, open the <strong>Repository name</strong> and click on <strong>View push commands</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724944977692/d7ea23db-0617-4aaf-9f58-e3b8146f316e.png" alt class="image--center mx-auto" /></p>
<p>By following the steps below, you can push your Docker image to Amazon ECR and make it available for use in ECS.<br />Run the following commands one by one.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724945077619/e5cd05e2-d09b-4092-8fbf-f5a1dfdd41e2.png" alt class="image--center mx-auto" /></p>
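<p>Those commands follow the standard ECR push pattern sketched below (region and account ID are placeholders — copy the exact commands from the console, since they embed your registry URI):</p>

```bash
# 1. Authenticate Docker to your private registry (requires AWS credentials).
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin <account-id>.dkr.ecr.us-east-1.amazonaws.com

# 2. Build, tag, and push the image.
docker build -t my-ecr-repo .
docker tag my-ecr-repo:latest <account-id>.dkr.ecr.us-east-1.amazonaws.com/my-ecr-repo:latest
docker push <account-id>.dkr.ecr.us-east-1.amazonaws.com/my-ecr-repo:latest

# 3. Optional CLI alternative to refreshing the console view:
aws ecr describe-images --repository-name my-ecr-repo
```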
<h5 id="heading-step-2-creation-image-and-push-to-repository">Step 2: <strong><em>Create the image and push it to the repository</em></strong></h5>
<p><strong>Authenticate Docker to ECR</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724946046168/d2ef328f-be5d-49d1-8d50-b40ba7db2bd9.png" alt class="image--center mx-auto" /></p>
<p><strong>Build Docker Image</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724946089517/9bce2ee2-b1eb-4632-be29-3100bce7d6fb.png" alt class="image--center mx-auto" /></p>
<p><strong>Tag the image and push it</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724946178322/5f71cf80-34e6-4240-8fc4-0108fa69e280.png" alt class="image--center mx-auto" /></p>
<p>List images in the ECR repository:</p>
<p>Click on the refresh button to verify that the Docker image has been uploaded to the ECR repository.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724946336300/227e2aaf-5e3d-4cb6-94b8-0d6c6da1b5f3.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-create-ecs"><strong>📌 Create ECS</strong></h2>
<h5 id="heading-step-1-navigate-to-amazon-ecs">Step 1: <strong>Navigate to Amazon ECS</strong></h5>
<p>Go to the AWS Management Console and search for ECS</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724947486797/2585c702-85dd-4daa-a6bd-4d74433f7de8.png" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-2-create-a-new-esc-cluster">Step 2: <strong>Create a new ECS cluster</strong></h5>
<p>Under <strong>Cluster name</strong>, enter <strong>cluster1</strong>.<br />Under <strong>Infrastructure</strong>, choose <strong>"AWS Fargate"</strong>, then click on <strong>Create</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724947605798/75f131ca-d277-4be6-879c-5544bb6f9baa.png" alt class="image--center mx-auto" /></p>
<p>Cluster was created successfully</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724947658849/c6abc172-1ae7-41ba-921e-e037f8efc247.png" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-3-create-definition-task">Step 3: <strong>Create Task Definition</strong></h5>
<p>Click on <strong>Create a new task definition</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724947699022/cc370348-eaa4-4c7c-8f8f-9428d7255bf1.png" alt class="image--center mx-auto" /></p>
<p>Under <strong>Task definition family</strong>, enter a name for your task. Choose the <strong>FARGATE</strong> launch type. For the operating system, select Linux/x86_64.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724947771528/936a85f6-cd48-41b1-a301-38c7b01b813c.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724947846925/9c4456a2-3ef1-48a9-922b-46244d2d14ef.png" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-4-create-a-container">Step 4: <strong>Create a container</strong></h5>
<p>Fill in the options with:</p>
<p><strong>Name of container</strong> (web-server)<br /><strong>Image URL</strong>: Copy the URI from the Repository that we created earlier<br /><strong>Essential Container</strong> (Yes)<br /><strong>Port Mapping</strong> Container (Port 80), <strong>Port Name</strong> (httpd)</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724947911922/c5fb721b-8079-4ddf-8c24-8a613ecb189b.png" alt class="image--center mx-auto" /></p>
<p>Then click <strong>Create</strong> and the task definition will be created</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724947976077/69ea3038-b3fa-45d4-8f11-1df41e746d33.png" alt class="image--center mx-auto" /></p>
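<p>The console form above corresponds to a task-definition JSON document. Here is a sketch of the equivalent document, mirroring the values chosen in this walkthrough (account ID and region are placeholders, and a real Fargate task also needs an <code>executionRoleArn</code> so ECS can pull the image from ECR):</p>

```shell
# Sketch of the task definition built in the console (placeholders inside).
cat > taskdef.json <<'EOF'
{
  "family": "ECR-httpd",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "web-server",
      "image": "<account-id>.dkr.ecr.<region>.amazonaws.com/my-ecr-repo:latest",
      "essential": true,
      "portMappings": [
        { "name": "httpd", "containerPort": 80, "protocol": "tcp" }
      ]
    }
  ]
}
EOF
# After filling in the placeholders, it could be registered from the CLI with:
#   aws ecs register-task-definition --cli-input-json file://taskdef.json
```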
<h5 id="heading-step-5-create-ecs-service">Step 5: <strong>Create ECS service</strong></h5>
<p>Go back to the cluster we created, scroll down, and click <strong>Create</strong> under <strong>Services</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724948197989/f1ff1bfe-49df-45ab-a60b-28c6bfdf015b.png" alt class="image--center mx-auto" /></p>
<p>Under the <strong>Compute options</strong> menu, select <strong>Capacity provider strategy</strong> and choose <strong>FARGATE</strong> as the capacity provider.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724948039139/00c16061-10ee-4634-962f-21f045fbad96.png" alt class="image--center mx-auto" /></p>
<p>Under <strong>Deployment configuration</strong>, choose <strong>Task</strong>. In <strong>Task definition</strong>, select the task definition created earlier <strong>(i.e., ECR-httpd)</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724948300078/10cddbab-7438-487b-a3ab-236e3d8d46e6.png" alt class="image--center mx-auto" /></p>
<p>Under <strong>Networking</strong>, select your VPC and Subnets.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724948314483/d2bf5d18-55aa-437e-94c3-a0ac612fe1c5.png" alt class="image--center mx-auto" /></p>
<p>Click on <strong>Create new security group</strong>, select <strong>HTTP</strong>, and open it to anywhere (0.0.0.0/0).</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724948381535/f39f6138-5b88-4b91-aafc-b62fa53e053c.png" alt class="image--center mx-auto" /></p>
<p>Click <strong>Create</strong>. The service has been created successfully</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724948518537/13c59916-b6b2-40d2-a637-5f07d22a59d3.png" alt class="image--center mx-auto" /></p>
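<p>For reference, the same service can be created in a single CLI call — a sketch with placeholder subnet and security-group IDs (the console's capacity-provider choice is approximated here by <code>--launch-type FARGATE</code>):</p>

```bash
aws ecs create-service \
  --cluster cluster1 \
  --service-name web-service \
  --task-definition ECR-httpd \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-xxxxxxxx],securityGroups=[sg-xxxxxxxx],assignPublicIp=ENABLED}"
```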
<h2 id="heading-access-the-web-page"><strong>✨Access the web page</strong></h2>
<p><strong>Step 1:</strong> <strong>Navigate to Configuration</strong></p>
<p>Return to <strong>cluster1</strong> and open it, click on <strong>Tasks</strong>, then click on the running task</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724948731710/d8f059fc-1cdb-4248-82a2-fb21cfd86007.png" alt class="image--center mx-auto" /></p>
<p>Under <strong>Configuration</strong>, click on <strong>open address</strong> and open the address in a web browser to access the <strong>HTTPD</strong> page.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724948884348/2cd33c64-b7c8-47e7-81b2-c2a1d7b74461.png" alt class="image--center mx-auto" /></p>
<p><strong>Step 2:</strong> <strong>Check the result</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724948917862/cc6eed2d-0ed6-468c-95fe-b21f33c31a28.png" alt class="image--center mx-auto" /></p>
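<p>The public address can also be retrieved from the CLI rather than the console — a sketch of the usual lookup chain (requires AWS credentials; the cluster name <code>cluster1</code> follows this walkthrough):</p>

```bash
# Find the running task, the ENI it was assigned, then that ENI's public IP.
TASK_ARN=$(aws ecs list-tasks --cluster cluster1 --query 'taskArns[0]' --output text)
ENI_ID=$(aws ecs describe-tasks --cluster cluster1 --tasks "$TASK_ARN" \
  --query "tasks[0].attachments[0].details[?name=='networkInterfaceId'].value" --output text)
aws ec2 describe-network-interfaces --network-interface-ids "$ENI_ID" \
  --query 'NetworkInterfaces[0].Association.PublicIp' --output text
# Then open http://<that-ip>/ in a browser, or: curl http://<that-ip>/
```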
<h2 id="heading-contributing">🤝 Contributing</h2>
<p>Your perspective is valuable! Whether you see potential for improvement or appreciate what's already here, your contributions are welcomed and appreciated. Thank you for considering joining us in making this project even better. Feel free to follow me for updates on this project and others, and to explore opportunities for collaboration. Together, we can create something amazing!</p>
<h2 id="heading-license">📄 License</h2>
<p>This project is licensed under the JoebahoCloud License</p>
]]></content:encoded></item><item><title><![CDATA[Three ways of launching Amazon Linux 2 and Ubuntu Ec2 instances and install Docker and Jenkins on them]]></title><description><![CDATA[Three Methods for Launching Amazon Linux 2 and Ubuntu EC2 Instances with Docker and Jenkins installed
🚀 Overview:
This project aims to demonstrate three different approaches to launching Amazon Linux 2 and Ubuntu EC2 instances on AWS and installing ...]]></description><link>https://platform.joebahocloud.com/three-ways-of-launching-amazon-linux-2-and-ubuntu-ec2-instances-and-install-docker-and-jenkins-on-them</link><guid isPermaLink="true">https://platform.joebahocloud.com/three-ways-of-launching-amazon-linux-2-and-ubuntu-ec2-instances-and-install-docker-and-jenkins-on-them</guid><category><![CDATA[Three Methods for Launching Amazon Linux 2 and Ubuntu EC2 Instances with Docker and Jenkins installed]]></category><dc:creator><![CDATA[Joseph Mbatchou]]></dc:creator><pubDate>Sat, 27 Jul 2024 05:06:35 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1721972868961/eeed9951-7f22-469e-a1d2-bdf6c95b68f5.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-three-methods-for-launching-amazon-linux-2-and-ubuntu-ec2-instances-with-docker-and-jenkins-installed">Three Methods for Launching Amazon Linux 2 and Ubuntu EC2 Instances with Docker and Jenkins installed</h1>
<h1 id="heading-overview">🚀 Overview:</h1>
<p>This project aims to demonstrate three different approaches to launching Amazon Linux 2 and Ubuntu EC2 instances on AWS and installing Docker and Jenkins on them. The three methods include using AWS CloudFormation, Terraform, and direct Bash scripting with user data.</p>
<h2 id="heading-problem-statement">🔧 Problem Statement</h2>
<p>This project demonstrates three distinct methods to deploy Amazon Linux 2 and Ubuntu EC2 instances with Docker and Jenkins installed. Each approach showcases different infrastructure-as-code and automation techniques commonly used in DevOps practices.</p>
<h2 id="heading-techonology-stack">💽 Technology Stack</h2>
<p>The stack consists of the following components:</p>
<ul>
<li><p><strong>VPC</strong>: AWS VPC</p>
</li>
<li><p><strong>EC2 Instances</strong>: AWS EC2</p>
</li>
<li><p><strong>Containerization</strong>: Docker</p>
</li>
<li><p><strong>Cl/CD:</strong> Jenkins</p>
</li>
<li><p><strong>Scripting:</strong> Bash scripting</p>
</li>
</ul>
<h2 id="heading-architecture-diagram">📌 Architecture Diagram</h2>
<pre><code class="lang-bash">                                      +-----------------------------+
                                      |         AWS Region          |
                                      |                             |
                                      | +-------------------------+ |
                                      | |       VPC               | |
                                      | |                         | |
                                      | | +-------------------+   | |
                                      | | | Availability Zone |   | |
                                      | | |         A         |   | |
                                      | | |                   |   | |
                                      | | | +-------------+   |   | |
                                      | | | |  Subnet A   |   |   | |
                                      | | | |             |   |   | |
                                      | | | | +--------+  |   |   | |
                                      | | | | | EC2     |  |   |   | |
                                      | | | | | Amazon  |  |   |   | |
                                      | | | | | Linux 2 |  |   |   | |
                                      | | | | +--------+  |   |   | |
                                      | | | +-------------+   |   | |
                                      | | +-------------------+   | |
                                      | |                         | |
                                      | | +-------------------+   | |
                                      | | | Availability Zone |   | |
                                      | | |         B         |   | |
                                      | | |                   |   | |
                                      | | | +-------------+   |   | |
                                      | | | |  Subnet B   |   |   | |
                                      | | | |             |   |   | |
                                      | | | | +--------+  |   |   | |
                                      | | | | | EC2     |  |   |   | |
                                      | | | | | Ubuntu  |  |   |   | |
                                      | | | | +--------+  |   |   | |
                                      | | | +-------------+   |   | |
                                      | | +-------------------+   | |
                                      | |                         | |
                                      | +-------------------------+ |
                                      |                             |
                                      +-----------------------------+
</code></pre>
<h2 id="heading-project-requirements">🌟 Project Requirements</h2>
<p>Before you get started, make sure you have the following prerequisites in place:</p>
<ul>
<li><p>An AWS account with IAM user credentials configured.</p>
</li>
<li><p>An infrastructure (VPC, subnets, route tables, security groups, NACLs...) ready to be used for the lab.</p>
</li>
<li><p>An Amazon Linux 2 and an Ubuntu EC2 instance running, for the bash scripting part</p>
</li>
<li><p>To install Jenkins on a server you must meet the minimum hardware requirements:</p>
<p>  256 MB of RAM and 1 GB of drive space (although 10 GB is a recommended minimum if running Jenkins as a Docker container). Recommended hardware configuration for a small team: 4 GB+ of RAM and 50 GB+ of drive space.</p>
</li>
<li><p>To install Docker Desktop on Ubuntu you must meet the minimum system requirements:</p>
<ul>
<li><p>OS: a 64-bit version of Ubuntu</p>
</li>
<li><p>RAM: at least 2 GB, but Docker recommends more for larger deployments or resource-intensive applications</p>
</li>
<li><p>Disk space: 100 GB of free space</p>
</li>
<li><p>CPU: a sufficient amount, depending on the applications</p>
</li>
<li><p>Kernel: a 64-bit kernel with support for virtualization</p>
</li>
<li><p>Virtualization: support for KVM virtualization technology</p>
</li>
<li><p>QEMU: version 5.2 or later</p>
</li>
<li><p>Init system: systemd</p>
</li>
<li><p>Desktop environment: Gnome, KDE, or MATE</p>
</li>
<li><p>Firewall: firewall rulesets created with iptables or ip6tables, added to the DOCKER-USER chain</p>
</li>
</ul>
</li>
</ul>
<h2 id="heading-table-of-contents">📋 Table of Contents</h2>
<p>I - <strong>AWS CloudFormation Template</strong></p>
<p><strong>Step 1:</strong> Description</p>
<p><strong>Step 2:</strong> Instructions of deployment</p>
<p><strong>Step 3:</strong> Results</p>
<p><strong>Step 4:</strong> Clean up</p>
<p>II - <strong>Terraform Script</strong></p>
<p><strong>Step 1:</strong> Description</p>
<p><strong>Step 2:</strong> Instructions of deployment</p>
<p><strong>Step 3:</strong> Results</p>
<p><strong>Step 4:</strong> Clean up</p>
<p>III - <strong>Bash Script</strong></p>
<p><strong>Step 1:</strong> Description</p>
<p><strong>Step 2:</strong> Instructions of deployment</p>
<p><strong>Step 3:</strong> Results</p>
<p><strong>Step 4:</strong> Clean up</p>
<h2 id="heading-aws-cloudformation-template">✨AWS CLOUDFORMATION TEMPLATE</h2>
<h5 id="heading-step-1-description">Step 1: <strong>DESCRIPTION</strong></h5>
<p>AWS CloudFormation allows you to define your infrastructure as code. We will create a CloudFormation template that launches Amazon Linux 2 and Ubuntu EC2 instances and uses user data scripts to install Docker and Jenkins.</p>
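<p>Besides the console, a finished template can be launched from the CLI. Roughly like this (stack name and parameter values are examples; <code>CAPABILITY_NAMED_IAM</code> is needed because the template creates a named IAM role):</p>

```bash
aws cloudformation deploy \
  --template-file Linux-Docker-Jenkins.yml \
  --stack-name linux-docker-jenkins \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameter-overrides \
      InstanceType=t2.medium \
      KeyName=my-key-pair \
      SSHLocation=203.0.113.10/32 \
      VpcId=vpc-xxxxxxxx \
      SubnetId=subnet-xxxxxxxx

# Check the stack outputs (instance ID, public DNS, public IP):
aws cloudformation describe-stacks --stack-name linux-docker-jenkins \
  --query 'Stacks[0].Outputs'
```

Note that Jenkins wants at least 2 GB of RAM for comfortable use, so a size like t2.medium is a safer parameter choice than t2.micro.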
<h5 id="heading-step-2-instructions-of-deployment">Step 2: <strong>INSTRUCTIONS OF DEPLOYMENT</strong></h5>
<ul>
<li><p>Create a CloudFormation template</p>
<p>  Use VS Code to create both files, "Linux-Docker-Jenkins.yml" and "ubuntu-docker-jenkins.yml"</p>
</li>
<li><p>Define the resources for Amazon Linux 2 and Ubuntu EC2 instances. Here Use the <code>UserData</code> property to specify a script that installs Docker and Jenkins.</p>
<p>  Linux-Docker-Jenkins.yml</p>
<pre><code class="lang-bash">  AWSTemplateFormatVersion: <span class="hljs-string">'2010-09-09'</span>

  Metadata:
   License: Apache-2.0
   Authors:
      Description: Joseph Mbatchou  
   Description: <span class="hljs-string">" This template launches an amazon Linux 2 ec2 instance in a custom VPC, with an IAM assume role attached to it
                for SSM, then with the user data it will install Docker and Jenkins"</span>

  Parameters:
    InstanceType:
      Description: EC2 instance <span class="hljs-built_in">type</span>.
      Type: String
      AllowedValues:
        - t2.nano
        - t2.micro
        - t2.small
        - t2.medium
        - t2.large
    KeyName:
      Description: Name of an existing EC2 key pair <span class="hljs-keyword">for</span> SSH access to the EC2 instance.
      Type: AWS::EC2::KeyPair::KeyName
    SSHLocation:
      Description: The IP address range that can be used to SSH to the EC2 instances
      Type: String
      MinLength: <span class="hljs-string">'9'</span>
      MaxLength: <span class="hljs-string">'18'</span>
      AllowedPattern: <span class="hljs-string">"(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})"</span> <span class="hljs-comment"># IP Address</span>
      ConstraintDescription: must be a valid IP CIDR range of the form x.x.x.x/x.
    ImageId:
      Type: AWS::SSM::Parameter::Value&lt;AWS::EC2::Image::Id&gt;
      Default: /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2
    VpcId:
      Type: AWS::EC2::VPC::Id
      Description: Select an existing VPC
    SubnetId:
      Type: AWS::EC2::Subnet::Id
      Description: Select an existing subnet within the selected VPC
  Resources:
    IamRole:
      Type: <span class="hljs-string">'AWS::IAM::Role'</span>
      Properties:
        RoleName: Ec2RoleForSSM
        Description: EC2 IAM role <span class="hljs-keyword">for</span> SSM access
        AssumeRolePolicyDocument:
          Version: <span class="hljs-string">'2012-10-17'</span>
          Statement:
            - Effect: Allow
              Principal:
                Service:
                  - ec2.amazonaws.com
              Action:
                - <span class="hljs-string">'sts:AssumeRole'</span>
        Path: /
        ManagedPolicyArns:
          - <span class="hljs-string">'arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore'</span>
    Ec2InstanceProfile:
      Type: <span class="hljs-string">'AWS::IAM::InstanceProfile'</span>
      Properties:
        InstanceProfileName: Ec2RoleForSSM
        Path: /
        Roles:
          - Ref: IamRole
    WebServer:
      Type: AWS::EC2::Instance
      Properties:
        IamInstanceProfile: !Ref Ec2InstanceProfile
        ImageId: !Ref ImageId
        InstanceType: !Ref InstanceType 
        KeyName: !Ref KeyName
        NetworkInterfaces: 
        - AssociatePublicIpAddress: <span class="hljs-string">"true"</span>
          DeviceIndex: <span class="hljs-string">"0"</span>
          GroupSet: 
            - !Ref WebServerSecurityGroup
          SubnetId: !Ref SubnetId
        UserData:
          Fn::Base64: !Sub |
            <span class="hljs-comment">#!/bin/bash</span>
            sudo yum update -y
            sudo yum install -y python3-pip
            sudo amazon-linux-extras install docker -y
            sudo yum install -y docker
            sudo service docker start
            sudo systemctl <span class="hljs-built_in">enable</span> docker
            sudo systemctl restart docker
            sudo usermod -aG docker ec2-user
            sudo yum update -y
            sudo wget -O /etc/yum.repos.d/jenkins.repo \
            https://pkg.jenkins.io/redhat-stable/jenkins.repo
            sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io-2023.key
            sudo yum upgrade -y
            sudo yum install java-17-amazon-corretto -y
            sudo yum install jenkins -y
            sudo systemctl start jenkins
            sudo systemctl <span class="hljs-built_in">enable</span> jenkins
            sudo systemctl restart jenkins
            sudo usermod -aG jenkins ec2-user
            sudo reboot

        Tags:
         - Key: Name
           Value: Linux-Docker-Jenkins-Server
    WebServerSecurityGroup: 
      Type: AWS::EC2::SecurityGroup
      Properties:
        GroupDescription: <span class="hljs-string">'Enable HTTP access via port 80 SSH access'</span>
        VpcId: !Ref VpcId
        SecurityGroupIngress:
          - CidrIp: 0.0.0.0/0
            FromPort: 8080
            IpProtocol: tcp
            ToPort: 8080
          - CidrIp: !Ref SSHLocation
            FromPort: 22
            IpProtocol: tcp
            ToPort: 22
          - CidrIp: 0.0.0.0/0
            FromPort: 80
            IpProtocol: tcp
            ToPort: 80 

  Outputs:
    InstanceId:
      Description: ID of the EC2 instance
      Value: !Ref WebServer
    PublicDNS:
      Description: Public DNS name of the EC2 instance
      Value: !GetAtt WebServer.PublicDnsName
    PublicIP:
      Description: Public IP address of the EC2 instance
      Value: !GetAtt WebServer.PublicIp
</code></pre>
<p>  ubuntu-docker-jenkins.yml</p>
</li>
</ul>
<pre><code class="lang-bash">AWSTemplateFormatVersion: <span class="hljs-string">'2010-09-09'</span>

Metadata:
  License: Apache-2.0
  Authors:
    Description: Joseph Mbatchou  <span class="hljs-comment"># The writer of this template</span>
  Description: <span class="hljs-string">" This template launches an ubuntu ec2 instance in a custom VPC, with an IAM assume role attached to it
              for SSM, then with the user data it will install Docker and Jenkins"</span>

Parameters:
  InstanceType:
    Description: EC2 instance <span class="hljs-built_in">type</span>.
    Type: String
    AllowedValues:
      - t2.nano
      - t2.micro
      - t2.small
      - t2.medium
      - t2.large
  KeyName:
    Description: Name of an existing EC2 key pair <span class="hljs-keyword">for</span> SSH access to the EC2 instance.
    Type: AWS::EC2::KeyPair::KeyName
  SSHLocation:
    Description: Place to enter your IP address. The IP address range that can be used to SSH to the EC2 instances
    Type: String
    MinLength: <span class="hljs-string">'9'</span>
    MaxLength: <span class="hljs-string">'18'</span>
    AllowedPattern: <span class="hljs-string">"(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})"</span>
    ConstraintDescription: must be a valid IP CIDR range of the form x.x.x.x/x.
  ImageId:
    Type: AWS::SSM::Parameter::Value&lt;AWS::EC2::Image::Id&gt;
    Default: <span class="hljs-string">'/aws/service/canonical/ubuntu/server/jammy/stable/current/amd64/hvm/ebs-gp2/ami-id'</span>
  VpcId:
    Type: AWS::EC2::VPC::Id
    Description: Select an existing VPC
  SubnetIds:
    Type: AWS::EC2::Subnet::Id
    Description: Select at least one subnet within the selected VPC  

Resources:
  IamRole:
    Type: <span class="hljs-string">'AWS::IAM::Role'</span>
    Properties:
      RoleName: !Sub <span class="hljs-string">"<span class="hljs-variable">${AWS::StackName}</span>-Ec2RoleForSSM"</span>
      Description: EC2 IAM role <span class="hljs-keyword">for</span> SSM access
      AssumeRolePolicyDocument:
        Version: <span class="hljs-string">'2012-10-17'</span>
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - ec2.amazonaws.com
            Action:
              - <span class="hljs-string">'sts:AssumeRole'</span>
      Path: /
      ManagedPolicyArns:
        - <span class="hljs-string">'arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore'</span>

  Ec2InstanceProfile:
    Type: <span class="hljs-string">'AWS::IAM::InstanceProfile'</span>
    Properties:
      InstanceProfileName: !Sub <span class="hljs-string">"<span class="hljs-variable">${AWS::StackName}</span>-Ec2RoleForSSM"</span>
      Path: /
      Roles:
        - !Ref IamRole

  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      IamInstanceProfile: !Ref Ec2InstanceProfile
      ImageId: !Ref ImageId
      InstanceType: !Ref InstanceType
      KeyName: !Ref KeyName
      NetworkInterfaces: 
        - AssociatePublicIpAddress: <span class="hljs-string">"true"</span>
          DeviceIndex: <span class="hljs-string">"0"</span>
          GroupSet: 
            - !Ref WebServerSecurityGroup
          SubnetId: !Ref SubnetIds
      UserData:
        Fn::Base64: !Sub |
          <span class="hljs-comment">#!/bin/bash</span>
          sudo apt-get update 
          sudo apt-get install -y ca-certificates curl gnupg
          sudo install -m 0755 -d /etc/apt/keyrings
          curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
          sudo chmod a+r /etc/apt/keyrings/docker.gpg
          <span class="hljs-built_in">echo</span> \
             <span class="hljs-string">"deb [arch=<span class="hljs-subst">$(dpkg --print-architecture)</span> signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
             <span class="hljs-subst">$(. /etc/os-release &amp;&amp; echo <span class="hljs-string">"<span class="hljs-variable">$VERSION_CODENAME</span>"</span>)</span> stable"</span> | \
             sudo tee /etc/apt/sources.list.d/docker.list &gt; /dev/null
          sudo apt-get update 
          sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
          sudo systemctl start docker.service
          sudo systemctl <span class="hljs-built_in">enable</span> docker.service
          sudo usermod -aG docker ubuntu
          sudo apt-get update -y
          sudo apt-get install openjdk-17-jdk -y
          curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | sudo tee /usr/share/keyrings/jenkins-keyring.asc &gt; /dev/null
          <span class="hljs-built_in">echo</span> <span class="hljs-string">"deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/"</span> | sudo tee /etc/apt/sources.list.d/jenkins.list &gt; /dev/null
          sudo apt-get update  
          sudo apt-get install jenkins -y
          sudo systemctl start jenkins
          sudo systemctl <span class="hljs-built_in">enable</span> jenkins
          sudo usermod -aG jenkins ubuntu
          sudo ufw allow 8080
          sudo ufw allow OpenSSH
          sudo ufw --force <span class="hljs-built_in">enable</span>
      Tags:     
        - Key: Name
          Value: Ubuntu-Docker-jenkins-server

  WebServerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: <span class="hljs-string">'Enable HTTP, Jenkins (8080) and SSH access'</span>
      VpcId: !Ref VpcId
      SecurityGroupIngress:
        - CidrIp: 0.0.0.0/0
          FromPort: 8080
          IpProtocol: tcp
          ToPort: 8080
        - CidrIp: !Ref SSHLocation
          FromPort: 22
          IpProtocol: tcp
          ToPort: 22
        - CidrIp: 0.0.0.0/0
          FromPort: 80
          IpProtocol: tcp
          ToPort: 80 

Outputs:
  InstanceId:
    Description: ID of the EC2 instance
    Value: !Ref WebServer
  PublicDNS:
    Description: Public DNS name of the EC2 instance
    Value: !GetAtt WebServer.PublicDnsName
  PublicIP:
    Description: Public IP address of the EC2 instance
    Value: !GetAtt WebServer.PublicIp
</code></pre>
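<p>The <code>SSHLocation</code> value you plan to pass can be sanity-checked locally against the same pattern the template enforces. A small sketch, using POSIX character classes in place of <code>\d</code> (which <code>grep -E</code> does not support):</p>
<pre><code class="lang-bash">#!/bin/bash
# Same CIDR shape as the template's AllowedPattern, anchored so the whole
# value must match.
PATTERN='^([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})/([0-9]{1,2})$'

check_cidr() {
  if echo "$1" | grep -Eq "$PATTERN"; then echo "valid"; else echo "invalid"; fi
}

check_cidr "203.0.113.0/24"   # valid
check_cidr "203.0.113.0"      # invalid: missing the /prefix part
</code></pre>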
<ul>
<li>Deploy the stack using the AWS Management Console.</li>
</ul>
<p>In the console, open CloudFormation, click "Create Stack", and upload each template.</p>
<p>Linux-stack</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722046619365/23cf4de3-3c7a-4398-bf26-f05ed64d027e.png" alt class="image--center mx-auto" /></p>
<p>ubuntu stack</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722046672398/2fb5c490-00ea-4edf-b861-529700600dd7.png" alt class="image--center mx-auto" /></p>
<p>After launching both stacks, each reaches the CREATE_COMPLETE status:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722046781914/4b3d7e2f-5e84-452a-b084-9bf04b7194d9.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722046804013/78498d08-03d3-4a52-bd28-473d8751a6c8.png" alt class="image--center mx-auto" /></p>
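<p>If you prefer the CLI to the console, the same deployment maps to a <code>create-stack</code> call. A sketch with placeholder values (stack name, template path, key pair, CIDR, VPC and subnet IDs are all examples to replace); the command is printed rather than executed so it can be reviewed first:</p>
<pre><code class="lang-bash">#!/bin/bash
# Build the equivalent CLI call. CAPABILITY_NAMED_IAM is required because the
# template creates an IAM role with an explicit RoleName.
STACK_NAME="ubuntu-docker-jenkins"
CMD="aws cloudformation create-stack \
  --stack-name $STACK_NAME \
  --template-body file://ubuntu-docker-jenkins.yaml \
  --parameters ParameterKey=InstanceType,ParameterValue=t2.micro \
               ParameterKey=KeyName,ParameterValue=my-key \
               ParameterKey=SSHLocation,ParameterValue=203.0.113.0/24 \
               ParameterKey=VpcId,ParameterValue=vpc-0123456789abcdef0 \
               ParameterKey=SubnetIds,ParameterValue=subnet-0123456789abcdef0 \
  --capabilities CAPABILITY_NAMED_IAM"
echo "$CMD"   # review it, then run it with: eval "$CMD"
</code></pre>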
<p>Step 3: <strong>RESULTS</strong></p>
<p>As a result, both instances show as running in the console:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722046884640/0ff4eaab-149a-49c4-a8be-e3295279b4e3.png" alt class="image--center mx-auto" /></p>
<p>After connecting to each server via Session Manager or SSH, you can check the status and version of each installed application.</p>
<p>Linux instance</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722046924620/a8ef9ee1-b54f-44b2-a373-135961d76baa.png" alt class="image--center mx-auto" /></p>
<p>ubuntu instance</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722046959586/63479337-4d03-4f0b-bae0-3657cf871f36.png" alt class="image--center mx-auto" /></p>
<p>Step 4: <strong>CLEAN UP</strong></p>
<p>You can clean up all the created resources by deleting them via CloudFormation: go back to the CloudFormation console, select each stack, and click Delete until the status reads "Delete Complete".</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722047012373/e6dbac77-5810-4832-962b-8d199fbe704d.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-terraform-script">✨TERRAFORM SCRIPT</h2>
<h5 id="heading-step-1-description-1">Step 1: <strong>DESCRIPTION</strong></h5>
<p>Terraform is an open-source infrastructure-as-code tool. We will write Terraform configurations that launch Amazon Linux 2 and Ubuntu EC2 instances and use user data to install Docker and Jenkins.</p>
<h5 id="heading-step-2-instructions-of-deployment-1">Step 2: <strong>INSTRUCTIONS OF DEPLOYMENT</strong></h5>
<p>Create a Terraform script</p>
<ul>
<li><p>Create both files (Linux-main.tf and ubuntu-main.tf) and place each one in its own folder.</p>
</li>
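<li><p>Separate folders matter because Terraform treats a directory as one root module with its own state, and the two files declare identically named resources. A runnable sketch of the layout (the directory names are arbitrary):</p>
<pre><code class="lang-bash">#!/bin/bash
# One working directory per configuration, each with its own state file.
mkdir -p linux-jenkins ubuntu-jenkins
touch linux-jenkins/Linux-main.tf ubuntu-jenkins/ubuntu-main.tf
ls linux-jenkins ubuntu-jenkins
</code></pre>
</li>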
<li><p>Define the resources for Amazon Linux 2 and Ubuntu EC2 instances. Use the <code>UserData</code> property to specify a script that installs Docker and Jenkins.</p>
<p>  Here are the contents of each file.</p>
<p>  Linux-main.tf</p>
<pre><code class="lang-bash">  terraform {
    required_providers {
      aws = {
        <span class="hljs-built_in">source</span>  = <span class="hljs-string">"hashicorp/aws"</span>
        version = <span class="hljs-string">"~&gt; 4.0"</span>
      }
    }

  }
  variable <span class="hljs-string">"aws_region"</span> {
    <span class="hljs-built_in">type</span>        = string
    description = <span class="hljs-string">"region in which to launch"</span>
    default = <span class="hljs-string">"us-west-2"</span>
  }
  variable <span class="hljs-string">"instance_type"</span> {
    <span class="hljs-built_in">type</span>        = string
    description = <span class="hljs-string">"instance type of the instance"</span>
    default = <span class="hljs-string">"t2.micro"</span>
  }

  variable <span class="hljs-string">"key_name"</span> {
    <span class="hljs-built_in">type</span>        = string
    description = <span class="hljs-string">"key name of the instance"</span>
    default = <span class="hljs-string">"ansible-key"</span>
  }
  <span class="hljs-comment">#Configure the AWS Provider</span>
  provider <span class="hljs-string">"aws"</span> {
    region = var.aws_region
  }<span class="hljs-comment"># Data source for latest Amazon Linux 2 AMI</span>
  data <span class="hljs-string">"aws_ami"</span> <span class="hljs-string">"amazon_linux_2"</span> {
    most_recent = <span class="hljs-literal">true</span>
    owners      = [<span class="hljs-string">"amazon"</span>]

    filter {
      name   = <span class="hljs-string">"name"</span>
      values = [<span class="hljs-string">"amzn2-ami-hvm-*-x86_64-gp2"</span>]
    }
  }
  data <span class="hljs-string">"aws_vpc"</span> <span class="hljs-string">"custom_vpc"</span> {
    filter {
      name   = <span class="hljs-string">"tag:Name"</span>
      values = [<span class="hljs-string">"c@licost-vpc"</span>] <span class="hljs-comment"># Replace with your VPC's name tag</span>
    }
  }
  data <span class="hljs-string">"aws_subnet"</span> <span class="hljs-string">"custom_subnet"</span> {
    vpc_id = data.aws_vpc.custom_vpc.id

    filter {
      name   = <span class="hljs-string">"tag:Name"</span>
      values = [<span class="hljs-string">"c@licost-subnet-public2-us-west-2b"</span>] <span class="hljs-comment"># Replace with your subnet's name tag</span>
    }
  }

  <span class="hljs-comment"># IAM Role</span>
  resource <span class="hljs-string">"aws_iam_role"</span> <span class="hljs-string">"ec2_ssm_role"</span> {
    name = <span class="hljs-string">"Ec2RoleForSSM"</span>
    assume_role_policy = jsonencode({
      Version = <span class="hljs-string">"2012-10-17"</span>
      Statement = [
        {
          Action = <span class="hljs-string">"sts:AssumeRole"</span>
          Effect = <span class="hljs-string">"Allow"</span>
          Principal = {
            Service = <span class="hljs-string">"ec2.amazonaws.com"</span>
          }
        }
      ]
    })
  }

  resource <span class="hljs-string">"aws_iam_role_policy_attachment"</span> <span class="hljs-string">"ssm_policy_attachment"</span> {
    role       = aws_iam_role.ec2_ssm_role.name
    policy_arn = <span class="hljs-string">"arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"</span>
  }

  resource <span class="hljs-string">"aws_iam_instance_profile"</span> <span class="hljs-string">"ec2_profile"</span> {
    name = <span class="hljs-string">"Ec2RoleForSSM"</span>
    role = aws_iam_role.ec2_ssm_role.name
  }

  <span class="hljs-comment"># Security Group</span>
  resource <span class="hljs-string">"aws_security_group"</span> <span class="hljs-string">"web_server_sg"</span> {
    name        = <span class="hljs-string">"WebServerSecurityGroup"</span>
    description = <span class="hljs-string">"Enable HTTP, Jenkins (8080) and SSH access"</span>
    vpc_id      = data.aws_vpc.custom_vpc.id

    ingress {
      from_port   = 8080
      to_port     = 8080
      protocol    = <span class="hljs-string">"tcp"</span>
      cidr_blocks = [<span class="hljs-string">"0.0.0.0/0"</span>]
    }

    ingress {
      from_port   = 22
      to_port     = 22
      protocol    = <span class="hljs-string">"tcp"</span>
      cidr_blocks = [<span class="hljs-string">"0.0.0.0/0"</span>]
    }

    ingress {
      from_port   = 80
      to_port     = 80
      protocol    = <span class="hljs-string">"tcp"</span>
      cidr_blocks = [<span class="hljs-string">"0.0.0.0/0"</span>]
    }

    egress {
      from_port   = 0
      to_port     = 0
      protocol    = <span class="hljs-string">"-1"</span>
      cidr_blocks = [<span class="hljs-string">"0.0.0.0/0"</span>]
    }
  }

  <span class="hljs-comment"># EC2 Instance</span>
  resource <span class="hljs-string">"aws_instance"</span> <span class="hljs-string">"web_server"</span> {
    ami                    = data.aws_ami.amazon_linux_2.id
    instance_type          = var.instance_type
    key_name               = var.key_name
    subnet_id              = data.aws_subnet.custom_subnet.id
    vpc_security_group_ids = [aws_security_group.web_server_sg.id]
    iam_instance_profile   = aws_iam_instance_profile.ec2_profile.name
    associate_public_ip_address = <span class="hljs-string">"true"</span>

    user_data = base64encode(&lt;&lt;-EOF
      <span class="hljs-comment">#!/bin/bash</span>
      sudo yum update -y
      sudo yum install -y python3-pip
      sudo amazon-linux-extras install -y docker
      sudo yum install -y docker
      sudo service docker start
      sudo systemctl <span class="hljs-built_in">enable</span> docker
      sudo systemctl restart docker
      sudo usermod -aG docker ec2-user
      sudo yum update -y
      sudo wget -O /etc/yum.repos.d/jenkins.repo \
      https://pkg.jenkins.io/redhat-stable/jenkins.repo
      sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io-2023.key
      sudo yum upgrade -y
      sudo yum install java-17-amazon-corretto -y
      sudo yum install jenkins -y
      sudo systemctl start jenkins
      sudo systemctl <span class="hljs-built_in">enable</span> jenkins
      sudo systemctl restart jenkins
      sudo usermod -aG jenkins ec2-user
      sudo reboot
    EOF
    )

    tags = {
      Name = <span class="hljs-string">"Linux-Docker-Jenkins-server"</span>
    }
  }
  output <span class="hljs-string">"instance_id"</span> {
    description = <span class="hljs-string">"ID of the EC2 instance"</span>
    value       = aws_instance.web_server.id
  }

  output <span class="hljs-string">"public_dns"</span> {
    description = <span class="hljs-string">"Public DNS name of the EC2 instance"</span>
    value       = aws_instance.web_server.public_dns
  }

  output <span class="hljs-string">"public_ip"</span> {
    description = <span class="hljs-string">"Public IP address of the EC2 instance"</span>
    value       = aws_instance.web_server.public_ip
  }
</code></pre>
<p>  ubuntu-main.tf</p>
<pre><code class="lang-bash">  terraform {
    required_providers {
      aws = {
        <span class="hljs-built_in">source</span>  = <span class="hljs-string">"hashicorp/aws"</span>
        version = <span class="hljs-string">"~&gt; 4.0"</span>
      }
    }
  }
  <span class="hljs-comment">#Configure the AWS Provider</span>
  provider <span class="hljs-string">"aws"</span> {
    region = var.aws_region
  }
  variable <span class="hljs-string">"aws_region"</span> {
    <span class="hljs-built_in">type</span>        = string
    description = <span class="hljs-string">"region in which to launch"</span>
    default = <span class="hljs-string">"us-west-2"</span>
  }
  variable <span class="hljs-string">"instance_type"</span> {
    <span class="hljs-built_in">type</span>        = string
    description = <span class="hljs-string">"instance type of the instance"</span>
    default = <span class="hljs-string">"t2.micro"</span>
  }

  variable <span class="hljs-string">"key_name"</span> {
    <span class="hljs-built_in">type</span>        = string
    description = <span class="hljs-string">"key name of the instance"</span>
    default = <span class="hljs-string">"ansible-key"</span>
  }

  <span class="hljs-comment">#Data source for latest Amazon Linux 2 AMI</span>
  data <span class="hljs-string">"aws_ami"</span> <span class="hljs-string">"ubuntu"</span> {
    most_recent = <span class="hljs-literal">true</span>
    owners      = [<span class="hljs-string">"amazon"</span>]

    filter {
      name   = <span class="hljs-string">"name"</span>
      values = [<span class="hljs-string">"ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"</span>]
    }
  }
  data <span class="hljs-string">"aws_vpc"</span> <span class="hljs-string">"custom_vpc"</span> {
    filter {
      name   = <span class="hljs-string">"tag:Name"</span>
      values = [<span class="hljs-string">"c@licost-vpc"</span>] <span class="hljs-comment"># Replace with your VPC's name tag</span>
    }
  }
  data <span class="hljs-string">"aws_subnet"</span> <span class="hljs-string">"custom_subnet"</span> {
    vpc_id = data.aws_vpc.custom_vpc.id

    filter {
      name   = <span class="hljs-string">"tag:Name"</span>
      values = [<span class="hljs-string">"c@licost-subnet-public2-us-west-2b"</span>] <span class="hljs-comment"># Replace with your subnet's name tag</span>
    }
  }

  <span class="hljs-comment"># IAM Role</span>
  resource <span class="hljs-string">"aws_iam_role"</span> <span class="hljs-string">"ec2_ssm_role"</span> {
    name = <span class="hljs-string">"Ec2RoleForSSM"</span>
    assume_role_policy = jsonencode({
      Version = <span class="hljs-string">"2012-10-17"</span>
      Statement = [
        {
          Action = <span class="hljs-string">"sts:AssumeRole"</span>
          Effect = <span class="hljs-string">"Allow"</span>
          Principal = {
            Service = <span class="hljs-string">"ec2.amazonaws.com"</span>
          }
        }
      ]
    })
  }

  resource <span class="hljs-string">"aws_iam_role_policy_attachment"</span> <span class="hljs-string">"ssm_policy_attachment"</span> {
    role       = aws_iam_role.ec2_ssm_role.name
    policy_arn = <span class="hljs-string">"arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"</span>
  }

  resource <span class="hljs-string">"aws_iam_instance_profile"</span> <span class="hljs-string">"ec2_profile"</span> {
    name = <span class="hljs-string">"Ec2RoleForSSM"</span>
    role = aws_iam_role.ec2_ssm_role.name
  }

  <span class="hljs-comment"># Security Group</span>
  resource <span class="hljs-string">"aws_security_group"</span> <span class="hljs-string">"web_server_sg"</span> {
    name        = <span class="hljs-string">"WebServerSecurityGroup"</span>
    description = <span class="hljs-string">"Enable HTTP, Jenkins (8080) and SSH access"</span>
    vpc_id      = data.aws_vpc.custom_vpc.id

    ingress {
      from_port   = 8080
      to_port     = 8080
      protocol    = <span class="hljs-string">"tcp"</span>
      cidr_blocks = [<span class="hljs-string">"0.0.0.0/0"</span>]
    }

    ingress {
      from_port   = 22
      to_port     = 22
      protocol    = <span class="hljs-string">"tcp"</span>
      cidr_blocks = [<span class="hljs-string">"0.0.0.0/0"</span>]
    }

    ingress {
      from_port   = 80
      to_port     = 80
      protocol    = <span class="hljs-string">"tcp"</span>
      cidr_blocks = [<span class="hljs-string">"0.0.0.0/0"</span>]
    }

    egress {
      from_port   = 0
      to_port     = 0
      protocol    = <span class="hljs-string">"-1"</span>
      cidr_blocks = [<span class="hljs-string">"0.0.0.0/0"</span>]
    }
  }

  <span class="hljs-comment"># EC2 Instance</span>
  resource <span class="hljs-string">"aws_instance"</span> <span class="hljs-string">"web_server"</span> {
    ami                         = data.aws_ami.ubuntu.id
    instance_type               = var.instance_type
    key_name                    = var.key_name
    subnet_id                   = data.aws_subnet.custom_subnet.id
    vpc_security_group_ids      = [aws_security_group.web_server_sg.id]
    iam_instance_profile        = aws_iam_instance_profile.ec2_profile.name
    associate_public_ip_address = <span class="hljs-string">"true"</span>

    user_data = base64encode(&lt;&lt;-EOF
      <span class="hljs-comment">#!/bin/bash</span>
      sudo apt-get update 
      sudo apt-get install -y ca-certificates curl gnupg
      sudo install -m 0755 -d /etc/apt/keyrings
      curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
      sudo chmod a+r /etc/apt/keyrings/docker.gpg
      <span class="hljs-built_in">echo</span> \
         <span class="hljs-string">"deb [arch=<span class="hljs-subst">$(dpkg --print-architecture)</span> signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
         <span class="hljs-subst">$(. /etc/os-release &amp;&amp; echo <span class="hljs-string">"<span class="hljs-variable">$VERSION_CODENAME</span>"</span>)</span> stable"</span> | \
         sudo tee /etc/apt/sources.list.d/docker.list &gt; /dev/null
      sudo apt-get update 
      sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
      sudo systemctl start docker.service
      sudo systemctl <span class="hljs-built_in">enable</span> docker.service
      sudo usermod -aG docker ubuntu
      sudo apt-get update -y
      sudo apt-get install openjdk-17-jdk -y
      curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | sudo tee /usr/share/keyrings/jenkins-keyring.asc &gt; /dev/null
      <span class="hljs-built_in">echo</span> <span class="hljs-string">"deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/"</span> | sudo tee /etc/apt/sources.list.d/jenkins.list &gt; /dev/null
      sudo apt-get update  
      sudo apt-get install jenkins -y
      sudo systemctl start jenkins
      sudo systemctl <span class="hljs-built_in">enable</span> jenkins
      sudo usermod -aG jenkins ubuntu
      sudo ufw allow 8080
      sudo ufw allow OpenSSH
      sudo ufw --force <span class="hljs-built_in">enable</span>
    EOF
    )

    tags = {
      Name = <span class="hljs-string">"Ubuntu-Docker-jenkins-server"</span>
    }
  }
  output <span class="hljs-string">"instance_id"</span> {
    description = <span class="hljs-string">"ID of the EC2 instance"</span>
    value       = aws_instance.web_server.id
  }

  output <span class="hljs-string">"public_dns"</span> {
    description = <span class="hljs-string">"Public DNS name of the EC2 instance"</span>
    value       = aws_instance.web_server.public_dns
  }

  output <span class="hljs-string">"public_ip"</span> {
    description = <span class="hljs-string">"Public IP address of the EC2 instance"</span>
    value       = aws_instance.web_server.public_ip
  }
</code></pre>
<p>  Step 3: <strong>RESULTS</strong></p>
<p>  With each file in its own folder, run the standard Terraform workflow in each folder:</p>
<ol>
<li>Initialise the working directory</li>
</ol>
</li>
</ul>
<pre><code class="lang-bash">    terraform init
</code></pre>
<ol start="2">
<li>Format the configuration files to the canonical style</li>
</ol>
<pre><code class="lang-bash">    terraform fmt
</code></pre>
<ol start="3">
<li>Validate the configuration</li>
</ol>
<pre><code class="lang-bash">    terraform validate
</code></pre>
<ol start="4">
<li>Preview the resources that will be created</li>
</ol>
<pre><code class="lang-bash">    terraform plan
</code></pre>
<ol start="5">
<li>Create the resources</li>
</ol>
<pre><code class="lang-bash">    terraform apply
</code></pre>
<p>    After these steps you will see the following results.</p>
<ul>
<li><p>Linux instance</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722054773493/1a4f5eb4-6c87-4c1e-9c1e-f22ef3fde155.png" alt class="image--center mx-auto" /></p>
<p>  In the console you have</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722054919286/3363b58c-0f7d-4d36-bf6c-e00322819559.png" alt class="image--center mx-auto" /></p>
<p>  Ubuntu instance</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722055495059/4ed0c3a8-8237-4bee-98e1-b9558407c6ae.png" alt class="image--center mx-auto" /></p>
<p>  In the console</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722055539062/b29bf4b5-084d-456c-8878-5d20c2d179ca.png" alt class="image--center mx-auto" /></p>
<p>  After connecting to each instance via SSM, you can see the installed versions.</p>
<p>  For Linux instance</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722054877835/2f83c283-61bb-49aa-bba3-f7920258a5a6.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>For Ubuntu instance</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722055441701/1556b03d-2651-45a1-a58d-8b22f23bf6d5.png" alt class="image--center mx-auto" /></p>
<p>  Step 4: <strong>CLEAN UP</strong></p>
<p>  Use the destroy command to delete all the infrastructure:</p>
<pre><code class="lang-bash">  terraform destroy
</code></pre>
</li>
</ul>
<p>For the Linux instance</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722054958008/771058a2-3567-456e-9b8f-001f870119d2.png" alt class="image--center mx-auto" /></p>
<p>For the ubuntu instance</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722055357434/d1ba2f0a-e54a-47ef-b071-73e7514b753a.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-bash-script">✨BASH SCRIPT</h2>
<h5 id="heading-step-1-description-2">Step 1: <strong>DESCRIPTION</strong></h5>
<p>A Bash script can be used to manually launch EC2 instances and configure them using the AWS CLI. We will write a Bash script that launches Amazon Linux 2 and Ubuntu EC2 instances and uses user data to install Docker and Jenkins.</p>
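<p>EC2 delivers user data to the instance base64-encoded; when you pass <code>--user-data file://...</code> to <code>run-instances</code>, the CLI generally performs the encoding for you (behavior varies by CLI version). You can round-trip the encoding yourself to see what the instance receives (a sketch with a throwaway script file):</p>
<pre><code class="lang-bash">#!/bin/bash
# Write a tiny example user-data script, encode it the way EC2 receives it,
# then decode it back to confirm it round-trips intact.
printf '#!/bin/bash\necho hello from user data\n' | tee demo-user-data.sh
ENCODED=$(base64 demo-user-data.sh)
echo "$ENCODED"
echo "$ENCODED" | base64 --decode   # restores the original script
</code></pre>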
<h5 id="heading-step-2-instructions-of-deployment-2">Step 2: <strong>INSTRUCTIONS OF DEPLOYMENT</strong></h5>
<ul>
<li><p>Create a Bash script (<a target="_blank" href="http://launch-instances.sh"><code>launch-instances.sh</code></a>) that launches both the Amazon Linux 2 and Ubuntu EC2 instances. The script includes default values you can replace with your own.</p>
<pre><code class="lang-bash">  <span class="hljs-comment">#!/bin/bash</span>

  <span class="hljs-comment"># Variables</span>
  REGION=<span class="hljs-string">"us-west-2"</span>
  INSTANCE_TYPE=<span class="hljs-string">"t2.micro"</span>
  KEY_NAME=<span class="hljs-string">"ansible-key"</span>
  SECURITY_GROUP_ID=$(aws ec2 describe-security-groups --filters Name=group-name,Values=DevOps-SG --query <span class="hljs-string">'SecurityGroups[0].GroupId'</span> --output text --region <span class="hljs-variable">$REGION</span>)

  <span class="hljs-comment"># Subnet IDs - replace these with your actual subnet IDs</span>
  SUBNET_ID_1=<span class="hljs-string">"subnet-0afe31980b7b724ac"</span>
  SUBNET_ID_2=<span class="hljs-string">"subnet-099fca76d17b2fd4f"</span>

  <span class="hljs-comment"># Launch Amazon Linux 2 Instance</span>
  aws ec2 run-instances \
      --image-id ami-0648742c7600c103f \
      --count 1 \
      --instance-type <span class="hljs-variable">$INSTANCE_TYPE</span> \
      --associate-public-ip-address \
      --key-name <span class="hljs-variable">$KEY_NAME</span> \
      --security-group-ids <span class="hljs-variable">$SECURITY_GROUP_ID</span> \
      --subnet-id <span class="hljs-variable">$SUBNET_ID_1</span> \
      --region <span class="hljs-variable">$REGION</span> \
      --user-data file:///users/josephmbatchou/amazon-linux-user-data.sh \
      --tag-specifications <span class="hljs-string">'ResourceType=instance,Tags=[{Key=Name,Value=AmazonLinux2-Instance},{Key=OS,Value=AmazonLinux2}]'</span>

  <span class="hljs-comment"># Launch Ubuntu Instance</span>
  aws ec2 run-instances \
      --image-id ami-0075013580f6322a1 \
      --count 1 \
      --instance-type <span class="hljs-variable">$INSTANCE_TYPE</span> \
      --key-name <span class="hljs-variable">$KEY_NAME</span> \
      --associate-public-ip-address \
      --security-group-ids <span class="hljs-variable">$SECURITY_GROUP_ID</span> \
      --subnet-id <span class="hljs-variable">$SUBNET_ID_2</span> \
      --region <span class="hljs-variable">$REGION</span> \
      --user-data file:///users/josephmbatchou/ubuntu-user-data.sh \
      --tag-specifications <span class="hljs-string">'ResourceType=instance,Tags=[{Key=Name,Value=Ubuntu-Instance},{Key=OS,Value=Ubuntu}]'</span>
</code></pre>
</li>
<li><p>Create the two user data scripts that install Docker and Jenkins on the instances: "amazon-linux-user-data.sh" and "ubuntu-user-data.sh".</p>
</li>
</ul>
<p>amazon-linux-user-data.sh</p>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>

<span class="hljs-comment"># install Jenkins</span>
sudo yum update -y
sudo wget -O /etc/yum.repos.d/jenkins.repo \
    https://pkg.jenkins.io/redhat-stable/jenkins.repo
sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io-2023.key
sudo yum upgrade -y
sudo yum install java-17-amazon-corretto -y
sudo yum install jenkins -y
sudo systemctl start jenkins
sudo systemctl <span class="hljs-built_in">enable</span> jenkins
sudo systemctl restart jenkins
sudo usermod -aG jenkins ec2-user

<span class="hljs-comment">#install Docker</span>
sudo yum update -y
sudo yum install -y python3-pip
sudo amazon-linux-extras install -y docker
sudo yum install -y docker
sudo service docker start
sudo systemctl <span class="hljs-built_in">enable</span> docker
sudo systemctl restart docker
sudo usermod -aG docker ec2-user
</code></pre>
<p>ubuntu-user-data.sh</p>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>

<span class="hljs-comment">#Install Jenkins</span>
sudo apt-get update -y
sudo apt-get install openjdk-17-jdk -y
curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | sudo tee /usr/share/keyrings/jenkins-keyring.asc &gt; /dev/null
<span class="hljs-built_in">echo</span> "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/" | sudo tee /etc/apt/sources.list.d/jenkins.list &gt; /dev/null
sudo apt-get update
sudo apt-get install jenkins -y
sudo systemctl start jenkins
sudo systemctl <span class="hljs-built_in">enable</span> jenkins
sudo usermod -aG jenkins ubuntu
sudo ufw allow 8080
sudo ufw allow OpenSSH
sudo ufw --force enable

<span class="hljs-comment">#install Docker</span>
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
<span class="hljs-built_in">echo</span> \
   <span class="hljs-string">"deb [arch=<span class="hljs-subst">$(dpkg --print-architecture)</span> signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
   <span class="hljs-subst">$(. /etc/os-release &amp;&amp; echo <span class="hljs-string">"<span class="hljs-variable">$VERSION_CODENAME</span>"</span>)</span> stable"</span> | \
   sudo tee /etc/apt/sources.list.d/docker.list &gt; /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo systemctl start docker.service
sudo systemctl <span class="hljs-built_in">enable</span> docker.service
sudo usermod -aG docker ubuntu
</code></pre>
<ul>
<li><p>Use the AWS CLI to launch Amazon Linux 2 and Ubuntu EC2 instances:</p>
<p>  To run the provisioning script from the CLI, use a shell where the AWS CLI is configured with the right default region. The default region here is "us-west-2"; you can verify this with:</p>
</li>
</ul>
<pre><code class="lang-bash">aws configure
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722042265708/3cc8df96-a595-4eac-875e-3a37b23418b3.png" alt class="image--center mx-auto" /></p>
<p>Once you are set up, make each script executable:</p>
<pre><code class="lang-bash">chmod +x launch-instances.sh
chmod +x amazon-linux-user-data.sh
chmod +x ubuntu-user-data.sh
</code></pre>
<p>Since both user data scripts are referenced by the launch script, you only need to run the launch script:</p>
<pre><code class="lang-bash">./launch-instance.sh
</code></pre>
<p>The instances will then launch, and you will see the successful results.</p>
<p>Linux instance</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722043508802/6818a1ef-451a-4bb4-9aac-2cb099b1b28b.png" alt class="image--center mx-auto" /></p>
<p>Ubuntu instance</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722043521991/54055292-cb61-4478-b33d-5035382bbd66.png" alt class="image--center mx-auto" /></p>
<p>Step 3: <strong>RESULTS</strong></p>
<p>In the console you can see that the two instances were created; connect to them via SSH to check the installed tools.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722042283193/d98d38b3-9ecc-403c-933b-dd792b375ac4.png" alt class="image--center mx-auto" /></p>
<p>After you SSH into each instance, check each application's version to confirm the installation.</p>
<p>Amazon Linux instance</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722043482461/a18ae084-30e6-4b4a-9e5e-fbb7fd38c603.png" alt class="image--center mx-auto" /></p>
<p>Ubuntu instance</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722042426063/f925f37b-dbdb-41a0-af8b-479c268b02e5.png" alt class="image--center mx-auto" /></p>
<p>Step 4: CLEAN UP</p>
<p>To delete both instances, look up each instance ID and use the <code>terminate-instances</code> command to destroy them:</p>
<pre><code class="lang-bash">aws ec2 terminate-instances --instance-ids &lt;instance-id&gt;
</code></pre>
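<p>If you do not have the IDs at hand, you can list them first and then pass them to the terminate call. This is a sketch; it lists every running instance in the region, so add a tag filter if you share the account:</p>
<pre><code class="lang-bash"># List the IDs of all running instances in the configured region
aws ec2 describe-instances \
  --filters "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].InstanceId" \
  --output text

# Then terminate them by passing the IDs
aws ec2 terminate-instances --instance-ids &lt;instance-id-1&gt; &lt;instance-id-2&gt;
</code></pre>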
<p>Running the command with the right instance IDs terminates both instances.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722043453972/916763d0-ce8d-4490-9c77-57d3986b69b2.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-conclusion">📌 Conclusion</h2>
<p>This project showcases three different approaches to launching and configuring EC2 instances with Docker and Jenkins. Each method has its advantages and can be chosen based on specific requirements and preferences. The use of infrastructure as code tools like CloudFormation and Terraform ensures reproducibility and ease of management, while Bash scripting provides flexibility and simplicity.</p>
<h2 id="heading-contributing"><strong>🤝 Contributing</strong></h2>
<p>Your perspective is valuable! Whether you see potential for improvement or appreciate what's already here, your contributions are welcomed and appreciated. Thank you for considering joining us in making this project even better. Feel free to follow me for updates on this project and others, and to explore opportunities for collaboration. Together, we can create something amazing!</p>
<h2 id="heading-license">📄 License</h2>
<p>This project is licensed under the Joebaho Cloud License</p>
]]></content:encoded></item><item><title><![CDATA[Create and Connect to a MySQL Database with Amazon RDS via CLI and GUI]]></title><description><![CDATA[Create and Connect to a MySQL Database with Amazon RDS via CLI and GUI
🚀 Overview:
The project aims to explore the process of creating and connecting to a MySQL database using Amazon RDS (Relational Database Service). Amazon RDS is a managed databas...]]></description><link>https://platform.joebahocloud.com/create-and-connect-to-a-mysql-database-with-amazon-rds-via-cli-and-gui</link><guid isPermaLink="true">https://platform.joebahocloud.com/create-and-connect-to-a-mysql-database-with-amazon-rds-via-cli-and-gui</guid><category><![CDATA[Create and Connect to a MySQL Database with Amazon RDS via CLI and GUI]]></category><dc:creator><![CDATA[Joseph Mbatchou]]></dc:creator><pubDate>Tue, 07 May 2024 03:01:52 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1715048518948/3822fb5f-c3a3-4f53-aaf9-1074b608a1d7.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-create-and-connect-to-a-mysql-database-with-amazon-rds-via-cli-and-gui"><strong>Create and Connect to a MySQL Database with Amazon RDS via CLI and GUI</strong></h1>
<h1 id="heading-overview">🚀 Overview:</h1>
<p>The project aims to explore the process of creating and connecting to a MySQL database using Amazon RDS (Relational Database Service). Amazon RDS is a managed database service that simplifies the process of setting up, operating, and scaling relational databases in the cloud. By leveraging Amazon RDS, users can create MySQL database instances with ease, benefiting from features such as automated backups, high availability, and scalability.</p>
<p>In this project, we will delve into the steps involved in setting up a MySQL database instance on Amazon RDS, configuring security settings, and establishing connections to the database from various client applications. Through hands-on experimentation, we will gain insights into the capabilities of Amazon RDS and understand how it streamlines database management tasks for developers and organizations.</p>
<h2 id="heading-problem-statement">🔧 Problem Statement</h2>
<p>Despite the popularity of MySQL as a relational database management system, setting up and managing MySQL databases can be a complex and time-consuming task, especially for developers and organizations without extensive database administration expertise. Additionally, traditional on-premises database solutions may lack scalability and fault tolerance, leading to performance bottlenecks and downtime. By addressing these objectives, the project aims to equip participants with the knowledge and skills required to leverage Amazon RDS for creating and managing MySQL databases in a cloud environment effectively. Additionally, it seeks to highlight the advantages of adopting cloud-based database solutions for modern application development and deployment scenarios.</p>
<h2 id="heading-techonology-stack">💽 Technology Stack</h2>
<p>The architecture consists of the following three tiers:</p>
<ul>
<li><p><strong>VPC</strong>: AWS VPC</p>
</li>
<li><p><strong>EC2 Instances</strong>: AWS EC2</p>
</li>
<li><p><strong>Database</strong>: AWS RDS</p>
</li>
<li><p><strong>Database Client Connection:</strong> SQL WorkBench</p>
</li>
</ul>
<h2 id="heading-architecture-diagram">📌 Architecture Diagram</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715049143422/fc44fa26-2ee1-479d-bbcb-da4d270a2a7d.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-project-requirements">🌟 Project Requirements</h2>
<p>Before you get started, make sure you have the following prerequisites in place:</p>
<ul>
<li><p>An AWS account with IAM user credentials configured.</p>
</li>
<li><p>An infrastructure (VPC, subnets, route tables, security groups, NACLs...) ready to be used for the lab.</p>
</li>
<li><p>An EC2 instance with MySQL installed.</p>
</li>
</ul>
<h2 id="heading-table-of-contents">📋 Table of Contents</h2>
<p>I - <strong>Infrastructure Configuration</strong></p>
<p>Step 1: VPC Configuration</p>
<p>Step 2: Launch EC2 Instance</p>
<p>Step 3: MySQL Installation on the Instance</p>
<p>II - <strong>Instructions of Deployment</strong></p>
<p>Step 1: Creation of the MySQL Database Instance</p>
<p>Step 2: Download SQL Client</p>
<p>Step 3: Connect to the MySQL Database Instance</p>
<p>Step 4: Clean Up</p>
<h2 id="heading-infrastructure-configuration">✨Infrastructure Configuration</h2>
<p>You need to create all of the resources required for the project. Since this part is straightforward, we will present infrastructure that was prebuilt.</p>
<h5 id="heading-step-1-vpc-configuration">Step 1: <strong>VPC Configuration</strong></h5>
<p>Here we declare our foundation, the networking environment: the VPC, subnets, route tables, security groups, NACLs, and so on.</p>
<ul>
<li>VPC</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713926820700/66569bb8-fa21-4f7a-8072-e37755ed47e3.jpeg" alt class="image--center mx-auto" /></p>
<ul>
<li>Web Security group</li>
</ul>
<p>Here we define the rules for accessing the EC2 instance that will connect to the database. Port 80 is open from everywhere, and port 22 (SSH) allows access from our computer only.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715043348754/3008e927-3e21-439e-811e-cb4770239908.jpeg" alt class="image--center mx-auto" /></p>
<ul>
<li>Database Security group</li>
</ul>
<p>Next we define the firewall for the database. We create a security group that opens port 3306 (MySQL), with inbound traffic tied to the security group of the EC2 instance.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715043548379/39598d44-4082-4cee-975e-47ba41dcb8ba.jpeg" alt class="image--center mx-auto" /></p>
<ul>
<li>Database Subnet Group</li>
</ul>
<p>Here we group the subnets in which the database can be launched. In our case we group both private subnets. In the AWS Console, navigate to RDS, choose "Subnet groups", and select "Create DB subnet group". You will be prompted for a name, a description, and the VPC from which to choose subnets.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714969348885/c64f3c25-8430-4382-878f-4093abbff3b1.jpeg" alt class="image--center mx-auto" /></p>
<p>Then choose the AZs. Depending on whether you want a Single-AZ or Multi-AZ database, pick the required number of AZs; in our case, two. You also need to know the IDs of the private subnets you want to add to the group.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714969545441/fd282145-dd95-44a4-8571-5368ddf010ea.jpeg" alt class="image--center mx-auto" /></p>
<p>Finally, hit "Create" and you will see a screen like the one below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714969641472/036a7aca-aafc-4ceb-baa7-77565ccdc8bd.jpeg" alt class="image--center mx-auto" /></p>
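<p>The same subnet group can also be created from the CLI. The command below is a sketch; the group name and the two subnet IDs are placeholders you must replace with your own values:</p>
<pre><code class="lang-bash"># Create a DB subnet group spanning the two private subnets
aws rds create-db-subnet-group \
  --db-subnet-group-name my-db-subnet-group \
  --db-subnet-group-description "Private subnets for the RDS lab" \
  --subnet-ids subnet-0123456789abcdef0 subnet-0fedcba9876543210
</code></pre>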
<p>Step 2: <strong>Launch EC2 Instance</strong></p>
<p>In this step, we will use Amazon EC2 to create an instance named "web-server" with the Amazon Linux 2023 AMI, a t2.micro instance type, one key pair, launched in the current VPC in a public subnet with the Web-SG security group, 8 GB of storage, and an attached SSM role. As a reminder, all of this is Free Tier eligible.</p>
<p>Open the <a target="_blank" href="https://console.aws.amazon.com/console/home?region=us-east-1">AWS Management Console in a new browser window</a>, so you can keep this step-by-step guide open. When the console opens, select <strong>EC2</strong> from the services list and choose <strong>Launch instance</strong>.</p>
<p>Give the instance a name; in this case it will be "web-server".</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715044753771/35acc753-48ac-4a19-99cb-72538767d262.jpeg" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715044764335/2f9a9239-e451-4498-8907-3c1d6a8bbd92.jpeg" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715044853997/7f879a7e-49a6-49bd-a7a5-09a26d9f473a.jpeg" alt class="image--center mx-auto" /></p>
<p>Network setting:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715044866091/911375bf-e813-4f65-8b32-9b4f17f8333a.jpeg" alt class="image--center mx-auto" /></p>
<p>Volume storage fixed to 8 GB</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715044877080/5e3ba8e8-598c-401b-b7ba-567d4d103851.jpeg" alt class="image--center mx-auto" /></p>
<p>Attach the SSM role so you can connect to the instance from the console.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715044886114/f810680a-166e-44cc-8b0f-763b9903ccbc.jpeg" alt class="image--center mx-auto" /></p>
<p>Finally, launch the instance.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715044894905/85919381-7208-44ca-9308-37f97402e091.jpeg" alt class="image--center mx-auto" /></p>
<p>We will then see the instance up and running:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715045415639/666b4e17-72fb-407b-a05c-2d9fac5324ba.jpeg" alt class="image--center mx-auto" /></p>
<p>Step 3: <strong>MySQL installation on the instance</strong></p>
<p>To install MySQL, connect to the instance using the Connect option.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715045518131/f892e25d-84b5-4504-9c9a-4cfd7c26580d.jpeg" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715045530760/1e651815-16a2-4eb5-8f91-f8f99a679aa3.jpeg" alt class="image--center mx-auto" /></p>
<p>As we are using the Amazon Linux 2023 AMI, we will install a MySQL-compatible database via the following commands.</p>
<p>Update the system with command:</p>
<pre><code class="lang-bash">sudo yum update -y
</code></pre>
<p>Then install the MariaDB 10.5 server (MySQL-compatible; Amazon Linux 2023 does not ship MySQL packages in its default repositories) with:</p>
<pre><code class="lang-bash">sudo dnf install -y mariadb105-server
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715046005171/35bc24b2-0b12-42c6-8ef4-331625ea0890.jpeg" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715046014965/3c79f1ff-6636-44da-913d-60986fc3426b.jpeg" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715046025038/2290b0a7-06be-477f-9a88-82bfa6de7144.jpeg" alt class="image--center mx-auto" /></p>
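<p>After the install you can confirm the client is available and, optionally, start the local service (a sketch; connecting to RDS later only needs the client, so starting the server is optional):</p>
<pre><code class="lang-bash"># Confirm the MySQL-compatible client is installed
mysql --version

# Optionally start the local MariaDB service and enable it at boot
sudo systemctl enable --now mariadb
</code></pre>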
<h2 id="heading-instructions-of-deployment">💼 Instructions of Deployment</h2>
<p>Follow these steps to produce the deployment:</p>
<h5 id="heading-step-1-creation-of-mysql-database-instance">Step 1: <strong>Creation of the MySQL database instance</strong></h5>
<p>In this step, we will use Amazon RDS to create a MySQL DB instance with the db.t3.micro DB instance class, 20 GB of storage, and automated backups enabled with a retention period of one day. As a reminder, all of this is Free Tier eligible.</p>
<p>a. Open the <a target="_blank" href="https://console.aws.amazon.com/console/home?region=us-east-1">AWS Management Console in a new browser window</a>, so you can keep this step-by-step guide open. When the console opens, search for and choose <strong>RDS</strong> to open the <strong>Amazon RDS console</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714970110761/fba36bda-a2b4-4235-a743-f191ee235700.jpeg" alt class="image--center mx-auto" /></p>
<p>b. In the <strong>Create database</strong> section, choose <strong>Create database</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714970759720/bcc8c70d-ce73-4194-9605-f4b999869f0c.jpeg" alt class="image--center mx-auto" /></p>
<p>c. You now have options to select your engine. For this tutorial, choose the <strong>MySQL</strong> icon, leave the default value of edition and engine version, and select the <strong>Free Tier</strong> template.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714970810891/845f2ff2-4c96-4b55-9dc5-f2c9f04aff70.jpeg" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714970836338/002e1723-3704-4f14-8f0e-80bce4d33733.jpeg" alt class="image--center mx-auto" /></p>
<p><strong>Settings</strong>:</p>
<p>Here you can choose to define your username and password yourself, or have AWS manage them for you. In our case we will manage those credentials ourselves, so we have to provide:</p>
<ul>
<li><p><strong>DB instance identifier</strong>: Type a name for the DB instance that is unique for your account in the Region that you selected. For this tutorial, we will name it <strong>Database-1.</strong></p>
</li>
<li><p><strong>Master username</strong>: Type a username that you will use to log in to your DB instance. We will use masterUsername in this example.</p>
</li>
<li><p><strong>Master password</strong>: Type a password that contains from 8 to 41 printable ASCII characters (excluding /,", and @) for your master user password.</p>
</li>
<li><p><strong>Confirm password</strong>: Retype your password</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714971479532/fbdf1fd7-fd3c-4170-927f-e63e76adbde2.jpeg" alt class="image--center mx-auto" /></p>
<p>  <strong>Instance specifications:</strong></p>
<ul>
<li><p><strong>DB instance class</strong>: Select <strong>db.t3.micro — 2vCPUs, 1 GiB RAM.</strong> This equates to 1 GB memory and 2 vCPUs. To see a list of supported instance classes, see Amazon RDS Pricing.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714971817489/d439e7bd-3c3d-4ee1-8805-795ad56b5a74.jpeg" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Storage type</strong>: Select <strong>General Purpose(SSD)</strong>. For more information about storage, see Storage for Amazon RDS.</p>
</li>
<li><p><strong>Allocated storage</strong>: Select the default of 20 to allocate 20 GB of storage for your database. You can scale up to a maximum of 64 TB with Amazon RDS for MySQL.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714971843728/db7f6c67-6dc0-4917-8da5-07cd2a5b6488.jpeg" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Enable storage autoscaling</strong>: If your workload is cyclical or unpredictable, you would enable storage autoscaling so that Amazon RDS automatically scales up your storage when needed. This option does not apply to this tutorial.</p>
</li>
<li><p><strong>Multi-AZ deployment</strong>: Disable it for this tutorial; note that you will have to pay for a Multi-AZ deployment. Using a Multi-AZ deployment will automatically provision and maintain a synchronous standby replica in a different Availability Zone. For more information, see High Availability Deployment.</p>
</li>
<li><p><strong>Connectivity</strong></p>
<ul>
<li><p><strong>Compute resource</strong>: Choose <strong>Don’t connect to an EC2 compute resource.</strong> You can manually set up a connection to a compute resource later.</p>
</li>
<li><p><strong>Virtual Private Cloud (VPC)</strong>: Select <strong>Default VPC.</strong> For more information about VPC, see Amazon RDS and Amazon Virtual Private Cloud (VPC).</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714972039087/5b8eb6ad-4b0d-4173-b346-9c2bcc1febbd.jpeg" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p>        <strong>Additional connectivity configurations</strong></p>
<ul>
<li><p><strong>Subnet group</strong>: Choose the subnet group we created before.</p>
</li>
<li><p><strong>Public accessibility</strong>: Choose <strong>Yes.</strong> This will allocate an IP address for your database instance so that you can directly connect to the database from your own device.</p>
</li>
<li><p><strong>VPC security groups</strong>: Select the one we created before which is Database_sg</p>
</li>
<li><p><strong>Availability Zone</strong>: Choose <strong>No preference.</strong> See Regions and Availability Zones for more details.</p>
</li>
<li><p><strong>RDS Proxy</strong>: By using Amazon RDS Proxy, you can allow your applications to pool and share database connections to improve their ability to scale. Leave the <strong>RDS Proxy</strong> unchecked.</p>
</li>
<li><p><strong>Port</strong>: Leave the default value of 3306.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714972635406/a8b5c3ee-dc80-4f11-83af-3000c846dfdf.jpeg" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714972642796/6f6a170a-aec3-461e-9be4-96260e13cd28.jpeg" alt class="image--center mx-auto" /></p>
<p>  Amazon RDS supports several ways to authenticate database users. Choose <strong>Password authentication</strong> from the list of options</p>
<p>  <strong>Monitoring</strong></p>
</li>
</ul>
<ul>
<li><p><strong>Enhanced monitoring</strong>: Leave <strong>Enable enhanced monitoring</strong> unchecked to stay within the Free Tier. Enabling enhanced monitoring will give you metrics in real time for the operating system (OS) that your DB instance runs on. For more information, see <a target="_blank" href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Monitoring.html">Viewing DB Instance Metrics.</a></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714972985559/1ae0063d-d230-4ed9-af74-5e7fde08681e.jpeg" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p>In the <strong>Additional configurations</strong> section</p>
<p><strong>Database options</strong></p>
<ul>
<li><p><strong>Database name</strong>: Enter a database name that is 1 to 64 alphanumeric characters. If you do not provide a name, Amazon RDS will not automatically create a database on the DB instance you are creating.</p>
</li>
<li><p><strong>DB parameter group</strong>: Leave the default value. For more information, see Working with DB Parameter Groups.</p>
</li>
<li><p><strong>Option group</strong>: Leave the default value. Amazon RDS uses option groups to enable and configure additional features. For more information, see Working with Option Groups.</p>
</li>
</ul>
<p><strong>Encryption</strong>: This option is not available in the Free Tier. For more information, see <a target="_blank" href="http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html">Encrypting Amazon RDS Resources</a>.</p>
<p><strong>Backup</strong></p>
<ul>
<li><p><strong>Backup retention period</strong>: You can choose the number of days to retain the backups you take. For this tutorial, set this value to <strong>1 day</strong>.</p>
</li>
<li><p><strong>Backup window</strong>: Use the default of <strong>No preference</strong>.</p>
</li>
</ul>
<p><strong>Maintenance</strong></p>
<ul>
<li><p><strong>Auto minor version upgrade</strong>: Select <strong>Enable auto minor version upgrade</strong> to receive automatic updates when they become available.</p>
</li>
<li><p><strong>Maintenance Window</strong>: Select <strong>No preference.</strong></p>
</li>
</ul>
<p><strong>Deletion protection:</strong> Turn off <strong>Enable deletion protection</strong> for this tutorial. When this option is enabled, you're prevented from accidentally deleting the database.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714973000098/9a8d0e14-9446-4d88-9943-cae826d95caf.jpeg" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714973248524/148d88cf-e902-4aec-bab5-7e120b452bd7.jpeg" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714973256923/7bde862d-aed1-4cc2-8a5d-3efa7c5ea6de.jpeg" alt class="image--center mx-auto" /></p>
<p>Finally, create the database.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714973278042/6758d914-e08b-4f1f-9c66-f4cbfc1bfdf2.jpeg" alt class="image--center mx-auto" /></p>
<p>Your DB instance is now being created.</p>
<p>Note: Depending on the DB instance class and storage allocated, it could take several minutes for the new DB instance to become available.</p>
<p>The new DB instance appears in the list of DB instances on the RDS console. The DB instance will have a status of <strong>creating</strong> until the DB instance is created and ready for use. When the state changes to <strong>available,</strong> you can connect to a database on the DB instance.</p>
<p>Feel free to move on to the next step as you wait for the DB instance to become available.</p>
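<p>Rather than polling the console, you can have the CLI block until the instance is ready (a sketch; it assumes the identifier <code>database-1</code> used above):</p>
<pre><code class="lang-bash"># Block until the DB instance reports the "available" status
aws rds wait db-instance-available --db-instance-identifier database-1
echo "Database is available"
</code></pre>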
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714974410144/3f511d40-5b6b-4e77-b632-870542c17955.jpeg" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-2http1270018000vpc-flow-logs-on-aws-log-group-creation-download-sql-client">Step 2: Download SQL client</h5>
<p>Once the database instance creation is complete and the status changes to <strong>available,</strong> you can connect to a database on the DB instance using any standard SQL client. In this step, we will download MySQL Workbench, which is a popular SQL client.</p>
<p>a. Go to the <a target="_blank" href="http://dev.mysql.com/downloads/workbench/">Download MySQL Workbench page</a> to download and install MySQL Workbench. For more information on using MySQL, see the <a target="_blank" href="http://dev.mysql.com/doc/">MySQL Documentation</a>.</p>
<p>Note: Remember to run MySQL Workbench from the same device from which you created the DB instance. The security group your database is placed in is configured to allow connections only from that device. Choose the download based on the operating system you are using; in our case, macOS.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714973860464/074b6bf2-050a-40b6-8c36-df59d0efa910.jpeg" alt class="image--center mx-auto" /></p>
<p>b. You will be prompted to log in, sign up, or begin your download. You can choose <strong>No thanks, just start my download</strong> for a quick download.</p>
<p><img src="https://d1.awsstatic.com/getting-started-guides/51-create-mysql-db-steps/download-client-b.c0251d6996db538c832bd2cde9189765ab014116.png" alt="You will be prompted to login, sign up, or begin your download." /></p>
<p>After downloading, install the client on your computer. Open it by double-clicking the icon, and you will see the screen below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714974183448/021a6972-5e4c-40fb-8014-82157ddf73c4.jpeg" alt class="image--center mx-auto" /></p>
<p>Step 3: Connect to the MySQL database instance</p>
<p>There are two ways to connect to the database: through the CLI, if you prefer the command line, or through a graphical client (GUI).</p>
<p>With the CLI, use the command:</p>
<pre><code class="lang-bash">mysql -h hostname -u admin -p
</code></pre>
<p>The hostname is the endpoint of the database; the username here is admin, and you will be prompted for the password.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715047049069/516f987e-4767-44a9-8c63-5f17d0752bc5.jpeg" alt class="image--center mx-auto" /></p>
<p>So the final command will be</p>
<pre><code class="lang-bash">mysql -h database-1.c0ywy57rsm6g.us-west-1.rds.amazonaws.com -u admin -p
</code></pre>
<p>We will be prompted to enter the database password, after which we will see the screen below:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715047537244/216c18d6-edc8-4b8a-8486-e5990cde1425.jpeg" alt class="image--center mx-auto" /></p>
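<p>If you prefer not to copy the endpoint from the console, you can fetch it with the CLI and feed it straight into the client (a sketch; it assumes the identifier <code>database-1</code>):</p>
<pre><code class="lang-bash"># Fetch the RDS endpoint hostname
ENDPOINT=$(aws rds describe-db-instances \
  --db-instance-identifier database-1 \
  --query "DBInstances[0].Endpoint.Address" \
  --output text)

# Connect using the retrieved endpoint
mysql -h "$ENDPOINT" -u admin -p
</code></pre>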
<p>To connect via the GUI instead, open the MySQL Workbench client downloaded earlier, then hit the plus sign to add a new connection.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714979944693/73fd7680-9783-45d0-a881-81e9b11417f0.jpeg" alt class="image--center mx-auto" /></p>
<p>A dialog box appears. Enter the following:</p>
<ul>
<li><p><strong>Hostname</strong>: You can find your hostname on the Amazon RDS console as shown in the screenshot.</p>
</li>
<li><p><strong>Port</strong>: The default value should be 3306.</p>
</li>
<li><p><strong>Username</strong>: Type in the username you created for the Amazon RDS database. In this tutorial, it is '<em>masterUsername</em>.'</p>
</li>
<li><p><strong>Password</strong>: Choose <strong>Store in Vault</strong> (or <strong>Store in Keychain</strong> on MacOS) and enter the password that you used when creating the Amazon RDS database.</p>
</li>
</ul>
<p>Choose <strong>OK.</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714979980262/66689e7c-8784-4e60-82f9-bd3e27fda4ab.jpeg" alt class="image--center mx-auto" /></p>
<p>You are now connected to the database and can add or remove data.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715050680197/8b3933ee-d4d4-41d2-b272-e042dd570fbc.jpeg" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-4http1270018000vpc-flow-logs-on-aws-checking-logs-clean-up">Step 4: Clean Up</h5>
<p>Clean-up consists of deleting the database and the EC2 instance by terminating each of them.</p>
<p>For the database, select it, then choose the "Actions" option, then "Delete".</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715049993099/5043baab-d0fb-4b6d-b5c4-a9170b10b67a.jpeg" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715050007707/8c5db8bb-bef1-437b-848b-378690a59a86.jpeg" alt class="image--center mx-auto" /></p>
<p>The deletion process will follow; it takes a few minutes, so be patient.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715050141587/c06950f3-fdbc-4146-8163-6b4aad224cb6.jpeg" alt class="image--center mx-auto" /></p>
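<p>The same database clean-up can be done from the CLI. This is a sketch; <code>--skip-final-snapshot</code> is fine for a lab, but for real data you would take a final snapshot instead:</p>
<pre><code class="lang-bash"># Delete the DB instance without taking a final snapshot (lab only)
aws rds delete-db-instance \
  --db-instance-identifier database-1 \
  --skip-final-snapshot
</code></pre>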
<p>For the EC2 instance, select it, then choose "Instance state", then "Terminate instance".</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715050402280/e3e2ada7-d84c-435e-b84f-fbeeb84ddcc4.jpeg" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715050410402/d28b9b90-3890-4590-92f3-519f17f61293.jpeg" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715050418850/9d5eef62-6de4-4fef-950a-4ed3552c4b0b.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-contributing">🤝 Contributing</h2>
<p>Your perspective is valuable! Whether you see potential for improvement or appreciate what's already here, your contributions are welcomed and appreciated. Thank you for considering joining us in making this project even better. Feel free to follow me for updates on this project and others, and to explore opportunities for collaboration. Together, we can create something amazing!</p>
<h2 id="heading-license">📄 License</h2>
<p>This project is licensed under the Joebaho Cloud License</p>
]]></content:encoded></item><item><title><![CDATA[Deploy secure static website containing an online Resume using code pipeline on AWS and Github]]></title><description><![CDATA[Deploy secure static website containing an online Resume using code pipeline on AWS and Github
🚀 Overview:
Building a resume on a secure static website and deploying it via a code pipeline on AWS involves creating a personal website to showcase your...]]></description><link>https://platform.joebahocloud.com/deploy-secure-static-website-containing-an-online-resume-using-code-pipeline-on-aws-and-github</link><guid isPermaLink="true">https://platform.joebahocloud.com/deploy-secure-static-website-containing-an-online-resume-using-code-pipeline-on-aws-and-github</guid><dc:creator><![CDATA[Joseph Mbatchou]]></dc:creator><pubDate>Tue, 16 Apr 2024 05:12:36 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1713241236284/504ba89b-136c-474d-b51e-342650ca1b33.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-deploy-secure-static-website-containing-an-online-resume-using-code-pipeline-on-aws-and-github">Deploy secure static website containing an online Resume using code pipeline on AWS and Github</h1>
<h2 id="heading-overview">🚀 <mark>Overview:</mark></h2>
<p>Building a resume on a secure static website and deploying it via a code pipeline on AWS involves creating a personal website to showcase your resume and portfolio securely. This project utilizes static website hosting on AWS S3, along with AWS CodePipeline for automating the deployment process. By leveraging AWS services, you can ensure the reliability, scalability, and security of your website while streamlining the deployment workflow.</p>
<h2 id="heading-problem-statement">🔧 <mark>Problem Statement</mark></h2>
<p>The goal of this project is to address the challenge of creating and maintaining a secure, professional-looking website to showcase your resume and portfolio. Manual deployment processes are time-consuming and error-prone, leading to delays and inconsistencies in website updates. We will create a secure static website to host your resume and portfolio, and automate the deployment process using AWS services. By building a resume website on AWS S3 and deploying it via a code pipeline, you can ensure that your website is always up-to-date, secure, and accessible to potential employers or clients. This project will empower you to establish a professional online presence with minimal effort, allowing you to focus on advancing your career or growing your business.</p>
<h2 id="heading-techonology-stack">💽 <mark>Technology Stack</mark></h2>
<p>We will be using:</p>
<ul>
<li><p><strong>Repository</strong>: Github</p>
</li>
<li><p><strong>S3</strong>: AWS S3</p>
</li>
<li><p><strong>Route 53</strong>: AWS Route 53</p>
</li>
<li><p><strong>Code Pipeline</strong>: AWS CodePipeline</p>
</li>
<li><p><strong>CloudFront</strong>: AWS CloudFront</p>
</li>
<li><p><strong>Certificate manager</strong>: AWS ACM</p>
</li>
</ul>
<h2 id="heading-architecture-diagram">📌 <mark>Architecture Diagram</mark></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713238123541/9fd57cb0-70e3-4628-82cf-0387cb72a84c.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-project-requirements">🌟 <mark>Project Requirements</mark></h2>
<p>Before you get started, make sure you have the following prerequisites in place:</p>
<ul>
<li><p>AWS IAM credentials configured in your text editor. In this case we will use VS Code.</p>
</li>
<li><p>Git installed on your local machine and Github account set up <a target="_blank" href="https://www.github.com/">Github</a></p>
</li>
<li><p>Git for cloning the repository.</p>
</li>
<li><p>Files for the static website have already been downloaded from the internet to our local computer. All files were also modified for our use case.</p>
</li>
</ul>
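<p>Before starting, you can confirm the prerequisites from a terminal. This is a minimal pre-flight sketch; the AWS CLI check is optional, since this walkthrough works mostly through the console.</p>

```shell
# Pre-flight check for the tools used in this walkthrough
git --version   # Git is required for cloning and pushing

if command -v aws >/dev/null 2>&1; then
  aws --version   # optional: only needed if you prefer the CLI over the console
else
  echo "AWS CLI not installed (console access is enough for this project)"
fi
```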
<h2 id="heading-table-of-contents">📋 <mark> Table of Contents</mark></h2>
<p><a target="_blank" href="http://127.0.0.1:8000/resume-on-aws/#-Github-Repo-Creation">Step 1: Create Github repository</a></p>
<p><a target="_blank" href="http://127.0.0.1:8000/resume-on-aws/#-Github-repo-clone">Step 2: Clone the repository to the local machine</a></p>
<p><a target="_blank" href="http://127.0.0.1:8000/resume-on-aws/#-Download-files-and-push-files">Step 3: Download files in the repo and push code in github</a></p>
<p><a target="_blank" href="http://127.0.0.1:8000/resume-on-aws/#-S3-Bucket_Creation">Step 4: Create S3 bucket and configure to web static</a></p>
<p><a target="_blank" href="http://127.0.0.1:8000/resume-on-aws/#-Downloading-Files">Step 5: Upload files and folders to the S3 bucket</a></p>
<p><a target="_blank" href="http://127.0.0.1:8000/resume-on-aws/#-Domain-Name-Registration">Step 6: Register a domain name with Route 53</a></p>
<p><a target="_blank" href="http://127.0.0.1:8000/resume-on-aws/#-Creation-Of-Record">Step 7: Create record with S3 bucket as endpoint</a></p>
<p><a target="_blank" href="http://127.0.0.1:8000/resume-on-aws/#-Setup-ACM-Certificate">Step 8: Set up SSL/HTTPS with AWS Certificate Manager</a></p>
<p><a target="_blank" href="http://127.0.0.1:8000/resume-on-aws/#-Cloudfront-Distribution">Step 9: Create a cloudfront Distribution</a></p>
<p><a target="_blank" href="http://127.0.0.1:8000/resume-on-aws/#-Codepipeline-Configuration">Step 10: Configure the pipeline</a></p>
<p><a target="_blank" href="http://127.0.0.1:8000/resume-on-aws/#-Deployment">Step 11: Deployment</a></p>
<p><a target="_blank" href="http://127.0.0.1:8000/resume-on-aws/#-Web-Page">Step 12: Web page</a></p>
<h2 id="heading-instructions-of-configuration-and">💼 <mark>Instructions for Configuration and Deployment</mark></h2>
<p>Step 1: <strong><em>Github repository creation</em></strong></p>
<p>Here's a step-by-step guide on how to create a GitHub repository:</p>
<p><strong>Sign in to GitHub:</strong></p>
<p>Open your web browser and go to the GitHub website <a target="_blank" href="https://github.com/">Github</a>. Sign in to your GitHub account. If you don't have an account, you can sign up for free.</p>
<p><strong>Navigate to Your Profile:</strong></p>
<p>Once logged in, you'll be directed to your GitHub dashboard. Click on your profile icon in the top-right corner of the page to access your profile.</p>
<p><strong>Create a New Repository:</strong></p>
<p>On your profile page, click on the "Repositories" tab. Then click on the green "New" button located on the right side of the page.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713243540850/236f5766-7d70-4822-b07e-ed417f07492f.jpeg" alt class="image--center mx-auto" /></p>
<p><strong>Fill in Repository Details:</strong></p>
<p>You'll be taken to the "Create a new repository" page. Enter a name for your repository in the "Repository name" field; choose a descriptive name that reflects the purpose of your project. Optionally, you can add a description in the "Description" field. Choose the visibility of your repository (public or private) by selecting the appropriate radio button. If you want to initialize your repository with a README file, check the box next to "Initialize this repository with a README". You can also add a .gitignore file and choose a license for your project from the respective dropdown menus. In our case the repository will be named "Resume"; it will be public, with no README.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713235607988/c1dc3bd5-7f98-4f3d-b95e-3356e77ca54b.jpeg" alt class="image--center mx-auto" /></p>
<p>Once you've filled in the necessary details, click on the green "Create repository" button at the bottom of the page. Congratulations!</p>
<p>Your GitHub repository has been successfully created. You'll be redirected to the repository page where you can start adding files, making commits, and collaborating with others. You will see a page like the one below:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713235655179/0c695c12-33c1-41fa-b947-798723041545.jpeg" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-2-github-repository-clone-to-the-local-machine">Step 2: <strong><em>Github repository clone to the local machine</em></strong></h5>
<p>If you want to work on your repository locally, you can clone it to your computer using Git. Click on the green "Code" button located on the right side of the page. Copy the URL provided (HTTPS or SSH) and use it to clone the repository with the Git command line or a Git client. Clone the repository on your local machine using the "git clone" command:</p>
<pre><code class="lang-bash">   git <span class="hljs-built_in">clone</span> https://github.com/Joebaho/RESUME.git
</code></pre>
<p>You will see output like the image below:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713235709194/3468bbc2-4e57-4c0e-9f69-1692e7e02689.jpeg" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-3-download-files-and-push-code-to-github">Step 3: <strong><em>Download files and push code to Github</em></strong></h5>
<p>We have the index.html file and the javascript and css folders ready on our local machine. We just need to copy and paste them into the local repository. After that, we will push them to the remote GitHub repository created earlier, using the following set of commands.</p>
<p>To check the status of the repository and see which branch you are on and what has changed, use</p>
<pre><code class="lang-bash">   git status
</code></pre>
<p>To stage all new files, folders, or any other changes made in the repo, use</p>
<pre><code class="lang-bash">   git add .
</code></pre>
<p>To commit the changes with a message describing what was done, use the command</p>
<pre><code class="lang-bash">   git commit -m <span class="hljs-string">"Message to commit"</span>
</code></pre>
<p>To rename the current branch, use the command below. In this case we will rename it to master.</p>
<pre><code class="lang-bash">   git branch -M master
</code></pre>
<p>For your first push you must specify the origin remote in order to synchronize with the remote repository:</p>
<pre><code class="lang-bash">git remote add origin https://github.com/Joebaho/RESUME.git
</code></pre>
<p>Then push all files to GitHub with the command</p>
<pre><code class="lang-bash">   git push -u origin master
</code></pre>
<p>All files and folders are now copied to the GitHub repository. Check the image:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713235795890/ae535690-8751-4690-96d5-0b245190f347.jpeg" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-4-s3-bucket-creation">Step 4: <strong><em>S3 Bucket creation</em></strong></h5>
<p>Here we create the S3 bucket and configure it to host a static website.</p>
<p><strong>Sign in to the AWS Management Console:</strong></p>
<p>Open your web browser and go to the AWS Management Console <a target="_blank" href="https://aws.amazon.com/console/">aws/console</a>. Sign in to your AWS account.</p>
<p><strong>Navigate to S3:</strong></p>
<p>In the AWS Management Console, search for "S3" or navigate to the "Storage" section. Click on "S3" to open the Amazon S3 console.</p>
<p><strong>Create a New Bucket:</strong></p>
<p>In the S3 console, click on the "Create bucket" button. Enter a unique name for your bucket in the "Bucket name" field. This name must be globally unique across all existing S3 bucket names. Choose the AWS region where you want to create your bucket; select the region that is closest to your target audience for improved performance. Click on the "Create" button to create the bucket. Because we are creating the bucket to host a static website, the bucket name must match the name of the domain. So here our bucket name will be "www.joebahocloud.com". See the image below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713235865568/1b42a4f9-4f56-47ea-833f-c3cdf2d67ca9.jpeg" alt class="image--center mx-auto" /></p>
<p><strong>Configure Bucket Properties:</strong></p>
<p>Once the bucket is created, select the bucket from the list to open its properties. Click on the "Properties" tab and then click on "Static website hosting".</p>
<p><strong>Enable Static Website Hosting:</strong></p>
<p>In the "Static website hosting" panel, select the option to "Use this bucket to host a website". Enter the name of the index document (e.g., index.html) and, optionally, the name of the error document. Click on the "Save" button to save the changes.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713235907489/859bc391-97a6-4721-9e51-02dadd9b29cb.jpeg" alt class="image--center mx-auto" /></p>
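<p>The same website configuration can also be applied from the command line with <code>aws s3api put-bucket-website</code>. Below is a minimal sketch of the configuration file; the bucket name is taken from this article, and the error.html error page is an assumption.</p>

```json
{
  "IndexDocument": { "Suffix": "index.html" },
  "ErrorDocument": { "Key": "error.html" }
}
```

<p>Saved as website.json, it would be applied with <code>aws s3api put-bucket-website --bucket www.joebahocloud.com --website-configuration file://website.json</code>.</p>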
<p><strong>Set Bucket Policy:</strong></p>
<p>Next, you need to add a bucket policy to make the contents of your bucket publicly accessible. Click on the "Permissions" tab and then click on "Bucket Policy". Paste the following bucket policy into the bucket policy editor, replacing arn:aws:s3:::www.joebahocloud.com with the name of your bucket:</p>
<pre><code class="lang-json">{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::www.joebahocloud.com/*"
    }
  ]
}
</code></pre>
<p>Click on the "Save" button to save the bucket policy.</p>
<h5 id="heading-step-5-downloading-files-in-the-s3-bucket">Step 5: <strong><em>Uploading files to the S3 Bucket</em></strong></h5>
<p><strong>Upload Website Files:</strong></p>
<p>Go back to the "Overview" tab of your bucket. Click on the "Upload" button to upload your website files to the bucket. Make sure to include your index.html and any other assets (e.g., CSS, JavaScript) needed for your website. Once the files are uploaded, they will be publicly accessible via the bucket's website endpoint. You will see something like the image below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713236052676/def90892-d9ba-44fc-ae61-ba9b33f2440b.jpeg" alt class="image--center mx-auto" /></p>
<p><strong>Access Your Website:</strong></p>
<p>To access your website, find the "Endpoint" URL under the "Static website hosting" panel in the bucket properties. Open a web browser and navigate to the endpoint URL to view your static website.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713236100016/2292e8be-726b-4228-888f-d2c30495ceb4.jpeg" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-6-domain-name-registration">Step 6: <strong><em>Domain Name Registration</em></strong></h5>
<p>Go back into the AWS Management Console and navigate to Route 53:</p>
<p>In the AWS Management Console, search for "Route 53" or navigate to the "Networking &amp; Content Delivery" section. Click on "Route 53" to open the Route 53 console.</p>
<p><strong>Register a New Domain:</strong></p>
<p>In the Route 53 console, click on "Registered domains" in the left sidebar. Click on the "Register domain" button.</p>
<p><strong>Search for a Domain Name:</strong></p>
<p>Enter the domain name you want to register in the search box. In this case we will use "joebahocloud.com". Click on the "Check" button to see if the domain name is available. If the domain name is available, you'll see a message indicating that it's available for registration.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713239010334/83bd2ed7-9927-43d4-9e00-64fda9cbaff7.jpeg" alt class="image--center mx-auto" /></p>
<p><strong>Add the Domain to Your Cart:</strong></p>
<p>If the domain name is available, click on the "Add to cart" button next to the domain name. Review the details of the domain registration, including the registration period and pricing. Click on the "Continue" button to add the domain to your cart.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713239364461/161ef1b3-3c06-48e0-b2a2-c1e01779d385.jpeg" alt class="image--center mx-auto" /></p>
<p><strong>Complete the Registration Process:</strong></p>
<p>Follow the prompts to complete the registration process. Provide the necessary information, such as contact details and payment information. Review and accept the terms and conditions of the registration. Click on the "Complete Purchase" or "Buy Now" button to finalize the registration.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713239437607/67fb7a2e-1ed1-4b29-b4db-640e375bfe55.jpeg" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-7-creation-of-record">Step 7: <strong><em>Creation of Record</em></strong></h5>
<p><strong>Configure DNS Settings:</strong></p>
<p>Once the domain registration is complete, you'll be redirected to the Route 53 console. Click on "Hosted zones" in the left sidebar to configure DNS settings for your domain. Click on the "Create Hosted Zone" button. Enter your domain name in the "Domain Name" field. Optionally, you can choose to associate the hosted zone with an existing VPC or enable DNSSEC. Click on the "Create" button to create the hosted zone.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713236476873/bdf72940-d960-48a1-bd59-80204fe9551e.jpeg" alt class="image--center mx-auto" /></p>
<p><strong>Update Name Servers:</strong></p>
<p>After creating the hosted zone, Route 53 will provide you with a set of name servers. Go to your domain registrar's website and update the name servers for your domain to point to the Route 53 name servers. This step may take some time to propagate DNS changes across the internet.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713236586954/ef8d31d7-560c-4e45-9ba4-c59bbe456ada.jpeg" alt class="image--center mx-auto" /></p>
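<p>To illustrate the record this step creates, here is a hedged change-batch sketch for <code>aws route53 change-resource-record-sets</code>. The <code>HostedZoneId</code> shown is the fixed zone ID for S3 website endpoints in us-east-1; if your bucket is in another region, look up that region's website endpoint and its hosted zone ID in the AWS documentation.</p>

```json
{
  "Comment": "Alias www to the S3 website endpoint (values assumed; replace with yours)",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.joebahocloud.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z3AQBSTGFYJSTF",
          "DNSName": "s3-website-us-east-1.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```

<p>Saved as record.json, it would be applied with <code>aws route53 change-resource-record-sets --hosted-zone-id YOUR_ZONE_ID --change-batch file://record.json</code>, where YOUR_ZONE_ID is the hosted zone created above.</p>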
<h5 id="heading-step-8-setup-acm-certificate">Step 8: <strong><em>Setup ACM Certificate</em></strong></h5>
<p>Go back into the AWS Management Console and navigate to ACM:</p>
<p>In the AWS Management Console, search for "ACM" or navigate to the "Security, Identity, &amp; Compliance" section. Click on "Certificate Manager" to open the ACM console.</p>
<p><strong>Request a Certificate:</strong></p>
<p>In the ACM console, click on the "Request a certificate" button.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713236633119/b03efb72-d79a-48b9-86f2-ff97f60bf075.jpeg" alt class="image--center mx-auto" /></p>
<p><strong>Specify Domain Names:</strong></p>
<p>On the "Request a certificate" page, enter the domain names you want to include in the certificate. You can enter multiple domain names separated by commas. Here our domain will be "joebahocloud.com". Optionally, you can also specify additional domain names (*.joebahocloud.com) using the "Add another name to this certificate" link.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713236674018/c829f4e3-87c7-4d62-af7f-35651a66a005.jpeg" alt class="image--center mx-auto" /></p>
<p><strong>Select Validation Method:</strong></p>
<p>Choose a validation method to prove ownership of the domain(s). You can choose between DNS validation and Email validation. DNS validation requires you to add a DNS record to your domain's DNS configuration. Email validation requires you to respond to email verification requests sent to the domain owner's email addresses.</p>
<p><strong>Review and Confirm:</strong></p>
<p>Review the domain names and validation method you selected. Click on the "Confirm and request" button to submit the certificate request.</p>
<p><strong>Validation:</strong></p>
<p>If you selected DNS validation, ACM will provide you with a DNS record that you need to add to your domain's DNS configuration. If you selected Email validation, ACM will send verification emails to the domain owner's email addresses. Follow the instructions in the email to validate the domain ownership.</p>
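<p>For DNS validation, the record ACM asks for is a plain CNAME. As an illustration, it can be added through the same Route 53 change-batch mechanism; the name and value below are placeholders, so copy the real ones from the ACM console.</p>

```json
{
  "Comment": "ACM DNS validation record (placeholder name/value from the ACM console)",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "_PLACEHOLDER_TOKEN.joebahocloud.com.",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [
          { "Value": "_PLACEHOLDER_VALUE.acm-validations.aws." }
        ]
      }
    }
  ]
}
```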
<p><strong>Wait for Validation:</strong></p>
<p>Once you've completed the validation process, wait for ACM to validate the domain ownership. This may take a few minutes to several hours depending on the validation method you selected.</p>
<p><strong>Certificate Issued:</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713236724003/69352ea1-150e-432f-a0a3-92edde7253bc.jpeg" alt class="image--center mx-auto" /></p>
<p>Once the domain ownership is validated, ACM will issue the SSL/TLS certificate. You can view and manage your certificates in the ACM console under the "Certificates" section.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713236771464/dabf8173-8428-480e-b403-9ae1aaf29bd5.jpeg" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-9-cloudfront-distribution">Step 9: <strong><em>CloudFront Distribution</em></strong></h5>
<p>Go back into the AWS Management Console and navigate to CloudFront:</p>
<p>In the AWS Management Console, search for "CloudFront" or navigate to the "Networking &amp; Content Delivery" section. Click on "CloudFront" to open the CloudFront console.</p>
<p><strong>Create a Distribution:</strong></p>
<p>In the CloudFront console, click on the "Create Distribution" button.</p>
<p><strong>Choose a Delivery Method:</strong></p>
<p>Select the delivery method for your content. You can choose between: Web: For distributing websites and other HTTP content. RTMP: For streaming media files using Adobe Flash Media Server.</p>
<p><strong>Configure Distribution Settings:</strong></p>
<p>In the "Origin Settings" section, specify the origin of your content. This can be an S3 bucket, an EC2 instance, or a custom origin (e.g., a web server). Configure other settings such as cache behavior, viewer protocol policy, and allowed HTTP methods according to your requirements.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713243010692/195d018e-c552-4174-8ca2-c8e63becdb9b.jpeg" alt class="image--center mx-auto" /></p>
<p><strong>Configure Cache Settings:</strong></p>
<p>Configure caching behavior for your distribution, including TTL (time to live), cache control headers, and query string parameters. You can also configure cache behaviors based on URL patterns.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713242916257/16031f99-b0a8-4cad-bad4-a758bc25a8b6.jpeg" alt class="image--center mx-auto" /></p>
<p><strong>Configure Distribution Settings:</strong></p>
<p>Configure additional settings such as logging, custom SSL certificate (if required), and price class (the regions where you want your content to be distributed). Review the distribution settings and make any necessary adjustments.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713242439816/f8074feb-7aa1-4352-876e-7228dc7e9f8d.jpeg" alt class="image--center mx-auto" /></p>
<p><strong>Create the Distribution</strong></p>
<p>Click on the "Create Distribution" button to create the CloudFront distribution. Note the distribution ID and other details provided by CloudFront.</p>
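<p>For orientation, here is a heavily trimmed sketch of the key fields in a CloudFront distribution configuration matching the settings above. This is not a complete <code>DistributionConfig</code> (a real one also needs fields such as <code>CallerReference</code>, <code>Enabled</code>, and <code>Comment</code>), and the certificate ARN placeholders must be replaced with your own values.</p>

```json
{
  "Aliases": { "Quantity": 1, "Items": ["www.joebahocloud.com"] },
  "Origins": {
    "Quantity": 1,
    "Items": [
      {
        "Id": "s3-website-origin",
        "DomainName": "www.joebahocloud.com.s3-website-us-east-1.amazonaws.com",
        "CustomOriginConfig": {
          "HTTPPort": 80,
          "HTTPSPort": 443,
          "OriginProtocolPolicy": "http-only"
        }
      }
    ]
  },
  "DefaultCacheBehavior": {
    "TargetOriginId": "s3-website-origin",
    "ViewerProtocolPolicy": "redirect-to-https"
  },
  "ViewerCertificate": {
    "ACMCertificateArn": "arn:aws:acm:us-east-1:ACCOUNT_ID:certificate/CERT_ID",
    "SSLSupportMethod": "sni-only"
  }
}
```

<p>Note the origin is the S3 website endpoint (a custom origin over HTTP), which is what allows CloudFront to serve the bucket's static-website behavior while terminating HTTPS for viewers.</p>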
<p><strong>Wait for Distribution Deployment</strong></p>
<p>It may take some time for the CloudFront distribution to be deployed globally. Monitor the status of the distribution in the CloudFront console until it shows a status of "Enabled".</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713242605649/2214555b-d7b5-4afe-8936-e7d52be54428.jpeg" alt class="image--center mx-auto" /></p>
<p><strong>Test the Distribution:</strong></p>
<p>Once the distribution is deployed, you can test it by accessing your content through the CloudFront domain name (e.g., d123456789.cloudfront.net). Verify that your content is being served correctly and that caching behavior is as expected.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713242694161/d3d352da-73ec-42fd-a743-891425900632.jpeg" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-10-code-pipeline-configuration">Step 10: <strong><em>Code Pipeline Configuration</em></strong></h5>
<p>Go back into the AWS Management Console and navigate to CodePipeline:</p>
<p>In the AWS Management Console, search for "CodePipeline" or navigate to the "Developer Tools" section. Click on "CodePipeline" to open the CodePipeline console.</p>
<p><strong>Create a New Pipeline:</strong></p>
<p>In the CodePipeline console, click on the "Create pipeline" button.</p>
<p><strong>Configure Pipeline Settings:</strong></p>
<p>Enter a name for your pipeline in the "Pipeline name" field. Optionally, you can enter a description for your pipeline. Choose a service role for CodePipeline to use when executing pipeline actions. You can create a new service role or choose an existing one. Click on the "Next" button to proceed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713236849286/af3060da-ef67-4fbc-9120-3458c61cf5c7.jpeg" alt class="image--center mx-auto" /></p>
<p><strong>Add Source Stage:</strong></p>
<p>In the "Source" stage configuration, select the source provider for your code repository. This can be AWS CodeCommit, GitHub, Amazon S3, or Bitbucket. In this case we will use GitHub, as our code is located on GitHub.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713236883068/a0b64de1-2009-4a93-a2c4-2019b9baaa5c.jpeg" alt class="image--center mx-auto" /></p>
<p>Configure the source settings, such as the repository name, branch, and authentication method. You will be prompted to link with GitHub: click on the link, provide your GitHub credentials, and choose which repository is the source of the code.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713236939422/5ce80079-88ab-486b-a80c-530b4c841c99.jpeg" alt class="image--center mx-auto" /></p>
<p>Click on the "Next" button to proceed. We will skip the build stage and go directly to the deploy stage.</p>
<p><strong>Configure Deploy Stage:</strong></p>
<p>In the "Deploy" stage configuration, select the deployment provider for your application. This can be AWS Elastic Beanstalk, AWS ECS (Elastic Container Service), AWS Lambda, or AWS S3; the configuration here will be for S3. Configure the deployment settings, such as the deployment group, application name, and deployment configuration. Click on the "Next" button to proceed.</p>
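<p>For reference, a pipeline like the one configured in the console can be described in JSON. Below is a trimmed sketch of the two stages used here, assuming the repository and bucket names from this article; a complete definition would also need the service role ARN, an artifact store, and the GitHub connection/authorization details.</p>

```json
{
  "pipeline": {
    "name": "resume-pipeline",
    "stages": [
      {
        "name": "Source",
        "actions": [
          {
            "name": "GitHubSource",
            "actionTypeId": { "category": "Source", "owner": "ThirdParty", "provider": "GitHub", "version": "1" },
            "configuration": { "Owner": "Joebaho", "Repo": "RESUME", "Branch": "master" },
            "outputArtifacts": [{ "name": "SourceOutput" }]
          }
        ]
      },
      {
        "name": "Deploy",
        "actions": [
          {
            "name": "S3Deploy",
            "actionTypeId": { "category": "Deploy", "owner": "AWS", "provider": "S3", "version": "1" },
            "configuration": { "BucketName": "www.joebahocloud.com", "Extract": "true" },
            "inputArtifacts": [{ "name": "SourceOutput" }]
          }
        ]
      }
    ]
  }
}
```

<p>The "Extract": "true" setting tells the S3 deploy action to unzip the source artifact into the bucket, so the site files land at the bucket root where static hosting expects them.</p>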
<p><strong>Review and Create:</strong></p>
<p>Review the pipeline configuration to ensure that everything is set up correctly. Click on the "Create pipeline" button to create the pipeline.</p>
<p><strong>Run Pipeline:</strong></p>
<p>Once the pipeline is created, it will automatically start running. Monitor the progress of the pipeline in the CodePipeline console.</p>
<h5 id="heading-step-11-deployment">Step 11: <strong><em>Deployment</em></strong></h5>
<p>When everything is done correctly, with no errors or mistakes, you will see the deployment of the code on the page. See the image below:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713236995681/8e106e19-f03c-4669-b568-712b1d8d884d.jpeg" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-12-web-page">Step 12: <strong><em>Web page</em></strong></h5>
<p>In the browser, type the URL https://www.joebahocloud.com and you will get the web page. See the image below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713237049966/0e8e7946-e65b-4990-ae13-3eeea808587b.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-contributing">🤝 <mark>Contributing</mark></h2>
<p>Your perspective is valuable! Whether you see potential for improvement or appreciate what's already here, your contributions are welcome and appreciated. Thank you for considering joining us in making this project even better. Feel free to follow me for updates on this project and others, and to explore opportunities for collaboration. Together, we can create something amazing!</p>
<h2 id="heading-license">📄 <mark>License</mark></h2>
<p>This project is licensed under the Joebahocloud License</p>
]]></content:encoded></item><item><title><![CDATA[Ansible Configuration on AWS using Terraform]]></title><description><![CDATA[Ansible Configuration on AWS using Terraform
🚀 Overview:
In modern cloud environments, managing infrastructure efficiently and securely is crucial for ensuring smooth operations and maximizing resource utilization. Ansible, a powerful automation too...]]></description><link>https://platform.joebahocloud.com/ansible-configuration-on-aws-using-terraform</link><guid isPermaLink="true">https://platform.joebahocloud.com/ansible-configuration-on-aws-using-terraform</guid><category><![CDATA[Ansible Configuration on AWS with Terraform]]></category><dc:creator><![CDATA[Joseph Mbatchou]]></dc:creator><pubDate>Mon, 01 Apr 2024 07:39:22 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1711956978423/3e8ef435-4415-4337-b760-94279a6aa2e6.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-ansible-configuration-on-aws-using-terraform">Ansible Configuration on AWS using Terraform</h1>
<h2 id="heading-overview">🚀 Overview:</h2>
<p>In modern cloud environments, managing infrastructure efficiently and securely is crucial for ensuring smooth operations and maximizing resource utilization. Ansible, a powerful automation tool, provides a flexible and scalable solution for configuration management, provisioning, and orchestration of infrastructure on AWS. By leveraging Ansible, organizations can streamline deployment processes, enforce consistent configurations, and accelerate time-to-market for their applications. The primary objective of this project is to design and deploy a controller Linux server, two Linux client servers, and two Ubuntu client servers on AWS. Then we will write a playbook that installs and configures Apache on all client servers. The playbook will also include Git installation.</p>
<h2 id="heading-problem-statement">🔧 Problem Statement</h2>
<p>Terraform is an IaC software tool that provides a consistent command line interface (CLI) workflow to manage hundreds of cloud services. Terraform codifies cloud APIs into declarative configuration files. In this specific case we need to create 5 EC2 instances: one Amazon Linux 2 instance that will be called the controller, two Amazon Linux 2 clients (nodes), and two Ubuntu clients. Terraform will automatically use the configuration files to provision those resources, and the user data placed on the controller will install Ansible on it. We will then use an Ansible playbook to update all hosts, install Apache and Git, and copy the index.html file to each host. This deployment provides all the necessary tools and elements without our having to use the console. The whole process of working on the nodes will be automated, ensuring consistency and reducing human error.</p>
<h2 id="heading-techonology-stack">💽 Technology Stack</h2>
<p>We will be using:</p>
<ul>
<li><p><strong>EC2</strong>: AWS EC2</p>
</li>
<li><p><strong>Ansible Playbook</strong>: Ansible</p>
</li>
<li><p><strong>File Configuration</strong>: Terraform</p>
</li>
</ul>
<h2 id="heading-architecture-diagram">📌 Architecture Diagram</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715052958797/9789fc26-f8e6-4dce-ac0d-4f901ae99401.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-project-requirements">🌟 Project Requirements</h2>
<p>Before you get started, make sure you have the following prerequisites in place:</p>
<ul>
<li><p><a target="_blank" href="https://www.terraform.io/">Terraform</a> installed on your local machine.</p>
</li>
<li><p>AWS IAM credentials configured in your text editor. In this case we will use VS Code.</p>
</li>
<li><p>Git installed on your local machine and Github account set up <a target="_blank" href="https://www.github.com/">Github</a></p>
</li>
<li><p>Git for cloning the repository.</p>
</li>
</ul>
<p>You must know and understand:</p>
<ul>
<li><p><strong>Ansible playbook</strong>: is a YAML file that defines a series of tasks to be executed on remote hosts.</p>
</li>
<li><p><strong>Apache web Server</strong>: The Apache HTTP Server, commonly referred to as Apache, is an open-source web server software developed and maintained by the Apache Software Foundation. It is one of the most widely used web server applications globally and powers a significant portion of websites on the internet.</p>
</li>
<li><p><strong>Git</strong>: is a distributed version control system (VCS) that is widely used for tracking changes in source code during software development.</p>
</li>
</ul>
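<p>The playbook described above can be sketched as follows. This is a minimal illustration, not the project's exact playbook: the host group, file paths, and task names are assumptions, and the package name differs between the Amazon Linux nodes (httpd) and the Ubuntu nodes (apache2), so a real playbook would branch on the distribution.</p>

```yaml
# Sketch of a playbook that updates hosts, installs Apache and Git,
# and copies index.html (names and paths are assumptions)
- name: Configure all client servers
  hosts: all
  become: true
  tasks:
    - name: Install Apache and Git
      package:
        name:
          - httpd        # use "apache2" in a play targeting the Ubuntu nodes
          - git
        state: present

    - name: Copy the index.html file to the web root
      copy:
        src: index.html
        dest: /var/www/html/index.html

    - name: Ensure Apache is running and enabled
      service:
        name: httpd      # "apache2" on Ubuntu
        state: started
        enabled: true
```

<p>Run against an inventory of the client nodes (for example, <code>ansible-playbook -i inventory playbook.yml</code>) from the controller once Ansible is installed there.</p>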
<p>You must also know Terraform workflow</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715054326111/739a878f-c3ee-40f5-979f-e63e85041a29.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-table-of-contents">📋 Table of Contents</h2>
<p>I - <strong>Terraform Configuration Files</strong></p>
<p><a href="#heading-step-1-provider-configuration">Step 1: Provider Configuration</a></p>
<p><a href="#heading-step-2-variables-configuration">Step 2: Variables Configuration</a></p>
<p><a href="#heading-step-3-instances-configuration">Step 3: Instances Configuration</a></p>
<p><a href="#heading-step-4-output-configuration">Step 4: Output Configuration</a></p>
<p>II - <strong>Instructions of Deployment</strong></p>
<p><a href="#heading-step-1-clone-repository">Step 1: Clone Repository</a></p>
<p><a href="#heading-step-2-initialize-folder">Step 2: Initialize Folder</a></p>
<p><a href="#heading-step-3-format-files">Step 3: Format Files</a></p>
<p><a href="#heading-step-4-validate-files">Step 4: Validate Files</a></p>
<p><a href="#heading-step-5-plan">Step 5: Plan</a></p>
<p><a href="#heading-step-6-apply">Step 6: Apply</a></p>
<p><a href="#heading-step-7-review-of-resources">Step 7: Review of Resources</a></p>
<p><a href="#heading-step-8-connect-to-the-controller">Step 8: Connect to the Controller</a></p>
<p><a href="#heading-step-9-connect-to-all-hosts">Step 9: Connect to all Hosts</a></p>
<p><a href="#heading-step-10-generate-amp-copy-key-pair">Step 10: Generate &amp; Copy Key Pair</a></p>
<p><a href="#heading-step-11-configuration-on-controller">Step 11: Configuration on Controller</a></p>
<p><a href="#heading-step-12-run-ansible-command">Step 12: Run Ansible Command</a></p>
<p><a href="#heading-step-13-review-of-changes">Step 13: Review of Changes</a></p>
<p><a href="#heading-step-14-destroy">Step 14: Destroy</a></p>
<h2 id="heading-terraform-configuration-files">✨Terraform Configuration files</h2>
<p>You need to write several configuration files that generate the resources.</p>
<h5 id="heading-step-1-provider-configuration">Step 1: <strong><em>Provider Configuration</em></strong></h5>
<p>Here we declare our cloud provider and specify the region where we will be launching resources.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/Joebaho-Cloud-Platform/blob/main/site/ansible-terraform-on-aws/providers.tf">provider Configuration</a></li>
</ul>
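<p>As a rough sketch, a provider block of this kind might look like the following; the region shown here is an assumption, and the actual configuration lives in the repository's providers.tf:</p>
<pre><code class="lang-hcl">terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

# Declare AWS as the cloud provider and pick the region for all resources
provider "aws" {
  region = "us-east-1"  # assumption; replace with your region
}
</code></pre>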
<h5 id="heading-step-2-variables-configuration">Step 2: <strong><em>Variables Configuration</em></strong></h5>
<p>This is where we declare all variables and their values. It includes:</p>
<ul>
<li><p><strong>Variables</strong>: named values that can change. They let us reuse values throughout our code without repeating ourselves and help make the code dynamic.</p>
</li>
<li><p><strong>Values</strong>: the values assigned to each variable.</p>
</li>
</ul>
<p>Reminder: never push the "terraform.tfvars" file to GitHub.</p>
<p>We have</p>
<ul>
<li><p><a target="_blank" href="https://github.com/Joebaho/Joebaho-Cloud-Platform/blob/main/site/ansible-terraform-on-aws/variables.tf">variables Configuration</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/Joebaho/Joebaho-Cloud-Platform/blob/main/site/ansible-terraform-on-aws/terraform.tfvars">value Configuration</a></p>
</li>
</ul>
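<p>As an illustrative sketch (the variable name here is an assumption; the real declarations are in variables.tf), a variable and its value look like this:</p>
<pre><code class="lang-hcl"># variables.tf: declare the variable
variable "instance_type" {
  description = "EC2 instance type for the controller and nodes"
  type        = string
  default     = "t2.micro"
}

# terraform.tfvars: assign its value (never commit this file)
instance_type = "t2.micro"
</code></pre>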
<h5 id="heading-step-3-instances-configuration">Step 3: <strong><em>Instances Configuration</em></strong></h5>
<p>This is where you create all five instances. The controller and two nodes will run on Amazon Linux 2, and the other two nodes on Ubuntu. We will open ports 80 (HTTP) and 22 (SSH) on the security group.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/Joebaho-Cloud-Platform/blob/main/site/ansible-terraform-on-aws/main.tf">Instances Configuration</a></li>
</ul>
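<p>A simplified sketch of the security group and one instance follows; the variable names and tags here are placeholders, not the exact ones used in main.tf:</p>
<pre><code class="lang-hcl"># Security group opening SSH (22) and HTTP (80) to the world
resource "aws_security_group" "web_ssh" {
  name = "allow-web-ssh"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# The Ansible controller; the four nodes are declared the same way
resource "aws_instance" "controller" {
  ami                    = var.amazon_linux_ami  # placeholder variable
  instance_type          = var.instance_type
  key_name               = var.key_name
  vpc_security_group_ids = [aws_security_group.web_ssh.id]
  user_data              = file("user_data.sh")

  tags = {
    Name = "controller"
  }
}
</code></pre>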
<h5 id="heading-step-4-output-configuration">Step 4: <strong><em>Output Configuration</em></strong></h5>
<p>Known as output values, these are a convenient way to print useful information about your infrastructure on the CLI, such as the ARN, name, or ID of a resource. In this case we output the information needed to reach the instances, such as their public IP addresses.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/Joebaho-Cloud-Platform/blob/main/site/ansible-terraform-on-aws/outputs.tf">Output Configuration</a></li>
</ul>
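<p>As a hedged sketch, assuming an instance resource named <code>aws_instance.controller</code> (the real names are in outputs.tf), an output block looks like this:</p>
<pre><code class="lang-hcl">output "controller_public_ip" {
  description = "Public IP address of the Ansible controller"
  value       = aws_instance.controller.public_ip
}
</code></pre>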
<h2 id="heading-instructions-of-deployment">💼 Instructions of Deployment</h2>
<p>Follow these steps to deploy the architecture:</p>
<h5 id="heading-step-1-clone-repository">Step 1: <strong><em>Clone Repository</em></strong></h5>
<p>Clone the repository to your local machine using the command "git clone", then change into the project folder:</p>
<pre><code class="lang-bash">   git <span class="hljs-built_in">clone</span> https://github.com/Joebaho/Joebaho-Cloud-Platform.git
   <span class="hljs-built_in">cd</span> Joebaho-Cloud-Platform/site/ansible-terraform-on-aws
</code></pre>
<h5 id="heading-step-2-initialize-folder">Step 2: <strong><em>Initialize Folder</em></strong></h5>
<p>Initialize the folder containing the cloned configuration files by typing the following command:</p>
<pre><code class="lang-bash">   terraform init
</code></pre>
<p>You should see something like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715053146228/f6ad13de-ac43-4063-96f4-5d46fdd2af24.jpeg" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-3-format-files">Step 3: <strong><em>Format Files</em></strong></h5>
<p>Format the files to the canonical style, then review the changes and confirm the formatting with the command:</p>
<pre><code class="lang-bash">   terraform fmt
</code></pre>
<h5 id="heading-step-4-validate-files">Step 4: <strong><em>Validate Files</em></strong></h5>
<p>Ensure that every file is syntactically valid and ready to go with the command:</p>
<pre><code class="lang-bash">   terraform validate
</code></pre>
<p>If everything is good, you will see something like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715053285400/a644d478-94de-48a7-98c6-037544a5c404.jpeg" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-5-plan">Step 5: <strong><em>Plan</em></strong></h5>
<p>Create an execution plan that shows how Terraform will reach the desired state. It checks and confirms the number of resources that will be created. Use the command:</p>
<pre><code class="lang-bash">   terraform plan
</code></pre>
<p>The list of all resources scheduled for creation will appear, and you can see all the properties (arguments and attributes) of each resource.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715053318452/6c98a172-5388-4cee-9b50-42811cf8385e.jpeg" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715053339249/d32f701d-0cf4-4e0b-849b-8968a64d77f5.jpeg" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-6-apply">Step 6: <strong><em>Apply</em></strong></h5>
<p>Bring the desired state to life by launching and creating all resources listed in the configuration files. The command to perform the task is:</p>
<pre><code class="lang-bash">   terraform apply -auto-approve
</code></pre>
<p>After typing this command the creation process will start, and you will be able to see which resource is being created and how long each one takes.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715053405289/9301b053-cfcf-44fc-acc1-0f9e94556266.jpeg" alt class="image--center mx-auto" /></p>
<p>At the end you will receive a summary message showing the number of resources added, changed, and destroyed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715053430959/d75f1a4f-95ea-4025-b333-9ab23656787c.jpeg" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-7-review-of-resources">Step 7: <strong><em>Review of resources</em></strong></h5>
<p>Go back to the AWS console and check the actual state of each resource one by one. You will see:</p>
<p>Instances running</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715053507457/62281b51-043c-4112-a779-e8fd3c07cc7b.jpeg" alt class="image--center mx-auto" /></p>
<p>Security groups</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715053529913/c6484050-c773-476b-a36d-e64b7358ca1f.jpeg" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-8-connect-to-the-controller">Step 8: <strong><em>Connect to the Controller</em></strong></h5>
<p>Since our security group opens SSH (port 22), we will use the CLI to connect to the controller instance. Make sure you are in the folder that contains your private key pair. Then go to the console, select the instance, and click Connect. You will see a screen like the one below:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715053563513/fbd8a9ba-75a7-4579-a4ee-8411776ae64a.jpeg" alt class="image--center mx-auto" /></p>
<p>Copy the chmod command, paste it into the CLI, and press Enter. Nothing visible will change; this command just restricts the key file permissions so SSH will accept the key.</p>
<p>Copy the second command (the ssh example) and paste it into the CLI. You will be prompted to confirm that you want to connect: type "yes", and you will see a screen like the one below showing that the connection to the server has been made.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715053600053/96e49e15-ed88-4ec3-a85a-8dabb8e73eae.jpeg" alt class="image--center mx-auto" /></p>
<p>The user data placed in the configuration file installs the prerequisites needed to run Ansible, such as Python 3, amazon-linux-extras, and Ansible itself. Take a look at the user data:</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/Joebaho-Cloud-Platform/blob/main/site/ansible-terraform-on-aws/user_data.sh">user Data</a></li>
</ul>
<p>We can now change the default hostname to "controller" with the command:</p>
<pre><code class="lang-bash">   $ sudo hostnamectl set-hostname controller
</code></pre>
<p>Afterwards, exit and reconnect (SSH) so the new hostname appears in the prompt.</p>
<p>We now need to switch to the root user to perform the remaining commands:</p>
<pre><code class="lang-bash">   $ sudo -i
</code></pre>
<p>We can verify that Ansible was correctly installed by typing:</p>
<pre><code class="lang-bash">   $ ansible --version
</code></pre>
<p>You should see a result like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715053672224/d42b2505-da67-4010-b063-8b6e437d48c3.jpeg" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-9-connect-to-all-hosts">Step 9: <strong><em>Connect to all Hosts</em></strong></h5>
<p>We are going to use the same process we used to connect to the controller, meaning we will SSH into each server.</p>
<p>In the console, select each server one by one and connect via the Connect option. Copy the chmod command, paste it into the CLI, and press Enter. Nothing visible will change; this command just restricts the key file permissions for SSH.</p>
<p>Copy the second command and paste it into the CLI. You will be prompted to confirm that you want to connect: type "yes" and the session will connect to the server, whether Amazon Linux 2 or Ubuntu. Use the command below to set the hostname of each server:</p>
<pre><code class="lang-bash">   $ sudo hostnamectl set-hostname &lt;name of the server&gt;
</code></pre>
<p>For the Amazon Linux nodes, use Linux-nodes1 and Linux-nodes2.<br />For the Ubuntu nodes, use ubuntu-nodes1 and ubuntu-nodes2.<br />Remember to exit and reconnect for the new name to take effect. Right after, switch to the root user by typing:</p>
<pre><code class="lang-bash">   $ sudo -i
</code></pre>
<p>At the end you will have all the server CLIs connected, as in the image below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715053717133/d24ba3bf-40f2-4388-a725-a1500830dd32.jpeg" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-10-generate-amp-copy-key-pair">Step 10: <strong><em>Generate &amp; Copy Key Pair</em></strong></h5>
<p>Return to the controller to generate the key pair that will link the controller to the node servers. The key pair consists of a public key and a private key; you will copy the public key into the authorized_keys file of each node.</p>
<p>Generate the key pair with the following command (you must be root to perform this):</p>
<pre><code class="lang-bash">   $ ssh-keygen -t rsa
</code></pre>
<p>Press Enter at each prompt. The command generates a public key (id_rsa.pub) and a private key (id_rsa), both in the ".ssh" folder. Enter the folder to see both keys:</p>
<pre><code class="lang-bash">   $ <span class="hljs-built_in">cd</span> .ssh
</code></pre>
<p>Then list the contents of the folder:</p>
<pre><code class="lang-bash">   $ ls -al
</code></pre>
<p>See the result in the image below:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715053768071/6225cabd-ee1e-459a-b7eb-86a627af05da.jpeg" alt class="image--center mx-auto" /></p>
<p>The private key stays on the controller; the public key is copied to the nodes. Print it with:</p>
<pre><code class="lang-bash">   $ cat id_rsa.pub
</code></pre>
<p>Copy the content and paste it into the authorized_keys file on each node. On each node, as root, enter the ".ssh" folder by typing:</p>
<pre><code class="lang-bash">   $ <span class="hljs-built_in">cd</span> .ssh
</code></pre>
<p>List the contents of the folder with:</p>
<pre><code class="lang-bash">   $ ls -al
</code></pre>
<p>You will see a file named authorized_keys. Open it with:</p>
<pre><code class="lang-bash">   $ vi authorized_keys
</code></pre>
<p>The file will already contain the public key of the EC2 key pair. Type i (insert), move to the end of the file, press Enter to start a new line, and paste the copied key there. Finally type Esc then :wq! to save the change. Do the same on all your nodes.</p>
<p>Here are images for all steps:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715053864201/09c546e5-846b-4904-ba05-767fc785c270.jpeg" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715053878033/dbc2064f-ea7c-438b-b718-3c1ed646889f.jpeg" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715053893072/8822c592-a72c-4cb7-99d3-82702d29cd82.jpeg" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-11-configuration-on-controller">Step 11: <strong><em>Configuration on Controller</em></strong></h5>
<p>You can now return to the controller and configure the nodes by copying their IP addresses into the inventory, creating the link between the controller and the nodes. First, locate the Ansible configuration file with:</p>
<pre><code class="lang-bash">   $ ansible --version
</code></pre>
<p>The output shows the config file located at /etc/ansible/ansible.cfg. Enter the Ansible folder:</p>
<pre><code class="lang-bash">   $ <span class="hljs-built_in">cd</span> /etc/
</code></pre>
<p>Then</p>
<pre><code class="lang-bash">   $ <span class="hljs-built_in">cd</span> ansible
</code></pre>
<p>And list the contents of the folder:</p>
<pre><code class="lang-bash">   $ ls -al
</code></pre>
<p>There will be three entries (ansible.cfg, hosts, and the roles directory). Edit the hosts file to add all hosts (the private IPs of all the nodes) under the example section for a collection of hosts. If you group them, put the group name in brackets, such as [ubuntu-nodes] and [linux-nodes], with the public or private IP addresses of the nodes listed underneath.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715053953078/3142fca7-dad3-4195-9ec4-d5b0b6c4a635.jpeg" alt class="image--center mx-auto" /></p>
<p>Enter the hosts file to perform all changes:</p>
<pre><code class="lang-bash">   $ vi hosts
</code></pre>
<p>Type i (insert), then copy/paste all node IP addresses into the appropriate section of the file. Finally type Esc then :wq! to save the change.</p>
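<p>As a sketch with placeholder IP addresses, the grouped section added to the hosts file would look like this:</p>
<pre><code class="lang-ini">[linux-nodes]
10.0.1.10
10.0.1.11

[ubuntu-nodes]
10.0.1.20
10.0.1.21
</code></pre>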
<p>Later (in Step 13) you will create a folder named "web" in the root home directory and place the "index.html" file inside it.</p>
<h5 id="heading-step-12-run-ansible-command">Step 12: <strong><em>Run Ansible Command</em></strong></h5>
<p>To test connectivity you can try one node or all nodes. For one node, type:</p>
<pre><code class="lang-bash">   $ ssh root@&lt;private IP from the hosts list&gt;
   $ ssh root@10.0.1.10   # example
</code></pre>
<p>You will be prompted to validate the connection; type "yes" to confirm and the connection will go through. To log out, type exit.</p>
<p>For all servers type:</p>
<pre><code class="lang-bash">   $ ansible -m ping all
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715053992771/4da2ea92-531d-4906-bee6-2561aac0269c.jpeg" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-13-review-of-changes">Step 13: <strong><em>Review of Changes</em></strong></h5>
<p>After making sure connectivity works between the controller and the nodes, return to the root home directory and create a folder named "web" with the command:</p>
<pre><code class="lang-bash">   $ mkdir web
</code></pre>
<p>Then enter the folder with the command</p>
<pre><code class="lang-bash">   $ <span class="hljs-built_in">cd</span> web
</code></pre>
<p>Once inside the folder, create two files, "index.html" and "playbook.yml":</p>
<ul>
<li><p><a target="_blank" href="https://github.com/Joebaho/Joebaho-Cloud-Platform/blob/main/site/ansible-terraform-on-aws/index.html">index.html</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/Joebaho/Joebaho-Cloud-Platform/blob/main/site/ansible-terraform-on-aws/playbook.yml">playbook.yml</a></p>
</li>
</ul>
<p>With both files set up in the web folder, stay in the folder and use the Ansible command to execute the playbook. Type:</p>
<pre><code class="lang-bash">   $ ansible-playbook playbook.yml
</code></pre>
<p>The run will proceed and all the plays written in the playbook will be deployed. The images below show the outputs.</p>
<p>All tasks on the Linux nodes:</p>
<ul>
<li><p>Upgrade nodes</p>
</li>
<li><p>Install latest Apache server</p>
</li>
<li><p>Start Apache</p>
</li>
<li><p>Enable Apache</p>
</li>
<li><p>Copy Index.html from source to destination</p>
</li>
<li><p>Install Git</p>
</li>
</ul>
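<p>The tasks above can be sketched as a playbook like the one below. The module choices here are an illustration, not the exact playbook.yml from the repository:</p>
<pre><code class="lang-yaml">---
- hosts: all
  become: yes
  tasks:
    - name: Upgrade all packages
      package:
        name: '*'
        state: latest

    - name: Install the latest Apache (httpd on Amazon Linux, apache2 on Ubuntu)
      package:
        name: "{{ 'httpd' if ansible_os_family == 'RedHat' else 'apache2' }}"
        state: latest

    - name: Start and enable Apache
      service:
        name: "{{ 'httpd' if ansible_os_family == 'RedHat' else 'apache2' }}"
        state: started
        enabled: yes

    - name: Copy index.html to the web root
      copy:
        src: index.html
        dest: /var/www/html/index.html

    - name: Install Git
      package:
        name: git
        state: present
</code></pre>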
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715054052221/f488769f-6529-4cb7-9f86-19829931d60f.jpeg" alt class="image--center mx-auto" /></p>
<p>The same tasks executed on the Linux nodes are also run on the Ubuntu nodes:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715054072938/d3c3e4b6-4972-41db-b341-1b7510fc5a2a.jpeg" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715054091607/a66f7166-f637-42f0-aacb-850b7e4373a2.jpeg" alt class="image--center mx-auto" /></p>
<p>Here is the recap of all tasks performed.</p>
<p>You can test the web page by copying any node's public IP and pasting it into the browser; you will see the result below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715054138719/a6ecf3f3-1984-4ccf-bf80-79771b8dc3eb.jpeg" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-14-destroy">Step 14: <strong><em>Destroy</em></strong></h5>
<p>Destroy the Terraform-managed infrastructure, meaning all created resources will be shut down. This is done with the command "terraform destroy":</p>
<pre><code class="lang-bash">   terraform destroy -auto-approve
</code></pre>
<p>At the end you will receive a message confirming that all resources have been destroyed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715054277936/199fef6c-a057-4039-afc6-fc6059a547d8.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-contributing">🤝 Contributing</h2>
<p>Your perspective is valuable! Whether you see potential for improvement or appreciate what's already here, your contributions are welcomed and appreciated. Thank you for considering joining us in making this project even better. Feel free to follow me for updates on this project and others, and to explore opportunities for collaboration. Together, we can create something amazing!</p>
<h2 id="heading-license">📄 License</h2>
<p>This project is licensed under the Joebahocloud License</p>
]]></content:encoded></item><item><title><![CDATA[Static web site hosting on S3 using Terraform]]></title><description><![CDATA[Static web site hosting on S3 using Terraform
🚀 Overview:
The Static web site project on AWS using Terraform aims to create a scalable and resilient web static site hosted on S3 which is an Amazon Web Services (AWS) cloud platform. This project util...]]></description><link>https://platform.joebahocloud.com/static-web-site-hosting-on-s3-using-terraform</link><guid isPermaLink="true">https://platform.joebahocloud.com/static-web-site-hosting-on-s3-using-terraform</guid><category><![CDATA[Static Website]]></category><dc:creator><![CDATA[Joseph Mbatchou]]></dc:creator><pubDate>Thu, 28 Mar 2024 19:18:15 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1711681238731/3fc7bf4b-73f2-426c-9c7a-4ff0386079ea.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-static-web-site-hosting-on-s3-using-terraform">Static web site hosting on S3 using Terraform</h1>
<h2 id="heading-overview">🚀 Overview:</h2>
<p>The static web site project on AWS using Terraform aims to create a scalable and resilient static web site hosted on S3, a storage service on the Amazon Web Services (AWS) cloud platform. This project uses Terraform, an Infrastructure as Code (IaC) tool, to provision and manage the S3 bucket with all the parameters that make it publicly accessible. The goal is to design and deploy a web site hosted on S3, including all necessary components of the bucket.</p>
<h2 id="heading-problem-statement">🔧 Problem Statement</h2>
<p>Terraform is an IaC software tool that provides a consistent command line interface (CLI) workflow to manage hundreds of cloud services. Terraform codifies cloud APIs into declarative configuration files. In this specific case you need to create an S3 bucket that will host a static web site. Terraform will provision all the S3 elements needed to make the web site accessible, avoiding manual work in the console, and it will automate the setup, ensuring consistency and reducing human error.</p>
<h2 id="heading-techonology-stack">💽 Technology Stack</h2>
<p>We will be using:</p>
<ul>
<li><strong>Storage</strong>: AWS S3</li>
</ul>
<h2 id="heading-architecture-diagram">📌 Architecture Diagram</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711943383051/06404dbf-4a65-4b03-8037-c05433ca172d.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-project-requirements">🌟 Project Requirements</h2>
<p>Before you get started, make sure you have the following prerequisites in place:</p>
<ul>
<li><p><a target="_blank" href="https://www.terraform.io/">Terraform</a> installed on your local machine.</p>
</li>
<li><p>AWS IAM credentials configured on your machine. In this case we will use VS Code as the text editor.</p>
</li>
<li><p>Git installed on your local machine for cloning the repository, and a <a target="_blank" href="https://www.github.com/">GitHub</a> account set up.</p>
</li>
</ul>
<p>You must also know the Terraform workflow:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715054684591/993c73f1-4ca4-45ec-a08a-7e59b9512db1.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-table-of-contents">📋 Table of Contents</h2>
<p>I - <strong>Terraform Configuration Files</strong></p>
<p><a href="#heading-step-1-provider-configuration">Step 1: Provider Configuration</a></p>
<p><a href="#heading-step-3-s3-configuration">Step 3: S3 Configuration</a></p>
<p><a href="#heading-step-4-output-configuration">Step 4: Output Configuration</a></p>
<p>II - <strong>Instructions of Deployment</strong></p>
<p><a href="#heading-step-5-clone-repository">Step 5: Clone Repository</a></p>
<p><a href="#heading-step-6-initialize-folder">Step 6: Initialize Folder</a></p>
<p><a href="#heading-step-7-format-files">Step 7: Format Files</a></p>
<p><a href="#heading-step-8-validate-files">Step 8: Validate Files</a></p>
<p><a href="#heading-step-9-plan">Step 9: Plan</a></p>
<p><a href="#heading-step-10-apply">Step 10: Apply</a></p>
<p><a href="#heading-step-11-review-of-resources">Step 11: Review of Resources</a></p>
<p><a href="#heading-step-12-destroy">Step 12: Destroy</a></p>
<h2 id="heading-terraform-configuration-files">✨Terraform Configuration files</h2>
<p>You need to write several configuration files that generate the resources.</p>
<h5 id="heading-step-1-provider-configuration">Step 1: <strong><em>Provider Configuration</em></strong></h5>
<p>Here we declare our cloud provider and specify the region where we will be launching resources.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/Joebaho-Cloud-Platform/blob/main/site/static-web-terraform-on-aws/providers.tf">provider Configuration</a></li>
</ul>
<h5 id="heading-step-3-s3-configuration">Step 3: <strong><em>S3 Configuration</em></strong></h5>
<p>This is where you create the S3 bucket that will host the static web site: the bucket itself, its permissions, the uploaded objects, and the static website hosting properties.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/Joebaho-Cloud-Platform/blob/main/site/static-web-terraform-on-aws/main.tf">S3 Configuration</a></li>
</ul>
<p>We have here</p>
<ul>
<li><p><strong>Bucket name</strong>: a globally unique name for the bucket.</p>
</li>
<li><p><strong>Permissions</strong>: the bucket policy and public-access settings that make the site readable.</p>
</li>
<li><p><strong>Objects</strong>: the site files uploaded to the bucket, such as index.html and the error page.</p>
</li>
<li><p><strong>Properties</strong>: the static website hosting configuration (index and error documents).</p>
</li>
</ul>
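<p>A hedged sketch of the core S3 resources follows; the bucket name and resource labels are placeholders (see main.tf for the real configuration):</p>
<pre><code class="lang-hcl"># The bucket itself; bucket names must be globally unique
resource "aws_s3_bucket" "site" {
  bucket = "my-static-site-example"  # placeholder name
}

# Static website hosting properties
resource "aws_s3_bucket_website_configuration" "site" {
  bucket = aws_s3_bucket.site.id

  index_document {
    suffix = "index.html"
  }

  error_document {
    key = "error.html"
  }
}

# Upload the home page object
resource "aws_s3_object" "index" {
  bucket       = aws_s3_bucket.site.id
  key          = "index.html"
  source       = "index.html"
  content_type = "text/html"
}
</code></pre>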
<h5 id="heading-step-4-output-configuration">Step 4: <strong><em>Output Configuration</em></strong></h5>
<p>Known as output values, these are a convenient way to print useful information about your infrastructure on the CLI, such as the ARN, name, or ID of a resource. In this case we output the website endpoint of the S3 bucket.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/Joebaho-Cloud-Platform/blob/main/site/static-web-terraform-on-aws/outputs.tf">Output Configuration</a></li>
</ul>
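<p>As a hedged sketch, assuming the website configuration resource is named <code>aws_s3_bucket_website_configuration.site</code> (the real names are in outputs.tf), such an output looks like this:</p>
<pre><code class="lang-hcl">output "website_endpoint" {
  description = "HTTP endpoint of the static website"
  value       = aws_s3_bucket_website_configuration.site.website_endpoint
}
</code></pre>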
<h2 id="heading-instructions-of-deployment">💼 Instructions of Deployment</h2>
<p>Follow these steps to deploy the architecture:</p>
<h5 id="heading-step-5-clone-repository">Step 5: <strong><em>Clone Repository:</em></strong></h5>
<p>Clone the repository to your local machine using the command "git clone", then change into the project folder:</p>
<blockquote>
<p>git clone https://github.com/Joebaho/Joebaho-Cloud-Platform.git</p>
<p>cd Joebaho-Cloud-Platform/docs/static-web-terraform-on-aws</p>
</blockquote>
<h5 id="heading-step-6-initialize-folder">Step 6: <strong><em>Initialize Folder</em></strong></h5>
<p>Initialize the folder containing the cloned configuration files by typing the following command:</p>
<blockquote>
<p>terraform init</p>
</blockquote>
<p>You should see something like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715054741743/ca4c37ac-bc19-4bea-b482-281ad80af272.jpeg" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-7-format-files">Step 7: <strong><em>Format Files</em></strong></h5>
<p>Format the files to the canonical style, then review the changes and confirm the formatting with the command:</p>
<blockquote>
<p>terraform fmt</p>
</blockquote>
<h5 id="heading-step-8-validate-files">Step 8: <strong><em>Validate Files</em></strong></h5>
<p>Ensure that every file is syntactically valid and ready to go with the command:</p>
<blockquote>
<p>terraform validate</p>
</blockquote>
<p>If everything is good, you will see something like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715054781981/ef5295a1-6e9b-460f-bbe8-defd5b8658a1.jpeg" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-9-plan">Step 9: <strong><em>Plan</em></strong></h5>
<p>Create an execution plan that shows how Terraform will reach the desired state. It checks and confirms the number of resources that will be created. Use the command:</p>
<blockquote>
<p>terraform plan</p>
</blockquote>
<p>The list of all resources scheduled for creation will appear, and you can see all the properties (arguments and attributes) of each resource.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715054819778/f1b9e04b-1e9e-4e83-b6a9-cde631325b73.jpeg" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715054840712/401acf60-e600-40b3-a3e5-c45d0ad673d7.jpeg" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-10-apply">Step 10: <strong><em>Apply</em></strong></h5>
<p>Bring the desired state to life by launching and creating all resources listed in the configuration files. The command to perform the task is:</p>
<blockquote>
<p>terraform apply -auto-approve</p>
</blockquote>
<p>The creation process will start, and you will be able to see which resource is being created and how long each one takes.</p>
<p>At the end you will receive a summary message showing the number of resources added, changed, and destroyed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715054888811/d8c3436c-732a-4666-bd23-7cd3616fe209.jpeg" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-11-review-of-resources">Step 11: <strong><em>Review of resources</em></strong></h5>
<p>Go back to the AWS console and check the actual state of each resource one by one. You will see:</p>
<ul>
<li><p><strong>S3</strong></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715054930225/e474a5b5-f887-424e-81e7-bf5efa597a7e.jpeg" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Permissions</strong></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715054967611/91ee218f-b487-4a4f-b862-3dca59c88f23.jpeg" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>objects</strong></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715054994427/b0e344e7-de33-476b-ba77-4bb090679721.jpeg" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>properties</strong></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715055018433/c4f56230-2788-4f26-a05e-8be2020d3d12.jpeg" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Web page</strong></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715055050798/e5e4d94d-0730-4d1d-85b4-7fc969e74e96.jpeg" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Error page</strong></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715055074188/4388bbff-526b-4f5d-a820-988ee1fd0ca9.jpeg" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-12-destroy">Step 12: <strong><em>Destroy</em></strong></h5>
<p>Destroy the Terraform-managed infrastructure, meaning all resources created will be torn down. This is done with the "terraform destroy" command:</p>
<blockquote>
<p>terraform destroy -auto-approve</p>
</blockquote>
<p>At the end you will receive a message confirming that all resources have been destroyed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715055102370/6857ade3-4928-427c-8973-bea9a068ff10.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-contributing">🤝 Contributing</h2>
<p>Your perspective is valuable! Whether you see potential for improvement or appreciate what's already here, your contributions are welcomed and appreciated. Thank you for considering joining us in making this project even better. Feel free to follow me for updates on this project and others, and to explore opportunities for collaboration. Together, we can create something amazing!</p>
<h2 id="heading-license">📄 License</h2>
<p>This project is licensed under the Joebaho Cloud License</p>
]]></content:encoded></item><item><title><![CDATA[Virtual Private Cloud Architecture on AWS using Terraform]]></title><description><![CDATA[Virtual Private Cloud Architecture on AWS using Terraform
🚀 Overview:
The VPC Architecture project on AWS using Terraform aims to create a scalable and resilient infrastructure that leverages the power of Amazon Web Services (AWS) cloud platform. Th...]]></description><link>https://platform.joebahocloud.com/virtual-private-cloud-architecture-on-aws-using-terraform</link><guid isPermaLink="true">https://platform.joebahocloud.com/virtual-private-cloud-architecture-on-aws-using-terraform</guid><category><![CDATA[Custom AWS VPC with Terraform]]></category><dc:creator><![CDATA[Joseph Mbatchou]]></dc:creator><pubDate>Thu, 28 Mar 2024 19:01:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1711682342411/8ec40ba3-2107-4430-8f0d-6335fbb17a6c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-virtual-private-cloud-architecture-on-aws-using-terraform">Virtual Private Cloud Architecture on AWS using Terraform</h1>
<h2 id="heading-overview">🚀 Overview:</h2>
<p>The VPC Architecture project on AWS using Terraform aims to create a scalable and resilient infrastructure that leverages the power of the Amazon Web Services (AWS) cloud platform. This project uses Terraform, an Infrastructure as Code (IaC) tool, to provision and manage the infrastructure components, enabling automation, repeatability, and scalability. The primary objective is to design and deploy a Virtual Private Cloud (networking) architecture on AWS that consists of multiple components, including the network foundation, networking services, and traffic-flow controls. All components will be deployed across two Availability Zones (AZs) for high availability and fault tolerance.</p>
<h2 id="heading-problem-statement">🔧 Problem Statement</h2>
<p>Terraform is an IaC software tool that provides a consistent command line interface (CLI) workflow to manage hundreds of cloud services. Terraform codifies cloud APIs into declarative configuration files. In this specific case, you need to create the foundational networking (VPC, subnets, route tables, IGW, NAT Gateway, and so on). Terraform will use the configuration files to provision the infrastructure resources on which the required applications can run. It creates all the needed AWS components without our having to use the console, automating the setup, ensuring consistency, and reducing human error.</p>
<h2 id="heading-techonology-stack">💽 Technology Stack</h2>
<p>The architecture consists of the following components:</p>
<ul>
<li><p><strong>VPC</strong>: AWS VPC</p>
</li>
<li><p><strong>Subnets</strong>: AWS Subnets</p>
</li>
<li><p><strong>Route table</strong>: AWS route table</p>
</li>
<li><p><strong>NACL</strong>: AWS NACL</p>
</li>
<li><p><strong>Internet Gateway</strong>: AWS IGW</p>
</li>
</ul>
<h2 id="heading-architecture-diagram">📌 Architecture Diagram</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715055442112/e3dde262-6f54-4a4f-ab3f-333fd65519c2.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-project-requirements">🌟 Project Requirements</h2>
<p>Before you get started, make sure you have the following prerequisites in place:</p>
<ul>
<li><p><a target="_blank" href="https://www.terraform.io/">Terraform</a> installed on your local machine.</p>
</li>
<li><p>AWS IAM credentials configured in your development environment. In this case we will use VS Code as the editor.</p>
</li>
<li><p>Git installed on your local machine and a GitHub account set up: <a target="_blank" href="https://www.github.com/">GitHub</a></p>
</li>
<li><p>Git for cloning the repository.</p>
</li>
</ul>
<p>You must also be familiar with the Terraform workflow:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715055468795/5726d060-0df2-404e-a0b1-3ce1889018ac.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-table-of-contents">📋 Table of Contents</h2>
<p>I - <strong>Terraform Configuration files</strong></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-foundation-terraform-on-aws/#-Provider-configuration">Step 1: Provider Configuration</a></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-foundation-terraform-on-aws/#-variables-configuration">Step 2: Variables Configuration</a></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-foundation-terraform-on-aws/#-VPC-configuration">Step 3: VPC Configuration</a></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-foundation-terraform-on-aws/#-Output-configuration">Step 4: Output Configuration</a></p>
<p>II - <strong>Instructions of Deployment</strong></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-foundation-terraform-on-aws/#-Clone-Repository">Step 5: Clone Repository</a></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-foundation-terraform-on-aws/#-Initialize-Folder">Step 6: Initialize Folder</a></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-foundation-terraform-on-aws/#-Format-Files">Step 7: Format Files</a></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-foundation-terraform-on-aws/#-Validate-Files">Step 8: Validate Files</a></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-foundation-terraform-on-aws/#-Plan">Step 9: Plan</a></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-foundation-terraform-on-aws/#-Apply">Step 10: Apply</a></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-foundation-terraform-on-aws/#-Review-Of-Resources">Step 11: Review of Resources</a></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-foundation-terraform-on-aws/#-Destroy">Step 12: Destroy</a></p>
<h2 id="heading-terraform-configuration-files">✨Terraform Configuration files</h2>
<p>You need to write several configuration files that define the resources.</p>
<h5 id="heading-step-1-provider-configuration">Step 1: <strong><em>Provider Configuration</em></strong></h5>
<p>Here we declare the cloud provider and specify the region where resources will be launched.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/Joebaho-Cloud-Platform/blob/main/site/vpc-foundation-terraform-on-aws/providers.tf">provider Configuration</a></li>
</ul>
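<p>As a sketch of what the linked providers.tf typically contains (the region and version constraint here are example values, not necessarily the project's):</p>
<pre><code># providers.tf -- declare the AWS provider and the target region
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&gt; 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1" # example region; adjust to where you deploy
}
</code></pre>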
<h5 id="heading-step-2-variables-configuration">Step 2: <strong><em>Variables Configuration</em></strong></h5>
<p>This is where we declare all variables and their values. It includes:</p>
<ul>
<li><p><strong>Variables</strong>: named elements whose values can vary or change. They let us reuse values throughout the code without repeating ourselves and help make the configuration dynamic.</p>
</li>
<li><p><strong>Values</strong>: the value assigned to each variable.</p>
</li>
</ul>
<p>We have</p>
<ul>
<li><p><a target="_blank" href="https://github.com/Joebaho/Joebaho-Cloud-Platform/blob/main/site/vpc-foundation-terraform-on-aws/variables.tf">variables Configuration</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/Joebaho/Joebaho-Cloud-Platform/blob/main/site/vpc-foundation-terraform-on-aws/terraform.tfvars">value Configuration</a></p>
</li>
</ul>
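<p>To illustrate how the two files work together, a minimal sketch (the variable name and CIDR are placeholders; the linked files hold the real definitions):</p>
<pre><code># variables.tf -- declaration with a type and description
variable "vpc_cidr" {
  description = "CIDR block for the VPC"
  type        = string
}

# terraform.tfvars -- the value assigned to the variable
vpc_cidr = "10.0.0.0/16"
</code></pre>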
<h5 id="heading-step-3-vpc-configuration">Step 3: <strong><em>VPC Configuration</em></strong></h5>
<p>This is where you create the foundation and networking in which all the resources will be launched. It includes the VPC, subnets, IGW, NAT Gateway, EIP, and route tables.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/Joebaho-Cloud-Platform/blob/main/site/vpc-foundation-terraform-on-aws/main.tf">VPC Configuration</a></li>
</ul>
<p>It defines:</p>
<ul>
<li><p><strong>VPC</strong>: Virtual Private Cloud, the main private environment where all resources will be launched.</p>
</li>
<li><p><strong>Subnets</strong>: segmented portions of a virtual private cloud (VPC) that allow you to partition your network resources. Subnets are used to organize and manage your cloud resources more effectively by providing isolation and control over network traffic.</p>
</li>
<li><p><strong>Internet Gateway</strong>: plays a crucial role in enabling internet connectivity for resources within a VPC, allowing instances to access services, applications, and data hosted on the public internet while providing scalability, redundancy, and security features.</p>
</li>
<li><p><strong>Route Tables</strong>: fundamental networking components that control the routing of network traffic within a Virtual Private Cloud (VPC). Route tables define the rules for directing traffic from one subnet to another or to external networks, such as the internet or on-premises networks.</p>
</li>
<li><p><strong>NACL</strong>: Network Access Control Lists (NACLs) are a security layer in AWS that act as a firewall for controlling traffic in and out of one or more subnets within a Virtual Private Cloud (VPC).</p>
</li>
<li><p><strong>Security Groups</strong>: a security group acts as a virtual firewall for controlling inbound and outbound traffic to AWS resources, such as EC2 instances, RDS databases, and other services within a Virtual Private Cloud (VPC). Security groups allow you to define rules that specify the type of traffic allowed or denied based on protocols, ports, and IP addresses.</p>
</li>
</ul>
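<p>To make the pieces concrete, here is an abridged sketch of the kind of resources such a main.tf wires together (all names, CIDRs, and the availability zone are placeholders; the linked file is the authoritative version):</p>
<pre><code># main.tf -- networking foundation (abridged sketch)
resource "aws_vpc" "main" {
  cidr_block = var.vpc_cidr
  tags       = { Name = "project-vpc" }
}

resource "aws_subnet" "public" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

# Route table sending internet-bound traffic to the IGW
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}
</code></pre>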
<h5 id="heading-step-4-output-configuration">Step 4: <strong><em>Output Configuration</em></strong></h5>
<p>Also known as output values: a convenient way to get useful information about your infrastructure printed on the CLI, such as the ARN, name, or ID of a resource. In this case we print out identifiers of the networking resources that were created.</p>
<ul>
<li><a target="_blank" href="https://github.com/Joebaho/Joebaho-Cloud-Platform/blob/main/site/vpc-foundation-terraform-on-aws/outputs.tf">Output Configuration</a></li>
</ul>
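<p>For illustration, an output block follows this shape (the name and value here are placeholders; the linked outputs.tf defines the actual outputs):</p>
<pre><code># outputs.tf -- print useful identifiers after apply
output "vpc_id" {
  description = "ID of the created VPC"
  value       = aws_vpc.main.id
}
</code></pre>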
<h2 id="heading-instructions-of-deployment">💼 Instructions of Deployment</h2>
<p>Follow these steps to deploy the architecture:</p>
<h5 id="heading-step-5-clone-repository">Step 5: <strong><em>Clone Repository:</em></strong></h5>
<p>Clone the repository to your local machine using the "git clone" command:</p>
<blockquote>
<p>git clone https://github.com/Joebaho/Joebaho-Cloud-Platform/tree/main/site/vpc-foundation-terraform-on-aws</p>
</blockquote>
<h5 id="heading-step-6-initialize-folder">Step 6: <strong><em>Initialize Folder</em></strong></h5>
<p>Initialize the folder containing the cloned configuration files by typing the following command:</p>
<blockquote>
<p>terraform init</p>
</blockquote>
<p>You should see output like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715055540427/852269b8-34b2-425b-9d9b-7648a56da1ed.jpeg" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-7-format-files">Step 7: <strong><em>Format Files</em></strong></h5>
<p>Rewrite the configuration files to the canonical format and confirm consistent styling with the command:</p>
<blockquote>
<p>terraform fmt</p>
</blockquote>
<h5 id="heading-step-8-validate-files">Step 8: <strong><em>Validate Files</em></strong></h5>
<p>Ensure that every file is syntactically valid and ready to go with the command:</p>
<blockquote>
<p>terraform validate</p>
</blockquote>
<p>If everything is valid you will see something like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715055565642/c49c36e5-2914-4bdf-9243-f04edd658445.jpeg" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-9-plan">Step 9: <strong><em>Plan</em></strong></h5>
<p>Create an execution plan showing how the desired state will be achieved. It checks and reports the number of resources that will be created. Use the command:</p>
<blockquote>
<p>terraform plan</p>
</blockquote>
<p>A list of all the resources to be created will appear, and you can see the properties (arguments and attributes) of each resource.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715055597147/1cb0fc0e-ad72-4c4e-8804-0582d9af8c5b.jpeg" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715055611261/449a461a-13e2-46b4-a672-09f196e63b48.jpeg" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-10-apply">Step 10: <strong><em>Apply</em></strong></h5>
<p>Bring the desired state to life: this command launches and creates all the resources defined in the configuration files. The command to perform the task is:</p>
<blockquote>
<p>terraform apply -auto-approve</p>
</blockquote>
<p>Now the creation will start, and you will be able to see which resource is being created and how long each one takes.</p>
<p>At the end you will see a summary message giving the status and count of all resources: added, changed, and destroyed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715055649673/365f8b98-675a-4c6a-bb23-5af4a48c1680.jpeg" alt class="image--center mx-auto" /></p>
<p>Here are the outputs:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715055882756/9f471f80-9478-4d89-a3d6-7dac593efe48.jpeg" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-11-review-of-resources">Step 11: <strong><em>Review of resources</em></strong></h5>
<p>Go back to the AWS console and check the actual state of the resources one by one. You will see:</p>
<ul>
<li><p><strong>VPC</strong></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715055683892/fd1d2833-ff24-4f45-9f87-0732952d8f03.jpeg" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Subnets</strong></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715055711140/a4eeda54-9584-405a-8f92-7ba16de017f8.jpeg" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>IGW</strong></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715055730741/44710587-f727-4849-b845-937167481869.jpeg" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Route Tables</strong></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715055760199/0df6c37c-87e0-4037-968c-9aa284760dce.jpeg" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>NACL</strong></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715055781416/013d9279-d6a3-4461-bee4-020d6abbc537.jpeg" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-12-destroy">Step 12: <strong><em>Destroy</em></strong></h5>
<p>Destroy the Terraform-managed infrastructure, meaning all resources created will be torn down. This is done with the "terraform destroy" command:</p>
<blockquote>
<p>terraform destroy -auto-approve</p>
</blockquote>
<p>At the end you will receive a message confirming that all resources have been destroyed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715055802172/6848c990-64a8-4842-85b4-e29f67c800c6.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-contributing">🤝 Contributing</h2>
<p>Your perspective is valuable! Whether you see potential for improvement or appreciate what's already here, your contributions are welcomed and appreciated. Thank you for considering joining us in making this project even better. Feel free to follow me for updates on this project and others, and to explore opportunities for collaboration. Together, we can create something amazing!</p>
<h2 id="heading-license">📄 License</h2>
<p>This project is licensed under the Joebaho Cloud License</p>
]]></content:encoded></item><item><title><![CDATA[AWS VPC FlowLogs]]></title><description><![CDATA[AWS VPC FlowLogs
🚀 Overview:
The project aims to leverage VPC flow logs, a feature provided by Amazon Web Services (AWS), to enhance network visibility, security monitoring, and troubleshooting capabilities within the AWS environment. By enabling an...]]></description><link>https://platform.joebahocloud.com/aws-vpc-flowlogs</link><guid isPermaLink="true">https://platform.joebahocloud.com/aws-vpc-flowlogs</guid><category><![CDATA[AWS VPC FLOWLOGS]]></category><dc:creator><![CDATA[Joseph Mbatchou]]></dc:creator><pubDate>Thu, 28 Mar 2024 18:36:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1711681758567/1c2ab36a-61af-4ba2-a143-752b5e7d4b7f.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-aws-vpc-flowlogs">AWS VPC FlowLogs</h1>
<h2 id="heading-overview">🚀 Overview:</h2>
<p>The project aims to leverage VPC flow logs, a feature provided by Amazon Web Services (AWS), to enhance network visibility, security monitoring, and troubleshooting capabilities within the AWS environment. By enabling and analyzing VPC flow logs, the project seeks to provide valuable insights into network traffic patterns, detect security threats and anomalies, and optimize network performance. By implementing a robust VPC flow logs solution and integrating it into existing network monitoring and security frameworks, organizations can enhance their ability to manage and secure their AWS environments effectively, mitigating risks, improving operational efficiency, and ensuring compliance with regulatory requirements.</p>
<h2 id="heading-problem-statement">🔧 Problem Statement</h2>
<p>In complex cloud environments hosted on AWS, maintaining comprehensive visibility into network traffic and ensuring robust security monitoring are critical challenges for organizations. Traditional network monitoring tools may not provide sufficient visibility into traffic flows within virtual private clouds (VPCs), making it difficult to detect and investigate security incidents, troubleshoot connectivity issues, and optimize network performance. To address these challenges, the project proposes leveraging VPC flow logs, a feature provided by AWS, to capture and analyze network traffic metadata at the VPC level. VPC flow logs provide detailed information about the traffic flowing to and from network interfaces in the VPC, including source and destination IP addresses, ports, protocols, and packet counts.</p>
<h2 id="heading-techonology-stack">💽 Technology Stack</h2>
<p>The architecture consists of the following components:</p>
<ul>
<li><p><strong>VPC</strong>: AWS VPC</p>
</li>
<li><p><strong>IAM role</strong>: AWS IAM</p>
</li>
<li><p><strong>EC2 Instances</strong>: AWS EC2</p>
</li>
<li><p><strong>CloudWatch</strong>: AWS CloudWatch</p>
</li>
<li><p><strong>VPC Flowlogs</strong>: AWS VPC flow logs</p>
</li>
</ul>
<h2 id="heading-architecture-diagram">📌 Architecture Diagram</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713926764446/8900d7c3-cdb4-4c15-b583-3d29ce9ad2b6.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-project-requirements">🌟 Project Requirements</h2>
<p>Before you get started, make sure you have the following prerequisites in place:</p>
<ul>
<li><p>AWS IAM credentials configured in your development environment. In this case we will use VS Code as the editor.</p>
</li>
<li><p>Git installed on your local machine and a GitHub account set up: <a target="_blank" href="https://www.github.com/">GitHub</a></p>
</li>
<li><p>Git for cloning the repository.</p>
</li>
<li><p>An infrastructure (VPC, subnets, route tables, security groups, NACL...) ready to be used for the lab.</p>
</li>
<li><p>Two running instances, each with a user data script attached.</p>
</li>
</ul>
<p>You must also know the goals of setting up VPC flow logs, which are:</p>
<p>1- <strong>Enhanced Network Visibility</strong>: Gain comprehensive visibility into network traffic patterns and behavior within the AWS VPC environment.<br />2- <strong>Improved Security Monitoring</strong>: Detect and investigate security threats, anomalies, and unauthorized access attempts in real-time.<br />3- <strong>Efficient Troubleshooting</strong>: Quickly diagnose and troubleshoot connectivity issues, performance bottlenecks, and network anomalies.<br />4- <strong>Optimized Network Performance</strong>: Identify opportunities to optimize network performance, resource utilization, and cost efficiency based on traffic analysis and insights.</p>
<h2 id="heading-table-of-contents">📋 Table of Contents</h2>
<p>I - <strong>Infrastructure Configuration</strong></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-flow-logs-on-aws/#-VPC-Configuration">Step 1: VPC Configuration</a></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-flow-logs-on-aws/#-intances-configuration">Step 2: Instances Configuration</a></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-flow-logs-on-aws/#-web-page-output">Step 3: Web page Output</a></p>
<p>II - <strong>Instructions of Deployment</strong></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-flow-logs-on-aws/#-IAM-Role-Creation">Step 1: IAM Role</a></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-flow-logs-on-aws/#-Log-Group-Creation">Step 2: Log Group creation</a></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-flow-logs-on-aws/#-Flow-Logs_creation">Step 3: FlowLogs Creation</a></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-flow-logs-on-aws/#-Checking-Logs">Step 4: Checking Logs</a></p>
<p><a target="_blank" href="http://127.0.0.1:8000/vpc-flow-logs-on-aws/#-Understanding-Logs">Step 5: Understanding Logs</a></p>
<h2 id="heading-infrastructure-configuration">✨Infrastructure Configuration</h2>
<p>You need to create all the resources required for the project. Since this part is straightforward, we will present an infrastructure that was prebuilt.</p>
<h5 id="heading-step-1-vpc-configuration">Step 1: <strong>VPC Configuration</strong></h5>
<p>Here we declare our foundation, or networking environment: VPC, subnets, route tables, security groups, NACL, and so on.</p>
<ul>
<li>VPC</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713926820700/66569bb8-fa21-4f7a-8072-e37755ed47e3.jpeg" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-2-instances-configuration">Step 2: <strong>Instances Configuration</strong></h5>
<ul>
<li>EC2 instances</li>
</ul>
<p>We will launch two EC2 instances in the public subnets with user data attached to them. We will use each instance's public IP address to display its content in the browser.</p>
<p><a target="_blank" href="https://github.com/Joebaho/Joebaho-Cloud-Platform/blob/main/site/vpc-flow-logs-on-aws/userdata.sh">User-data</a></p>
<p>Instance 1</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713926865056/8d4eefa5-9a8e-4dce-aad8-734cfd0e1047.jpeg" alt class="image--center mx-auto" /></p>
<p>Instance 2</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713926887498/eeb4c322-7235-4f79-9037-083976d1de03.jpeg" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-3-web-page-output">Step 3: <strong>Web page output</strong></h5>
<p>We will use the public IP address of each instance to see its web page in the browser. These are the two web pages showing the message set in the user data.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713926948376/264fc1f4-fe4b-41df-993a-53c183f05eee.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-instructions-of-deployment">💼 Instructions of Deployment</h2>
<p>Follow these steps to produce the VPC Flowlogs:</p>
<h5 id="heading-step-1-create-the-iam-role">Step 1: <strong>Create the IAM role</strong></h5>
<p>Create an IAM role that allows the VPC Flow Logs service to deliver logs to CloudWatch. The CloudWatch full access permission needs to be added to the role, and the trusted entity in the role must be edited to "vpc-flow-logs.amazonaws.com".</p>
<p>Create role by giving name and a description</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713927011957/5eaee147-05a9-451e-9c27-ceab92dd046e.jpeg" alt class="image--center mx-auto" /></p>
<p>Add the CloudWatch full access permission</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713927045193/2d1e2c02-0284-4506-8a84-d1d4783f3108.jpeg" alt class="image--center mx-auto" /></p>
<p>Under "Trust relationships" you must change the trust policy by replacing the trusted entity <strong>"ec2.amazonaws.com"</strong> with <strong>"vpc-flow-logs.amazonaws.com"</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713927095557/955d6339-73a3-4f9d-b8aa-d51025a437ac.jpeg" alt class="image--center mx-auto" /></p>
<p>Follow this path: <strong>Trust relationships / click Edit trust policy / set "vpc-flow-logs.amazonaws.com"</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713927117417/8ea55d55-28d7-4a6d-af8f-84220f5ff2cf.jpeg" alt class="image--center mx-auto" /></p>
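<p>After the edit, the trust policy should have the standard shape below, with the flow logs service as the principal:</p>
<pre><code>{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "vpc-flow-logs.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
</code></pre>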
<h5 id="heading-step-2-create-the-log-group-in-the-cloudwatch">Step 2: <strong>Create the Log Group in the CloudWatch</strong></h5>
<p>Go to CloudWatch and create the log group by following this path:</p>
<p><strong>CloudWatch &gt; Log groups &gt; Click Create log group</strong></p>
<p>You will have to provide</p>
<ul>
<li><p><strong>Name of the log group</strong></p>
</li>
<li><p><strong>Retention period</strong>: the period of time for which the logs are kept.</p>
</li>
<li><p><strong>Class</strong>: the storage class of the log group.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713927167237/048ff221-07e0-468d-89fb-019712fbe982.jpeg" alt class="image--center mx-auto" /></p>
<h5 id="heading-step-3-create-flowlogs-in-vcp">Step 3: <strong>Create Flow Logs in the VPC</strong></h5>
<p>Go back to the VPC service, select the VPC you are working with, then follow the process:</p>
<p><strong>VPC &gt; select your_vpc &gt; click Flow Logs tab &gt; click Create flow logs</strong></p>
<p>In the flow log creation form you will configure several settings, such as:</p>
<ul>
<li><p><strong>Name</strong>: the identifier of the flow log.</p>
</li>
<li><p><strong>Filter</strong>: the type of traffic to capture: Accept, Reject, or All.</p>
</li>
<li><p><strong>Maximum aggregation interval</strong>: the window of time over which the flow log captures and aggregates records.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713927227101/926c6cdf-ce3a-4aa1-aee2-a1b562fc646e.jpeg" alt class="image--center mx-auto" /></p>
<p>A destination where the logs will be delivered must be specified. There are multiple options, as you can see in the picture. In our case we will send them to CloudWatch Logs. The role created earlier must also be attached.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713927250125/9b7a1255-0200-4dbd-a70d-177577bab694.jpeg" alt class="image--center mx-auto" /></p>
<p>Finally, we declare the format of the log output: we can either customize it or use the AWS default format.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713927274497/a27ab1f1-2ae9-407b-be94-096d6e74a466.jpeg" alt class="image--center mx-auto" /></p>
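<p>For readers who prefer Infrastructure as Code, the same settings can be sketched in Terraform (the resource names and references here are illustrative, not the project's actual code):</p>
<pre><code># Hypothetical Terraform equivalent of the console settings above
resource "aws_flow_log" "vpc" {
  vpc_id                   = aws_vpc.main.id
  traffic_type             = "ALL" # filter: ACCEPT, REJECT, or ALL
  max_aggregation_interval = 60    # seconds
  log_destination          = aws_cloudwatch_log_group.flowlogs.arn
  iam_role_arn             = aws_iam_role.flowlogs.arn
}
</code></pre>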
<h5 id="heading-step-4-check-the-logs">Step 4: <strong>Check the logs</strong></h5>
<p>Now it is time to see the results of the VPC Flow Logs. Remember we launched two EC2 instances. Each instance has an ENI (Elastic Network Interface), a virtual network interface that you can attach to an instance in a VPC (Virtual Private Cloud) in AWS. ENIs enable instances to communicate with other resources in the VPC and with the internet. To see the accepted or rejected traffic to the instances, go back to CloudWatch and follow the path:</p>
<p><strong>CloudWatch &gt; Log groups &gt; Choose the log group(vpc-flowlogs-group) &gt;Log streams Tabs &gt; Choose eni</strong></p>
<p>A screen like the one below will appear:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713927393619/751c37e5-987d-4365-83cc-7b7667977573.jpeg" alt class="image--center mx-auto" /></p>
<p>Picking the first ENI brings up the flow logs for that specific interface.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713927436963/cfb6debd-42bf-4c8a-ab1c-070190af2ef5.jpeg" alt class="image--center mx-auto" /></p>
<p>The second ENI produces the logs below:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713927459511/8fe53e83-2506-48e5-87d6-ccb622e33b27.jpeg" alt class="image--center mx-auto" /></p>
<h4 id="heading-step-5-understand-the-logs">Step 5: <strong>Understand the logs</strong></h4>
<p>Understanding VPC Flow Logs is essential for monitoring and analyzing network traffic within your AWS Virtual Private Cloud (VPC). VPC Flow Logs capture information about the traffic flowing in and out of network interfaces in your VPC, providing valuable insights into network activity, including the source and destination of traffic, the protocols used, and the number of packets and bytes transferred.</p>
<p>Here's how you can interpret and understand VPC Flow Logs:</p>
<p><em>1-Log Format:</em></p>
<p>VPC Flow Logs are stored in CloudWatch Logs and can be viewed in the AWS Management Console. Each log record contains information about a specific network flow, including fields such as source and destination IP addresses, ports, protocol, action (accepted or rejected), and timestamps.</p>
<p><em>2-Understanding Fields</em></p>
<p><strong>Source/Destination IP Address</strong>: The source and destination IP addresses of the network traffic.<br /><strong>Source/Destination Port</strong>: The source and destination ports used in the network communication.<br /><strong>Protocol</strong>: The protocol used in the network communication (e.g., TCP, UDP, ICMP).<br /><strong>Packets/Bytes</strong>: The number of packets and bytes transferred in the network flow.<br /><strong>Action</strong>: Indicates whether the traffic was accepted or rejected by security groups or network ACLs.<br /><strong>Timestamp</strong>: The timestamp when the network flow occurred.</p>
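<p>To make the fields concrete, here is a small stand-alone parser for a record in the AWS default flow log format (the sample record values are invented for illustration):</p>
<pre><code># Parse one VPC Flow Log record in the default format.
# Field order: version account-id interface-id srcaddr dstaddr
#              srcport dstport protocol packets bytes start end action log-status
FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_flow_record(line):
    """Split a whitespace-separated flow log record into named fields."""
    values = line.split()
    if len(values) != len(FIELDS):
        raise ValueError("expected %d fields, got %d" % (len(FIELDS), len(values)))
    record = dict(zip(FIELDS, values))
    # Convert the numeric fields so they can be aggregated or compared.
    for key in ("srcport", "dstport", "protocol", "packets", "bytes", "start", "end"):
        record[key] = int(record[key])
    return record

# Made-up example record: 10 TCP packets (protocol 6), 840 bytes, accepted.
sample = ("2 123456789010 eni-0a1b2c3d 10.0.1.25 198.51.100.7 "
          "49152 443 6 10 840 1713920000 1713920060 ACCEPT OK")
rec = parse_flow_record(sample)
</code></pre>
<p>Here protocol 6 is TCP, and the record says ten packets (840 bytes) from 10.0.1.25:49152 to 198.51.100.7:443 were accepted during the aggregation window.</p>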
<p><em>3- Traffic Patterns:</em></p>
<p>Analyze traffic patterns to identify trends and anomalies. Look for patterns such as spikes in traffic volume, unusual source/destination IP addresses, or unexpected protocols. This can help detect security threats, performance issues, or misconfigurations.</p>
<p><em>4- Security Analysis:</em></p>
<p>Use VPC Flow Logs for security analysis and monitoring. Identify and investigate suspicious or unauthorized network activity, such as unauthorized access attempts, port scans, or data exfiltration attempts. VPC Flow Logs can help you detect and respond to security incidents in real-time.</p>
<p><em>5- Troubleshooting:</em></p>
<p>Troubleshoot network connectivity issues by analyzing VPC Flow Logs. Identify failed connections, dropped packets, or misconfigured security group rules that may be causing connectivity problems. VPC Flow Logs provide valuable diagnostic information to help pinpoint the root cause of network issues.</p>
<p><em>6- Compliance and Auditing:</em></p>
<p>Use VPC Flow Logs for compliance and auditing purposes. Maintain records of network traffic for regulatory compliance requirements or internal auditing purposes. VPC Flow Logs provide a detailed audit trail of network activity within your VPC.</p>
<p><em>7- Integration with Other Tools:</em></p>
<p>Integrate VPC Flow Logs with other AWS services or third-party tools for advanced analysis and visualization. For example, you can use Amazon Athena or Amazon Elasticsearch Service to perform ad-hoc queries and visualize network traffic data for deeper insights.</p>
<p>Take a look at an example of the output:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713927487405/95f97222-8238-4a7b-a4e8-c4aacde3aa41.jpeg" alt class="image--center mx-auto" /></p>
<p>By understanding and analyzing VPC Flow Logs, you can gain valuable visibility into your AWS VPC's network traffic, enhance security monitoring, troubleshoot network issues, and ensure compliance with regulatory requirements.</p>
<h2 id="heading-contributing">🤝 Contributing</h2>
<p>Your perspective is valuable! Whether you see potential for improvement or appreciate what's already here, your contributions are welcomed and appreciated. Thank you for considering joining us in making this project even better. Feel free to follow me for updates on this project and others, and to explore opportunities for collaboration. Together, we can create something amazing!</p>
<h2 id="heading-license">📄 License</h2>
<p>This project is licensed under the Joebaho Cloud License</p>
]]></content:encoded></item></channel></rss>