HYBRID MULTI-CLOUD (TASK 2)

Amardeep Kumar
Sep 5, 2020 · 5 min read

Create/launch an application using Terraform, using EFS instead of EBS on AWS.

Here I have upgraded my Task 1 (launching a web server on AWS with Terraform), given by Mr. Vimal Daga Sir.

What is upgraded?

Here we are using the EFS service instead of EBS on AWS.

But Why EFS?

The main difference is that an EBS volume is accessible from a single EC2 instance in one Availability Zone, while EFS lets you mount the same file system on multiple instances across Availability Zones. Amazon S3, by contrast, is an object store that is good at storing vast numbers of backups or user files.

So, to use EFS for storage, we need to follow these steps.

Let’s start our journey!!

Task Description:

1. Create a security group which allows port 80.

2. Launch an EC2 instance.

3. In this EC2 instance, use an existing or provided key and the security group created in step 1.

4. Launch one volume using the EFS service, attach it in your VPC, then mount that volume to /var/www/html.

5. The developer has uploaded the code into a GitHub repo, and the repo also has some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change the permission to public readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Prerequisite for this project:

  • First of all, install and set up Terraform on your system.
  • Have some knowledge of the basic commands of the Linux operating system.
  • Some basic Terraform commands:

terraform init:- Initializes a working directory containing Terraform configuration files

terraform validate:- Validates the Terraform files

terraform apply:- Builds or changes infrastructure

terraform destroy:- Destroys Terraform-managed infrastructure

-auto-approve:- Skips interactive approval of the plan before applying

Execution:

First you need to log in to AWS with an IAM user account. You can use the root account, but for security an IAM user is better, since restricted permissions can be applied to it.

aws configure --profile user_name
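Running this command will interactively ask for the access key, secret key, default region, and output format of that IAM user. The values below are placeholders, not real credentials; I am using ap-south-1 to match the provider configuration in Step I:

AWS Access Key ID [None]: <your access key id>
AWS Secret Access Key [None]: <your secret access key>
Default region name [None]: ap-south-1
Default output format [None]: json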

Step I:

The Amazon Web Services (AWS) provider is used to interact with the many resources supported by AWS. The provider needs to be configured with the proper credentials before it can be used.

provider "aws" {
  region  = "ap-south-1"
  profile = "amar"
}
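Optionally, if you are on Terraform 0.13 or newer, you can also pin the provider version so the configuration keeps working as the AWS provider evolves. This is a minimal sketch; the version constraint here is only an example, not part of the original task:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}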

Step II:

Creating a security group which allows ports 80 and 22.

resource "aws_security_group" "mysg" {
name = "mysg"
description = "Allow SSH and HTTP"
vpc_id = "vpc-af8e93c7"


ingress {
description = "HTTP"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}


ingress {
description = "SSH"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}

egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}

tags = {
Name = "allow_http"
}
}
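Note: the EFS mount target in Step IV reuses this same security group, and NFS traffic arrives on port 2049, which the rules above do not open. If the mount in Step IV hangs, an extra ingress rule like the sketch below (my addition, not part of the original configuration) can be placed inside the resource above:

  # Allow NFS so the EFS mount target accepts traffic from the instance
  ingress {
    description = "NFS"
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }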

Step III:

Let's launch an EC2 instance in the public subnet, using the security group that allows HTTP, SSH and NFS. We use the remote-exec provisioner here, which runs the given commands directly inside the newly created OS, and a pre-created key pair for SSH authentication.

resource "aws_instance" "myos" {
ami = "ami-0447a12f28fddb066"
instance_type = "t2.micro"
key_name = var.enter_ur_key_name
security_groups = ["${aws_security_group.mysg.name}"]


connection {
type = "ssh"
user = "ec2-user"
private_key = file("C:/Users/abc/Downloads/abc.pem")
host = aws_instance.myos.public_ip
}


provisioner "remote-exec" {
inline = [
"sudo yum install httpd php git -y",
"sudo systemctl restart httpd",
"sudo systemctl enable httpd",
]
}
tags = {
Name = "oshin"
}
}
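The instance refers to var.enter_ur_key_name, so a matching variable declaration is needed somewhere in the configuration. A minimal declaration might look like this (the description text is mine):

variable "enter_ur_key_name" {
  description = "Name of an existing EC2 key pair to use for SSH"
  type        = string
}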

Step IV:

Launch one volume using the EFS service, attach it in your VPC, then mount that volume to /var/www/html.

resource "aws_efs_file_system" "myefs" {
creation_token = "my-efs"
tags = {
Name = "myefs1"
}
}

# To mount target of EFS to EC2 Instance

resource "aws_efs_mount_target" "mountefs" {
depends_on = [aws_efs_file_system.myefs,]
file_system_id = aws_efs_file_system.myefs.id
subnet_id = "subnet-ebe5df83"
security_groups = [aws_security_group.allow_http.id]
}

# To mount the EFS volume and deploy the code

resource "null_resource" "nullremote" {
  depends_on = [
    aws_efs_mount_target.mountefs,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    port        = 22
    private_key = file("C:/Users/abc/Downloads/abc.pem")
    host        = aws_instance.myos.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mount -t nfs4 ${aws_efs_mount_target.mountefs.ip_address}:/ /var/www/html/",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/Amar2582/MultiCloud-project.git /var/www/html/",
    ]
  }
}
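One thing to be aware of: the mount command above only lasts until the instance reboots. If you want the EFS volume to be remounted automatically, an extra command like the following can be appended to the same inline list (this is my addition, a sketch rather than part of the original task):

      "echo '${aws_efs_mount_target.mountefs.ip_address}:/ /var/www/html nfs4 defaults,_netdev 0 0' | sudo tee -a /etc/fstab",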

Step V:

Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change the permission to public readable.

resource "aws_s3_bucket" "mybucket" {
bucket = "oshin8858"
acl = "public-read"
force_destroy = true

provisioner "local-exec" {
command = "git clone https://github.com/2010ankita/multicloud.git terra-image"
}


provisioner "local-exec" {
when = destroy
command = "echo Y | rmdir /s terra-image"
}
}

resource "aws_s3_bucket_object" "image-upload" {
bucket = aws_s3_bucket.mybucket.bucket
key = "oshin.jpg"
source = "terra-image/oshin.jpg"
acl = "public-read"
content_type = "images/jpg"
depends_on = [
aws_s3_bucket.mybucket,
]
}
output "my_bucket_id"{
value = aws_s3_bucket.mybucket.bucket
}

Step VI:

Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in the /var/www/html folder.
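Before the distribution itself, note that the code below refers to local.s3_origin_id, which is not declared anywhere in the snippets above, so a small locals block is needed. The value here is just an example name I have chosen:

locals {
  s3_origin_id = "myS3Origin"
}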

resource "aws_cloudfront_distribution" "distribution" {
depends_on = [
aws_s3_bucket_object.image-upload,
]

origin {
domain_name = "${aws_s3_bucket.mybucket.bucket_regional_domain_name}"
origin_id = "${local.s3_origin_id}"
}

enabled = true

default_cache_behavior {
allowed_methods = [ "GET", "HEAD", "OPTIONS"]
cached_methods = ["GET", "HEAD"]
target_origin_id = "${local.s3_origin_id}"


forwarded_values {
query_string = false

cookies {
forward = "none"
}
}
viewer_protocol_policy = "allow-all"
min_ttl = 0
default_ttl = 3600
max_ttl = 86400
}


restrictions {
geo_restriction {
restriction_type = "none"
}
}


viewer_certificate {
cloudfront_default_certificate = true
}
connection {
type = "ssh"
user = "ec2-user"
private_key = file("C:/Users/AMAR/Pictures/squd.jpg")
host = aws_instance.myos.public_ip
}

provisioner "remote-exec"{
inline = [
"sudo su <<END",
"echo \"<img src='http://${aws_cloudfront_distribution.distribution.domain_name}/${aws_s3_bucket_object.image-upload.key}' height='400' width='450'>\" >> /var/www/html/index.php",
"END",
]
}
}
output "my_ip"{
value = aws_instance.myos.public_ip
}
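It can also be handy to print the CloudFront domain name, so you can verify the image URL directly after the apply finishes. An optional extra output (my addition):

output "cloudfront_domain" {
  value = aws_cloudfront_distribution.distribution.domain_name
}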

Step VII:

Save the file and run the following commands:

terraform init                    
terraform validate
terraform apply -auto-approve
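
When you no longer need the setup, the whole infrastructure created above can be torn down just as easily:

terraform destroy -auto-approve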

With this, our task is completed successfully!!!

Thanks For Reading!!!

Task 2 GitHub Link:

Task 1 Blog link:

https://www.linkedin.com/posts/amardeep-kumar-2638911a1_vimaldaga-vimaldaga-cloud-activity-6682041342295707648-mrxH

Task 1 GitHub link:-

Thank You!!!
