11 February 2018

JENKINS: a continuous integration tool

Getting started with Jenkins is a 3-step process:

- Install Jenkins
- Download the required plugins
- Configure the plugins & create a project


# Installing Jenkins in Ubuntu
$ wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo apt-key add -
$ sudo sh -c 'echo deb http://pkg.jenkins.io/debian-stable binary/ \
  > /etc/apt/sources.list.d/jenkins.list'
$ sudo apt-get update
$ sudo apt-get install jenkins
  //this will automatically start Jenkins

# Installing Jenkins in AWS-EC2
$ sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat/jenkins.repo
$ sudo rpm --import https://pkg.jenkins.io/redhat/jenkins.io.key
$ sudo yum install -y jenkins
$ sudo yum install java -y
$ sudo service jenkins start
$ sudo chkconfig jenkins on
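
A quick sanity check after the install (assuming a default package-based install; adjust paths if you changed JENKINS_HOME) - confirm the service is up and grab the initial admin password:

$ sudo service jenkins status
$ sudo cat /var/lib/jenkins/secrets/initialAdminPassword   # paste this into the unlock screen at http://<host>:8080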

# If your /etc/init.d/jenkins script fails to start Jenkins (usually a port conflict),
edit /etc/default/jenkins and change HTTP_PORT=8080 to HTTP_PORT=8081
or
$ cd /usr/share/jenkins/
$ java -jar jenkins.war  # this will set up a fresh Jenkins from the beginning

# I prefer the command below, which restarts the existing Jenkins without any delay
$ sudo service jenkins restart

# Running docker container of Jenkins
$ docker pull punitporwal07/jenkins
$ docker run -d -p 8081:8080 -v jenkins-data:/software/jenkins punitporwal07/jenkins:tag
  to understand this command in detail, see the Docker section below

# Depending on how the system is configured, Jenkins file locations may vary -
  /etc/sysconfig/jenkins
  /var/lib/jenkins/
  /etc/default/jenkins


Useful Plugins
you can install plugins from the Manage Plugins tab or push them from the back-end into the plugins directory
- Build pipeline: to chain multiple jobs
- Delivery pipeline: visualizes delivery pipelines (upstream/downstream)
- WebLogic deployer: deploys a jar/war/ear to any WebLogic target
- Deploy to container: deploys a war/ear to a Tomcat/GlassFish container
- Role strategy: allows you to assign roles to the different users of Jenkins

Automate deployment on Tomcat using Jenkins pipeline
(benefits.war as an example on tomcat 8.x for Linux)
- install Deploy to container plugin, restart Jenkins to reflect changes
- create a new project/workspace & select post-build action as- Deploy war/ear to a container
- add properties as below:-
   - war/ear files: **/*.war
   - context path: benefits.war (provided you need to push this war file into your workspace)
   - select container from the drop-down list: tomcat 8.x
   - Add credentials: tomcat/tomcat (provided you have added this user in conf/tomcat-users.xml with the required roles - see the sketch after this list)
   - Tomcat URL: http://localhost:8080/
- apply/save your project and build it to validate the result.
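
As referenced in the credentials step, here is a minimal sketch of the conf/tomcat-users.xml entry those tomcat/tomcat credentials assume (the Deploy to container plugin talks to the Tomcat manager API, which needs the manager-script role on Tomcat 7+); adjust the username/password to your own:

$ sudo vi $CATALINA_HOME/conf/tomcat-users.xml
  <role rolename="manager-script"/>
  <user username="tomcat" password="tomcat" roles="manager-script"/>
$ $CATALINA_HOME/bin/shutdown.sh && $CATALINA_HOME/bin/startup.sh   # restart tomcat to pick up the change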

Automate deployment on Weblogic using Jenkins pipeline
(benefits.war as an example on Weblogic 10.3.6)
- install Weblogic deployer plugin, restart jenkins to reflect changes
- configure the plugin,
- create new project/workspace
- Add post-build action as- Deploy the artefact to any weblogic environment (if no configuration has been set, the plugin will display an error message, else it will open up a new window)
- add properties as below:-
   - Task Name: give any task name
   - Environment: from the drop-down list select your AdminServer ( provided you have created configuration.xml and added it to Weblogic deployer Plugin)
   - Name: The name used by the WebLogic server to display the deployed component
   - Base directory of deployment: give the path to your deployment.war or push it to your workspace and leave it blank
   - Built resource to deploy: give your deployment.war name
   - Targets: give target name
- Apply/save your project and build it to validate the result.

Possible failure of your Jenkins jobs
Problem - Jenkins: java.lang.OutOfMemoryError: Java heap space
Solution - Navigate to your Jenkins project, click configure.
Scroll down to the Build section of the page to the Build Step with your plugin title:
Click Advanced…
In the System/Java Options field, add the following parameter
JAVA_ARGS="-Xmx2048m -XX:MaxPermSize=512m"
This will assign 2 GB of heap memory to your build (the MaxPermSize flag only applies to Java 7 and earlier).
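
If the build runs inside the Jenkins JVM itself, the same flags can also be set globally - a sketch for a Debian/Ubuntu package install (on RedHat-based systems the file is /etc/sysconfig/jenkins):

$ sudo vi /etc/default/jenkins
  JAVA_ARGS="-Djava.awt.headless=true -Xmx2048m"
$ sudo service jenkins restart   # restart so the new heap size takes effect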

--

13 August 2022

Provision of EKS cluster via terraform

Amazon's managed service for Kubernetes, a.k.a. Amazon EKS (Elastic Kubernetes Service), makes it easy to deploy, manage,
and scale containerized applications using Kubernetes on AWS.
Amazon EKS runs the Kubernetes management infrastructure across multiple AWS Availability Zones, automatically detects and replaces unhealthy control plane nodes, and provides on-demand upgrades and patching when required.
You simply provision worker nodes and connect them to the provided Amazon EKS endpoint.

The architecture includes the following resources:

- EKS Cluster - AWS managed Kubernetes control plane + EC2-based worker nodes
- AutoScaling Group
- Associated VPC, Internet Gateway, Security Groups, and Subnets
- Associated IAM Roles and Policies

Once the cluster setup is complete, install kubectl on your bastion host to control your EKS cluster.

 # Install kubectl
 $ curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/linux/amd64/kubectl
 $ chmod +x kubectl && sudo mv kubectl /usr/bin/kubectl

In this article, my code (a Jenkinsfile & Terraform code) lives on a local machine, from which it is pushed to my git repository.

I then created a Jenkins pipeline job to pull the code from git and deploy it on AWS. Upon successful execution of the Terraform code, it will deploy an Amazon-managed EKS cluster with two worker nodes of Amazon EC2. (1 on-demand instance of the t2.micro size and 1 spot instance of the t2.medium size)


A Jenkinsfile for this pipeline job can be found here; replicate it as per your requirement and replace the repository name where needed.

Terraform-code to provision the EKS cluster can be found here
https://github.com/punitporwal07/git-jenkins-tf-aws/blob/main/eks-by-tf-on-aws.tf 
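
Under the hood the pipeline simply runs the standard Terraform workflow against that code; a minimal sketch of what the stages execute (the dev profile and eu-west-2 region are assumptions matching the AWS profile shown later):

$ export AWS_PROFILE=dev AWS_DEFAULT_REGION=eu-west-2
$ terraform init                        # download providers & initialize the working directory
$ terraform plan -out=tfplan            # preview the EKS cluster & worker-node resources
$ terraform apply -auto-approve tfplan  # provision the cluster (control-plane creation takes several minutes)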

once the code is ready push it to your git repository as below -

# Pushing code from local to github repository
$ git init
$ git remote add origin <your-repo-url>   # add your GitHub repository as the remote (placeholder URL)
$ git add .
$ git commit -m "sending Jenkinsfile & terraform code"
$ git push origin master


Setup Jenkins Pipeline
Login to your Jenkins dashboard and create a Pipeline job. Alternatively, a readymade config.xml for this job can be found here; copy it, push it under the /var/lib/jenkins/jobs/ directory and restart Jenkins - this will add a new pipeline job to your Jenkins.

If you are using a bastion host to test this journey, make sure you have an AWS profile set for a user which has all the required roles attached to it.

my profile for this job looks like the below -

# cat ~/.aws/credentials
[dev]
aws_access_key_id = AKIAUJSAMPLEWNHXOU
aws_secret_access_key = EIAjod83jeE8fzhx1samplerjrzj5NrGuNUT6
region = eu-west-2

Policies attached to my user to provision this EKS cluster -


Plus an inline policy for EKS-administrator access

{ "Version": "2012-10-17", "Statement": [ { "Sid": "eksadministrator", "Effect": "Allow", "Action": "eks:*", "Resource": "*" } ] }

Now you have all the required things in place to run the pipeline job from Jenkins which will pick the code from GitHub and deploy an aws-managed EKS cluster on AWS provisioned by Terraform code.

A successful job will look like the below upon completion.


Finally, the EKS cluster is created, which can be verified from the AWS console


Accessing  your cluster

1. from the Bastion host by doing the following -


 # export the necessary variables as below & set the cluster context

 $ export KUBECONFIG=$LOCATIONofKUBECONFIG/kubeconfig_myEKSCluster
 $ export AWS_DEFAULT_REGION=eu-west-2
 $ export AWS_DEFAULT_PROFILE=dev

 $ aws eks update-kubeconfig --name myEKSCluster --region=eu-west-2
 Added new context arn:aws:eks:eu-west-2:295XXXX62576:cluster/myEKSCluster to $PWD/kubeconfig_myEKSCluster
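
 With the kubeconfig and context in place, a quick sanity check from the bastion host (node names and instance types will differ in your account):

 $ kubectl get nodes -o wide   # should list the two EC2 worker nodes in Ready state
 $ kubectl get pods -A         # system pods like aws-node, coredns & kube-proxy should be Running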




2. Alternatively, you can view cluster resources from the aws-console; however, you might see the following error when you access them.


(OPTIONAL - not every user will see this message; it varies from role to role)
EKS manages user and role permissions through a ConfigMap called aws-auth that resides in the kube-system namespace. So despite being logged in with an AWS user with full administrator access to all services, EKS will still limit your access in the console, as it can't find the user or role in its authentication configuration.
This can be fixed by updating the Roles and Users in the aws-auth configmap of your EKS cluster.


 # from your bastion host 
 
 $ kubectl edit configmap aws-auth -n kube-system

 apiVersion: v1
 data:
   mapAccounts: |
     []
   mapRoles: |
     - rolearn: arn:aws:iam::2951XXXX2576:role/TF-EKS-Cluster-Role
       username: punit
       groups:
         - system:masters
         - system:bootstrappers
         - system:nodes
   mapUsers: |
     - userarn: arn:aws:iam::2951XXXX2576:user/punit
       username: punit
       groups:
       - system:masters


your TF-EKS-Cluster-Role should have the following policies attached to it


Now access the console with your custom user (in this case punit), as we have granted this user permission to view the EKS-cluster resources.



Known Issues -

The pipeline stage fails when it tries to clone the repository into the workspace with the usual 'sh git clone' command and gives the following error.
To fix this, generate a pipeline script by navigating to

Pipeline Syntax > Sample Step as git:Git > give your repository details > Generate Pipeline Script

it will generate a command for you to use in the pipeline stage -
Ex -
git branch: 'main', credentialsId: 'fbd18e1b-sample-43cd-805b-16e480a8c273', url: 'https://github.com/punitporwal07/git-jenkins-tf-aws-destroy.git'

add it in your pipeline stage and rerun it.

10 June 2018

Integrating Jenkins with GitHub

In this exercise, we are going to integrate GitHub with Jenkins using a webhook that fires an event
every time you commit a change to the code residing in your GitHub repository, which in turn invokes a job in your Jenkins.

To start with, you should have the following in place.

a. A Github account with a code repository 
b. Jenkins up and running
c. Github plugin for integration with Jenkins

First, we are going to create a webhook from our code repository by navigating to

GitHub > your-code-repository > settings > webhooks > Add webhook 

add Payload URL - https://jenkinsUrl:8080/github-webhook/
Content type - choose application/json
Secret - you can leave this field blank
Which events would you like to trigger this webhook? - Select - Just the push event
Check the Active box - this will deliver the first payload to test the provided URL
Add webhook

Now come to Jenkins and create a freestyle project by navigating to

Jenkins > new Item > item name > Freestyle project > OK

select Github project, and give project URL - https://github.com/punitporwal07/aws-codedeploy-linux/
for Source code management, select git and give repository URL - https://github.com/punitporwal07/aws-codedeploy-linux.git
Keep Branch Specifier as - */master
Under Build Triggers - check GitHub hook trigger for GITScm polling
Save

that's it. Now make a code commit in your git repo; the webhook associated with your repository detects the change and triggers a payload to the Jenkins job you created in the above step.
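
For example, a trivial change like the one below is enough to fire the webhook and kick off the job (the file name is just an example):

$ cd aws-codedeploy-linux
$ echo "<!-- webhook test -->" >> index.html
$ git add index.html && git commit -m "test jenkins webhook"
$ git push origin master   # the push event triggers the Jenkins build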

you will be able to see the console output of your build as below

--

23 November 2017

Docker: Containerization Tool

Docker allows you to encapsulate your application, operating system and configuration into a
single unit and run it anywhere.
It's all about applications, and every application requires a lot of infrastructure while utilizing only a small % of it - a massive waste of resources and cost when each app gets its own physical machine/RAM/CPU. Hence hypervisors/virtualization came into the picture, where we share the resources of a single physical machine and create multiple VMs to utilize more of it, but that is still not perfect.
Docker is the solution to the above problem: it can containerize your requirement & works on the principle of layered images.

working with docker is as simple as three steps:
  • Install Docker-engine
  • Pull the image from HUB/docker-registry
  • Run image as a container/service


How containers evolved over Virtualization
- In the virtual era, you maintain a guest OS on top of the host OS in the form of virtualization, which boots up in minutes or so,
whereas containers bypass the guest OS and boot up in a fraction of a second.
- Containers are not replacing virtualization; they are just the next step in the evolution (more advanced)

What is docker?
Docker is a containerization platform that bundles up your application and all its dependencies together in the form of an image, which you then run as a service called a container, so as to ensure that your application will work in any environment, be it Dev/Test/Prod

Point to remember
  • docker images are read-only templates used to run containers
  • There is always a base image on which you layer up your requirement
  • a container is the actual running instance of an image
  • we always create images and run containers using images
  • we can pull images from an image registry/hub, which can be public or private
  • the docker daemon runs on the host machine
  • docker0 is not a normal interface | It's a Bridge | Virtual Switch | that links multiple containers
  • Docker images are registered in an image registry & stored in an image hub
  • Docker hub is docker's own cloud repository (for sharing & caring of images)
The essence of docker: if you are new to any technology and want to work on it, get its image from the docker hub, configure it, work on it, destroy it; then you can move the same image to another environment and run it as-is out there.
                          
                      
key attributes of the kernel used by containers
  • Namespaces (PID, net, mountpoint, user) provide isolation
  • cgroups (control groups)
  • capabilities (assigning privileges to container users)
  • but each container shares the common kernel
how communication happens b/w the docker client & docker daemon
  • REST API
  • Socket.IO
  • TCP

A Dockerfile supports the following instructions

FROM       image:tag AS name
ADD        ["src",... "dest"]
COPY       /src/ dest/
ENV        ORACLE_HOME=/software/Oracle/
EXPOSE     port [port/protocol]
LABEL      multi.label1="value1" multi.label2="value2" other="value3"
STOPSIGNAL signal
USER       myuser
VOLUME     /myvolume
WORKDIR    /locationof/directory/
RUN        <your shell command>
CMD        ["executable","param1","param2"]
ENTRYPOINT ["executable","param1","param2"] (exec form, preferred)
ENTRYPOINT command param1 param2 (shell form)

How RUN | ENTRYPOINT | CMD differ from each other
RUN is a build-time instruction, used to add layers to the image & install packages

ENTRYPOINT is not mandatory to use; it is not overridden by the arguments you pass to docker run (only the --entrypoint flag replaces it). Whatever you set in ENTRYPOINT is treated as the first command of that container.

CMD only executes at runtime. It provides the default command/arguments for the container at launch, equivalent to docker run <args> <command>, and can be overridden from the command line. Only the last CMD in a Dockerfile takes effect.
Shell form - commands are expressed the same way as a shell command; they get prepended with "/bin/sh -c", so variable expansion etc. works.
Exec form | json array style - ["command", "arg1"]
the container doesn't need a shell | no variable expansion | no special characters (&&, ||, <>)
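
A quick sketch to see how ENTRYPOINT and CMD combine at run time (the image name demo-entry is arbitrary):

$ cat > Dockerfile <<'EOF'
FROM alpine:3.18
ENTRYPOINT ["echo", "hello"]
CMD ["world"]
EOF
$ docker build -t demo-entry .
$ docker run --rm demo-entry           # prints: hello world
$ docker run --rm demo-entry docker    # prints: hello docker - the argument replaces CMD, not ENTRYPOINT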


Some arguments which you can use while running any docker Image

$ docker run -it --privileged image:tag
--privileged gives all capabilities to the container and lifts all the limitations enforced by the OS/device; you can even run Docker inside Docker with it.

Installing docker-engine onto any Ubuntu system


$ sudo apt-get update -y && sudo apt-get install -y docker.io
# this will install docker-engine as a Linux service. Check the engine status by running
$ service docker status   # else $ service docker start

check docker details installed in your system by running any of these commands


$ docker -v | docker version | docker info


Docker needs root to work, for the creation of namespaces/cgroups/etc.

$ ls -l /var/run/docker.sock
srw-rw---- 1 root docker 0 Jun 21 06:43 /var/run/docker.sock


so you need to add your local user to the docker group (verify the docker group in /etc/group) and add your user as:

$ sudo usermod -aG docker $USER
# restart your session

# Alternatively add your user to the docker group
$ vi /etc/group
# append $USER to docker group and start using docker with your user now

# if it fails with -

level=error msg="'overlay' is not supported over btrfs" level=fatal msg="Error starting daemon: error initializing graphdriver:
Failed to start Docker Application Container Engine.

it means the underlying storage driver defined in /etc/docker/daemon.json is not supported
remove that file, clear /var/lib/docker/* and
restart the docker service


Basic commands

 Function                              Command
 pull a docker image                   docker pull reponame/imagename:tag
 run an image                          docker run <parameters> imagename:tag
 list docker images                    docker images
 list running containers               docker ps
 list all containers (even stopped)    docker ps -a
 build an image                        docker build -t imagename:tag .
 remove n containers in one command    docker rm $(docker ps -a -q)        // for older versions
                                       docker container prune -f           // for newer versions
 remove n images in one command        docker rmi $(docker images -a -q)
 reset the docker system               docker system prune
 create a volume                       docker volume create
 run using a mount point               docker run -it -p 8001-8006:7001-7006 --mount type=bind,source=/software/,target=/software/docker/data/ registry.docker/weblogic12213:191004
 run using a named volume              docker run -it -p 8001-8006:7001-7006 -v data:/software/ registry.docker/weblogic1036:191004
 create a network                      docker network create --driver bridge --subnet=192.168.0.0/20 --gateway=192.168.0.2 mynetwork
 run a container on a network          docker run -it -p 8001-8006:7001-7006 --network=mynetwork registry.docker/weblogic1036:191004
 for more on networking                click here: networking in docker


As an exercise, let's attempt to set up Jenkins via Docker on a Linux machine

Open a terminal window and run (provided Docker is already installed)
$ docker pull punitporwal07/jenkins
$ docker container run --rm -d -p 9090:8080 -v jenkins-data:/var/jenkins_home/ punitporwal07/jenkins
where
docker run : default command to run any docker container
--rm : removes the container as soon as its process exits
-d : runs the container in detached mode (in the background) and prints the container ID
-p : port mapping from container to host as -p host-port:container-port
-v : maps the Jenkins data in /var/jenkins_home/ to the jenkins-data volume on your file system
punitporwal07/jenkins : docker will pull this image from the image registry

it will process for 2-3 mins then prompt as:

INFO: Jenkins is fully up and running
to access the jenkins console (http://localhost:9090) for the first time you need to provide the admin password, to make sure it was installed by an admin only. It will prompt for the admin password during the installation process, something like:
e72fb538166943269e96d5071895f31c
This may also be found at: /var/jenkins_home/secrets/initialAdminPassword

here we are running Jenkins inside docker as a detached container, so you can use:
$ docker logs <container-id>   # to collect the jenkins logs
if we select to install the recommended plugins, Jenkins will by default install the ones that are most useful
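
A handy way to grab that initial admin password straight from the running container (assuming the default jenkins home path inside the image):

$ docker ps                          # note the container ID of the jenkins container
$ docker logs -f <container-id>      # follow the startup logs until "Jenkins is fully up and running"
$ docker exec <container-id> cat /var/jenkins_home/secrets/initialAdminPassword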

Best practice for writing a Dockerfile
The best practice is to run a container first and execute, one by one, all the instructions you plan to put in the Dockerfile. Once they succeed, put them in your Dockerfile; this saves you from rebuilding images from your Dockerfile again and again and saves image layers as well.

Writing a Dockerfile: (FROM COPY RUN CMD)

a container runs on layers of images:
            base image
            layer1 image
            layer2 image

Dockerfiles are simple text files with a command on each line.
To define a base image we use the instruction FROM 

Creating a Dockerfile
  • The first line of the Dockerfile should be FROM nginx:1.11-alpine (it is better to use an exact version rather than latest, as latest can drift away from your desired version)
  • COPY allows you to copy files from the directory containing the Dockerfile to the container's image. This is extremely useful for source code and assets that you want to be deployed inside your container.
  • RUN allows you to execute any command as you would at a command prompt, for example installing different application packages or running a build command. The results of the RUN are persisted to the image, so it's important not to leave any unnecessary or temporary files on the disk, as these will be included in the image & it will create an image layer for each command
  • CMD is used to execute a single command as soon as the container launches

Life of a docker Image

write a Dockerfile > build the image > tag the image > push it to registry > pull the image to any system > run the image as container

vi Dockerfile:

FROM ubuntu:22.04
LABEL maintainer="xxx@xx.com"
RUN apt-get update && apt-get install -y curl
CMD ["bash"]

$ docker build -t imagename:tag .
$ docker tag 4a34imageidgfg43 punixxorwal07/image:tag
$ docker push punixxorwal07/image:tag
$ docker pull punixxorwal07/image:tag
$ docker run -it -p yourPort:imagePort punixxorwal07/image:tag

How to Upload/Push your image to a registry

after building your image (docker build -t imageName:tag .) do the following:

step1- login to your docker registry
$ docker login --username=punitporwal   # newer docker versions no longer accept the --email flag

list your images
$ docker images

step2- tag your image for registry
$ docker tag b9cc1bcac0fd reponame/punitporwal07/helloworld:0.1

step3- push your image to registry
$ docker push reponame/punitporwal07/helloworld:0.1

your image is now available and open to the world; by default your images are public.

repeat the same steps if you wish to make any changes to your docker image: make the changes, tag the new image, and push it to your docker hub

Running your own image registry
$ docker pull registry:2
$ docker run -d -p 5000:5000 --restart always -v /registry:/var/lib/registry --name registry registry:2 
if it's an insecure registry, update registries.conf with an entry for your insecure registry before pushing your image to it
$ sudo vi /etc/containers/registries.conf
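
Once the registry container is up, pushing to it is just a matter of re-tagging with the localhost:5000 prefix (the helloworld:0.1 image name is reused from the example above):

$ docker tag helloworld:0.1 localhost:5000/helloworld:0.1
$ docker push localhost:5000/helloworld:0.1
$ curl http://localhost:5000/v2/_catalog   # lists the repositories stored in your registry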

Volumes in Docker

first of all, create a volume for your docker container using the command

$ docker volume create myVolume
$ docker volume ls 
DRIVER              VOLUME NAME
local               2f14a4803f8081a1af30c0d531c41684d756a9bcbfee3334ba4c33247fc90265
local               21d7149ec1b8fcdc2c6725f614ec3d2a5da5286139a6acc0896012b404188876
local               myVolume

thereafter, use the following ways to use the volume feature:
we can define a volume in one container and the same can be shared across multiple containers

to define in container 1
$ docker run -it -v /volume1 --name voltainer centos /bin/bash

to use the same volume in another container (taking the volumes from the first container)
$ docker run -it --volumes-from=voltainer centos /bin/bash

we can mount volumes into a container from the Docker engine host
$ docker run -v /data:/data
$ docker run --volume mydata:/mnt/mqm

     /volumeofYourHost/:/volumeofContainer/

to define in a Dockerfile
VOLUME /data (but we cannot bind a host path to the container this way; only the docker run command can do that)
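
To see where a named volume actually lives on the host, and to reuse it in a new container:

$ docker volume inspect myVolume   # the Mountpoint field shows the host path, e.g. /var/lib/docker/volumes/myVolume/_data
$ docker run -it -v myVolume:/data centos /bin/bash   # mounts the same named volume at /data inside the container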


DOCKER DAEMON LOGGING

first of all stop the docker service
$ service docker stop
$ dockerd --log-level debug &
# on older docker versions: docker -d -l debug &  (-d is for daemon, -l is the log level)
# & puts the daemon in the background so we get our terminal back
or
$ vi /etc/default/docker
add the log-level
DOCKER_OPTS="--log-level=fatal"
then restart the docker daemon
$ service docker start


Br
Punit

19 January 2021

Achieve CI/CD using AWS managed resources & with Jenkins

AWS provides the flexibility to set up pipelines using the below three services to achieve continuous integration and continuous deployment a.k.a CI/CD

a. CodeCommit/Git - store code in AWS CodeCommit or a private git repository.
b. CodeDeploy - automate code deployments
c. CodePipeline - service that builds, tests & deploys your code

a. Set up CodeCommit, which will be used for version control and is a useful tool during CI/CD.
The first thing is to get the required permissions for the user, set up aws-credentials for your aws environment & then create a repository

a1. Services > IAM > Users > User > Permissions > Attach existing policy > AWSCodeCommitPowerUser
a2. Services > IAM > Users > User > Security credentials > HTTPS Git credentials for AWS CodeCommit > Generate credentials
a3. Services > CodeCommit > create a repo (MyRepo) > clone URL via HTTPS

 from any host having git installed, do the following
 $ git clone 'clone-https-url'
 // it will prompt for credentials here
 pass the same cred, that you have generated in step a2 
 $ cd repository-directory
 $ vi index.html
 $ vi appspec.yml
 $ git status
 $ git add index.html appspec.yml
 $ git status
 $ git commit -m 'initial commit'
 $ git push origin master // it will connect via https url and push the file to MyRepo
 $ git log // validate push


this completes the repository setup with a simple source code
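
The appspec.yml added above is what CodeDeploy reads to know where to copy the files and which lifecycle scripts to run; a minimal sketch for an EC2/on-premises deployment of index.html (the hook script name is a placeholder):

$ cat > appspec.yml <<'EOF'
version: 0.0
os: linux
files:
  - source: index.html
    destination: /var/www/html/
hooks:
  AfterInstall:
    - location: scripts/restart_server.sh   # placeholder script
      timeout: 120
      runas: root
EOF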

b. Set up CodeDeploy to deploy an App: to automate deployments and add new features continuously, the first step is to create 2 roles - one for the EC2 instance and one for CodeDeploy.

first, set up the roles for CodeDeploy

create Role1
b1. Services > IAM > Roles > CreateRole > EC2 > AmazonEC2RoleforAWSCodeDeploy
create Role2
b2. Services > IAM > Roles > CreateRole > CodeDeploy > AWSCodeDeployRole
create an application
b3. services > codeDeploy > createApp on Ec2
create appDeploymentGroup > name-myappgrp > servicerole-codedeploy-role2 > D-type-In-place > env-config-Amazon-EC2 > add-tag > choose-target-group > createDeploymentGroup

c. Set up CodePipeline to deploy code from CodeCommit
Services > CodePipeline > create > name-pipeline > arn:role > source-CodeCommit > connect/authorize > select-RepoCreatedInStep-a3 > branch-Master > detection-aws-codePipeline > buildProvider-skipBuildStage > deploymentProvider-AWS CodeDeploy > App-myApp > D-Group-myappgrp > createPipeline

once the pipeline is created it will connect to the source & start deploying the code present in CodeCommit, and appear as below -


Similarly, you can use GitHub also as a source provider 

Set up CodePipeline to deploy code from GitHub
Services > CodePipeline > create > name-pipeline arn:role> source-GitHub > connect/authorize > Add Git-Repo > branch-Main > detection-aws-codePipeline-ec2 > buildProvider- skip build stage > deploymentProvider-AWS CodeDeploy App-myApp > D-Group-myappgrp > createPipeline

once the pipeline is created it will connect to the source, Git this time, & start deploying the code present in GitHub, and appear as below -


Note: when creating a deployment group you are required to choose your deployment & environment type - that is where you will deploy your application; in this exercise I chose to deploy on an Amazon EC2 instance using a tag value.

At this stage, you should be able to access your application deployed on the EC2 instance using aws CodeCommit/Git > CodePipeline > S3 > CodeDeploy > EC2 instance

In order to use Jenkins, connect to your Jenkins and do the following



 Install the listed plugins from the Manage Plugins section of Jenkins & restart Jenkins

 GitHub Integration
 AWS CodePipeline
 AWS CodeDeploy
 AWS CodeBuild
 Http Request
 File Operations
 Blue Ocean // for the new Jenkins UI
 

Time to create a Jenkins Job
  • Create a new Freestyle Project
  • Add your Source Code Management
  • Choose Git, provide repository URL.
  • For Build Trigger select Poll SCM to add "H/2 * * * *" in Schedule.
  • Under Build Environment, select Delete workspace before build starts check box.
  • Under Build actions select AWS CodeBuild. On the AWS Configurations, choose Manually specify access and secret keys and provide the keys. 
  • Project Configuration - Provide Region, Project Name, choose Use Jenkins source & leave everything blank.
  • Add the second Build step and choose File operation (To make sure that all files cloned from the GitHub repository are deleted) Click Add and select File Delete and give '*' under include file pattern.
  • Add the third Build step and choose Http request and give the S3 URL of your zip file. ex: http://s3-us-east-1.amazonaws.com/s3-bucket-2021/code-artifact.zip
    Ignore SSL error - yes
  • Under HTTP Request, choose Advanced and leave Authorization, Headers & Body to default values. 
  • Under Response, for Output response to file, enter the code-artifact.zip file name.
  • Add fourth Build step as File operation with 2 substeps - Unzip & File Delete and provide code-artifact.zip to include as zip file & file pattern in file Delete.
  • On the Post-build Actions, choose Add post-build actions and select the Deploy an application to AWS CodeDeploy check box and provide values
    CodeDeploy App Name
    CodeDeploy AppGrp Name
    CodeDeployDefault.OneAtATime
    Your AWS Region
  • at last choose Deploy Revision
       - choose wait for deployment to finish?
you should be able to access your application deployed on the EC2 instance using CI/CD flow.

--