15 October 2024

Secret detection in GitLab

This article will help you enable secret detection in GitLab, a.k.a. Gitleaks.

Once you have created your GitLab project, you need the following file structure in it, as highlighted below -

ref - https://docs.gitlab.com/ee/user/application_security/secret_detection/pipeline/index.html#detecting-complex-strings 


 # .gitlab-ci.yml

 # See https://docs.gitlab.com/ee/ci/variables/#cicd-variable-precedence

 include:
   - template: Jobs/Secret-Detection.gitlab-ci.yml
 secret_detection:
   variables:
     SECRETS_ANALYZER_VERSION: "4.5"


 # .gitlab/secret-detection-ruleset.toml

 # See https://docs.gitlab.com/ee/user/application_security/secret_detection/pipeline/index.html#create-a-ruleset-configuration-file

 [secrets]
   [[secrets.passthrough]]
     type = "file"
     target = "gitleaks.toml"
     value = "extended-gitleaks-config.toml"



 # extended-gitleaks-config.toml

 # See https://docs.gitlab.com/ee/user/application_security/secret_detection/pipeline/index.html#detecting-complex-strings

 [extend]
 # Extends default packaged ruleset, NOTE: do not change the path.
 path = "/gitleaks.toml"

 [[rules]]
   description = "Generic Password Rule"
   id = "generic-password"
   regex = '''(?i)(?:pwd|passwd|password)(?:[0-9a-z\-_\t .]{0,20})(?:[\s|']|[\s|"]){0,3}(?:=|>|=:|:{1,3}=|\|\|:|<=|=>|:|\?=)(?:'|\"|\s|=|\x60){0,5}([0-9a-z\-_.=\S_]{3,50})(?:['|\"|\n|\r|\s|\x60|;]|$)'''
   entropy = 3.5
   keywords = ["pwd", "passwd", "password"]

 [[rules]]
   description = "Generic Complex Rule"
   id = "COMPLEX_PASSWORD"
   regex = '''(?i)(?:key|api|token|secret|client|passwd|password|PASSWORD|auth|access)(?:[0-9a-z\-_\t .]{0,20})(?:[\s|']|[\s|"]){0,3}(?:=|>|:{1,3}=|\|\|:|<=|=>|:|\?=)(?:'|\"|\s|=|\x60){0,5}([0-9a-z\-_.=]{10,150})(?:['|\"|\n|\r|\s|\x60|;]|$)'''
   severity = "high"

More complex rules can be referred to from here - https://github.com/gitleaks/gitleaks/blob/master/config/gitleaks.toml
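
Before committing the extended config, you can sanity-check the ruleset locally with the gitleaks CLI (a minimal sketch, assuming gitleaks v8+ is installed and you run it from the repository root):

 scan the working tree with the extended ruleset; exit code 1 means leaks were found
 $ gitleaks detect --source . --config extended-gitleaks-config.toml --verbose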

Once you have this structure in place, enable the GitLab Runner and invoke the pipeline.

In case the runner is not registered, do the following within your project -

ref - https://docs.gitlab.com/runner/register/ 

Navigate to your GitLab project > Settings > CI/CD > expand the Runners tab.

Click on the three dots and retrieve the registration token -

Log in to your GitLab-Runner VM and register your project.


 $ gitlab-runner register

 Runtime platform                                    arch=amd64 os=linux pid=927974 revision=853330f9 version=16.5.0
 Running in system-mode.

 Enter the GitLab instance URL (for example, https://gitlab.com/):
 https://gitlab.company.domain.com/
 Enter the registration token:
 GR13489416BxxxxAG2y_9-ysB_tdR
 Enter a description for the runner:
 [gitrunnerinstance01]: myproject-runner
 Enter tags for the runner (comma-separated):

 Enter optional maintenance note for the runner:

 WARNING: Support for registration tokens and runner parameters in the 'register' command has been deprecated in GitLab Runner 15.6
 and will be replaced with support for authentication tokens. For more information, see https://docs.gitlab.com/ee/ci/runners/new_creation_workflow
 Registering runner... succeeded                     runner=GR13489416BtsyuAG
 Enter an executor: shell, ssh, docker-autoscaler, docker+machine, kubernetes, custom, docker, docker-windows, parallels, virtualbox, instance:
 docker
 Enter the default Docker image (for example, ruby:2.7):
 gcr.io/kaniko-project/executor

 Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!

 Configuration (with the authentication token) was saved in "/etc/gitlab-runner/config.toml" 
 

Now that the runner is registered with your project, the CI/CD pipeline will be invoked on your next commit. If your project contains any secrets, they will be detected in the pipeline and a job artifact will be generated as a JSON report containing the detected leaks, as shown in the screenshot below -


leak report showing detected secrets -


gl-secret-detection-report.json can be downloaded by navigating through Jobs > Artifacts > gl-secret-detection-report.json
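
For a quick view of the findings from the command line, you can filter the downloaded report with jq (a small sketch; the field names follow GitLab's secret-detection report schema, so adjust them if your GitLab version differs):

 summarize each detected leak from the report
 $ jq '.vulnerabilities[] | {description, file: .location.file, line: .location.start_line}' \
   gl-secret-detection-report.json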

That is how secrets in the code are detected by Gitleaks in the CI/CD pipeline as soon as the code is pushed to the source repository, GitLab.

27 October 2023

Testing Ingress in Azure Kubernetes Service

If you are familiar with the Kubernetes ecosystem, you know we can use NGINX as an ingress controller to expose Kubernetes services on a static external/public IP address, and Azure Kubernetes Service provides the solution. In AKS, you have the option of installing your NGINX ingress controller (using Helm) on an internal network. This ensures that your resources are only accessible on your internal network and that they can be reached through an ExpressRoute or a VPN connection.

The process of setting up Ingress in AKS can be summarized in the following steps:

Step 01 - Create an AKS cluster and connect through a bastion host
Step 02 - Set up ingress and create a static public IP
Step 03 - Install the ingress controller using Helm
Step 04 - Create an ingress route


Step-01: Create an AKS cluster and connect from a bastion host

Pre-requisites
 - Verify you have the Microsoft.OperationsManagement and Microsoft.OperationalInsights providers registered on your subscription.
   (These Azure resource providers are required to support Container insights)
 - A resource group under which the AKS cluster will be deployed.
 - Enable the addon http_application_routing in case you do not have https enabled.


 Verify the providers are registered on your subscription
 $ az provider show -n Microsoft.OperationsManagement -o table
 $ az provider show -n Microsoft.OperationalInsights -o table

 Enable http_application_routing
 $ az aks enable-addons --resource-group test-networking-uksouth \
   --name aks-cluster --addons http_application_routing

 $ az provider show -n Microsoft.ContainerService -o table
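
If either provider shows as NotRegistered, register it before creating the cluster (a quick sketch; registration may take a few minutes to complete):

 $ az provider register --namespace Microsoft.OperationsManagement
 $ az provider register --namespace Microsoft.OperationalInsights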



Presuming you have a resource group in place, proceed to create an AKS cluster now -


 $ az aks create -g test-networking-uksouth -n aks-cluster \
   --enable-managed-identity \
   --node-count 1 \
   --enable-addons monitoring \
   --enable-msi-auth-for-monitoring \
   --generate-ssh-keys


Next, connect to the AKS cluster from the bastion host -


 $ az login

 set the cluster subscription
 $ az account set --subscription bf687f81-fad1-48cb-b38c-b6e9b00a5cfe

 download cluster credentials
 $ az aks get-credentials --resource-group test-networking-uksouth --name aks-cluster

 install Kubectl
 $ curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/linux/amd64/kubectl
 $ chmod +x kubectl && mv kubectl /usr/bin/
 $ kubectl cluster-info

Setup Ingress 

Step-02: Create a static public IP -


 Get the resource group name of the AKS cluster that you need to update in the subsequent command
 $ az aks show --resource-group test-networking-uksouth --name aks-cluster --query nodeResourceGroup -o tsv

 Create a public IP address with the static allocation &
 ASSOCIATE - assign this public IP to a load balancer & replace them
 $ az network public-ip create \
   --resource-group MC_test-networking-uksouth_aks-cluster_uksouth \
   --name myAKSPublicIPForIngress \
   --sku Standard \
   --allocation-method static \
   --query publicIp.ipAddress -o tsv

 ex - o/p - 51.104.208.183

Step-03: Install Ingress Controller using helm


 Install Helm3 (if not installed)

 $ wget https://get.helm.sh/helm-v3.10.3-linux-amd64.tar.gz 
 $ tar -zxvf helm-v3.10.3-linux-amd64.tar.gz
 $ sudo cp -rp linux-amd64/helm /usr/local/bin/

 Create a namespace for your ingress resources
 $ kubectl create namespace ingress-basic

 Add the official stable repository
 $ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
 $ helm repo add stable https://charts.helm.sh/stable
 $ helm repo update

 Customizing the Chart Before Installing.
 $ helm show values ingress-nginx/ingress-nginx

 Use Helm to deploy an NGINX ingress controller & Replace with your static IP
 $ helm install ingress-nginx ingress-nginx/ingress-nginx \
   --namespace ingress-basic \
   --set controller.image.allowPrivilegeEscalation=false \
   --set controller.replicaCount=2 \
   --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
   --set defaultBackend.nodeSelector."kubernetes.io/os"=linux \
   --set controller.service.externalTrafficPolicy=Local \
   --set controller.service.loadBalancerIP="51.104.208.183" 

 ex - o/p -
 NOTES:
 The ingress-nginx controller has been installed.
 It may take a few minutes for the LoadBalancer IP to be available.
 You can watch the status by running 
 'kubectl --namespace ingress-basic get services -o wide -w ingress-nginx-controller'

List services with labels


 $ kubectl get service -l app.kubernetes.io/name=ingress-nginx --namespace ingress-basic

 List Pods
 $ kubectl get pods -n ingress-basic
 $ kubectl get all -n ingress-basic


Access the public IP like - http://<Public-IP-created-for-Ingress>
ex - http://51.104.208.183/

Output should be a 404 Not Found page from NGINX.
ex -
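
You can verify the same from the terminal (a quick check; substitute the static IP you created earlier):

 $ curl -i http://51.104.208.183/
 expect an HTTP/1.1 404 Not Found response served by nginx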

Verify the load balancer on the Azure management console -

    Primarily refer to Settings -> Frontend IP Configuration

Setup DNS

First, go to your public IP address and configure it with a DNS record to associate with your public IP address -
- create a DNS zone - sampleapp.com in your resource group
- create an alias record - my.sampleapp.com
- add a DNS name label - sampleapp1, which will become part of the DNS name

The DNS name will look like - http://sampleapp1.uksouth.cloudapp.azure.com/
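
Once the record propagates, confirm that the name resolves to your static public IP (a minimal check; substitute your own label and region):

 $ nslookup sampleapp1.uksouth.cloudapp.azure.com
 the answer should resolve to the static public IP, e.g. 51.104.208.183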


Step-04: Create an ingress route

# INGRESS RESOURCE THAT WILL BE DEPLOYED AS APP LOAD-BALANCER IN AKS ---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: ingress-basic
  name: sampleapp-ing
  annotations:
    # use the addon class below instead of nginx if relying on http_application_routing
    # kubernetes.io/ingress.class: addon-http-application-routing
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    # only needed together with a regex path that captures groups
    # nginx.ingress.kubernetes.io/rewrite-target: /static/$2
spec:
  ingressClassName: nginx
  rules:
# - host: sampleapp1.uksouth.cloudapp.azure.com // <DNSentryofPublicIP>
    - http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: sampleapp-svc
              port:
                number: 80
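
Save the manifest and apply it (the file name sampleapp-ingress.yaml is just an example):

 $ kubectl apply -f sampleapp-ingress.yaml
 $ kubectl get ingress -n ingress-basic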

Test ingress by deploying a sample application


 $ kubectl create deployment sampleapp --image=punitporwal07/sampleapp:3.0 -n ingress-basic

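The ingress above routes traffic to a service named sampleapp-svc, which the deployment alone does not create. A minimal sketch to expose it (assuming the sample app listens on port 80; adjust the port if it listens elsewhere):

 $ kubectl expose deployment sampleapp --name=sampleapp-svc --port=80 -n ingress-basic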

Known issues -


Issue1 -
on running kubectl get pods -n ingress-basic you might observe that the ingress deployment fails to schedule pods, with the following errors in the namespace events -

Error creating: admission webhook "validation.gatekeeper.sh" denied the request:
 [azurepolicy-k8sazurev3noprivilegeescalatio-b740f48149d00f4a06d8] Privilege escalation container is not allowed: controller


Solution1 -
This is due to the fact that if there is an Azure policy attached to the cluster, it will block the creation of privileged containers.

Dashboard > cluster > Policies > look for the below policy; if present, install the controller with the following flag -


 --set controller.image.allowPrivilegeEscalation=false


22 August 2023

Deploy ArgoCD in k8s

ArgoCD is a declarative, GitOps-driven continuous delivery tool designed specifically for Kubernetes.
Unlike traditional tools, ArgoCD uses Git repositories to hold the precise definition of your application's desired state, making it your single source of truth.

𝗪𝗵𝗮𝘁 𝗠𝗮𝗸𝗲𝘀 𝗔𝗿𝗴𝗼𝗖𝗗 𝗦𝗽𝗲𝗰𝗶𝗮𝗹?

𝗔𝘂𝘁𝗼𝗺𝗮𝘁𝗲𝗱 𝗗𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁𝘀: ArgoCD can automatically deploy your applications to your cluster, be it through a Git commit, a CI/CD pipeline trigger, or even a manual request. Think of the time and effort you'll save!

𝗜𝗻𝘀𝘁𝗮𝗻𝘁 𝗢𝗯𝘀𝗲𝗿𝘃𝗮𝗯𝗶𝗹𝗶𝘁𝘆: With its GUI and CLI, ArgoCD lets developers instantly check whether the application's live state aligns with the desired state. It’s like having a crystal ball for your deployments.

𝗣𝗼𝘄𝗲𝗿𝗳𝘂𝗹 𝗢𝗽𝗲𝗿𝗮𝘁𝗶𝗼𝗻𝘀: It's not just about top-level resources. ArgoCD's UI provides a detailed view of the entire application resource hierarchy, including the underlying ReplicaSets and Pods. Want to see Pod logs and Kubernetes events? It's all there in a multi-cluster dashboard.

𝗪𝗵𝘆 𝗔𝗿𝗴𝗼𝗖𝗗?

ArgoCD is more than just a tool, it's a strategy in the chaotic world of DevOps. It helps:

✅ Bring order to the chaos
✅ Reach your DevOps milestones
✅ Save precious time and energy for your team

𝗕𝗲 𝘁𝗵𝗲 𝗩𝗶𝗰𝘁𝗼𝗿 𝗶𝗻 𝗬𝗼𝘂𝗿 𝗗𝗲𝘃𝗢𝗽𝘀 𝗕𝗮𝘁𝘁𝗹𝗲

If your DevOps environment feels overwhelming, ArgoCD might be the tool you've been searching for. It's designed to tame the chaos, streamline operations, and position you as a victor in the ever-challenging DevOps landscape.

Setup ArgoCD in your kubernetes cluster

If you do not already have a cluster up and running, follow the article here to set one up -

https://cloudnetes.blogspot.com/2018/02/launching-kubernetes-cluster-different.html
Launch k8s-cluster


 Create argocd namespace in your cluster
 $ kubectl create namespace argocd


 
 Run the Argo CD install script provided by the project maintainers
 $ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml



 Verify the argocd pods and resources
 $ kubectl get all -n argocd
 $ watch kubectl get pods -n argocd


There are various articles on the internet that suggest port forwarding to access the Argo CD dashboard; instead, try the following basic method of exposing the service -

 
 expose your argocd-server deployment as a NodePort or LoadBalancer type service to access Argo CD
 $ kubectl expose deploy/argocd-server --type=NodePort --name=arg-svc
 $ kubectl expose deploy/argocd-server --type=LoadBalancer --name=argcd-svc

 Alternatively if you are deploying it to a cloud k8s service, use -
 $ kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'

 Fetch the LoadBalancer DNS
 $ export ARGOCD_SERVER=`kubectl get svc argocd-server -n argocd -o json \
   | jq --raw-output '.status.loadBalancer.ingress[0].hostname'`
 $ echo $ARGOCD_SERVER


Access the Argo CD deployment by hitting the public IP and port of the exposed service that you created in the command above.



 Fetch the passwd of argocd from secrets
 $ kubectl get secret argocd-initial-admin-secret -n argocd -o yaml
 $ echo "Q20yMGV0DeMo05WZ0otZg==" | base64 -d
 Cm20eDeMoSNVgJ-f
 
 OR

 $ kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
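
With the password in hand, you can also log in through the argocd CLI instead of the UI (a minimal sketch; $ARGOCD_SERVER is the address exported earlier, and --insecure is only for setups without a trusted TLS certificate):

 $ argocd login $ARGOCD_SERVER --username admin --password <initial-password> --insecure
 $ argocd app list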


Forgot your Argo CD password after you changed it? Patch the secret.

To reset the password, you can remove the 'admin.password' and 'admin.passwordMtime' keys from argocd-secret and restart the API server pod; the password will be reset to the pod name.


 $ kubectl patch secret argocd-secret  -p '{"data": {"admin.password": null, "admin.passwordMtime": null}}' -n argocd
 $ kubectl delete pod/argocd-server-756bbc65c5-zttw4 -n argocd
 
 fetch new pass again -
 $ kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo


Once you are in do the following -
✅ connect repositories
✅ create application (a CLI sketch follows below)
✅ Sync up your deployment
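
For the create-application step, here is a minimal CLI sketch (the repository URL and path are placeholders for your own Git repo of manifests):

 $ argocd app create sampleapp \
   --repo https://github.com/<your-org>/<your-repo>.git \
   --path manifests \
   --dest-server https://kubernetes.default.svc \
   --dest-namespace default
 $ argocd app sync sampleapp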

ex - that is how your apps will be shown once added to Argo CD -



10 August 2023

Understanding the Role of an API Gateway

 The API gateway is a crucial component in managing and routing API requests. Here's how it functions:

Step 1: The client initiates an HTTP request to the API gateway.

Step 2: The API gateway parses and validates the attributes of the HTTP request.

Step 3: It performs allow-list and deny-list checks to ensure the request is authorized.

Step 4: The API gateway communicates with an identity provider for authentication and authorization.

Step 5: Rate-limiting rules are applied. If the request exceeds the limit, it is rejected.

Steps 6 and 7: After passing basic checks, the API gateway routes the request to the appropriate service using path matching.

Step 8: The API gateway transforms the request into the correct protocol and forwards it to the backend microservices.

Steps 9-12: The API gateway manages errors and handles faults that take longer to recover (circuit breaking). It leverages the ELK (Elastic-Logstash-Kibana) stack for logging and monitoring. Sometimes, data is cached in the API gateway for efficiency.
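
To make a few of these steps concrete, here is a minimal sketch of the allow/deny-list check (step 3), rate limiting (step 5), and path-based routing (steps 6 and 7), using NGINX as a stand-in gateway; the upstream addresses, CIDR ranges, and limits are illustrative only:

 # rate-limit bucket keyed on client IP: 10 requests/second, 10 MB of shared state
 limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

 upstream orders_service   { server 10.0.1.10:8080; }
 upstream payments_service { server 10.0.1.20:8080; }

 server {
     listen 80;

     # step 3: allow-list / deny-list checks
     allow 10.0.0.0/8;
     deny  all;

     # step 5: rate limiting - excess requests are rejected with HTTP 429
     limit_req zone=api_limit burst=20 nodelay;
     limit_req_status 429;

     # steps 6-7: path matching routes the request to the appropriate service
     location /orders/   { proxy_pass http://orders_service; }
     location /payments/ { proxy_pass http://payments_service; }
 }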

How to read a SonarQube code analysis report

Here, I've broken down each metric (ideal value in brackets) as a quick byte.

→ Code Coverage Percentage (> 80%):
        Ensures a high proportion of your code is tested, reducing bugs. 

→ Technical Debt Ratio (< 5%):
        Measures how much code needs refactoring for maintainability. 

→ Number of Bugs (Ideally 0):
        Counts coding errors needing fixes for functional integrity. 

→ Security Vulnerabilities (Minimal):
        Identifies potential security risks needing attention. 

→ Code Smells Count (Minimal):
        Detects 'smelly' code that may need improvement for better readability. 

→ Duplications Percentage (< 3%):
        Highlights repeated code blocks that should be simplified. 

→ Security Hotspots Reviewed (100%):
        Ensures all potential security risks are examined. 

→ Complexity Metrics (Cyclomatic Complexity < 10):
        Evaluates how complicated the code is, aiming for simplicity. 

→ Coding Rules Compliance (Close to 100%):
        Shows adherence to set coding standards for quality. 

→ Quality Gate Status (Passed):
        Indicates the overall health of the codebase, based on set criteria.