07 February 2025

Reverse proxy in HAProxy

credits - https://www.haproxy.com/glossary/what-is-a-reverse-proxy 
A common scenario: you want clients to connect to HAProxy using a hostname that's also listed as a SAN in
HAProxy's certificate, and HAProxy should forward the traffic to a specific backend server. This is a perfectly valid use case, but it's important to understand the details.

Here's how you can achieve this:

1. Configure HAProxy:

vi /etc/haproxy/haproxy.cfg

frontend my_frontend
    bind *:443 ssl crt /path/to/your/certificate.pem
    acl is_for_backend1 hdr(host) -i backend1.example.com  # Match the hostname/SAN
    use_backend backend1 if is_for_backend1

backend backend1
    server backend1_server 192.168.1.10:80 check  # Your backend server
  • frontend my_frontend: Defines the frontend that listens for incoming connections.
  • bind *:443 ssl crt /path/to/your/certificate.pem: Binds to port 443 (HTTPS) and specifies the path to your certificate. Crucially, this certificate must have backend1.example.com as a SAN entry.
  • acl is_for_backend1 hdr(host) -i backend1.example.com: This Access Control List (ACL) checks the Host header of the incoming HTTP request. It matches if the Host header is backend1.example.com (case-insensitive).
  • use_backend backend1 if is_for_backend1: Directs traffic to the backend1 backend only if the is_for_backend1 ACL matches (i.e., the Host header is backend1.example.com).
  • backend backend1: Defines the backend server.
  • server backend1_server 192.168.1.10:80 check: Specifies the IP address and port of your backend server.
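
Before reloading HAProxy, you can validate the configuration (a quick sanity check; the file path and systemd service name below are the common defaults):

 $ haproxy -c -f /etc/haproxy/haproxy.cfg
 $ sudo systemctl reload haproxy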

2. DNS Configuration:

You need to configure your DNS so that backend1.example.com resolves to the IP address of your HAProxy server. This is how clients will be able to connect to HAProxy using that hostname.

3. Certificate:

Your SSL certificate must have backend1.example.com listed as a Subject Alternative Name (SAN). This is essential because when a client connects to backend1.example.com, HAProxy will present this certificate. The client will then verify that backend1.example.com is in the certificate's SAN list. If it's not, the client will get a certificate error.
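
To confirm the SAN entries, you can inspect the certificate with openssl (using the path from the config above):

 $ openssl x509 -in /path/to/your/certificate.pem -noout -text | grep -A1 'Subject Alternative Name'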

How it Works:

  1. Client makes a request to https://backend1.example.com.
  2. DNS resolves backend1.example.com to HAProxy's IP address.
  3. Client connects to HAProxy over HTTPS.
  4. HAProxy presents its certificate (which has backend1.example.com as a SAN).
  5. Client verifies the certificate.
  6. HAProxy checks the Host header of the request.
  7. Because the Host header is backend1.example.com, the is_for_backend1 ACL matches.
  8. HAProxy forwards the request to the backend1 backend server (192.168.1.10:80).
  9. The backend server processes the request and sends the response back to HAProxy.
  10. HAProxy forwards the response back to the client.
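
You can exercise this whole flow with curl even before DNS is in place; the --resolve flag pins the hostname to HAProxy's IP (203.0.113.10 is a placeholder - substitute your HAProxy server's address):

 $ curl -v --resolve backend1.example.com:443:203.0.113.10 https://backend1.example.com/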

Key Points:

  • SAN is essential: The SAN in the certificate is the critical piece that allows the client to trust the connection to backend1.example.com when it's terminated at HAProxy.
  • Host header matching: The ACL ensures that HAProxy only forwards traffic to the correct backend when the client uses the specific hostname.
  • DNS is crucial: DNS must be configured correctly so that the hostname resolves to HAProxy's IP address.

This setup allows you to use a specific hostname (backend1.example.com) that's associated with a particular backend server, even though the connection is terminated at HAProxy. This is a very common pattern for load balancing and reverse proxying.

15 October 2024

Secret detection in GitLab

This article will help you enable secret detection in GitLab, a.k.a. GitLeaks.

Once you have your GitLab project created, you need the following file structure in it, as highlighted below -

ref - https://docs.gitlab.com/ee/user/application_security/secret_detection/pipeline/index.html#detecting-complex-strings 


 # .gitlab-ci.yml

 # See https://docs.gitlab.com/ee/ci/variables/#cicd-variable-precedence

 include:
   - template: Jobs/Secret-Detection.gitlab-ci.yml
 secret_detection:
   variables:
     SECRETS_ANALYZER_VERSION: "4.5"


 # .gitlab/secret-detection-ruleset.toml
 # https://docs.gitlab.com/ee/user/application_security/secret_detection/pipeline/index.html#create-a-ruleset-configuration-file

 [secrets]
   [[secrets.passthrough]]
     type = "file"
     target = "gitleaks.toml"
     value = "extended-gitleaks-config.toml"



 # extended-gitleaks-config.toml

 # See https://docs.gitlab.com/ee/user/application_security/secret_detection/pipeline/index.html#detecting-complex-strings

 [extend]
 # Extends default packaged ruleset, NOTE: do not change the path.
 path = "/gitleaks.toml"

 [[rules]]
   description = "Generic Password Rule"
   id = "generic-password"
   regex = '''(?i)(?:pwd|passwd|password)(?:[0-9a-z\-_\t .]{0,20})(?:[\s|']|[\s|"]){0,3}(?:=|>|=:|:{1,3}=|\|\|:|<=|=>|:|\?=)(?:'|\"|\s|=|\x60){0,5}([0-9a-z\-_.=\S_]{3,50})(?:['|\"|\n|\r|\s|\x60|;]|$)'''
   entropy = 3.5
   keywords = ["pwd", "passwd", "password"]

 [[rules]]
   description = "Generic Complex Rule"
   id = "COMPLEX_PASSWORD"
   regex = '''(?i)(?:key|api|token|secret|client|passwd|password|PASSWORD|auth|access)(?:[0-9a-z\-_\t .]{0,20})(?:[\s|']|[\s|"]){0,3}(?:=|>|:{1,3}=|\|\|:|<=|=>|:|\?=)(?:'|\"|\s|=|\x60){0,5}([0-9a-z\-_.=]{10,150})(?:['|\"|\n|\r|\s|\x60|;]|$)'''
   severity = "high"

More complex rules can be referred to here - https://github.com/gitleaks/gitleaks/blob/master/config/gitleaks.toml
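
You can also test the extended ruleset locally before pushing, assuming the gitleaks CLI (v8 syntax) is installed on your workstation:

 scan the current repo with the extended config
 $ gitleaks detect --source . --config extended-gitleaks-config.toml -v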

Once you have the above structure in place, enable the GitLab Runner and invoke the pipeline.

In case the runner is not registered, do the following within your project 

ref - https://docs.gitlab.com/runner/register/ 

Navigate to GitLab project > settings > CI/CD > expand Runner tab 

Click on the three dots and retrieve the registration token.

Log in to your GitLab-Runner VM and register your project.


 $ gitlab-runner register

 Runtime platform                                    arch=amd64 os=linux pid=927974 revision=853330f9 version=16.5.0
 Running in system-mode.

 Enter the GitLab instance URL (for example, https://gitlab.com/):
 https://gitlab.company.domain.com/
 Enter the registration token:
 GR13489416BxxxxAG2y_9-ysB_tdR
 Enter a description for the runner:
 [gitrunnerinstance01]: myproject-runner
 Enter tags for the runner (comma-separated):

 Enter optional maintenance note for the runner:

 WARNING: Support for registration tokens and runner parameters in the 'register' command has been deprecated in GitLab Runner 15.6
 and will be replaced with support for authentication tokens. For more information, see https://docs.gitlab.com/ee/ci/runners/new_creation_workflow
 Registering runner... succeeded                     runner=GR13489416BtsyuAG
 Enter an executor: shell, ssh, docker-autoscaler, docker+machine, kubernetes, custom, docker, docker-windows, parallels, virtualbox, instance:
 docker
 Enter the default Docker image (for example, ruby:2.7):
 gcr.io/kaniko-project/executor

 Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!

 Configuration (with the authentication token) was saved in "/etc/gitlab-runner/config.toml" 
 

Now that the runner is registered with your project, the CI/CD pipeline will be invoked on your next commit. If your project contains any secrets, they will be detected in the pipeline, and a job artifact will be generated in the form of JSON listing the detected leaks, as shown in the screenshot below -


leak report showing detected secrets -


gl-secret-detection-report.json can be downloaded by navigating through Jobs > Artifacts > gl-secret-detection-report.json
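
A quick way to read the report from a terminal (a sketch, assuming jq is installed and the report follows the standard GitLab secret-detection schema with a vulnerabilities array):

 list each finding with its file and line
 $ jq '.vulnerabilities[] | {name, file: .location.file, line: .location.start_line}' gl-secret-detection-report.json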

That is how secrets in the code are detected using GitLeaks once the code is pushed to the source repository, GitLab.

27 September 2024

Generative AI

Generative AI is a subset of artificial intelligence (AI) focused on generating new, synthetic data that
resembles existing data. This technology has revolutionized various fields, including art, music, writing, and more. Let's dive into the world of generative AI:

Fundamentals of Generative AI

1. Generative Models: These models learn patterns and structures in data and generate new data that resembles the original. Examples include Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs).
2. Neural Networks: The backbone of generative AI, neural networks are composed of layers of interconnected nodes (neurons) that process and transform inputs.
3. Deep Learning: A subset of machine learning, deep learning uses neural networks with multiple layers to learn complex patterns in data.

Types of Generative AI Models

1. Generative Adversarial Networks (GANs): Consist of two neural networks: a generator and a discriminator. The generator creates new data, while the discriminator evaluates its authenticity.
2. Variational Autoencoders (VAEs): Comprise an encoder and a decoder. The encoder maps input data to a latent space, while the decoder generates new data from this space.
3. Transformers: Introduced for natural language processing, transformers have been adapted for generative tasks, such as image and music generation.

Applications of Generative AI

1. Art and Design: Generative AI creates stunning artwork, product designs, and architectural concepts.
2. Music and Audio: AI-generated music, sound effects, and voiceovers are transforming the audio industry.
3. Writing and Storytelling: AI-powered tools assist with writing, editing, and even generating entire stories.
4. Healthcare and Medicine: Generative AI helps with medical imaging, drug discovery, and personalized treatment plans.

Getting Started with Generative AI

1. Learn the Basics: Understand the fundamentals of neural networks, deep learning, and generative models.
2. Choose a Framework: Select a deep learning framework like TensorFlow, PyTorch, or Keras.
3. Experiment with Pre-Built Models: Utilize pre-trained models and fine-tune them for your specific use case.
4. Join Online Communities: Participate in forums like Reddit's r/MachineLearning and r/GenerativeAI to stay updated and learn from others.


Understanding Generative AI: The Creative Powerhouse

Generative AI represents a monumental leap in the field of artificial intelligence. Unlike traditional AI, which focuses on analyzing and learning from data to make predictions or classifications, Generative AI is designed to create. Whether it's producing human-like text, generating realistic images, or composing music, Generative AI systems harness deep learning models to craft new, original content.

How Does It Work?

At its core, Generative AI leverages models such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs). These models learn the underlying patterns and structures of the input data, allowing them to generate new content that is remarkably similar to the original data but entirely new.

For instance, a GAN consists of two neural networks: the generator and the discriminator. The generator creates new data instances, while the discriminator evaluates them. The interplay between these two networks results in highly refined outputs that can be indistinguishable from real data.

Real-World Applications

Generative AI has found its place in numerous fields:
- Art and Design: Artists and designers use generative models to create stunning visuals, designs, and even fashion items.
- Content Creation: Tools powered by Generative AI can write articles, generate marketing copy, and even produce poetry.
- Healthcare: In medical research, Generative AI helps synthesize realistic medical images for training and diagnostic purposes.
- Entertainment: The gaming and film industries use generative models to create realistic characters, landscapes, and scenes.

Why It Matters

Generative AI is a technological marvel and a catalyst for creativity and innovation. It empowers individuals and industries to push the boundaries of what's possible, making creativity more accessible and automated.

What should be the strategic roadmap for a business adopting AI?

What are LLMs?

Large Language Models (LLMs) are a type of artificial intelligence designed to understand and generate human-like text based on vast amounts of data. Imagine a really smart assistant that can read and write at a high level across many topics. It learns from an extensive range of text data, such as books, websites, and articles, so it can predict what comes next in a sentence or provide detailed answers to questions.

In Simple Terms:

- Learning from Text: LLMs are trained on huge datasets of written material, which help them understand language patterns.

- Text Generation: They can create new text that is coherent and contextually relevant, like writing essays, poems, or even computer code.

- Versatility: They can perform a variety of language-related tasks, such as translating languages, summarizing articles, or holding conversations.

An LLM is like a supercharged text tool that leverages its training to assist with a wide range of language tasks.


27 October 2023

Testing Ingress in Azure Kubernetes Service

If you are familiar with the Kubernetes ecosystem, you know we can use NGINX as an ingress controller to expose Kubernetes services on a static external/public IP address. Azure Kubernetes Service provides the solution. In AKS, you have the option of installing your NGINX ingress controller (using Helm) on an internal network. This ensures that your resources are only accessible on your internal network and that they can be accessed through an ExpressRoute or a VPN connection.

The process of setting up Ingress in AKS can be summarized in the following steps:

Step-01 - Create an AKS cluster and connect through a bastion host
Step-02 - Set up ingress and create a static public IP
Step-03 - Install the Ingress Controller using Helm
Step-04 - Create an ingress route


Step-01: Create an AKS Cluster from the Bastion Host.

Pre-requisites
 - Verify you have the Microsoft.OperationsManagement and Microsoft.OperationalInsights providers registered on your subscription.
   (These Azure resource providers are required to support Container insights)
 - A resource group under which the AKS cluster will be deployed.
 - Enable the addon http_application_routing in case you do not have HTTPS enabled.


 $ az provider show -n Microsoft.OperationsManagement -o table
 $ az provider show -n Microsoft.OperationalInsights -o table

 Enable http_application_routing
 $ az aks enable-addons --resource-group test-networking-uksouth \
   --name aks-cluster --addons http_application_routing

 $ az provider show -n Microsoft.ContainerService -o table

Presuming you have a resource group in place, proceed to create an AKS cluster now -


 $ az aks create -g test-networking-uksouth -n aks-cluster \
   --enable-managed-identity \
   --node-count 1 \
   --enable-addons monitoring \
   --enable-msi-auth-for-monitoring \
   --generate-ssh-keys


Next connect to AKS Cluster from Bastion Host -


 $ az login

 set the cluster subscription
 $ az account set --subscription bf687f81-fad1-48cb-b38c-b6e9b00a5cfe

 download cluster credentials
 $ az aks get-credentials --resource-group test-networking-uksouth --name aks-cluster

 install Kubectl
 $ curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/linux/amd64/kubectl
 $ chmod +x kubectl && sudo mv kubectl /usr/bin/
 $ kubectl cluster-info

Setup Ingress 

Step-02 Create a static public IP -


 Get the resource group name of the AKS cluster that you need to update in the subsequent command
 $ az aks show --resource-group test-networking-uksouth --name aks-cluster --query nodeResourceGroup -o tsv

 Create a public IP address with static allocation and
 associate it with the ingress load balancer
 $ az network public-ip create \
   --resource-group MC_test-networking-uksouth_aks-cluster_uksouth \
   --name myAKSPublicIPForIngress \
   --sku Standard \
   --allocation-method static \
   --query publicIp.ipAddress -o tsv

 ex - o/p - 51.104.208.183

Step-03: Install Ingress Controller using helm


 Install Helm3 (if not installed)

 $ wget https://get.helm.sh/helm-v3.10.3-linux-amd64.tar.gz
 $ tar -zxvf helm-v3.10.3-linux-amd64.tar.gz
 $ sudo cp -rp linux-amd64/helm /usr/local/bin/

 Create a namespace for your ingress resources
 $ kubectl create namespace ingress-basic

 Add the official stable repository
 $ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
 $ helm repo add stable https://charts.helm.sh/stable
 $ helm repo update

 Customizing the Chart Before Installing.
 $ helm show values ingress-nginx/ingress-nginx

 Use Helm to deploy an NGINX ingress controller & Replace with your static IP
 $ helm install ingress-nginx ingress-nginx/ingress-nginx \
   --namespace ingress-basic \
   --set controller.image.allowPrivilegeEscalation=false \
   --set controller.replicaCount=2 \
   --set controller.nodeSelector."kubernetes\.io/os"=linux \
   --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
   --set controller.service.externalTrafficPolicy=Local \
   --set controller.service.loadBalancerIP="51.104.208.183" 

 ex - o/p -
 NOTES:
 The ingress-nginx controller has been installed.
 It may take a few minutes for the LoadBalancer IP to be available.
 You can watch the status by running 
 'kubectl --namespace ingress-basic get services -o wide -w ingress-nginx-controller'

List services with labels


 $ kubectl get service -l app.kubernetes.io/name=ingress-nginx --namespace ingress-basic

 List Pods
 $ kubectl get pods -n ingress-basic
 $ kubectl get all -n ingress-basic


Access the Public IP, like http://<Public-IP-created-for-Ingress>
ex - http://51.104.208.183/

The output should be a 404 Not Found from Nginx, since no ingress route is defined yet.
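
You can confirm this from a terminal with curl; -I fetches only the response headers (IP from the example above):

 $ curl -I http://51.104.208.183/

 ex - o/p - HTTP/1.1 404 Not Found
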
Verify Load Balancer on Azure Mgmt Console

    Primarily refer Settings -> Frontend IP Configuration

Setup DNS

First, go to your public IP address resource and configure a DNS record to associate with it:
- create a DNS zone - sampleapp.com in your resource group
- create an alias record - my.sampleapp.com
- add a DNS name label - sampleapp1, which will become the DNS name

The DNS name will look like - http://sampleapp1.uksouth.cloudapp.azure.com/
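
The DNS name label can also be set from the CLI (a sketch; the resource group and IP name are taken from the earlier step):

 $ az network public-ip update \
   --resource-group MC_test-networking-uksouth_aks-cluster_uksouth \
   --name myAKSPublicIPForIngress \
   --dns-name sampleapp1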

Step-04: Create an ingress route

# INGRESS RESOURCE THAT WILL BE DEPLOYED AS APP LOAD-BALANCER IN AKS ---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: ingress-basic
  name: sampleapp-ing
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    # kubernetes.io/ingress.class: addon-http-application-routing  # use instead of ingressClassName below when relying on the HTTP application routing addon
    # nginx.ingress.kubernetes.io/rewrite-target: /static/$2  # only valid with a regex path that defines capture group $2
spec:
  ingressClassName: nginx
  rules:
# - host: sampleapp1.uksouth.cloudapp.azure.com // <DNSentryofPublicIP>
    - http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: sampleapp-svc
              port:
                number: 80
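
Save the manifest and apply it (the file name here is just an example):

 $ kubectl apply -f sampleapp-ing.yaml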

Test ingress by deploying a sample application


 $ kubectl create deployment sampleapp --image=punitporwal07/sampleapp:3.0 -n ingress-basic
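
The ingress above routes to a service named sampleapp-svc, which the deployment alone does not create; a minimal sketch (assuming the container listens on port 80 - adjust --port/--target-port if not):

 $ kubectl expose deployment sampleapp --name=sampleapp-svc --port=80 -n ingress-basic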


Known issues -


Issue1 -
On running kubectl get pods -n ingress-basic, you might observe that the ingress deployment fails to schedule pods, with the following errors in the namespace events -

Error creating: admission webhook "validation.gatekeeper.sh" denied the request:
 [azurepolicy-k8sazurev3noprivilegeescalatio-b740f48149d00f4a06d8] Privilege escalation container is not allowed: controller

Solution1 -
This is because, when an Azure Policy is attached to the cluster, it blocks the creation of containers that allow privilege escalation.

Dashboard > cluster > Policies > look for the below policy; if present, install the controller with the following flag -


 --set controller.image.allowPrivilegeEscalation=false
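
If the controller is already installed, the same flag can be applied in place with a helm upgrade (a sketch, reusing the release name and namespace from Step-03):

 $ helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
   --namespace ingress-basic \
   --reuse-values \
   --set controller.image.allowPrivilegeEscalation=false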

22 August 2023

Deploy ArgoCD in k8s

ArgoCD is a declarative, GitOps-driven continuous delivery tool designed specifically for Kubernetes.
Unlike traditional tools, ArgoCD uses Git repositories to hold the precise definition of your application's desired state, making it your single source of truth.

𝗪𝗵𝗮𝘁 𝗠𝗮𝗸𝗲𝘀 𝗔𝗿𝗴𝗼𝗖𝗗 𝗦𝗽𝗲𝗰𝗶𝗮𝗹?

𝗔𝘂𝘁𝗼𝗺𝗮𝘁𝗲𝗱 𝗗𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁𝘀: ArgoCD can automatically deploy your applications to your cluster, be it through a Git commit, a CI/CD pipeline trigger, or even a manual request. Think of the time and effort you'll save!

𝗜𝗻𝘀𝘁𝗮𝗻𝘁 𝗢𝗯𝘀𝗲𝗿𝘃𝗮𝗯𝗶𝗹𝗶𝘁𝘆: With its GUI and CLI, ArgoCD lets developers instantly check whether the application's live state aligns with the desired state. It’s like having a crystal ball for your deployments.

Powerful Operations: It's not just about top-level resources. ArgoCD's UI provides a detailed view of the entire application resource hierarchy, including the underlying ReplicaSets and Pods. Want to see Pod logs and Kubernetes events? It’s all there in a multi-cluster dashboard.

𝗪𝗵𝘆 𝗔𝗿𝗴𝗼𝗖𝗗?

ArgoCD is more than just a tool, it's a strategy in the chaotic world of DevOps. It helps:

✅ Bring order to the chaos
✅ Reach your DevOps milestones
✅ Save precious time and energy for your team

𝗕𝗲 𝘁𝗵𝗲 𝗩𝗶𝗰𝘁𝗼𝗿 𝗶𝗻 𝗬𝗼𝘂𝗿 𝗗𝗲𝘃𝗢𝗽𝘀 𝗕𝗮𝘁𝘁𝗹𝗲

If your DevOps environment feels overwhelming, ArgoCD might be the tool you've been searching for. It's designed to tame the chaos, streamline operations, and position you as a victor in the ever-challenging DevOps landscape.

Setup ArgoCD in your kubernetes cluster

If you do not have a cluster up and running already, follow the article here to set one up -

https://cloudnetes.blogspot.com/2018/02/launching-kubernetes-cluster-different.html
Launch k8s-cluster


 Create argocd namespace in your cluster
 $ kubectl create namespace argocd


 
 Run the Argo CD install script provided by the project maintainers
 $ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml



 Verify the argocd pods and resources
 $ kubectl get all -n argocd
 $ watch kubectl get pods -n argocd


There are various articles on the internet that suggest port forwarding to access the Argo CD dashboard, which can be misleading; instead, try the following basic method -

 
 expose your argocd-server deployment as a NodePort or LoadBalancer type service to access Argo CD
 $ kubectl expose deploy/argocd-server --type=NodePort --name=arg-svc -n argocd
 $ kubectl expose deploy/argocd-server --type=LoadBalancer --name=argcd-svc -n argocd

 Alternatively if you are deploying it to a cloud k8s service, use -
 $ kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'

 Fetch the LoadBalancer DNS
 $ export ARGOCD_SERVER=`kubectl get svc argocd-server -n argocd -o json \
   | jq --raw-output '.status.loadBalancer.ingress[0].hostname'`
 $ echo $ARGOCD_SERVER


Access the Argo CD dashboard by hitting the public IP and port of the exposed service that you created in the command above.



 Fetch the password of argocd from secrets
 $ kubectl get secret argocd-initial-admin-secret -n argocd -o yaml
 $ echo "Q20yMGV0DeMo05WZ0otZg==" | base64 -d
 Cm20eDeMoSNVgJ-f
 
 OR

 $ kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
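
With the password in hand, you can also log in with the argocd CLI (a sketch, assuming the CLI is installed and ARGOCD_SERVER was exported above; replace <initial-password> with the value fetched):

 $ argocd login $ARGOCD_SERVER --username admin --password <initial-password> --insecure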


Forgot your Argo CD password after you changed it? Patch the secret.

To reset the password, remove the 'admin.password' and 'admin.passwordMtime' keys from argocd-secret and restart the API server pod. The password will be reset to the pod name.


 $ kubectl patch secret argocd-secret  -p '{"data": {"admin.password": null, "admin.passwordMtime": null}}' -n argocd
 $ kubectl delete pod/argocd-server-756bbc65c5-zttw4 -n argocd
 
 fetch the new password again -
 $ kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo


Once you are in, do the following -
✅ connect repositories
✅ create application
✅ Sync up your deployment
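
The same steps can be scripted with the argocd CLI (a sketch; the repository URL, path, and app name below are hypothetical placeholders):

 connect a repository
 $ argocd repo add https://github.com/your-org/your-app.git

 create an application from a path in that repo
 $ argocd app create sampleapp \
   --repo https://github.com/your-org/your-app.git \
   --path manifests \
   --dest-server https://kubernetes.default.svc \
   --dest-namespace default

 sync up your deployment
 $ argocd app sync sampleapp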

ex - that's how your apps will be shown once added to Argo CD