Nov 22, 2024
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster, and traffic routing is controlled by rules defined on the Ingress resource. In this walkthrough we create a Kubernetes cluster on AWS with kops, install cert-manager together with the GlobalSign Atlas issuer, and finish with a simple example where an Ingress sends all of its traffic to one service over HTTPS.
#Create an IAM user for kops with the following permissions from the IAM console (https://console.aws.amazon.com/iamv2/home#/home); a CLI sketch follows the list.
a) VPC full access
b) EC2 full access
c) S3 full access
d) Route53 full access
e) IAM full access
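If you prefer the CLI, a minimal sketch of the same setup looks like this (the user name kops is just an example; the policy ARNs are the AWS-managed full-access policies listed above):
$aws iam create-user --user-name kops
$aws iam attach-user-policy --user-name kops --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess
$aws iam attach-user-policy --user-name kops --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
$aws iam attach-user-policy --user-name kops --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
$aws iam attach-user-policy --user-name kops --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess
$aws iam attach-user-policy --user-name kops --policy-arn arn:aws:iam::aws:policy/IAMFullAccess
#Create access keys for the new user; these are used by 'aws configure' below.
$aws iam create-access-key --user-name kops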
#Download the latest release with the command:
$curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
#Make the downloaded file executable
$chmod +x kubectl
#Move the executable to /usr/local/bin
$sudo mv kubectl /usr/local/bin
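To confirm the install, check the client version:
$kubectl version --client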
#Download the latest release with the command:
$curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
#Make the binary executable
$chmod +x kops-linux-amd64
#Move the executable to /usr/local/bin
$sudo mv kops-linux-amd64 /usr/local/bin/kops
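Verify that the kops binary works:
$kops version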
$aws configure
#Enter the Access key ID and Secret access key of the IAM user created earlier.
#Provide the default region, e.g., us-east-1 (the region used throughout this guide).
#Give output format as "json".
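As a quick sanity check, confirm the CLI is authenticated as the expected user:
$aws sts get-caller-identity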
#Create an S3 bucket (used as the kops state store) through the S3 console: https://s3.console.aws.amazon.com/s3/home?region=us-east-1
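Alternatively, a CLI sketch of the same step, naming the bucket to match the KOPS_STATE_STORE used below and enabling versioning (recommended for kops state stores):
$aws s3api create-bucket --bucket pki.atlasqa.co.uk --region us-east-1
$aws s3api put-bucket-versioning --bucket pki.atlasqa.co.uk --versioning-configuration Status=Enabled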
#Create a hosted zone for pki.atlasqa.co.uk from the Route53 console (the cluster below uses public DNS): https://console.aws.amazon.com/route53/v2/home#Dashboard
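A CLI sketch of the same step (the caller reference only needs to be a unique string):
$aws route53 create-hosted-zone --name pki.atlasqa.co.uk --caller-reference $(date +%s)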
#Generate public and private keys.
$ssh-keygen
#Everything is set up. Now let's create the cluster with the commands below.
$export KOPS_STATE_STORE="s3://pki.atlasqa.co.uk"
$export MASTER_SIZE=${MASTER_SIZE:-m4.large}
$export NODE_SIZE=${NODE_SIZE:-m4.large}
$export ZONES="us-east-1a,us-east-1b,us-east-1c"
$kops create cluster pki.atlasqa.co.uk --node-count 3 --zones $ZONES --node-size $NODE_SIZE --master-size $MASTER_SIZE --master-zones $ZONES --dns public --dns-zone pki.atlasqa.co.uk --cloud aws
#This prints a preview of everything kops is going to create for the cluster. In the next step, kops updates the cluster and the resources are actually created.
$kops update cluster --name pki.atlasqa.co.uk --yes --admin
#It takes around 20 minutes for all the resources within the cluster to become ready.
#Check the cluster status with the command below.
$kops validate cluster --name pki.atlasqa.co.uk
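Once validation passes, the nodes should also be visible through kubectl:
$kubectl get nodes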
The cluster is now set up: three worker nodes and three masters are running in the us-east-1 region, spread across the specified availability zones.
#Add the Jetstack Helm repository.
$helm repo add jetstack https://charts.jetstack.io
#Update your local Helm chart repository cache.
$helm repo update
#Install the CustomResourceDefinition resources separately.
$kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.crds.yaml
#Install the cert-manager using helm.
$helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.8.0
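Before continuing, make sure the cert-manager pods are running:
$kubectl get pods -n cert-manager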
#Next, install the Atlas controller and CRDs:
$kubectl apply -f https://github.com/globalsign/atlas-cert-manager/releases/download/v0.0.1/install.yaml
The controller is deployed and ready to handle Atlas requests.
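As a quick check, look for the controller pod; the namespace and pod names come from install.yaml, and this assumes they contain "atlas":
$kubectl get pods -A | grep -i atlas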
#Label the cert-manager namespace to disable resource validation.
$kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true
#Install the NGINX ingress controller into the cert-manager namespace.
$helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace cert-manager
#List the services to find the ingress-nginx controller's external load balancer address.
$kubectl get svc -n cert-manager
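Assuming the chart's default Service name (ingress-nginx-controller), the load balancer hostname can also be read directly and then pointed at from DNS (for example a Route53 CNAME or alias record for pki.atlasqa.co.uk):
$kubectl get svc ingress-nginx-controller -n cert-manager -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'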
Create a secret to store the Atlas account API key and API secret, along with the mTLS certificate and private key:
$kubectl create secret generic issuer-credentials --from-literal=apikey=$API_KEY --from-literal=apisecret=$API_SECRET --from-literal=cert="$(cat mTLS.pem)" --from-literal=certkey="$(cat privatekey.pem)" -n cert-manager
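Here $API_KEY and $API_SECRET are assumed to already be exported in your shell, and mTLS.pem / privatekey.pem are the mTLS certificate and key issued for the Atlas account. Confirm the secret exists:
$kubectl get secret issuer-credentials -n cert-manager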
Now create an Atlas Issuer resource that references the credentials secret.
issuer.yaml
apiVersion: hvca.globalsign.com/v1alpha1
kind: Issuer
metadata:
  name: gs-issuer
  namespace: cert-manager
spec:
  authSecretName: "issuer-credentials"
  url: "https://emea.api.hvca.globalsign.com:8443/v2"
$kubectl apply -f issuer.yaml
Next, request a certificate for the domain from the Atlas issuer.
cert.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: pki.atlasqa.co.uk
  namespace: cert-manager
spec:
  # Secret names are always required.
  secretName: www.atlasqa.co.uk
  duration: 2160h # 90d
  renewBefore: 360h # 15d
  # subject:
  #   organizations:
  #     - jetstack
  # The use of the common name field has been deprecated since 2000 and is
  # discouraged from being used.
  commonName: pki.atlasqa.co.uk
  isCA: false
  privateKey:
    algorithm: RSA
    encoding: PKCS1
    size: 2048
  usages:
    - server auth
    # - client auth
  # At least one of a DNS Name, URI, or IP address is required.
  # dnsNames:
  #   - www.atlasqa.co.uk
  # Issuer references are always required.
  issuerRef:
    name: gs-issuer
    # We can reference ClusterIssuers by changing the kind here.
    # The default value is Issuer (i.e. a locally namespaced Issuer).
    kind: Issuer
    # This is optional since cert-manager will default to this value; however,
    # if you are using an external issuer, change this to that issuer group.
    group: hvca.globalsign.com
$kubectl apply -f cert.yaml
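Watch for the Certificate to become Ready; when it does, the issued certificate and key are stored in the www.atlasqa.co.uk secret referenced above:
$kubectl get certificate -n cert-manager
$kubectl describe certificate pki.atlasqa.co.uk -n cert-manager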
Finally, create an Ingress that routes traffic for the host to a backend service (example-service here is a placeholder for the service you want to expose) and terminates TLS with the issued certificate; note that the cert-manager.io/issuer annotation must match the Issuer name gs-issuer.
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  namespace: cert-manager
  annotations:
    cert-manager.io/issuer: gs-issuer
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts:
        - pki.atlasqa.co.uk
      secretName: www.atlasqa.co.uk
  rules:
    - host: pki.atlasqa.co.uk
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 80
$kubectl apply -f ingress.yaml
Check your certificate installation for SSL issues and vulnerabilities.
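For example, confirm the Ingress was admitted and then inspect the certificate chain that is actually served (this assumes DNS for pki.atlasqa.co.uk points at the ingress load balancer):
$kubectl get ingress -n cert-manager
$openssl s_client -connect pki.atlasqa.co.uk:443 -servername pki.atlasqa.co.uk < /dev/null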