CD

This is a guide to CD and CI tooling.

Introduction

1. ArgoCD <--- OK
2. FluxCD <--- not recommended
3. JenkinsX <--- not recommended
4. Aliyun
5. GitlabCI <--- OK
6. Tekton <--- based on k8s, more complicated than the others


Prometheus
Kustomize <--- OK

Rancher <--- OK
k3s/AutoK3s <--- OK

K3s

A lightweight Kubernetes distribution, similar to minikube.

K3s

Install

#https://docs.k3s.io/quick-start
#https://github.com/k3s-io/k3s/issues/1160
#Hostnames must be resolvable via DNS, or add them to the /etc/hosts file on all machines:
#cat /etc/hosts:
192.168.101.82 dave-PC
192.168.109.50 dave-analysis-server
192.168.109.52 peter-analysis-server
192.168.109.53 eino-analysis-server

#Install Master:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable traefik" sh
#In China:
curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_EXEC="server --disable traefik" INSTALL_K3S_MIRROR=cn sh -

cp -a /etc/rancher/k3s/k3s.yaml ~/.kube/config
sed -i 's;127.0.0.1;192.168.101.82;g' ~/.kube/config
kubectl get all -A -o wide

#Install Node:
#K3S_TOKEN is located in Master Server: /var/lib/rancher/k3s/server/node-token
curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | \
INSTALL_K3S_EXEC="--node-ip=192.168.109.50 --flannel-iface=eth1" \
INSTALL_K3S_MIRROR=cn K3S_URL=https://192.168.101.82:6443 K3S_NODE_NAME=dave-analysis-server \
K3S_TOKEN=K10944a3ff29886d4bc05ba79fa6f8a8504bca00e35231e397f9242d7af6724d16c::server:32d4961ba4b4ea871855e856d0ccef06 sh -

curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | \
INSTALL_K3S_EXEC="--node-ip=192.168.109.52 --flannel-iface=eth1" \
INSTALL_K3S_MIRROR=cn K3S_URL=https://192.168.101.82:6443 K3S_NODE_NAME=peter-analysis-server \
K3S_TOKEN=K10944a3ff29886d4bc05ba79fa6f8a8504bca00e35231e397f9242d7af6724d16c::server:32d4961ba4b4ea871855e856d0ccef06 sh -

curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | \
INSTALL_K3S_EXEC="--node-ip=192.168.109.53 --flannel-iface=eth1" \
INSTALL_K3S_MIRROR=cn K3S_URL=https://192.168.101.82:6443 K3S_NODE_NAME=eino-analysis-server \
K3S_TOKEN=K10944a3ff29886d4bc05ba79fa6f8a8504bca00e35231e397f9242d7af6724d16c::server:32d4961ba4b4ea871855e856d0ccef06 sh -

#cloudeon(Just a memo, ignore this)
#https://cloudeon.top/
docker run -d -p 7700:7700 --name cloudeon \
-v /data/cloudeon/application.properties:/usr/local/cloudeon/cloudeon-assembly/conf/application.properties \
registry.cn-hangzhou.aliyuncs.com/udh/cloudeon:v1.3.0

#https://blog.thenets.org/how-to-create-a-k3s-cluster-with-nginx-ingress-controller/
#https://blog.csdn.net/weixin_45444133/article/details/116952250
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.5/deploy/static/provider/cloud/deploy.yaml
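
After the nodes join, it is worth verifying the cluster and the ingress controller before moving on. A quick check, assuming the kubeconfig copied above:

kubectl get nodes -o wide
#All ingress-nginx pods should reach the Running/Completed state:
kubectl -n ingress-nginx get pods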

Uninstall

#Master
/usr/local/bin/k3s-uninstall.sh
#Node
/usr/local/bin/k3s-agent-uninstall.sh

Demo

# Create the test namespace, if it does not exist
kubectl create namespace test

# Apply the example file
#https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/
kubectl -n test apply -f my-example.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-nginx-app
  namespace: test
spec:
  selector:
    matchLabels:
      name: test-nginx-backend
  template:
    metadata:
      labels:
        name: test-nginx-backend
    spec:
      containers:
        - name: backend
          image: docker.io/nginx:alpine
          imagePullPolicy: Always
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: test-nginx-service
  namespace: test
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    name: test-nginx-backend

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-nginx-ingress
  namespace: test
spec:
  rules:
    - host: test.w1.thenets.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: test-nginx-service
                port:
                  number: 80
  ingressClassName: nginx
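
A quick way to test the Ingress without DNS is to pass the Host header directly; a sketch, assuming the ingress controller is reachable on port 80 of a node (adjust the IP to your environment):

#Expect the nginx welcome page from the test-nginx backend:
curl -H "Host: test.w1.thenets.org" http://192.168.101.82/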

Kustomize

Declarative Management of Kubernetes Objects Using Kustomize | Kubernetes

Kustomize/
├── base
│   ├── deployment.yaml
│   ├── kustomization.yaml
│   └── service.yaml
└── overlays
    ├── dev
    │   ├── deployment.yaml
    │   ├── kustomization.yaml
    │   ├── password.txt
    │   └── service.yaml
    └── prod
        ├── deployment.yaml
        ├── kustomization.yaml
        ├── password.txt
        └── service.yaml

base/kustomization.yaml

commonLabels:
  type: demo
commonAnnotations:
  version: 1.1.0
resources:
  - deployment.yaml
  - service.yaml

base/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: apache
  name: portfolio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apache
  strategy: {}
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
        - image: singharunk/webserver:v3
          name: portfolio
          resources: {}

base/service.yaml

apiVersion: v1
kind: Service
metadata:
  name: portfolio-service
spec:
  selector:
    app: apache
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      name: porty

Dev environment:

overlays/dev/kustomization.yaml

namePrefix: dev-
commonLabels:
  env: dev
commonAnnotations:
  typeofApp: htmlApp
bases:
  - ../../base
namespace: kustomize-namespace
patchesStrategicMerge:
  - deployment.yaml
  - service.yaml
secretGenerator:
  - name: dummy
    files:
      - password.txt

overlays/dev/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: portfolio
spec:
  replicas: 1
  template:
    spec:
      containers:
        - image: singharunk/webserver:v3
          name: portfolio
          volumeMounts:
            - name: dummy
              mountPath: /opt/password.txt
      volumes:
        - name: dummy
          secret:
            secretName: dummy

overlays/dev/password.txt

this is dummy secret but now I am changing it

overlays/dev/service.yaml

apiVersion: v1
kind: Service
metadata:
  name: portfolio-service
spec:
  selector:
    app: apache
  ports:
    - protocol: TCP
      port: 82
      targetPort: 80
      name: httpx

Prod environment:

overlays/prod/kustomization.yaml

namePrefix: prd-

commonLabels:
  env: prd


commonAnnotations:
  typeofApp: htmlApp
  rollout: value2

bases:
  - ../../base

patchesStrategicMerge:
  - deployment.yaml
  - service.yaml

secretGenerator:
  - name: dummy
    files:
      - password.txt

namespace: kustomize-namespace

generatorOptions:
  disableNameSuffixHash: true

overlays/prod/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: portfolio
spec:
  replicas: 2
  template:
    spec:
      containers:
        - image: singharunk/webserver:v3
          name: portfolio
          volumeMounts:
            - name: dummy
              mountPath: /opt/password.txt
      volumes:
        - name: dummy
          secret:
            secretName: dummy

overlays/prod/password.txt

this is dummy secret

overlays/prod/service.yaml

apiVersion: v1
kind: Service
metadata:
  name: portfolio-service
spec:
  selector:
    app: apache
  ports:
    - protocol: TCP
      port: 81
      targetPort: 80
      name: portx

Create

cd overlays/dev
#Run kubectl kustomize ./ to check whether any errors occur
#kubectl kustomize <kustomization_directory>
kubectl kustomize ./
#To apply these resources, run kubectl apply with the --kustomize or -k flag:
kubectl apply -k <kustomization_directory>
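
To see what each overlay actually changes, render both and compare them; a small sketch, run from the Kustomize/ root directory:

kubectl kustomize overlays/dev > /tmp/dev.yaml
kubectl kustomize overlays/prod > /tmp/prod.yaml
#The namePrefix, labels, replica counts, and service ports should differ:
diff /tmp/dev.yaml /tmp/prod.yaml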

Argocd

Getting Started - Argo CD - Declarative GitOps CD for Kubernetes (argo-cd.readthedocs.io)

Argo CD beginner-friendly tutorial (qq.com)

Install

#argocd with kubernetes 1.18
#https://argo-cd.readthedocs.io/en/stable/getting_started/
#https://mp.weixin.qq.com/s?__biz=MzU1MzY4NzQ1OA==&mid=2247512193&idx=1&sn=da41bb4072870e34bdf338c22bcbc8cc&chksm=fbedf04ccc9a795a08f4b0deb5a8518aa901dc1e8678277d232fff0d05ba1613a3f8d8636ab9&scene=178&cur_album_id=2470838961377427457#rd

kubectl create namespace argocd
#kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.1.2/manifests/install.yaml

#Service Type Load Balancer
#Change the argocd-server service type to LoadBalancer:
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
kubectl -n argocd get svc
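
If no load balancer is available, port-forwarding (from the Argo CD getting-started guide) is an alternative way to reach the UI:

kubectl port-forward svc/argocd-server -n argocd 8080:443
#The UI is then reachable at https://localhost:8080/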

Enable the external web UI

#Change the ports to 8443 and 8080
kubectl -n argocd edit svc argocd-server
ports:
  - name: http
    nodePort: 31291
    port: 8080
    protocol: TCP
    targetPort: 8080
  - name: https
    nodePort: 31592
    port: 8443
    protocol: TCP
    targetPort: 8080

The URL is https://192.168.64.6:8443/ and the login name is admin. Get the password with:

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
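
If you also use the argocd CLI, you can log in with the same credentials; a sketch, where --insecure accounts for the self-signed certificate:

argocd login 192.168.64.6:8443 --username admin --password <password-from-above> --insecure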


Demo

Demo1

fleet/
├── application.yaml
└── dev
├── deployment.yaml
└── service.yaml

application.yaml

# application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argo-demo
  namespace: argocd
spec:
  project: default

  source:
    #repoURL: https://github.com/yangchuansheng/argocd-lab.git
    repoURL: http://gitlab.zerofinance.net/dave.zhao/fleet_demo.git
    targetRevision: master
    path: dev
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp

#  syncPolicy:
#    syncOptions:
#      - CreateNamespace=true

#    automated:
#      selfHeal: true
#      prune: true

dev/deployment.yaml

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 2
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.zerofinance.net/xpayappimage/am-webhook:1.0.x
          ports:
            - containerPort: 8088

dev/service.yaml

# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  ports:
    - port: 8088
      protocol: TCP
      targetPort: 8088

Creating Apps Via Command

kubectl apply -f application.yaml
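
Alternatively, the same Application can be created imperatively with the argocd CLI; a sketch, assuming you are logged in with argocd login:

argocd app create argo-demo \
  --repo http://gitlab.zerofinance.net/dave.zhao/fleet_demo.git \
  --revision master \
  --path dev \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace myapp
#Trigger the initial sync:
argocd app sync argo-demo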

Creating Apps Via UI

Open a browser to the Argo CD external UI by visiting its IP/hostname, and log in with the admin credentials retrieved above.

After logging in, click the + New App button:

(screenshot: the + New App button in the Argo CD UI)

Notice: if you create apps via the UI, you don’t need the application.yaml in the repository root.

For more usage, see: Getting Started - Argo CD - Declarative GitOps CD for Kubernetes (argo-cd.readthedocs.io)

Demo2


guestbook/
├── guestbook-ui-deployment.yaml
└── guestbook-ui-svc.yaml

guestbook-ui-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: guestbook-ui
spec:
  replicas: 1
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: guestbook-ui
  template:
    metadata:
      labels:
        app: guestbook-ui
    spec:
      containers:
        - image: gcr.io/heptio-images/ks-guestbook-demo:0.2
          name: guestbook-ui
          ports:
            - containerPort: 80

guestbook-ui-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: guestbook-ui
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: guestbook-ui

# ---
# apiVersion: networking.k8s.io/v1
# kind: Ingress
# metadata:
#   name: rollouts-bluegreen-ingress
#   annotations:
#     kubernetes.io/ingress.class: nginx
# spec:
#   rules:
#     - host: guestbook-ui.local
#       http:
#         paths:
#           - path: /
#             pathType: Prefix
#             backend:
#               # Reference to a Service name, also specified in the Rollout spec.strategy.canary.stableService field
#               service:
#                 name: guestbook-ui
#                 port:
#                   number: 80


---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: guestbook-ui-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: guestbook-ui.local
      http:
        paths:
          - backend:
              serviceName: guestbook-ui
              servicePort: 80

argo-rollouts

Install

kubectl create namespace argo-rollouts
#kubectl apply -n argo-rollouts -f https://github.com/argoproj/argo-rollouts/releases/latest/download/install.yaml
kubectl apply -n argo-rollouts -f https://github.com/argoproj/argo-rollouts/releases/download/v1.2.2/install.yaml

wget https://github.com/argoproj/argo-rollouts/releases/download/v1.2.2/kubectl-argo-rollouts-linux-amd64
wget https://github.com/argoproj/argo-rollouts/releases/download/v1.2.2/kubectl-argo-rollouts-windows-amd64
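
The downloads above are the kubectl plugin binary; it still needs to be made executable and placed on the PATH (Linux shown):

chmod +x kubectl-argo-rollouts-linux-amd64
sudo mv kubectl-argo-rollouts-linux-amd64 /usr/local/bin/kubectl-argo-rollouts
#Verify the plugin is picked up by kubectl:
kubectl argo rollouts version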

Demo

#demo
#kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-rollouts/master/docs/getting-started/basic/rollout.yaml
#kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-rollouts/master/docs/getting-started/basic/service.yaml
kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-rollouts/v1.2.2/docs/getting-started/basic/rollout.yaml
kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-rollouts/v1.2.2/docs/getting-started/basic/service.yaml

#Watch
#kubectl argo rollouts get rollout rollouts-demo -w
kubectl argo rollouts -n testing get rollout rollouts-demo --watch

#Updating a Rollout
kubectl argo rollouts -n testing set image rollouts-demo \
rollouts-demo=argoproj/rollouts-demo:yellow

#Promoting a Rollout
kubectl argo rollouts -n testing promote rollouts-demo

#Updating a red Rollout
kubectl argo rollouts -n testing set image rollouts-demo \
rollouts-demo=argoproj/rollouts-demo:red

#Aborting a Rollout
kubectl argo rollouts -n testing abort rollouts-demo

#In order to make the Rollout considered Healthy again and not Degraded, change the desired state back to the previous, stable version
kubectl argo rollouts -n testing set image rollouts-demo \
rollouts-demo=argoproj/rollouts-demo:yellow

#ingress:
#https://argoproj.github.io/argo-rollouts/getting-started/nginx/
kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-rollouts/master/docs/getting-started/nginx/rollout.yaml
kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-rollouts/master/docs/getting-started/nginx/services.yaml
kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-rollouts/master/docs/getting-started/nginx/ingress.yaml

#cat ingress.yaml has to be changed for a >=1.19 cluster
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rollouts-demo-stable
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: rollouts-demo.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              # Reference to a Service name, also specified in the Rollout spec.strategy.canary.stableService field
              service:
                name: rollouts-demo-stable
                port:
                  number: 80


#Perform an update
kubectl argo rollouts set image rollouts-demo rollouts-demo=argoproj/rollouts-demo:yellow

Dashboard

#dashboard
#The URL is served from the machine where this command is run:
kubectl argo rollouts dashboard
http://192.168.102.82:3100/rollouts


bluegreen

#bluegreen
#https://github.com/argoproj/argo-rollouts/blob/master/examples/rollout-bluegreen.yaml

#cat rollout-bluegreen.yaml
# This example demonstrates a Rollout using the blue-green update strategy, which contains a manual
# gate before promoting the new stack.
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: rollout-bluegreen
spec:
  replicas: 2
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: rollout-bluegreen
  template:
    metadata:
      labels:
        app: rollout-bluegreen
    spec:
      containers:
        - name: rollouts-demo
          image: argoproj/rollouts-demo:blue
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
  strategy:
    blueGreen:
      # activeService specifies the service to update with the new template hash at time of promotion.
      # This field is mandatory for the blueGreen update strategy.
      activeService: rollout-bluegreen-active
      # previewService specifies the service to update with the new template hash before promotion.
      # This allows the preview stack to be reachable without serving production traffic.
      # This field is optional.
      previewService: rollout-bluegreen-preview
      # autoPromotionEnabled disables automated promotion of the new stack by pausing the rollout
      # immediately before the promotion. If omitted, the default behavior is to promote the new
      # stack as soon as the ReplicaSet are completely ready/available.
      # Rollouts can be resumed using: `kubectl argo rollouts promote ROLLOUT`
      autoPromotionEnabled: false

---
kind: Service
apiVersion: v1
metadata:
  name: rollout-bluegreen-active
spec:
  selector:
    app: rollout-bluegreen
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080

---
kind: Service
apiVersion: v1
metadata:
  name: rollout-bluegreen-preview
spec:
  selector:
    app: rollout-bluegreen
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rollouts-bluegreen-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: rollouts-bluegreen.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              # Reference to a Service name, also specified in the Rollout spec.strategy.canary.stableService field
              service:
                name: rollout-bluegreen-active
                port:
                  number: 80

Set image:

kubectl argo rollouts set image rollout-bluegreen \
rollouts-demo=argoproj/rollouts-demo:yellow

kubectl argo rollouts get rollout rollout-bluegreen -w
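
Because autoPromotionEnabled is false, the rollout pauses with the new version behind the preview service; promotion is a manual step:

#Check the rollout, then promote the paused update to the active service:
kubectl argo rollouts status rollout-bluegreen
kubectl argo rollouts promote rollout-bluegreen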

Demo

bluegreen

rollout-bluegreen.yaml

# This example demonstrates a Rollout using the blue-green update strategy, which contains a manual
# gate before promoting the new stack.
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: rollout-bluegreen
spec:
  replicas: 2
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: rollout-bluegreen
  template:
    metadata:
      labels:
        app: rollout-bluegreen
    spec:
      containers:
        - name: rollouts-demo
          image: argoproj/rollouts-demo:blue
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
  strategy:
    blueGreen:
      # activeService specifies the service to update with the new template hash at time of promotion.
      # This field is mandatory for the blueGreen update strategy.
      activeService: rollout-bluegreen-active
      # previewService specifies the service to update with the new template hash before promotion.
      # This allows the preview stack to be reachable without serving production traffic.
      # This field is optional.
      previewService: rollout-bluegreen-preview
      # autoPromotionEnabled disables automated promotion of the new stack by pausing the rollout
      # immediately before the promotion. If omitted, the default behavior is to promote the new
      # stack as soon as the ReplicaSet are completely ready/available.
      # Rollouts can be resumed using: `kubectl argo rollouts promote ROLLOUT`
      autoPromotionEnabled: false

---
kind: Service
apiVersion: v1
metadata:
  name: rollout-bluegreen-active
spec:
  selector:
    app: rollout-bluegreen
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080

---
kind: Service
apiVersion: v1
metadata:
  name: rollout-bluegreen-preview
spec:
  selector:
    app: rollout-bluegreen
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080

# ---
# apiVersion: networking.k8s.io/v1
# kind: Ingress
# metadata:
#   name: rollouts-bluegreen-ingress
#   annotations:
#     kubernetes.io/ingress.class: nginx
# spec:
#   rules:
#     - host: rollouts-bluegreen.local
#       http:
#         paths:
#           - path: /
#             pathType: Prefix
#             backend:
#               # Reference to a Service name, also specified in the Rollout spec.strategy.canary.stableService field
#               service:
#                 name: rollout-bluegreen-active
#                 port:
#                   number: 80

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rollouts-bluegreen-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: rollouts-bluegreen.local
      http:
        paths:
          - backend:
              serviceName: rollout-bluegreen-active
              servicePort: 80

Canary

ingress.yaml

# apiVersion: networking.k8s.io/v1
# kind: Ingress
# metadata:
#   name: rollouts-canary-stable
#   annotations:
#     kubernetes.io/ingress.class: nginx
# spec:
#   rules:
#     - host: rollouts-canary.local
#       http:
#         paths:
#           - path: /
#             pathType: Prefix
#             backend:
#               # Reference to a Service name, also specified in the Rollout spec.strategy.canary.stableService field
#               service:
#                 name: rollouts-canary-stable
#                 port:
#                   number: 80

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rollouts-canary-stable
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: rollouts-canary.local
      http:
        paths:
          - backend:
              serviceName: rollouts-canary-stable
              servicePort: 80

rollout.yaml

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: rollouts-canary
spec:
  replicas: 2
  strategy:
    canary:
      canaryService: rollouts-canary-canary
      stableService: rollouts-canary-stable
      trafficRouting:
        nginx:
          stableIngress: rollouts-canary-stable
      steps:
        - setWeight: 5
        - pause: {}
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: rollouts-canary
  template:
    metadata:
      labels:
        app: rollouts-canary
    spec:
      containers:
        - name: rollouts-canary
          image: argoproj/rollouts-demo:blue
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          resources:
            requests:
              memory: 32Mi
              cpu: 5m

services.yaml

apiVersion: v1
kind: Service
metadata:
  name: rollouts-canary-canary
spec:
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app: rollouts-canary
    # This selector will be updated with the pod-template-hash of the canary ReplicaSet. e.g.:
    # rollouts-pod-template-hash: 7bf84f9696

---
apiVersion: v1
kind: Service
metadata:
  name: rollouts-canary-stable
spec:
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app: rollouts-canary
    # This selector will be updated with the pod-template-hash of the stable ReplicaSet. e.g.:
    # rollouts-pod-template-hash: 789746c88d
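
With this setup an image update shifts only 5% of traffic to the canary (setWeight: 5) and then pauses indefinitely; a typical exercise, using the names from the manifests above:

kubectl argo rollouts set image rollouts-canary rollouts-canary=argoproj/rollouts-demo:yellow
#Watch the canary take 5% of traffic and then pause:
kubectl argo rollouts get rollout rollouts-canary --watch
#Promote past the pause step to shift all traffic to the new version:
kubectl argo rollouts promote rollouts-canary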

Argo-Workflows

#argo-workflows:
#Controller and Server
kubectl create namespace argo
kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/download/v3.3.9/install.yaml

kubectl -n argo edit svc argo-server
#Change the argo-server service type to LoadBalancer

https://192.168.64.6:2746/

#https://argoproj.github.io/argo-workflows/quick-start/
kubectl patch deployment \
  argo-server \
  --namespace argo \
  --type='json' \
  -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/args", "value": [
  "server",
  "--auth-mode=server"
]}]'

#Get the login token:
kubectl get secret -n argo
ARGO_TOKEN="Bearer $(kubectl get -n argo secret argo-server-token-vw8xf -o=jsonpath='{.data.token}' | base64 --decode)"
echo $ARGO_TOKEN

argo-windows-amd64.exe submit -n argo --watch https://raw.githubusercontent.com/argoproj/argo-workflows/v3.3.9/examples/hello-world.yaml
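
Once submitted, the workflow can be inspected with the same CLI (the Windows binary is shown here to match the command above; on Linux the binary is just argo):

argo-windows-amd64.exe list -n argo
argo-windows-amd64.exe logs @latest -n argo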

Gitlab-Runner

Docker

#https://docs.gitlab.com/runner/install/docker.html
#https://docs.gitlab.com/runner/register/index.html#docker

#cat /etc/gitlab-runner/config.toml
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "my-runner"
  url = "http://gitlab.zerofinance.net/"
  token = "111111"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "docker:latest"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock","/cache","/works/config/runner:/runner"]
    shm_size = 0

docker run -d --name gitlab-runner --restart always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /works/config/runner:/etc/gitlab-runner \
  gitlab/gitlab-runner:latest

#Register the runner
docker run --rm -it -v /works/config/runner:/etc/gitlab-runner gitlab/gitlab-runner register
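
Registration can also be done non-interactively, which is easier to script; a sketch, with the token as a placeholder:

docker run --rm -v /works/config/runner:/etc/gitlab-runner gitlab/gitlab-runner register \
  --non-interactive \
  --url "http://gitlab.zerofinance.net/" \
  --registration-token "<your-token>" \
  --executor "docker" \
  --docker-image "docker:latest" \
  --description "my-runner"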

Hosted-Machine

Recommended

Notice: run the registration with sudo as the gitlab-runner user

#Installing on Linux:
#https://docs.gitlab.com/runner/install/linux-manually.html
sudo curl -L --output /usr/local/bin/gitlab-runner "https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-linux-amd64"
sudo chmod +x /usr/local/bin/gitlab-runner
sudo useradd --comment 'GitLab Runner' --create-home gitlab-runner --shell /bin/bash
sudo gitlab-runner install --user=gitlab-runner --working-directory=/home/gitlab-runner
sudo gitlab-runner start
#Store the git password; log in as gitlab-runner:
git config --global credential.helper store
#Do a trial clone of a repo, entering the username and password once so the credential is stored.

#https://docs.gitlab.com/runner/shells/index.html#shell-profile-loading
#To troubleshoot a shell-profile-loading error, check /home/gitlab-runner/.bash_logout. For example, if
#the .bash_logout file has a script section like the following, comment it out and restart the pipeline:
if [ "$SHLVL" = 1 ]; then
    [ -x /usr/bin/clear_console ] && /usr/bin/clear_console -q
fi

#Running sudo as gitlab-runner:
#sudo gitlab-runner register --name native-runner --url http://gitlab-prod.zerofinance.net/ --registration-token 1111111111

Choose shell as the executor.

sudo gitlab-runner register
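
The shell-executor registration can likewise be scripted non-interactively; a sketch, with the token as a placeholder:

sudo gitlab-runner register \
  --non-interactive \
  --name native-runner \
  --url http://gitlab-prod.zerofinance.net/ \
  --registration-token "<your-token>" \
  --executor shell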

.gitlab-ci.yml

# default:
#   before_script:
#     - echo "This is a global before_script..."
#   after_script:
#     - echo "This is a global after_script..."

variables:
  RUN_PIPELINE:
    value: "false"
    options:
      - "false"
      - "true"
    description: "Run the pipeline immediately?"

stages:
  - stage-导出数据   # export data
  - stage-分发数据   # distribute data
  - stage-导入数据   # import data
  - stage-数仓调度   # data warehouse scheduling
  - stage-数据清理   # data cleanup

# 前置检查:  # pre-check
#   #variables:
#   #  CI_DEBUG_TRACE: "true"
#   stage: stage-前置检查
#   needs: []
#   tags:
#     - hkcos
#     - hkx8
#   script:
#     - echo "pre-check started"
#     - sleep 5
#     - echo "pre-check done"
#   rules:
#     - if: $RUN_PIPELINE == "true"

HKCASH-导出数据:
  #variables:
  #  CI_DEBUG_TRACE: "true"
  stage: stage-导出数据
  needs: []
  tags:
    - master-runner
  script:
    - echo "[start] HKCASH export (65.105) -> data warehouse (84.101) - [$(date '+%F %T')]"
    # - bash ./mysql_hkcash_dump.sh
    - echo "[ end ] HKCASH export (65.105) -> data warehouse (84.101) - [$(date '+%F %T')]"
    # Pass variables to the next stage
    # - echo "BUILD_VERSION1=hello1" >> build.env
    # - echo "BUILD_VERSION2=hello2" >> build.env
  # artifacts:
  #   reports:
  #     dotenv: build.env
  rules:
    - if: $RUN_PIPELINE == "true"

HKCASH-分发数据:
  stage: stage-分发数据
  needs: [HKCASH-导出数据]
  tags:
    - master-runner
  script:
    - echo "[start] HKCASH distribution (65.105) -> data warehouse server (84.101) - [$(date '+%F %T')]"
    # - bash ./mysql_hkcash_dump.sh
    - echo "[ end ] HKCASH distribution (65.105) -> data warehouse server (84.101) - [$(date '+%F %T')]"
  dependencies:
    - HKCASH-导出数据
  rules:
    - if: $RUN_PIPELINE == "true"

HKCASH-导入数据:
  stage: stage-导入数据
  needs: [HKCASH-分发数据]
  tags:
    - master-runner
  script:
    - echo "HKCASH import - data warehouse server (84.101) -> ETL (84.101) - [$(date '+%F %T')]"
    # - bash ./datahouse_ods_hkcash.sh
    - echo "HKCASH import - data warehouse server (84.101) -> ETL (84.101) - [$(date '+%F %T')]"
  rules:
    - if: $RUN_PIPELINE == "true"

HKCOS-导出数据:
  stage: stage-导出数据
  needs: []
  tags:
    - slave-runner
  script:
    - echo "[start] HKCOS export (65.106) -> NAS - [$(date '+%F %T')]"
    # - bash ./mysql_hkcos_dump.sh
    - echo "[ end ] HKCOS export (65.106) -> NAS - [$(date '+%F %T')]"
  rules:
    - if: $RUN_PIPELINE == "true"

HKCOS-分发数据:
  stage: stage-分发数据
  needs: [HKCOS-导出数据]
  tags:
    - master-runner
  script:
    - echo "[start] HKCOS distribution (65.105) -> data warehouse server (84.101) - [$(date '+%F %T')]"
    # - bash ./mysql_hkcos_dump.sh
    - echo "[ end ] HKCOS distribution (65.105) -> data warehouse server (84.101) - [$(date '+%F %T')]"
  rules:
    - if: $RUN_PIPELINE == "true"

HKCOS-导入数据:
  stage: stage-导入数据
  needs: [HKCOS-分发数据]
  tags:
    - master-runner
  script:
    - echo "HKCOS import - data warehouse server (84.101) -> ETL (84.101) - [$(date '+%F %T')]"
    # - bash ./datahouse_ods_hkcos.sh
    - echo "HKCOS import - data warehouse server (84.101) -> ETL (84.101) - [$(date '+%F %T')]"
  rules:
    - if: $RUN_PIPELINE == "true"

HKX8-导出数据:
  # variables:
  #   CI_DEBUG_TRACE: "true"
  stage: stage-导出数据
  needs: []
  tags:
    - master-runner
  script:
    - echo "[start] HKX8 export (65.105) -> NAS - [$(date '+%F %T')]"
    - sudo /bin/bash ${CI_PROJECT_DIR}/mysql_hkx8_dump.sh
    - echo "[ end ] HKX8 export (65.105) -> NAS - [$(date '+%F %T')]"
  rules:
    - if: $RUN_PIPELINE == "true"

HKX8-分发数据:
  stage: stage-分发数据
  needs: [HKX8-导出数据]
  tags:
    - master-runner
  script:
    - echo "[start] HKX8 distribution (65.105) -> data warehouse server (84.101) - [$(date '+%F %T')]"
    - sudo /bin/bash ${CI_PROJECT_DIR}/mysql_hkx8_dump.sh rsync
    - echo "[ end ] HKX8 distribution (65.105) -> data warehouse server (84.101) - [$(date '+%F %T')]"
  rules:
    - if: $RUN_PIPELINE == "true"

HKX8-导入数据:
  stage: stage-导入数据
  needs: [HKX8-分发数据]
  tags:
    - master-runner
  script:
    - echo "[start] HKX8 import - data warehouse server (84.101) -> ETL (84.101) - [$(date '+%F %T')]"
    # - bash ./datahouse_ods_hkx8.sh
    - echo "[ end ] HKX8 import - data warehouse server (84.101) -> ETL (84.101) - [$(date '+%F %T')]"
  rules:
    - if: $RUN_PIPELINE == "true"

XPAY-导出数据:
  stage: stage-导出数据
  needs: []
  tags:
    - master-runner
  script:
    - echo "[start] XPAY export (65.105) -> NAS - [$(date '+%F %T')]"
    # - bash ./mysql_xpay_dump.sh
    - echo "[ end ] XPAY export (65.105) -> NAS - [$(date '+%F %T')]"
  rules:
    - if: $RUN_PIPELINE == "true"

XPAY-分发数据:
  stage: stage-分发数据
  needs: [XPAY-导出数据]
  tags:
    - master-runner
  script:
    - echo "[start] XPAY distribution (65.105) -> data warehouse server (84.101) - [$(date '+%F %T')]"
    # - bash ./mysql_hkx8_dump.sh rsync
    - echo "[ end ] XPAY distribution (65.105) -> data warehouse server (84.101) - [$(date '+%F %T')]"
  rules:
    - if: $RUN_PIPELINE == "true"

XPAY-导入数据:
  stage: stage-导入数据
  needs: [XPAY-分发数据]
  tags:
    - master-runner
  script:
    - echo "[start] XPAY import - data warehouse server (84.101) -> ETL (84.101) - [$(date '+%F %T')]"
    # - bash ./datahouse_ods_hkx8.sh
    - echo "[ end ] XPAY import - data warehouse server (84.101) -> ETL (84.101) - [$(date '+%F %T')]"
  rules:
    - if: $RUN_PIPELINE == "true"

Datahouse-ETL调度:
  stage: stage-数仓调度
  needs: [HKCASH-导入数据, HKCOS-导入数据, XPAY-导入数据, HKX8-导入数据]
  tags:
    - datahouse-runner
  script:
    - echo "[start] data warehouse ETL scheduling - data warehouse server (84.101) -> ETL (84.101) - [$(date '+%F %T')]"
    # - ./datahouse_ods_etl.sh
    - echo "[ end ] data warehouse ETL scheduling - data warehouse server (84.101) -> ETL (84.101) - [$(date '+%F %T')]"
    # Variables passed from earlier stages can be shown here:
    # - echo "BUILD_VERSION1=$BUILD_VERSION1"
    # - echo "BUILD_VERSION2=$BUILD_VERSION2"
  rules:
    - if: $RUN_PIPELINE == "true"

Analyst-数据导入:
  stage: stage-数仓调度
  needs: [HKCASH-导入数据, HKCOS-导入数据, XPAY-导入数据, HKX8-导入数据]
  tags:
    - master-runner
  script:
    - echo "[start] Analyst import - data server (63.17) -> Aliyun RDS (rm-3nslo2652x449k4oa) - [$(date '+%F %T')]"
    - sleep 5
    # - bash ./mysql_analyst_masking_report.sh
    - echo "[ end ] Analyst import - data server (63.17) -> Aliyun RDS (rm-3nslo2652x449k4oa) - [$(date '+%F %T')]"
  rules:
    - if: $RUN_PIPELINE == "true"

Pipeline UI

(screenshot: the pipeline graph in the GitLab UI)

Trigger with remote URL

#https://blog.csdn.net/lenkty/article/details/124668164
#https://blog.csdn.net/boling_cavalry/article/details/106991691
#https://blog.csdn.net/sandaawa/article/details/112897733
#https://github.com/lonly197/docs/blob/master/src/operation/GitLab%20CI%20%E6%8C%81%E7%BB%AD%E9%9B%86%E6%88%90.md
curl -X POST \
-F token=111222 \
-F ref=1.0.x \
-F variables[project]=dwh-pipeline \
http://gitlab.zerofinance.net/api/v4/projects/575/trigger/pipeline
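
After triggering, the pipeline status can be polled through the API as well; a sketch, assuming a personal access token with read_api scope:

curl --header "PRIVATE-TOKEN: <your-access-token>" \
  "http://gitlab.zerofinance.net/api/v4/projects/575/pipelines?ref=1.0.x"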

AutoK3s

Not recommended; use k3s instead.

#AutoK3s: (not recommended)
##https://docs.rancher.cn/docs/k3s/autok3s/_index/
##https://jasonkayzk.github.io/2022/10/22/%E5%8D%95%E6%9C%BA%E9%83%A8%E7%BD%B2autok3s/

#For docker: (recommended)
##docker run -itd --restart=unless-stopped --net host -v /var/run/docker.sock:/var/run/docker.sock cnrancher/autok3s:v0.6.0
docker run --privileged -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:stable

##Installing on the host machine: (optional)
#curl -sS https://rancher-mirror.rancher.cn/autok3s/install.sh | INSTALL_AUTOK3S_MIRROR=cn sh
##Starting:
#autok3s serve --bind-address 192.168.101.82 --bind-port 8080
##Uninstalling:
#/usr/local/bin/autok3s-uninstall.sh

##Installing an instance:
#Put the "--disable traefik" param into "Master Extra Args"
##Execute once:
#k get ns --insecure-skip-tls-verify
#k get ns