Kubernetes Development Environment

This article walks through setting up a development environment on your local machine, covering Java, Kubernetes, Spring Cloud Kubernetes, and related tooling.

OS

Reference: https://docs.microsoft.com/en-us/windows/wsl/install-win10

I'd recommend developing on top of WSL. If you don't know what WSL is, see the reference above.

Install WSL as follows:

#Step 1 - Enable WSL
#Open PowerShell as administrator:
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
#Step 2 - Enable the Virtual Machine Platform
#Open PowerShell as administrator:
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
#Step 3 - Download the Linux kernel update package
wget http://aka.ms/wsl2kernelmsix64
#Step 4 - Set WSL 2 as the default version
wsl --set-default-version 2
#Step 5 - Install a Linux distribution
#Search for the preferred Linux distribution in the Microsoft Store and install it
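
After installing a distribution, it's worth verifying that it actually runs under WSL 2. A quick check from PowerShell (the distribution name Ubuntu-20.04 is just an example; yours may differ):

#List installed distributions and their WSL versions
wsl --list --verbose
#Convert a distribution to WSL 2 if it is still on version 1
wsl --set-version Ubuntu-20.04 2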

Windows Terminal

I'd highly recommend using Windows Terminal on Windows 10; it's pretty handy.

Reference: https://docs.microsoft.com/en-us/windows/terminal/get-started

Add Git Bash support by appending a profile to the Windows Terminal settings (the guid must be unique; you can generate one with PowerShell's [guid]::NewGuid()):

{
    "closeOnExit" : true,
    "commandline" : "D:\\Developer\\Git\\bin\\bash.exe --login -i",
    "guid" : "{1d4e097e-fe87-4164-97d7-3ca794c316fd}",
    "icon" : "D:\\Developer\\Git\\git-bash.png",
    "name" : "Bash",
    "startingDirectory" : "%USERPROFILE%"
}

Install Local Docker (Optional)

If you plan to use the remote development environment, you don't need this.

Reference:

Simply install Docker Desktop for Windows and enable your Ubuntu distribution under "Settings -> Resources -> WSL Integration". That's all.
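
Once the integration is enabled, a quick sanity check from inside the WSL distribution (hello-world is the standard Docker test image):

#Run inside the WSL distribution
docker version
docker run --rm hello-world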

Install Local Kubernetes (Optional)

If you plan to use the remote development environment, you don't need this.

Simply enable Kubernetes in the Docker Desktop settings. Note that you also need to configure the proxy in the Docker settings, or downloading Google's containers will fail:

#HTTP/HTTPS proxy
http://192.168.101.175:1082
#No-proxy list
127.0.0.1,localhost,10.0.0.0/8,172.0.0.0/8,192.168.0.0/16,*.zerofinance.net,*.aliyun.com,*.163.com,*.docker-cn.com,registry.gcalls.cn
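
With Kubernetes enabled and the proxy set, verify that the single-node cluster is up (kubectl ships with Docker Desktop):

kubectl cluster-info
#The docker-desktop node should report Ready
kubectl get nodes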

Enable Ingress Addon:

Reference: https://github.com/docker/for-win/issues/7094

#https://kubernetes.github.io/ingress-nginx/deploy/#docker-for-mac
#kubectl.exe create -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-0.32.0/deploy/static/provider/cloud/deploy.yaml
#Resolves: Unable to connect to the server
#proxy_on/proxy_off are presumably local helper aliases that export/unset the proxy environment variables
proxy_on
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.41.2/deploy/static/provider/cloud/deploy.yaml
proxy_off
kubectl apply -f deploy.yaml
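
Before deploying anything behind it, confirm the ingress controller came up (the ingress-nginx namespace and labels are created by deploy.yaml; the wait command is from the official ingress-nginx docs):

kubectl get pods -n ingress-nginx
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s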

demo.yaml

kind: Service
apiVersion: v1
metadata:
  name: hello
  labels:
    app: hello
spec:
  #type: NodePort
  ports:
  - protocol: TCP
    name: http
    port: 8080
    targetPort: 8080
  selector:
    app: hello

---

kind: Deployment
apiVersion: apps/v1
metadata:
  name: hello
  labels:
    app: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        # image: paulbouwer/hello-kubernetes:1.8
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - containerPort: 8080

---

apiVersion: networking.k8s.io/v1 # for versions before 1.14 use extensions/v1beta1
kind: Ingress
metadata:
  name: hello
  # annotations:
  #   nginx.ingress.kubernetes.io/rewrite-target: /$1
  #   nginx.ingress.kubernetes.io/whitelist-source-range: 192.168.147.174/32
spec:
  rules:
  - host: hello.info
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello
            port:
              number: 8080
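
To try the demo, apply the manifest, map the host name locally, and curl through the ingress. A minimal sketch, assuming the ingress controller is published on localhost:80 (the Docker Desktop default):

kubectl apply -f demo.yaml
#Map the ingress host (on Windows, edit C:\Windows\System32\drivers\etc\hosts instead)
echo "127.0.0.1 hello.info" | sudo tee -a /etc/hosts
curl http://hello.info
#Expect something like: Hello, world! Version: 1.0.0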

Remote Kubernetes Environment

I'd recommend installing Docker and Kubernetes on a remote machine so that all developers can share it and save some local resources.

Docker

Ubuntu Reference: http://blog.gcalls.cn/blog/2018/12/ubuntu-os.html#Docker

For CentOS:

#https://www.cnblogs.com/763977251-sg/p/11837130.html
#Docker installation
#https://aka.ms/vscode-remote/samples/docker-from-docker
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sudo yum makecache fast
sudo yum -y install docker-ce
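
After the packages are installed, start the daemon and optionally let the current user run docker without sudo (standard post-install steps):

sudo systemctl enable --now docker
#Optional: run docker without sudo (log out and back in for the group change to take effect)
sudo usermod -aG docker $USER
docker info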

Kubernetes

Kubectl

Follow these instructions to install the Kubernetes client tools:

#For MAC
#https://kubernetes.io/zh/docs/tasks/tools/install-kubectl-macos/
#curl -LO "https://dl.k8s.io/release/v1.18.18/bin/darwin/amd64/kubectl"
#chmod +x ./kubectl
#sudo mv ./kubectl /usr/local/bin/kubectl
#sudo chown root: /usr/local/bin/kubectl
brew install kubectl
mkdir ~/.kube
#the "config" file is located in the root of the project
cp config ~/.kube/
kubectl config use-context microk8s-0

#For Windows
Step 1:
Download https://dl.k8s.io/release/v1.18.18/bin/windows/amd64/kubectl.exe and put it in a directory on your executable path (I'd recommend GIT_HOME/bin); make sure $GIT_HOME/bin is in your PATH.

Step 2:
Open a CMD terminal (press Win+R and type "cmd"), then execute the following commands:
cd %HOMEPATH%
mkdir .kube

Step 3:
Copy the "config" file located in the root folder of the project to the ".kube" folder in the current user's home (e.g. C:\Users\YourName\.kube).

Step 4:
Open the CMD terminal again and execute the following command:
kubectl config use-context microk8s-0

Congratulations, that's all done.

#For Linux
#https://kubernetes.io/zh/docs/tasks/tools/install-kubectl-linux/
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client
mkdir ~/.kube
#the "config" file is located in the root of the project
cp config ~/.kube/
kubectl config use-context microk8s-0

microk8s

I recommend using microk8s on Linux; it has the best performance.

Reference:

https://jiajunhuang.com/articles/2019_11_17-microk8s.md.html
https://microk8s.io/#quick-start
https://microk8s.io/docs
https://www.cnblogs.com/xiao987334176/p/10931290.html

#For ubuntu:
#https://blog.flyfox.top/2020/04/03/microk8s%E5%AE%89%E8%A3%85%E6%95%99%E7%A8%8B/
#https://microk8s.io/#install-microk8s
#Ubuntu 20.04 ships with snap pre-installed
sudo snap install microk8s --classic
apt install bash-completion -y
source /usr/share/bash-completion/bash_completion

#For centos7:
#sudo yum install epel-release
sudo su - dev
sudo yum install snapd
sudo systemctl enable --now snapd.socket
sudo ln -s /var/lib/snapd/snap /snap
sudo snap install microk8s --classic
yum install bash-completion -y
source /usr/share/bash-completion/bash_completion

#Notice: microk8s now uses containerd, not docker.
#Either log out and back in again or restart your system to ensure snap's paths are updated correctly
sudo vim /var/snap/microk8s/current/args/containerd-env
HTTP_PROXY="http://192.168.101.175:1082"
HTTPS_PROXY="http://192.168.101.175:1082"
NO_PROXY="127.0.0.1,localhost,10.0.0.0/8,172.0.0.0/8,192.168.0.0/16,*.zerofinance.net,*.aliyun.com,*.163.com,*.docker-cn.com,registry.gcalls.cn"
sudo systemctl list-unit-files |grep -i microk8s
sudo systemctl restart snap.microk8s.daemon-containerd.service
microk8s.start
#Addons: https://microk8s.io/docs/addons#heading--list
#microk8s.enable dashboard dns ingress istio registry storage rbac
microk8s.enable dashboard dns ingress hostpath-storage
microk8s status --wait-ready
#list all of enabled addons
microk8s status
microk8s.kubectl describe pods -A
microk8s.inspect

kubectl cluster-info
#Sample output:
#Kubernetes master is running at https://192.168.95.234:16443
#Metrics-server is running at https://192.168.95.234:16443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
#CoreDNS is running at https://192.168.95.234:16443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
#To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.



#dashboard:
#https://medium.com/@junya.kaneko/quick-way-of-using-kubernetes-dashboard-on-microk8s-9c7b0e26be02
#Skip login
microk8s kubectl edit deployment/kubernetes-dashboard -n kube-system
#Add --enable-skip-login to the container args:
spec:
  containers:
  - args:
    - --auto-generate-certificates
    - --namespace=kube-system
    - --enable-skip-login

microk8s kubectl create clusterrolebinding kubernetes-dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard
microk8s kubectl proxy --accept-hosts='.*' --address=0.0.0.0
http://192.168.80.98:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#/login
token=$(microk8s kubectl -n kube-system get secret | grep default-token | cut -d " " -f1)
echo $token
microk8s kubectl -n kube-system describe secret $token

#uninstall microk8s
sudo snap remove microk8s
rm -fr /root/snap/microk8s /home/dev/snap/microk8s

#https://microk8s.io/docs/working-with-kubectl
#Export the config for clients
cd $HOME
mkdir .kube
cd .kube
microk8s config > config

#https://kubernetes.io/zh/docs/reference/kubectl/cheatsheet/
#kubectl context config
kubectl config get-contexts
#current context config
kubectl config current-context
#switch the context to alik8s-0
kubectl config use-context alik8s-0

#Add the following to ~/.bash_profile
#yum install bash-completion -y
alias k=kubectl
source <(kubectl completion bash | sed s/kubectl/k/g)
#source /usr/share/bash-completion/bash_completion

#Kubectl installation:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.19.4/bin/linux/amd64/kubectl
chmod +x kubectl && sudo mv kubectl /usr/local/bin/

#OR
#Kubectl For CentOS
#https://blog.csdn.net/nklinsirui/article/details/80581286
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubectl
#Kubectl For Ubuntu
#https://blog.csdn.net/nklinsirui/article/details/80581286
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" >> /etc/apt/sources.list
apt-get update
apt-get install -y kubectl
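
Whichever way kubectl was installed, verify it can actually reach the microk8s cluster with the exported config (microk8s-0 is the context name used earlier; yours may differ):

kubectl config use-context microk8s-0
kubectl get nodes -o wide
kubectl get pods -A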

Harbor

Harbor is an open source trusted cloud native registry project that stores, signs, and scans content. Harbor extends the open source Docker Distribution by adding the functionalities usually required by users such as security, identity and management. Having a registry closer to the build and run environment can improve the image transfer efficiency. Harbor supports replication of images between registries, and also offers advanced security features such as user management, access control and activity auditing.

#Harbor:
#Using the root environment:
sudo yum install python3-pip
#root
pip3 install -U docker-compose
#non-root
#pip3 install --user docker-compose
#https://goharbor.io/docs/2.1.0/install-config/configure-https/
#Generate the certificate:
#Using the dev environment:
sudo su - dev
export DP_Id=""
export DP_Key=""
acme.sh --issue --dns dns_dp -d gcalls.cn -d *.gcalls.cn
#acme.sh --issue --dns dns_dp -d registry.gcalls.cn --keylength ec-256
#Sample acme.sh output:
#Your cert is in /home/dev/.acme.sh/gcalls.cn/gcalls.cn.cer
#Your cert key is in /home/dev/.acme.sh/gcalls.cn/gcalls.cn.key
#The intermediate CA cert is in /home/dev/.acme.sh/gcalls.cn/ca.cer
#The full chain certs is there: /home/dev/.acme.sh/gcalls.cn/fullchain.cer

sudo mkdir -p /etc/docker/certs.d/registry.gcalls.cn
sudo cp /home/dev/.acme.sh/gcalls.cn/gcalls.cn.cer /etc/docker/certs.d/registry.gcalls.cn/
#the target file name must be gcalls.cn.cert
sudo cp /home/dev/.acme.sh/gcalls.cn/fullchain.cer /etc/docker/certs.d/registry.gcalls.cn/gcalls.cn.cert
sudo cp /home/dev/.acme.sh/gcalls.cn/gcalls.cn.key /etc/docker/certs.d/registry.gcalls.cn/
sudo cp /home/dev/.acme.sh/gcalls.cn/ca.cer /etc/docker/certs.d/registry.gcalls.cn/
cp -a harbor-offline-installer-v2.1.2.tgz /works/k8s/
cd /works/k8s/
tar zxvf harbor-offline-installer-v2.1.2.tgz
sudo chown -R dev.dev /works/k8s/harbor
#https://goharbor.io/docs/2.1.0/install-config/configure-yml-file/
#Modify harbor.yml:
cd /works/k8s/harbor
cp -a harbor.yml.tmpl harbor.yml
vim harbor.yml:
hostname: registry.gcalls.cn
https:
  # The path of cert and key files for nginx
  #certificate: /home/dev/.acme.sh/gcalls.cn/gcalls.cn.cer
  certificate: /etc/docker/certs.d/registry.gcalls.cn/gcalls.cn.cert
  private_key: /home/dev/.acme.sh/gcalls.cn/gcalls.cn.key
#Execute the prepare script
./prepare
sudo su - root
cd /works/k8s/harbor
#https://goharbor.io/docs/2.1.0/install-config/run-installer-script/
./install.sh
#If Harbor is running, stop and remove the existing instance. Your image data remains in the file system, so no data is lost.
#docker-compose down -v
#Restarting
docker-compose stop
docker-compose up -d
#Open a browser and enter https://yourdomain.com. It should display the Harbor interface
https://registry.gcalls.cn
admin/Harbor12345
docker login registry.gcalls.cn
#Troubleshooting
#Get https://registry.gcalls.cn/v2/: net/http: TLS handshake timeout
#If you get the error above, you are probably going through the proxy; exclude "registry.gcalls.cn" in "NO_PROXY". The file is located at:
#/etc/systemd/system/docker.service.d/http-proxy.conf
Environment="HTTP_PROXY=http://192.168.101.175:1082"
Environment="HTTPS_PROXY=http://192.168.101.175:1082"
Environment="NO_PROXY=127.0.0.1,localhost,10.0.0.0/8,172.0.0.0/8,192.168.0.0/16,*.zerofinance.net,*.aliyun.com,*.163.com,*.docker-cn.com,registry.gcalls.cn"
systemctl daemon-reload && systemctl restart docker
#Test
docker pull hello-world
#The tag must include the project name (like xwallet), and the project must be created in Harbor beforehand:
docker tag hello-world registry.gcalls.cn/xwallet/hello-world
docker push registry.gcalls.cn/xwallet/hello-world

#If the registry works over plain http, add the following (not needed for https):
#server-side
#vim /etc/docker/daemon.json
#"insecure-registries" : ["localhost:32000", "192.168.95.233:32000"]
#client-side
#"insecure-registries" : ["192.168.95.233:32000"]
#Don't forget to restart docker
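
If the Kubernetes cluster needs to pull images from this private Harbor instance, an image pull secret can be created and referenced from the pod spec. A minimal sketch; the secret name harbor-cred is a placeholder, and the credentials are the defaults from above:

kubectl create secret docker-registry harbor-cred \
  --docker-server=registry.gcalls.cn \
  --docker-username=admin \
  --docker-password=Harbor12345
#Then reference it in a deployment under spec.template.spec:
#imagePullSecrets:
#- name: harbor-cred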

docker registry2

sudo su - dev
#Issue the SSL certificate
acme.sh --issue --dns dns_dp -d registry.gcalls.cn

#Install and run the registry
mkdir -p /works/docker/registry
docker run -d \
  --name private_registry --restart=always \
  -e SETTINGS_FLAVOUR=dev \
  -e STORAGE_PATH=/registry-storage \
  -v /works/docker/registry:/var/lib/registry \
  -u root \
  -p 5000:5000 \
  -v /home/dev/.acme.sh/registry.gcalls.cn:/certs \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/fullchain.cer \
  -e REGISTRY_HTTP_TLS_KEY=/certs/registry.gcalls.cn.key \
  registry:2

#test
docker pull hello-world
docker tag hello-world registry.gcalls.cn:5000/hello-world
docker push registry.gcalls.cn:5000/hello-world
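
The standard Docker Registry v2 HTTP API can confirm the push succeeded:

#List all repositories
curl https://registry.gcalls.cn:5000/v2/_catalog
#List the tags of a repository
curl https://registry.gcalls.cn:5000/v2/hello-world/tags/list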

kind

Not recommended.

Reference: https://kind.sigs.k8s.io/docs/user/quick-start/

# Download the latest version of Kind
curl -Lo ./kind https://github.com/kubernetes-sigs/kind/releases/download/v0.9.0/kind-$(uname)-amd64
# Make the binary executable
chmod +x ./kind
# Move the binary to your executable path
sudo mv ./kind /usr/local/bin/

# Check if the KUBECONFIG is not set
echo $KUBECONFIG
# Check if the .kube directory exists; if not, there's no need to create it (kind will populate it)
ls $HOME/.kube
# Create the cluster and give it a name (optional)
export http_proxy="http://192.168.101.175:1082"
export https_proxy=$http_proxy
export no_proxy="127.0.0.1,localhost,10.0.0.0/8,172.0.0.0/8,192.168.0.0/16,*.zerofinance.net,*.aliyun.com,*.163.com,*.docker-cn.com,registry.gcalls.cn"
kind create cluster --name wslkind
kind delete cluster --name wslkind
kind get clusters
# Check if the .kube has been created and populated with files
ls $HOME/.kube
kubectl get nodes

Notice: kind clusters run inside Docker, so the host cannot reach containers inside the cluster directly. Add extraPortMappings to forward host ports into the cluster:

Reference: https://kind.sigs.k8s.io/docs/user/using-wsl2/

# cluster-config.yml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 8080
    hostPort: 8080
    protocol: TCP
  - containerPort: 30000
    hostPort: 30000
    protocol: TCP

kind create cluster --config=cluster-config.yml
#kubectl run nginx --image=nginx --port=3000 --targetPort=80 --expose
kubectl create deployment nginx --image=nginx
#kubectl create service nodeport nginx --tcp=81:80 --node-port=30000
kubectl expose deployment nginx --type=NodePort --name nginx --port=80 --target-port=80


#access service
curl localhost:30000

Local development

Reference:

telepresence

https://www.telepresence.io/reference/windows

#For Windows:
1. Install Windows Subsystem for Linux.
2. Start the BASH.exe program.
3. Install Telepresence by following the Ubuntu instructions below.

#For ubuntu:
curl -s https://packagecloud.io/install/repositories/datawireio/telepresence/script.deb.sh | sudo bash
sudo apt install --no-install-recommends telepresence

#For CentOS:
sudo yum install torsocks sshfs conntrack python3 -y
git clone https://github.com/telepresenceio/telepresence.git /Developer/telepresence \
&& cd /Developer/telepresence \
&& sudo env PREFIX=/usr/local ./install.sh
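
Verify the installation before moving on (the classic Telepresence client installed above supports a version flag):

#Print the installed client version
telepresence --version
#Telepresence uses the current kubectl context, so confirm it points at the right cluster
kubectl config current-context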

Develop

#https://www.telepresence.io/tutorials/java
#kubectl apply -f https://raw.githubusercontent.com/telepresenceio/telepresence/master/docs/tutorials/hello-world.yaml
#kubectl create deployment hello-world --image=datawire/hello-world
#kubectl expose deployment hello-world --type=LoadBalancer --port=8000
#$ curl 127.0.0.1:8000
#Hello, world!
git clone https://github.com/cesartl/telepresence-k8s
cd telepresence-k8s
#Setting up Quote Of the Moment Service
kubectl run qotm --image=datawire/qotm:1.3 --port=5000 --expose
#Basic profile
#With the basic profile, service discovery via the K8S API is disabled; the Ribbon client uses the service host name directly:
qotm:
  ribbon:
    listOfServers: qotm:5000

#telepresence --docker-run --rm -it pstauffer/curl -- curl http://hello-world:8000/
#telepresence --new-deployment telepresence-k8s --expose 8080:8080 --run-shell
#telepresence --new-deployment telepresence-k8s --expose 8080 --expose 8081 --run-shell
#telepresence --swap-deployment telepresence-k8s --docker-run --rm -it --name mynginx -p 8080:80 nginx
#telepresence --new-deployment telepresence-k8s --docker-run --rm -it --name mynginx -p 8081:80 nginx
#telepresence --swap-deployment hello-world --expose 8000 --run python3 -m http.server 8000 &
telepresence --new-deployment telepresence-k8s --run-shell

#Notice: Some spring-boot versions don't support remote debugging through mvnDebug or MAVEN_DEBUG_OPTS:
#cat /Developer/apache-maven-3.3.9/bin/mvnDebug
#MAVEN_DEBUG_OPTS="-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000"
>cd /mnt/d/Developer/workspace/telepresence-k8s/
>export KUBERNETES_NAMESPACE=default
>export PROFILES=basic
>mvn spring-boot:run -Dspring-boot.run.jvmArguments="-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8000"
#Preparing to Execute Maven in Debug Mode
#With suspend=y it pauses until the debug client connects; set suspend=n to avoid that.
#Listening for transport dt_socket at address: 8000
#Notice: pom.xml must NOT include the following dependency, or remote debugging won't work:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-devtools</artifactId>
    <optional>true</optional>
</dependency>
#Testing:
curl localhost:8080/rest/quote/cesar

Notice:

Make sure "KUBERNETES_NAMESPACE" is set in the OS environment. If you develop with VS Code, you can set it in the "remoteEnv" section of the devcontainer.json file.
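
A minimal sketch of the corresponding devcontainer.json fragment (remoteEnv is the documented Remote-Containers setting; the namespace value is just an example):

{
  "remoteEnv": {
    "KUBERNETES_NAMESPACE": "default"
  }
}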

JRebel

#https://www.jrebel.com/success/products/jrebel/free-trial
#https://www.jrebel.com/products/jrebel/quickstart/eclipse/
#https://www.jrebel.com/products/jrebel/download/prev-releases
#https://manuals.jrebel.com/jrebel/standalone/activate.html
#Plugin for eclipse:
#update site:
http://update.zeroturnaround.com/update-site
#Download ZIP
http://update.zeroturnaround.com/update-site/update-site.zip

#Crack
#https://www.cnblogs.com/flyrock/archive/2019/09/23/11574617.html
#Generating GUID from:
https://www.guidgen.com/
#Activation Server URL: paste the following URL into the "Licensing service" field
https://jrebel.qekang.com/{GUID}
https://jrebel.qekang.com/d1b8919f-e1e9-4a8d-84da-0c43d75aa970
aaa@bbb.com
#Activation for standalone:
wget https://www.jrebel.com/download/jrebel/496
cd /Developer/jrebel/bin
./activate.sh https://jrebel.qekang.com/d1b8919f-e1e9-4a8d-84da-0c43d75aa970 aaa@bbb.com

#https://manuals.jrebel.com/jrebel/standalone/springboot.html#spring-boot-2-x-using-maven
#https://manuals.jrebel.com/jrebel/standalone/config.html#rebel-xml
#https://www.javazhiyin.com/22460.html
mvn spring-boot:run -Dspring-boot.run.jvmArguments="-agentpath:/Developer/jrebel/lib/libjrebel64.so"

#!!!Important!!!: The project must be located in a Linux folder; JRebel won't take effect on a project located in a Windows folder.
#If you see the following messages, it works:
2020-12-14 12:26:33 JRebel: Reloading class 'com.ctl.telepresencek8s.DummyRestController'.
2020-12-14 12:26:38 JRebel: Reconfiguring bean 'dummyRestController' [com.ctl.telepresencek8s.DummyRestController]

demo:

#account-service
cd /mnt/d/Developer/workspace/java-k8s/spring-cloud-k8s-account-service
mvn clean install -Pk8s
#cd /mnt/d/Developer/workspace/java-k8s/spring-cloud-k8s-web-service
#mvn clean install -Pk8s
#telepresence --swap-deployment web-service --run-shell
telepresence --new-deployment web-service --run-shell
#new-deployment will produce the error message:
#Did not find any endpoints in ribbon in namespace [null] for name [account-service] and portName [null]
#https://github.com/telepresenceio/telepresence/issues/947
#You can fix this with:
export KUBERNETES_NAMESPACE=default
cd /mnt/d/Developer/workspace/java-k8s/spring-cloud-k8s-web-service
mvn spring-boot:run -Dspring-boot.run.jvmArguments="-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8000"
curl 127.0.0.1:8080/account
kubectl scale --replicas=0 deployment account-service

Notice: Some spring-boot versions don't support remote debugging through mvnDebug or MAVEN_DEBUG_OPTS:

https://docs.spring.io/spring-boot/docs/2.3.4.RELEASE/maven-plugin/reference/html/

Fix this by configuring the spring-boot-maven-plugin:

<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <configuration>
        <jvmArguments>
            -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8000
        </jvmArguments>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>repackage</goal>
            </goals>
        </execution>
    </executions>
</plugin>

Or:

mvn spring-boot:run -Dspring-boot.run.jvmArguments="-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8000"

kubernetes-maven-plugin

https://www.eclipse.org/jkube/docs/kubernetes-maven-plugin

Strongly recommended for building and deploying to Kubernetes.

#Copy the remote config into the local "~/.kube/config"
#https://github.com/eclipse/jkube/tree/master/quickstarts/maven/
cd /mnt/d/Developer/workspace/java-k8s/spring-cloud-k8s-account-service
#For a private docker registry, the most Maven-ish way is to add a server to the Maven settings file /Developer/apache-maven-3.3.9/conf/settings.xml:
<server>
    <id>registry.gcalls.cn</id>
    <username>dave.zhao</username>
    <password>******</password>
</server>

#Generate the configuration automatically; remember not to add the "dockerHost" and "images" sections,
#or two image sections will be generated in the deployment yaml file:
<plugin>
    <groupId>org.eclipse.jkube</groupId>
    <artifactId>kubernetes-maven-plugin</artifactId>
    <version>1.1.0</version>
    <executions>
        <execution>
            <id>fmp</id>
            <goals>
                <goal>resource</goal>
                <goal>build</goal>
                <goal>push</goal>
                <goal>apply</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <resources>
            <imagePullPolicy>Always</imagePullPolicy>
        </resources>
        <enricher>
            <config>
                <fmp-service>
                    <type>NodePort</type>
                </fmp-service>
            </config>
        </enricher>
    </configuration>
</plugin>

#Using an external Dockerfile and deployment.yaml/service.yaml:
#https://www.eclipse.org/jkube/docs/kubernetes-maven-plugin#external-dockerfile
<plugin>
    <groupId>org.eclipse.jkube</groupId>
    <artifactId>kubernetes-maven-plugin</artifactId>
    <version>1.1.0</version>
    <executions>
        <execution>
            <id>fmp</id>
            <goals>
                <goal>build</goal>
                <goal>push</goal>
                <goal>resource</goal>
                <!-- Don't use the deploy goal, or the build will be triggered twice -->
                <goal>apply</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <!-- <dockerHost>tcp://registry.gcalls.cn:2375</dockerHost> -->
        <dockerHost>tcp://localhost:2375</dockerHost>
        <images>
            <image>
                <name>registry.gcalls.cn/xwallet/${project.name}:${project.version}</name>
                <build>
                    <!-- https://github.com/eclipse/jkube/issues/149 -->
                    <assembly>
                        <name>target</name>
                    </assembly>
                    <dockerFile>${project.basedir}/src/main/docker/Dockerfile</dockerFile>
                    <!-- <contextDir>${project.basedir}</contextDir> -->
                    <filter>@</filter>
                </build>
            </image>
        </images>
        <enricher>
            <config>
                <fmp-service>
                    <type>NodePort</type>
                </fmp-service>
            </config>
        </enricher>
    </configuration>
</plugin>

#Only include the jar file
#.maven-dockerinclude:
target/*.jar

#src/main/docker/Dockerfile
FROM java:8-jdk
RUN mkdir /app
WORKDIR /app
ENV APPNAME=account-service \
    VERSION=0.0.1-SNAPSHOT \
    CONFIG=/config/
RUN ln -snf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo Asia/Shanghai > /etc/timezone
COPY target/${APPNAME}-${VERSION}.jar /app/
ENTRYPOINT ["sh", "-c", "java -Djava.security.egd=file:/dev/./urandom -jar /app/${APPNAME}-${VERSION}.jar --spring.config.location=${CONFIG} --spring.profiles.active=@spring.profile@"]
EXPOSE 8100

#src/main/jkube/account-service-deployment.yml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: account-service
  namespace: default
  labels:
    app: account-service
    group: com.xwallet
    version: 0.0.1-SNAPSHOT
    provider: jkube
spec:
  replicas: 1
  selector:
    matchLabels:
      app: account-service
      group: com.xwallet
      version: 0.0.1-SNAPSHOT
      provider: jkube
  template:
    metadata:
      labels:
        app: account-service
        group: com.xwallet
        version: 0.0.1-SNAPSHOT
        provider: jkube
    spec:
      containers:
      - name: account-service
        #image: registry.gcalls.cn/xwallet/account-service:0.0.1-SNAPSHOT
        imagePullPolicy: Always
        ports:
        - containerPort: 8089

#src/main/jkube/account-service-service.yml
kind: Service
apiVersion: v1
metadata:
  name: account-service
  namespace: default
  labels:
    app: account-service
    group: com.xwallet
    version: 0.0.1-SNAPSHOT
    provider: jkube
spec:
  type: NodePort
  ports:
  - protocol: TCP
    name: http
    port: 8080
    targetPort: 8080
  selector:
    app: account-service
    group: com.xwallet
    version: 0.0.1-SNAPSHOT
    provider: jkube

#Running:
#If the dockerHost or private registry parameter is not defined, use the following commands:
#export DOCKER_HOST="tcp://registry.gcalls.cn:2375"
#mvn clean install k8s:push k8s:deploy -Pk8s -Ddocker.registry=registry.gcalls.cn
#mvn clean install k8s:build k8s:push k8s:resource k8s:apply -Dmaven.test.skip=true -Dspring.profile=dev -Pk8s
mvn clean install -Pk8s

#Building with parameters
#Dockerfile
#ENTRYPOINT ["sh", "-c", "java -Djava.security.egd=file:/dev/./urandom -jar /app/${APPNAME}-${VERSION}.jar --spring.profiles.active=@spring.profile@"]
#OR define a default in the k8s profile:
<profiles>
    <profile>
        <id>k8s</id>
        <properties>
            <spring.profile>default</spring.profile>
        </properties>
    </profile>
</profiles>
#OR pass it on the command line:
mvn clean install -Pk8s -Dspring.profile=dev

#Exposing an extra port of an existing docker container
#https://blog.csdn.net/lsziri/article/details/69396990
#Assuming the docker container's name is asset-app:
docker inspect asset-app | grep IPAddress
docker port asset-app
sudo iptables -t nat -nvL --line-number
#Expose port 5100 of the host -> port 5100 of the container
sudo iptables -t nat -A PREROUTING -p tcp -m tcp --dport 5100 -j DNAT --to-destination 10.244.47.4:5100
sudo iptables-save
#docker port asset-app won't show 5100; view the rule with:
sudo iptables -t nat -nvL | grep 10.244.47.4

#Push images to aliyun
docker login --username=zhaoxunyong@139.com registry.cn-shenzhen.aliyuncs.com
docker tag [ImageId] registry.cn-shenzhen.aliyuncs.com/zerofinance/fisco:[image-version]
docker push registry.cn-shenzhen.aliyuncs.com/zerofinance/fisco:[image-version]
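
After mvn clean install -Pk8s finishes, the rollout can be verified with standard kubectl commands (account-service is the deployment name from the manifests above):

kubectl rollout status deployment/account-service
kubectl get svc account-service
#Grab the assigned NodePort to hit the service from outside the cluster
kubectl get svc account-service -o jsonpath='{.spec.ports[0].nodePort}'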