
A first try at KubeVela (1)

2020/12/03

What is KubeVela

KubeVela is an easy-to-use yet highly extensible application management platform and core engine, built on top of Kubernetes and the Open Application Model (OAM).

OAM stands for Open Application Model. As the name suggests, it defines a model, and in my view this model aims to define a standard for cloud-native applications:

  • Open: supports heterogeneous platforms, container runtimes, scheduling systems, cloud providers, hardware configurations, and so on; in short, it is agnostic to the underlying infrastructure
  • Application: cloud-native applications
  • Model: defines a standard so that it stays independent of the underlying platform

In OAM, an application is built around three core concepts.

  • The first core concept is the Components that make up the application, which may include a collection of microservices, a database, and a cloud load balancer;
  • The second core concept is the collection of Traits that describe the application's operational characteristics, such as autoscaling and Ingress. They are crucial to running the application, but their implementations vary across environments;
  • Finally, to turn these descriptions into a concrete application, operators use an Application Configuration that combines the components with their traits to build a concrete, deployable instance of the application (see the sketch below).
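To make these three concepts concrete, here is a minimal hand-written sketch of how they map onto OAM v1alpha2 resources (the names frontend and my-app are made up for illustration; a real, generated example with the same shape appears later in this post):

apiVersion: core.oam.dev/v1alpha2
kind: Component                        # a component: what to run
metadata:
  name: frontend
spec:
  workload:                            # the workload this component runs
    apiVersion: apps/v1
    kind: Deployment
    spec: {}                           # container spec omitted for brevity
---
apiVersion: core.oam.dev/v1alpha2
kind: ApplicationConfiguration         # the application configuration: components + traits
metadata:
  name: my-app
spec:
  components:
  - componentName: frontend
    traits:
    - trait:                           # an operational trait attached by operators
        apiVersion: core.oam.dev/v1alpha2
        kind: ManualScalerTrait
        spec:
          replicaCount: 2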

For developers, KubeVela itself is an easy-to-use tool for describing an application and shipping it to Kubernetes with minimal effort. It integrates easily with any CI/CD pipeline by managing a single application-centric workflow: there is no need to maintain a handful of Kubernetes YAML files, just one simple, docker-compose-style Appfile.

For platform builders, KubeVela is a framework that makes it easy to create developer-facing, highly extensible platforms. In detail, KubeVela relieves the pain of building such platforms by:

  • Being application-centric. Behind the Appfile, KubeVela enforces an application concept as its main API, and all of KubeVela's capabilities serve only the application's needs. This is achieved by adopting the Open Application Model as KubeVela's core API.
  • Being natively extensible. An application in KubeVela is composed of various pluggable workload types and operational features (i.e. traits). Capabilities from the Kubernetes ecosystem can be added to KubeVela as new workload types or traits at any time through the Kubernetes CRD registration mechanism.
  • Providing simple but extensible abstractions. KubeVela's main user interfaces (i.e. the Appfile and the CLI) are built with a CUE-based abstraction engine that translates the user-facing schemas into the underlying Kubernetes resources. KubeVela ships with a set of built-in abstractions that platform builders are free to modify; abstraction changes take effect at runtime, with no need to recompile or redeploy KubeVela.

Architecture

The overall architecture of KubeVela is shown in the figure below:

[Figure: KubeVela overall architecture]

Architecturally, KubeVela is a single controller that runs on top of Kubernetes as an add-on. It brings an application-level abstraction to Kubernetes, together with a user-facing interface built on that abstraction: the Appfile. The core behind the Appfile, and behind how KubeVela works in general, is its capability management model, the Open Application Model (OAM). Based on this model, KubeVela gives system administrators a registration-and-discovery workflow for assembling capabilities, so that any capability from the Kubernetes ecosystem can be plugged into KubeVela; in this way, "one core framework plus different capabilities" adapts to all kinds of scenarios.
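To give a rough picture of that capability-assembly flow: in OAM v1alpha2, a workload type or trait is made available by creating a definition object that references an existing CRD or built-in resource. The sketch below is hand-written and simplified; as far as I understand, KubeVela's actual definitions additionally attach a CUE template that drives the Appfile abstraction, which is omitted here:

apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition               # registers a workload type capability
metadata:
  name: deployments.apps
spec:
  definitionRef:
    name: deployments.apps             # the resource that backs this workload type
---
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition                  # registers an operational trait capability
metadata:
  name: manualscalertraits.core.oam.dev
spec:
  appliesToWorkloads:                  # which workload types this trait may attach to
  - webservice
  definitionRef:
    name: manualscalertraits.core.oam.dev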

Concepts and terms

From the architecture diagram above we can see that KubeVela introduces a few terms of its own: application, service, workload type, and trait. The relationships among them are shown in the figure below:

[Figure: relationship between application, service, workload type, and trait]

workload type: declares the characteristics that the runtime infrastructure should take into account when managing the application. A workload type can be, for example, a "long-running service" or a "one-off task".

trait: defines the operational policies and configuration a component needs, for example environment variables, Ingress, AutoScaler, volumes, and so on.

Capability: workload types and traits are collectively referred to as capabilities in KubeVela; they are the units registered and discovered by the capability-assembly workflow described above.

Service: not to be confused with the Kubernetes Service object; here a service defines the runtime configuration (i.e. a workload type plus traits) needed to run the application in Kubernetes. A service is the descriptor of the basic deployable unit in KubeVela.

Application: an application is the collection of services deployed in Kubernetes; it describes everything the developer needs to define and is declared by the Appfile in KubeVela (named vela.yaml by default).
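Putting the four terms together: one Appfile describes one application; each entry under services is a service; each service runs as one workload type (webservice by default) and can carry any number of traits. A minimal hand-written sketch, reusing names from the example later in this post:

name: testapp              # application
services:
  express-server:          # service: the basic deployable unit
    type: webservice       # workload type (default when omitted)
    image: misterli/testapp:v1
    port: 8080
    scaler:                # trait: manual scaling
      replicas: 2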

Installation

Requirements:

  • Kubernetes cluster >= v1.15.0
  • kubectl installed and configured

Download KubeVela

Download via script

curl -fsSl https://kubevela.io/install.sh | bash

Download from GitHub

  • Download the latest vela binary from the releases page.
  • Unpack the vela binary and add it to your $PATH to get started.
$ sudo mv ./vela /usr/local/bin/vela

Initialize KubeVela

Run vela install to install the KubeVela server component and its dependencies.

The following dependency components will be installed alongside the Vela server component: the Prometheus stack, cert-manager, Flagger, and KEDA.

Note: if Prometheus Operator is already installed in the monitoring namespace of your Kubernetes cluster, the Prometheus stack installed by KubeVela will conflict with it.

The configuration is saved in the ConfigMap vela-config in the vela-system namespace.

[root@master-01 kubevela]# vela install
- Installing Vela Core Chart:
install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file
Successfully installed the chart, status: deployed, last deployed time = 2020-12-03 11:06:34.3800069 +0800 CST m=+6.951945903
Automatically discover capabilities successfully ✅ Add(8) Update(0) Delete(0)

TYPE CATEGORY DESCRIPTION
+task workload One-off task to run a piece of code or script to completion
+webservice workload Long-running scalable service with stable endpoint to receive external traffic
+worker workload Long-running scalable backend worker without network endpoint
+autoscale trait Automatically scale the app following certain triggers or metrics
+metrics trait Configure metrics targets to be monitored for the app
+rollout trait Configure canary deployment strategy to release the app
+route trait Configure route policy to the app
+scaler trait Manually scale the app

- Finished successfully.
[root@master-01 kubevela]# kubectl get pod -n vela-system
NAME READY STATUS RESTARTS AGE
flagger-7846864bbf-m6wxt 1/1 Running 0 50s
kubevela-vela-core-f8b987775-mdjqm 0/1 Running 0 65s
[root@master-01 kubevela]# kubectl get pod -n cert-manager
NAME READY STATUS RESTARTS AGE
cert-manager-79c5f9946-gx9s9 1/1 Running 0 70s
cert-manager-cainjector-76c9d55b6f-f478g 1/1 Running 0 70s
cert-manager-webhook-6d4c5c44bb-bw9vh 1/1 Running 0 70s
[root@master-01 kubevela]# kubectl get pod -n keda
NAME READY STATUS RESTARTS AGE
keda-operator-566d494bf-mqpn8 0/1 Running 0 68s
keda-operator-metrics-apiserver-698865dc8b-fg4gn 1/1 Running 0 68s

Uninstall

Run:

$ helm uninstall -n vela-system kubevela
$ rm -r ~/.vela

This uninstalls the KubeVela server component and its dependencies, and also cleans up the local CLI cache.

Then clean up the CRDs (by default the chart does not delete CRDs):

$ kubectl delete crd \
applicationconfigurations.core.oam.dev \
applicationdeployments.core.oam.dev \
autoscalers.standard.oam.dev \
certificaterequests.cert-manager.io \
certificates.cert-manager.io \
challenges.acme.cert-manager.io \
clusterissuers.cert-manager.io \
components.core.oam.dev \
containerizedworkloads.core.oam.dev \
healthscopes.core.oam.dev \
issuers.cert-manager.io \
manualscalertraits.core.oam.dev \
metricstraits.standard.oam.dev \
orders.acme.cert-manager.io \
podspecworkloads.standard.oam.dev \
routes.standard.oam.dev \
scopedefinitions.core.oam.dev \
servicemonitors.monitoring.coreos.com \
traitdefinitions.core.oam.dev \
workloaddefinitions.core.oam.dev

Deploy a service with vela

Download the official example files:

$ git clone https://github.com/oam-dev/kubevela.git
$ cd kubevela/docs/examples/testapp

The example contains the NodeJS application code and a Dockerfile used to build the application image.

Note: replace the user in image: misterli/testapp:v1 with your own user so that the image can be pushed.

[root@master-01 testapp]# ls 
Dockerfile package.json server.js vela.yaml
[root@master-01 testapp]# cat vela.yaml
name: testapp

services:
  express-server:
    # this image will be used in both build and deploy steps
    image: misterli/testapp:v1

    build:
      # Here more runtime specific build templates will be supported, like NodeJS, Go, Python, Ruby.
      docker:
        file: Dockerfile
        context: .

      # Uncomment the following to push to local kind cluster
      # push:
      #   local: kind

    # type: webservice (default) | worker | task

    cmd: ["node", "server.js"]
    port: 8080

    # scaler:
    #   replicas: 1

    # route:
    #   domain: example.com
    #   rules:
    #     - path: /testapp
    #       rewriteTarget: /

    # metrics:
    #   format: "prometheus"
    #   port: 8080
    #   path: "/metrics"
    #   scheme: "http"
    #   enabled: true

    # autoscale:
    #   min: 1
    #   max: 4
    #   cron:
    #     startAt: "14:00"
    #     duration: "2h"
    #     days: "Monday, Thursday"
    #     replicas: 2
    #     timezone: "America/Los_Angeles"

  # pi:
  #   image: perl
  #   cmd: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]

Deploy

[root@master-01 testapp]# vela  up 
Parsing vela.yaml ...
Loading templates ...

Building service (express-server)...
Sending build context to Docker daemon 7.68kB
Step 1/10 : FROM mhart/alpine-node:12
12: Pulling from mhart/alpine-node
31603596830f: Pulling fs layer
a1768851dab2: Pulling fs layer
31603596830f: Verifying Checksum
31603596830f: Download complete
31603596830f: Pull complete
a1768851dab2: Verifying Checksum
a1768851dab2: Download complete
a1768851dab2: Pull complete
Digest: sha256:31eebb77c7e3878c45419a69e5e7dddd376d685e064279e024e488076d97c7e4
Status: Downloaded newer image for mhart/alpine-node:12
---> b13e0277346d
Step 2/10 : WORKDIR /app
---> Running in ab10b920fb85
Removing intermediate container ab10b920fb85
---> 9f6c8afc0ac4
Step 3/10 : COPY package.json ./
---> a4432016a818
Step 4/10 : RUN npm install
---> Running in c13d25b9a074
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN docker_web_app@1.0.0 No repository field.
npm WARN docker_web_app@1.0.0 No license field.

added 50 packages from 37 contributors and audited 50 packages in 6.037s
found 0 vulnerabilities

Removing intermediate container c13d25b9a074
---> ba5e090aa522
Step 5/10 : RUN npm ci --prod
---> Running in f0fa46706fdc
npm WARN prepare removing existing node_modules/ before installation
added 50 packages in 0.3s
Removing intermediate container f0fa46706fdc
---> a9345a48a79a
Step 6/10 : FROM mhart/alpine-node:slim-12
slim-12: Pulling from mhart/alpine-node
31603596830f: Already exists
de802a068b6a: Pulling fs layer
de802a068b6a: Verifying Checksum
de802a068b6a: Download complete
de802a068b6a: Pull complete
Digest: sha256:12e59927fda21237348acf1a229ad09cf37fb232d251c3e54e1dac3ddac6feeb
Status: Downloaded newer image for mhart/alpine-node:slim-12
---> 6d25d4327eff
Step 7/10 : WORKDIR /app
---> Running in d541b38c1823
Removing intermediate container d541b38c1823
---> 1e0777fd03d8
Step 8/10 : COPY --from=0 /app .
---> abe26ca579ed
Step 9/10 : COPY . .
---> 6e9f13fd2777
Step 10/10 : CMD ["node", "server.js"]
---> Running in e2a66724e4f1
Removing intermediate container e2a66724e4f1
---> 533e1502cb2c
Successfully built 533e1502cb2c
Successfully tagged misterli/testapp:v1
pushing image (misterli/testapp:v1)...
The push refers to repository [docker.io/misterli/testapp]
c84892b4351c: Preparing
fac1f8a2295d: Preparing
5d57bb81c0cc: Preparing
2864da400028: Preparing
89ae5c4ee501: Preparing
89ae5c4ee501: Mounted from mhart/alpine-node
2864da400028: Mounted from mhart/alpine-node
5d57bb81c0cc: Pushed
c84892b4351c: Pushed
fac1f8a2295d: Pushed
v1: digest: sha256:6ac7865710892ddd57c0604d02560f1dd9bbf007b23fbacfa45fdbf718a41669 size: 1365

Rendering configs for service (express-server)...
Writing deploy config to (.vela/deploy.yaml)

Applying deploy configs ...
Checking if app has been deployed...
App has not been deployed, creating a new deployment...
App has been deployed 🚀🚀🚀
Port forward: vela port-forward testapp
SSH: vela exec testapp
Logging: vela logs testapp
App status: vela status testapp
Service status: vela status testapp --svc express-server

[root@master-01 testapp]# vela status testapp
About:

Name: testapp
Namespace: default
Created at: 2020-12-03 11:17:10.202380171 +0800 CST
Updated at: 2020-12-03 11:17:10.202380322 +0800 CST

Services:
- Name: express-server
Type: webservice
HEALTHY Ready:1/1
Traits:

Last Deployment:
Created at: 2020-12-03 11:17:10 +0800 CST
Updated at: 2020-12-03T11:17:10+08:00
[root@master-01 rabbitmq]# kubectl get pod
NAME READY STATUS RESTARTS AGE
busybox-deployment-7bfd6d554c-nqrln 1/1 Running 831 6d1h
busybox-deployment-7bfd6d554c-s6lrw 1/1 Running 831 6d1h
check-ecs-price-7cdc97b997-j9w9q 1/1 Running 0 7d
express-server-7b5d47c867-hcq99 1/1 Running 0 88s

We can see that vela up first builds and pushes the Docker image, then renders a .vela/deploy.yaml file from the contents of vela.yaml and applies it. Let's look at .vela/deploy.yaml:

[root@master-01 testapp]# cat .vela/deploy.yaml 
apiVersion: core.oam.dev/v1alpha2
kind: ApplicationConfiguration
metadata:
  creationTimestamp: null
  name: testapp
  namespace: default
spec:
  components:
  - componentName: express-server
    scopes:
    - scopeRef:
        apiVersion: core.oam.dev/v1alpha2
        kind: HealthScope
        name: testapp-default-health
    traits:
    - trait:
        apiVersion: core.oam.dev/v1alpha2
        kind: ManualScalerTrait
        metadata:
          labels:
            trait.oam.dev/type: scaler
        spec:
          replicaCount: 2
status:
  dependency: {}
  observedGeneration: 0

---
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
  creationTimestamp: null
  name: express-server
  namespace: default
spec:
  workload:
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        workload.oam.dev/type: webservice
    spec:
      selector:
        matchLabels:
          app.oam.dev/component: express-server
      template:
        metadata:
          labels:
            app.oam.dev/component: express-server
        spec:
          containers:
          - command:
            - node
            - server.js
            image: misterli/testapp:v1
            name: express-server
            ports:
            - containerPort: 8080
status:
  observedGeneration: 0

---
apiVersion: core.oam.dev/v1alpha2
kind: HealthScope
metadata:
  creationTimestamp: null
  name: testapp-default-health
  namespace: default
spec:
  workloadRefs: []
status:
  scopeHealthCondition:
    healthStatus: ""

Now edit vela.yaml: uncomment the following lines and change the replica count to 2.

scaler:
  replicas: 2
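For context, after the change the relevant part of vela.yaml looks roughly like this (the indentation matters: scaler sits under the express-server service; the rest of the file is unchanged):

services:
  express-server:
    image: misterli/testapp:v1
    cmd: ["node", "server.js"]
    port: 8080

    scaler:
      replicas: 2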

Run vela up again, and the number of pod replicas becomes 2:

[root@master-01 rabbitmq]# kubectl get pod 
NAME READY STATUS RESTARTS AGE
busybox-deployment-7bfd6d554c-nqrln 1/1 Running 832 6d1h
busybox-deployment-7bfd6d554c-s6lrw 1/1 Running 832 6d1h
check-ecs-price-7cdc97b997-j9w9q 1/1 Running 0 7d
express-server-7b5d47c867-g4jbh 1/1 Running 0 70s
express-server-7b5d47c867-hcq99 1/1 Running 0 5m33s

Note: deleting the service with kubectl (for example, deleting the Deployment) does not work here; vela will automatically recreate the Deployment. To delete the service, use vela delete APP_NAME.

# The wrong way to delete
[root@master-01 testapp]# kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
check-ecs-price 1/1 1 1 8d
express-server 1/1 1 1 3m45s
[root@master-01 testapp]# kubectl delete deployments.apps express-server
deployment.apps "express-server" deleted
[root@master-01 testapp]# kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
check-ecs-price 1/1 1 1 8d
express-server 0/1 1 0 1s

# The right way to delete
[root@master-01 testapp]# vela ls
SERVICE APP TYPE TRAITS STATUS CREATED-TIME
express-server testapp webservice metrics,scaler Deployed 2020-12-03 11:17:10 +0800 CST
[root@master-01 testapp]# vela delete testapp
Deleting Application "testapp"
delete apps succeed testapp from default
[root@master-01 testapp]# vela ls
SERVICE APP TYPE TRAITS STATUS CREATED-TIME
[root@master-01 testapp]# kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
check-ecs-price 1/1 1 1 8d
