misterli's Blog.

Welcome to misterli's Blog.

mizu: an API traffic viewer for Kubernetes

Introduction

A simple yet powerful API traffic viewer for Kubernetes that lets you see all API communication between pods, to help with debugging and troubleshooting. Think of it as a combination of TCPDump and Chrome Dev Tools.


Installation

macOS

curl -Lo mizu \
https://github.com/up9inc/mizu/releases/latest/download/mizu_darwin_amd64 \
&& chmod 755 mizu

Linux

curl -Lo mizu \
https://github.com/up9inc/mizu/releases/latest/download/mizu_linux_amd64 \
&& chmod 755 mizu

Usage

Prerequisites
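
(The rest of this post is truncated in this excerpt. As a hedged sketch of the basic workflow, following the project README at the time; the subcommand, flags, and default UI port may differ by release:)

# Tap all API traffic of pods whose names match a regex in a namespace
./mizu tap "catalogue.*" --namespace sock-shop
# mizu then serves a local web UI (http://localhost:8899 by default)
# showing live requests and responses between the tapped pods.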

A case where a Longhorn component restart left a PV unable to mount

After the Longhorn components in the cluster restarted abnormally, we found that a PV created with Longhorn could no longer be mounted. The error was as follows:

Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 99s default-scheduler Successfully assigned devops/nexus3-84c8b98cb-rshlv to node-02
Warning FailedMount 78s kubelet MountVolume.SetUp failed for volume "pvc-9784831a-3130-4377-9d44-7e7129473b90" : rpc error: code = Internal desc = 'fsck' found errors on device /dev/longhorn/pvc-9784831a-3130-4377-9d44-7e7129473b90 but could not correct them: fsck from util-linux 2.31.1
/dev/longhorn/pvc-9784831a-3130-4377-9d44-7e7129473b90 contains a file system with errors, check forced.
/dev/longhorn/pvc-9784831a-3130-4377-9d44-7e7129473b90: Inodes that were part of a corrupted orphan linked list found.

/dev/longhorn/pvc-9784831a-3130-4377-9d44-7e7129473b90: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.

The describe output tells us to run fsck, so we log in to the node hosting the PV and run it:

[root@node-02 e2fsprogs-1.45.6]# fsck.ext4 -cvf /dev/longhorn/pvc-9784831a-3130-4377-9d44-7e7129473b90 
e2fsck 1.42.9 (28-Dec-2013)
/dev/longhorn/pvc-9784831a-3130-4377-9d44-7e7129473b90 has unsupported feature(s): metadata_csum
e2fsck: Get a newer version of e2fsck!

e2fsck reports that it is too old and must be upgraded, so we upgrade e2fsck first:

[root@node-02 replicas]# wget https://distfiles.macports.org/e2fsprogs/e2fsprogs-1.45.6.tar.gz
--2021-09-16 11:51:48-- https://distfiles.macports.org/e2fsprogs/e2fsprogs-1.45.6.tar.gz
Resolving distfiles.macports.org (distfiles.macports.org)... 151.101.230.132
Connecting to distfiles.macports.org (distfiles.macports.org)|151.101.230.132|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7938544 (7.6M) [application/x-gzip]
Saving to: 'e2fsprogs-1.45.6.tar.gz'

100%[=======================================================================================================================================>] 7,938,544 747KB/s in 10s

2021-09-16 11:52:04 (747 KB/s) - 'e2fsprogs-1.45.6.tar.gz' saved [7938544/7938544]

[root@node-02 replicas]# tar -zxvf e2fsprogs-1.45.6.tar.gz
e2fsprogs-1.45.6/
e2fsprogs-1.45.6/.gitignore
e2fsprogs-1.45.6/.missing-copyright
e2fsprogs-1.45.6/.release-checklist
.......
[root@node-02 replicas]# cd e2fsprogs-1.45.6/
[root@node-02 e2fsprogs-1.45.6]# ./configure
Generating configuration file for e2fsprogs version 1.45.6
Release date is March, 2020
checking build system type... x86_64-pc-linux-gnu
checking host system type... x86_64-pc-linux-gnu
checking for gcc... gcc
checking whether the C compiler works... yes
.......
[root@node-02 e2fsprogs-1.45.6]# make
cd ./util ; make subst
make[1]: Entering directory '/var/lib/longhorn/replicas/e2fsprogs-1.45.6/util'
CREATE dirpaths.h
CC subst.c
LD subst
make[1]: Leaving directory '/var/lib/longhorn/replicas/e2fsprogs-1.45.6/util'
make[1]: Entering directory '/var/lib/longhorn/replicas/e2fsprogs-1.45.6'
make[1]: 'util/subst.conf' is up to date.
.......
[root@node-02 e2fsprogs-1.45.6]# ls
ABOUT-NLS asm_types.h config.status debian e2fsck include intl MCONFIG parse-types.log RELEASE-NOTES SUBMITTING-PATCHES wordwrap.pl
acinclude.m4 CleanSpec.mk configure debugfs e2fsprogs.lsm INSTALL lib MCONFIG.in po resize tests
aclocal.m4 config configure.ac depfix.sed e2fsprogs.spec INSTALL.elfbin Makefile misc public_config.h scrub util
Android.bp config.log contrib doc ext2ed install-utils Makefile.in NOTICE README SHLIBS version.h
[root@node-02 e2fsprogs-1.45.6]# cd e2fsck/
[root@node-02 e2fsck]# ls
Android.bp dx_dirinfo.c e2fsck.conf.5 ehandler.c flushb.c logfile.o mtrace.c pass2.c pass5.c quota.c region.c scantest.c unix.o
badblocks.c dx_dirinfo.o e2fsck.conf.5.in ehandler.o iscan.c Makefile mtrace.h pass2.o pass5.o quota.o region.o sigcatcher.c util.c
badblocks.o e2fsck e2fsck.h emptydir.c jfs_user.h Makefile.in pass1b.c pass3.c problem.c readahead.c rehash.c sigcatcher.o util.o
CHANGES e2fsck.8 e2fsck.o extend.c journal.c message.c pass1b.o pass3.o problem.h readahead.o rehash.o super.c
dirinfo.c e2fsck.8.in ea_refcount.c extents.c journal.o message.o pass1.c pass4.c problem.o recovery.c revoke.c super.o
dirinfo.o e2fsck.c ea_refcount.o extents.o logfile.c mtrace.awk pass1.o pass4.o problemP.h recovery.o revoke.o unix.c
[root@node-02 e2fsck]# ./e2fsck    # run the freshly built e2fsck to confirm its version
[root@node-02 e2fsck]# cp e2fsck /sbin    # replace the system e2fsck with the new build
cp: overwrite '/sbin/e2fsck'? y

Now we run the fsck repair again:

[root@node-02 e2fsck]# fsck.ext4 -cvf /dev/longhorn/pvc-9784831a-3130-4377-9d44-7e7129473b90 
e2fsck 1.45.6 (20-Mar-2020)
Checking for bad blocks (read-only test): done
/dev/longhorn/pvc-9784831a-3130-4377-9d44-7e7129473b90: Updating bad block inode.
Pass 1: Checking inodes, blocks, and sizes
Inodes that were part of a corrupted orphan linked list found. Fix<y>? yes
Inode 131102 was part of the orphaned inode list. FIXED.
Inode 131103 was part of the orphaned inode list. FIXED.
Inode 131104 was part of the orphaned inode list. FIXED.
Inode 131105 was part of the orphaned inode list. FIXED.
Inode 131106 was part of the orphaned inode list. FIXED.
Inode 131107 was part of the orphaned inode list. FIXED.
Inode 131117 was part of the orphaned inode list. FIXED.
Inode 131402 was part of the orphaned inode list. FIXED.
Inode 131412 was part of the orphaned inode list. FIXED.
Inode 131630 was part of the orphaned inode list. FIXED.
Inode 131638 was part of the orphaned inode list. FIXED.
Inode 131644 was part of the orphaned inode list. FIXED.
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Block bitmap differences: -(688640--690326)
Fix<y>? yes
Free blocks count wrong for group #21 (31069, counted=32756).
Fix<y>? yes
Free blocks count wrong (1227977, counted=1229664).
Fix<y>? yes
Inode bitmap differences: -(131101--131107) -131117 -131402 -131412 -131630 -131638 -131644
Fix<y>? yes
Free inodes count wrong for group #16 (7567, counted=7580).
Fix<y>? yes
Free inodes count wrong (325295, counted=325308).
Fix<y>? yes

/dev/longhorn/pvc-9784831a-3130-4377-9d44-7e7129473b90: ***** FILE SYSTEM WAS MODIFIED *****

2372 inodes used (0.72%, out of 327680)
182 non-contiguous files (7.7%)
1 non-contiguous directory (0.0%)
# of inodes with ind/dind/tind blocks: 0/0/0
Extent depth histogram: 2361/3
81056 blocks used (6.18%, out of 1310720)
0 bad blocks
1 large file

1600 regular files
763 directories
0 character device files
0 block device files
0 fifos
0 links
0 symbolic links (0 fast symbolic links)
0 sockets
------------
2363 files
[root@node-02 e2fsck]#

After the check completes, we describe the previously failing pod again and see the following:

Normal Scheduled 12m default-scheduler Successfully assigned devops/nexus3-5c9c5545d9-nmfjg to node-02
Normal SuccessfulAttachVolume 12m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-9784831a-3130-4377-9d44-7e7129473b90"
Warning FailedMount 3m46s (x12 over 12m) kubelet MountVolume.SetUp failed for volume "pvc-9784831a-3130-4377-9d44-7e7129473b90" : rpc error: code = Internal desc = 'fsck' found errors on device /dev/longhorn/pvc-9784831a-3130-4377-9d44-7e7129473b90 but could not correct them: fsck from util-linux 2.31.1
/dev/longhorn/pvc-9784831a-3130-4377-9d44-7e7129473b90 contains a file system with errors, check forced.
/dev/longhorn/pvc-9784831a-3130-4377-9d44-7e7129473b90: Inodes that were part of a corrupted orphan linked list found.

/dev/longhorn/pvc-9784831a-3130-4377-9d44-7e7129473b90: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
(i.e., without -a or -p options)
Warning FailedMount 3m33s kubelet Unable to attach or mount volumes: unmounted volumes=[nexus-data], unattached volumes=[default-token-dv7nx nexus-data]: timed out waiting for the condition
Warning FailedMount 104s kubelet MountVolume.SetUp failed for volume "pvc-9784831a-3130-4377-9d44-7e7129473b90" : rpc error: code = Internal desc = mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o defaults /dev/longhorn/pvc-9784831a-3130-4377-9d44-7e7129473b90 /var/lib/kubelet/pods/8268934a-f1d9-4c14-ad4a-276d6986cee8/volumes/kubernetes.io~csi/pvc-9784831a-3130-4377-9d44-7e7129473b90/mount
Output: mount: /var/lib/kubelet/pods/8268934a-f1d9-4c14-ad4a-276d6986cee8/volumes/kubernetes.io~csi/pvc-9784831a-3130-4377-9d44-7e7129473b90/mount: /dev/longhorn/pvc-9784831a-3130-4377-9d44-7e7129473b90 already mounted or mount point busy.
Warning FailedMount 78s (x4 over 10m) kubelet Unable to attach or mount volumes: unmounted volumes=[nexus-data], unattached volumes=[nexus-data default-token-dv7nx]: timed out waiting for the condition
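
The volume now fails with "already mounted or mount point busy". A hedged sketch of clearing the stale mount on the node so kubelet can retry; the device path comes from the events above, and the exact mount point must be read from the node's own mount table:

# List stale mounts of the repaired device on the node
mount | grep pvc-9784831a-3130-4377-9d44-7e7129473b90

# Unmount each stale mount point reported above, then let kubelet retry the mount
umount /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~csi/pvc-9784831a-3130-4377-9d44-7e7129473b90/mount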

Enforcing a standard commit message format in GitLab

We sometimes receive all sorts of oddly formatted commit messages from developers, which makes tracing code changes back to their source harder; our GitLab CI also inspects commit messages to decide build steps, so it is worth adding a custom commit message format check to GitLab.

Introduction

Git supports hooks that run on different operations. These hooks run on the server and can be used to enforce specific commit policies or perform other tasks based on the state of the repository.

Git supports the following hooks:

  • pre-receive
  • post-receive
  • update

Server-side Git hooks are configured directly on the GitLab server. Note that they must live on the GitLab server's file system.

Creating a server-side Git hook

If you are not using hashed storage, the project's repository directory should be as follows:
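
(The concrete path is truncated in this excerpt.) As a hedged sketch, a minimal pre-receive hook that rejects pushes whose commit subjects don't match a conventional-commit-style prefix could look like the following; the custom_hooks location follows GitLab's server hook documentation, and the regex is purely illustrative:

#!/bin/bash
# Save as custom_hooks/pre-receive inside the project's bare repository on the
# GitLab server and make it executable. pre-receive receives lines of
# "<old-sha> <new-sha> <ref>" on stdin for every pushed ref.
pattern='^(feat|fix|docs|style|refactor|test|chore)(\(.+\))?: .+'
zero="0000000000000000000000000000000000000000"

while read -r old new ref; do
  [ "$new" = "$zero" ] && continue        # branch deletion: nothing to check
  if [ "$old" = "$zero" ]; then
    range="$new --not --all"              # new branch: check only unseen commits
  else
    range="$old..$new"
  fi
  for commit in $(git rev-list $range); do
    subject=$(git log -1 --format=%s "$commit")
    if ! printf '%s' "$subject" | grep -Eq "$pattern"; then
      echo "GL-HOOK-ERR: commit $commit: subject '$subject' does not match '$pattern'" >&2
      exit 1
    fi
  done
done
exit 0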

Several ways to monitor MinIO

In the previous post we covered deploying MinIO tenants with the console and the kubectl minio plugin. Here we cover deploying a MinIO tenant from a config file, and how to monitor MinIO.

Deploying a MinIO tenant from a config file

First, let's look at the problems with deploying MinIO tenants via the console or the kubectl minio plugin:

  • The kubectl minio plugin cannot customize some of the advanced features of the MinIO tenant it installs.
  • Deploying through the console is tedious and does not fit the infrastructure-as-code or GitOps philosophy; if something goes wrong, it is hard to quickly rebuild an identical MinIO.

Creating from a config file avoids these problems: we can quickly create identical MinIO tenants in multiple namespaces or multiple clusters from the same file. (Under the hood, the kubectl minio plugin and the console ultimately create these same resources anyway.)

Below is an example config file. We first create a Secret holding the tenant's credentials, then reference that Secret in the Tenant definition.

Note: every tenant can enable its own console (on by default) and expose it externally. When we open a tenant's console from the operator console, no extra login is required; but if we expose the tenant console ourselves via NodePort or Ingress, we must log in with the accesskey and secretkey configured in the Secret here.

## Secret to be used as MinIO Root Credentials
apiVersion: v1
kind: Secret
metadata:
  name: minio-creds-secret
type: Opaque
data:
  ## Access Key for MinIO Tenant, base64 encoded (echo -n 'minio' | base64)
  accesskey: bWluaW8=
  ## Secret Key for MinIO Tenant, base64 encoded (echo -n 'minio123' | base64)
  secretkey: bWluaW8xMjM=
---
## MinIO Tenant Definition
apiVersion: minio.min.io/v2
kind: Tenant
metadata:
  name: minio-demo
  ## Optionally pass labels to be applied to the statefulset pods
  labels:
    app: minio
  ## Annotations for MinIO Tenant Pods
  annotations:
    prometheus.io/path: /minio/v2/metrics/cluster
    prometheus.io/port: "9000"
    prometheus.io/scrape: "true"

## If a scheduler is specified here, Tenant pods will be dispatched by specified scheduler.
## If not specified, the Tenant pods will be dispatched by default scheduler.
# scheduler:
#   name: my-custom-scheduler

spec:
  ## Registry location and Tag to download MinIO Server image
  image: minio/minio:RELEASE.2021-08-25T00-41-18Z
  imagePullPolicy: IfNotPresent

  ## Secret with credentials to be used by MinIO Tenant.
  ## Refers to the secret object created above.
  credsSecret:
    name: minio-creds-secret

  ## Specification for MinIO Pool(s) in this Tenant.
  pools:
    ## Servers specifies the number of MinIO Tenant Pods / Servers in this pool.
    ## For standalone mode, supply 1. For distributed mode, supply 4 or more.
    ## Note that the operator does not support upgrading from standalone to distributed mode.
    - servers: 1

      ## volumesPerServer specifies the number of volumes attached per MinIO Tenant Pod / Server.
      volumesPerServer: 4

      ## This VolumeClaimTemplate is used across all the volumes provisioned for MinIO Tenant in this Pool.
      volumeClaimTemplate:
        metadata:
          name: data
        spec:
          storageClassName: longhorn
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi

  ## Mount path where PV will be mounted inside container(s).
  mountPath: /export
  ## Sub path inside Mount path where MinIO stores data.
  # subPath: /data

  ## Use this field to provide a list of Secrets with external certificates. This can be used to configure
  ## TLS for MinIO Tenant pods. Create secrets as explained here:
  ## https://github.com/minio/minio/tree/master/docs/tls/kubernetes#2-create-kubernetes-secret
  # externalCertSecret:
  #   - name: tls-ssl-minio
  #     type: kubernetes.io/tls

  ## Enable automatic Kubernetes based certificate generation and signing as explained in
  ## https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster
  requestAutoCert: false

  ## Enable S3 specific features such as Bucket DNS which would allow `buckets` to be
  ## accessible as DNS entries of form `<bucketname>.minio.default.svc.cluster.local`
  s3:
    ## This feature is turned off by default
    bucketDNS: false

  ## This field is used only when "requestAutoCert" is set to true. Use this field to set CommonName
  ## for the auto-generated certificate. Internal DNS name for the pod will be used if CommonName is
  ## not provided. DNS name format is *.minio.default.svc.cluster.local
  certConfig:
    commonName: ""
    organizationName: []
    dnsNames: []

  ## PodManagement policy for MinIO Tenant Pods. Can be "OrderedReady" or "Parallel"
  ## Refer https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#pod-management-policy
  ## for details.
  podManagementPolicy: Parallel

Now create a MinIO tenant from this file:

[root@master-01 demo]# kubectl apply -f demo.yaml  -n test 
secret/minio-creds-secret created
tenant.minio.min.io/minio-demo created
[root@master-01 demo]# kubectl -n test get pod
NAME READY STATUS RESTARTS AGE
minio-demo-ss-0-0 0/1 Running 0 7s
[root@master-01 demo]# kubectl -n test get tenants
NAME STATE AGE
minio-demo Provisioning MinIO Statefulset 18s
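
Once the tenant leaves the Provisioning state, the operator exposes it through services in the tenant's namespace. A hedged sketch of checking on it; the service names ("minio" for S3 traffic, "<tenant>-console" for the console) follow the operator's conventions at the time and may differ by version:

# Watch the tenant until it is fully provisioned
kubectl -n test get tenants minio-demo
# List the services the operator created for the tenant
kubectl -n test get svc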

Deploying multi-tenant MinIO on Kubernetes

Overview


MinIO is a high-performance object storage solution with native support for Kubernetes deployments. It provides an API compatible with Amazon Web Services S3 and supports all core S3 features. MinIO is released under the GNU Affero General Public License v3.0.

What sets MinIO apart is that it was designed from the start as private/hybrid-cloud object storage. Because MinIO is built purely to serve objects, a single-layer architecture delivers all the necessary functionality. It is a cloud-native object server that is high-performance, scalable, and lightweight all at once.

Architecture


Features

Erasure coding

MinIO protects data with per-object inline erasure coding written in assembly code to deliver the highest possible performance. MinIO uses Reed-Solomon codes to stripe objects into data and parity blocks with a user-configurable level of redundancy. MinIO's erasure coding performs healing at the object level and can heal multiple objects independently.

At the maximum parity of N/2, MinIO's implementation guarantees uninterrupted read and write operations with only ((N/2)+1) operational drives in the deployment. For example, in a 12-drive setup, MinIO shards an object across 6 data and 6 parity drives and can still reliably write new objects or reconstruct existing ones with only 7 drives left in the deployment. For details see https://docs.min.io/minio/baremetal/concepts/erasure-coding.html
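
The arithmetic above is easy to check; a small sketch, purely illustrative:

# At maximum parity N/2, reads and writes need ((N/2)+1) operational drives
N=12
parity=$((N / 2))               # 6 parity blocks per object
min_drives=$((N / 2 + 1))       # 7 drives must stay online
echo "drives=$N parity=$parity minimum operational drives=$min_drives"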

Debugging with ephemeral containers in Kubernetes

Debugging pods that don't include tools like bash or sh is often painful. Kubernetes provides ephemeral containers that we can add to the pod we want to debug.

What are ephemeral containers?

Ephemeral containers differ from other containers in that they lack guarantees for resources or execution, and they are never automatically restarted, so they are not suitable for building applications. Ephemeral containers are described using the same ContainerSpec as regular containers, but many fields are incompatible or disallowed.

  • Ephemeral containers may not have port configuration, so fields such as ports, livenessProbe, and readinessProbe are disallowed.
  • Pod resource allocations are immutable, so the resources field is disallowed.
  • For a complete list of allowed fields, see the EphemeralContainer reference documentation.

Ephemeral containers are created using a special ephemeralcontainers handler in the API rather than by adding them directly to pod.spec, so you cannot add an ephemeral container with kubectl edit.

Like regular containers, an ephemeral container cannot be changed or removed once it has been added to a pod.

Using ephemeral containers requires the EphemeralContainers feature gate to be enabled, and kubectl v1.18 or later.

Enabling EphemeralContainers

Perform the following on the master nodes.

Modify the apiserver
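
(The excerpt truncates here.) As a hedged sketch for a kubeadm cluster; the manifest path assumes the standard kubeadm layout, and kubectl v1.18 shipped this as `kubectl alpha debug`:

# Enable the feature gate by editing the kube-apiserver static pod manifest;
# kubelet restarts the apiserver automatically when the file changes.
vim /etc/kubernetes/manifests/kube-apiserver.yaml
#   under spec.containers[0].command, add:
#     - --feature-gates=EphemeralContainers=true
# (the same gate typically also needs enabling on kubelets so nodes can run them)

# With the gate enabled, attach a throwaway debug container to a tool-less pod:
kubectl debug -it my-pod --image=busybox --target=my-container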

Five ways to use NFS as a storage volume in Kubernetes: the complete guide

We can mount NFS (Network File System) into a pod. Unlike emptyDir, which is erased when the pod is deleted, the contents of an nfs volume are preserved on pod deletion; the volume is merely unmounted. This means an nfs volume can be pre-populated with data, that data can be shared between pods, and an NFS volume can be mounted by multiple pods simultaneously.

Note: before using an NFS volume, you must run your own NFS server and export the target share.

Although NFS storage is not officially recommended as a PV backend, in practice we sometimes need NFS-backed volumes for various reasons.

Below are the ways to use NFS as a storage volume in Kubernetes:

  1. Use it directly in a deployment/statefulset
  2. Create an NFS persistent volume to back a PersistentVolumeClaim (see the sketch after the deployment example below)
  3. Provide a StorageClass with csi-driver-nfs
  4. Provide a StorageClass with NFS Subdir External Provisioner
  5. Provide a StorageClass with nfs-ganesha-server-and-external-provisioner

We have already set up an NFS server on 172.26.204.144, exporting the following directories:

[root@node-02 ~]# showmount -e 172.26.204.144
Export list for 172.26.204.144:
/opt/nfs-deployment 172.26.0.0/16
/opt/kubernetes-nfs 172.26.0.0/16

Using NFS directly in a deployment/statefulset

In the example below, nginx uses NFS as the storage volume persisting the /usr/share/nginx/html directory:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
      volumes:
        - name: data
          nfs:
            path: /opt/nfs-deployment
            server: 172.26.204.144
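
Method 2 in the list above backs a PersistentVolumeClaim with a statically created NFS PV. A hedged sketch; names, access mode, and the 1Gi size are illustrative, while the server and path reuse the export shown earlier:

# Static NFS PV plus a claim that binds to it (storageClassName "" avoids
# the default StorageClass so the claim binds to this exact PV)
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-demo
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /opt/nfs-deployment
    server: 172.26.204.144
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-demo
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
EOF
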
Three small tools for a deeper understanding of RBAC inside a Kubernetes cluster

Generating RBAC rules from Kubernetes audit logs

Overview

We often hit all kinds of permission problems when installing services on Kubernetes, and crafting a suitable role for a given user or serviceaccount can be a headache. Here we recommend audit2rbac, a tool that reads the Kubernetes audit log and generates the roles a specified user or serviceaccount actually needs.

audit2rbac downloads: https://github.com/liggitt/audit2rbac/releases

Prerequisites

1. The cluster must have audit logging enabled with JSON-formatted output; see https://kubernetes.io/docs/tasks/debug-application-cluster/audit/#advanced-audit

2. The Metadata audit level is recommended; it also keeps the log size down.

Usage

Audit logging is already enabled here; a short excerpt of the log looks like this:

{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"571b8d06-aa30-4aec-87cb-7bef2ef88d18","stage":"ResponseComplete","requestURI":"/apis/coordination.k8s.io/v1/namespaces/longhorn-system/leases/external-resizer-driver-longhorn-io","verb":"update","user":{"username":"system:serviceaccount:longhorn-system:longhorn-service-account","uid":"cdb0a05f-170d-4f02-aeec-88af904e68f7","groups":["system:serviceaccounts","system:serviceaccounts:longhorn-system","system:authenticated"]},"sourceIPs":["172.20.166.16"],"userAgent":"csi-resizer/v0.0.0 (linux/amd64) kubernetes/$Format","objectRef":{"resource":"leases","namespace":"longhorn-system","name":"external-resizer-driver-longhorn-io","uid":"81766194-e2e3-4edd-83d7-788a07562b91","apiGroup":"coordination.k8s.io","apiVersion":"v1","resourceVersion":"18772044"},"responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2021-01-06T03:02:52.709670Z","stageTimestamp":"2021-01-06T03:02:52.710917Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"longhorn-bind\" of ClusterRole \"longhorn-role\" to ServiceAccount \"longhorn-service-account/longhorn-system\""}}
{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"0795fecc-38ea-46d7-a27d-6e73e6a27cd8","stage":"ResponseComplete","requestURI":"/apis/coordination.k8s.io/v1/namespaces/longhorn-system/leases/driver-longhorn-io","verb":"get","user":{"username":"system:serviceaccount:longhorn-system:longhorn-service-account","uid":"cdb0a05f-170d-4f02-aeec-88af904e68f7","groups":["system:serviceaccounts","system:serviceaccounts:longhorn-system","system:authenticated"]},"sourceIPs":["172.20.166.16"],"userAgent":"csi-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format","objectRef":{"resource":"leases","namespace":"longhorn-system","name":"driver-longhorn-io","apiGroup":"coordination.k8s.io","apiVersion":"v1"},"responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2021-01-06T03:02:52.713255Z","stageTimestamp":"2021-01-06T03:02:52.713894Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"longhorn-bind\" of ClusterRole \"longhorn-role\" to ServiceAccount \"longhorn-service-account/longhorn-system\""}}
Using Permission Manager to dynamically create users and kubeconfigs for a Kubernetes cluster

Overview

Permission Manager is a simple and convenient RBAC management UI tool. Through a web interface it supports creating users, assigning namespace permissions, and generating kubeconfig files.

Project repository: https://github.com/sighupio/permission-manager.git

Installation

[root@master-01 k8s]# git clone https://github.com/sighupio/permission-manager.git 
Cloning into 'permission-manager'...
remote: Enumerating objects: 2350, done.
remote: Counting objects: 100% (593/593), done.
remote: Compressing objects: 100% (395/395), done.
remote: Total 2350 (delta 388), reused 349 (delta 189), pack-reused 1757
Receiving objects: 100% (2350/2350), 10.79 MiB | 3.57 MiB/s, done.
Resolving deltas: 100% (1427/1427), done.
[root@master-01 k8s]# cd permission-manager/
[root@master-01 permission-manager]# ls
cmd development Dockerfile e2e-test go.sum internal Makefile reltag.sh tests
deployments development-compose.yml docs go.mod helm_chart LICENSE.md README.md statik web-client
## the deployment manifests live under deployments/kubernetes
[root@master-01 permission-manager]# ls deployments/kubernetes/
deploy.yml seeds
[root@master-01 permission-manager]# ls deployments/kubernetes/seeds/
crd.yml seed.yml


Create the namespace

kubectl create namespace permission-manager

Create a Secret to store some configuration

---
apiVersion: v1
kind: Secret
metadata:
  name: permission-manager
  namespace: permission-manager
type: Opaque
stringData:
  PORT: "4000" # port where server is exposed
  CLUSTER_NAME: "my-cluster" # name of the cluster to use in the generated kubeconfig file
  CONTROL_PLANE_ADDRESS: "https://apiserver.cluster.local:6443" # full address of the control plane to use in the generated kubeconfig file
  BASIC_AUTH_PASSWORD: "changeMe" # password used by basic auth (username is `admin`)

The parameters are explained by the inline comments above.
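
With the namespace and Secret in place, a hedged sketch of finishing the install with the manifests listed earlier; the service name and port below are assumptions based on the repo layout and the PORT value in the Secret:

# Apply the seed resources (CRD + seed) and the main deployment
kubectl apply -f deployments/kubernetes/seeds/
kubectl apply -f deployments/kubernetes/deploy.yml

# Reach the web UI locally (log in as `admin` with BASIC_AUTH_PASSWORD)
kubectl -n permission-manager port-forward svc/permission-manager 4000:4000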

Simplifying authorization in Kubernetes with RBAC Manager

Overview

RBAC Manager is an open-source project from Fairwinds that aims to simplify authorization in Kubernetes. It introduces new custom resources (CRDs) that support declarative RBAC configuration: we specify the desired state instead of managing role bindings and service accounts directly, and RBAC Manager makes whatever changes are needed to reach that state.

The project has three main goals:

  1. Provide a more understandable and scalable declarative approach to RBAC.
  2. Reduce the amount of configuration required for authorization.
  3. Automate RBAC configuration updates with CI/CD.
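
As an illustration of the declarative approach, a hedged sketch of an RBACDefinition; the apiVersion and field names follow the project's README, while the user and namespace are illustrative:

# Grant jane@example.com the built-in "edit" ClusterRole, scoped to one namespace
kubectl apply -f - <<EOF
apiVersion: rbacmanager.reactiveops.io/v1beta1
kind: RBACDefinition
metadata:
  name: example-rbac
rbacBindings:
  - name: web-developers
    subjects:
      - kind: User
        name: jane@example.com
    roleBindings:
      - clusterRole: edit
        namespace: web
EOF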

Installation

We can install it either with helm or directly from the yaml manifests.

Install with helm

helm repo add fairwinds-stable https://charts.fairwinds.com/stable
helm install fairwinds-stable/rbac-manager --name rbac-manager --namespace rbac-manager

Install from the yaml manifests

[root@master-01 deploy]# git clone https://github.com/FairwindsOps/rbac-manager.git
[root@master-01 deploy]# cd rbac-manager/deploy
# first, look at what files are in the deploy directory
[root@master-01 deploy]# ls
0_namespace.yaml 1_rbac.yaml 2_crd.yaml 3_deployment.yaml
## deploy everything
[root@master-01 deploy]# kubectl apply -f ./
namespace/rbac-manager created
serviceaccount/rbac-manager created
clusterrole.rbac.authorization.k8s.io/rbac-manager created
clusterrolebinding.rbac.authorization.k8s.io/rbac-manager created
customresourcedefinition.apiextensions.k8s.io/rbacdefinitions.rbacmanager.reactiveops.io created
deployment.apps/rbac-manager created
[root@master-01 deploy]# kubectl -n rbac-manager get pod
NAME READY STATUS RESTARTS AGE
rbac-manager-664c9df47f-sjwwh 1/1 Running 0 48s