We run an Alibaba Cloud ACK Pro managed cluster, which enables collection of the ingress-nginx logs by default at cluster creation. The development and product teams raised a requirement: when troubleshooting, they need to see the request body and a few custom fields from the request headers. The default nginx-ingress log format does not include this information, so we need to change what the logs emit, while also keeping log collection and display working correctly.
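For orientation, ingress-nginx reads its access-log format from the log-format-upstream key of the controller's ConfigMap. The snippet below is a minimal sketch, not our final config: it assumes the ConfigMap is named nginx-configuration in kube-system (check your ACK deployment for the actual name), and X-Trace-Id is a hypothetical custom header read via the $http_x_trace_id variable; $request_body carries the request body for proxied requests. Whatever format you settle on has to stay in sync with the log collector's parsing config, which we deal with later.

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # assumed name; match your ingress-nginx deployment
  namespace: kube-system
data:
  ## default upstream format, extended with $request_body and a custom header at the end
  log-format-upstream: '$remote_addr - [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] $upstream_addr $upstream_status $req_id $request_body "$http_x_trace_id"'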
Available Commands:
  allow       Allow the access to the resources
  completion  Generate the autocompletion script for the specified shell
  delete      Delete the kubernetes resources that were made specific allow command
  help        Help about any command
  list        List the number of times we ran the allow command
  version     Print the version of akcess

Flags:
  -h, --help   help for akcess

Use "akcess [command] --help" for more information about a command.
Events:
  Type     Reason       Age   From               Message
  ----     ------       ----  ----               -------
  Normal   Scheduled    99s   default-scheduler  Successfully assigned devops/nexus3-84c8b98cb-rshlv to node-02
  Warning  FailedMount  78s   kubelet            MountVolume.SetUp failed for volume "pvc-9784831a-3130-4377-9d44-7e7129473b90" : rpc error: code = Internal desc = 'fsck' found errors on device /dev/longhorn/pvc-9784831a-3130-4377-9d44-7e7129473b90 but could not correct them: fsck from util-linux 2.31.1
/dev/longhorn/pvc-9784831a-3130-4377-9d44-7e7129473b90 contains a file system with errors, check forced.
/dev/longhorn/pvc-9784831a-3130-4377-9d44-7e7129473b90: Inodes that were part of a corrupted orphan linked list found.
/dev/longhorn/pvc-9784831a-3130-4377-9d44-7e7129473b90: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
The describe output tells us to run fsck manually, so we log on to the node hosting the PV and run it:
[root@node-02 e2fsprogs-1.45.6]# fsck.ext4 -cvf /dev/longhorn/pvc-9784831a-3130-4377-9d44-7e7129473b90
e2fsck 1.42.9 (28-Dec-2013)
/dev/longhorn/pvc-9784831a-3130-4377-9d44-7e7129473b90 has unsupported feature(s): metadata_csum
e2fsck: Get a newer version of e2fsck!
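The node's stock e2fsck (1.42.9) predates the metadata_csum feature, so a newer e2fsprogs is needed first. A rough sketch of building 1.45.6 from source, assuming the usual kernel.org mirror layout (adjust the URL and paths to your environment):

wget https://mirrors.edge.kernel.org/pub/linux/kernel/people/tytso/e2fsprogs/v1.45.6/e2fsprogs-1.45.6.tar.gz
tar xzf e2fsprogs-1.45.6.tar.gz && cd e2fsprogs-1.45.6
./configure
make && make install    # installs the newer e2fsck / fsck.ext4 binaries

With 1.45.6 in place, the same check runs through and repairs the filesystem: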
[root@node-02 e2fsck]# fsck.ext4 -cvf /dev/longhorn/pvc-9784831a-3130-4377-9d44-7e7129473b90
e2fsck 1.45.6 (20-Mar-2020)
Checking for bad blocks (read-only test): done
/dev/longhorn/pvc-9784831a-3130-4377-9d44-7e7129473b90: Updating bad block inode.
Pass 1: Checking inodes, blocks, and sizes
Inodes that were part of a corrupted orphan linked list found.  Fix<y>? yes
Inode 131102 was part of the orphaned inode list.  FIXED.
Inode 131103 was part of the orphaned inode list.  FIXED.
Inode 131104 was part of the orphaned inode list.  FIXED.
Inode 131105 was part of the orphaned inode list.  FIXED.
Inode 131106 was part of the orphaned inode list.  FIXED.
Inode 131107 was part of the orphaned inode list.  FIXED.
Inode 131117 was part of the orphaned inode list.  FIXED.
Inode 131402 was part of the orphaned inode list.  FIXED.
Inode 131412 was part of the orphaned inode list.  FIXED.
Inode 131630 was part of the orphaned inode list.  FIXED.
Inode 131638 was part of the orphaned inode list.  FIXED.
Inode 131644 was part of the orphaned inode list.  FIXED.
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Block bitmap differences:  -(688640--690326)
Fix<y>? yes
Free blocks count wrong for group #21 (31069, counted=32756).
Fix<y>? yes
Free blocks count wrong (1227977, counted=1229664).
Fix<y>? yes
Inode bitmap differences:  -(131101--131107) -131117 -131402 -131412 -131630 -131638 -131644
Fix<y>? yes
Free inodes count wrong for group #16 (7567, counted=7580).
Fix<y>? yes
Free inodes count wrong (325295, counted=325308).
Fix<y>? yes
        2372 inodes used (0.72%, out of 327680)
         182 non-contiguous files (7.7%)
           1 non-contiguous directory (0.0%)
             # of inodes with ind/dind/tind blocks: 0/0/0
             Extent depth histogram: 2361/3
       81056 blocks used (6.18%, out of 1310720)
           0 bad blocks
           1 large file
After the repair we check the pod's events again: the earlier fsck failures stop, but the mount now fails with "already mounted or mount point busy":

  Normal   Scheduled               12m                   default-scheduler        Successfully assigned devops/nexus3-5c9c5545d9-nmfjg to node-02
  Normal   SuccessfulAttachVolume  12m                   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-9784831a-3130-4377-9d44-7e7129473b90"
  Warning  FailedMount             3m46s (x12 over 12m)  kubelet                  MountVolume.SetUp failed for volume "pvc-9784831a-3130-4377-9d44-7e7129473b90" : rpc error: code = Internal desc = 'fsck' found errors on device /dev/longhorn/pvc-9784831a-3130-4377-9d44-7e7129473b90 but could not correct them: fsck from util-linux 2.31.1
/dev/longhorn/pvc-9784831a-3130-4377-9d44-7e7129473b90 contains a file system with errors, check forced.
/dev/longhorn/pvc-9784831a-3130-4377-9d44-7e7129473b90: Inodes that were part of a corrupted orphan linked list found.
/dev/longhorn/pvc-9784831a-3130-4377-9d44-7e7129473b90: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY. (i.e., without -a or -p options)
  Warning  FailedMount             3m33s                 kubelet                  Unable to attach or mount volumes: unmounted volumes=[nexus-data], unattached volumes=[default-token-dv7nx nexus-data]: timed out waiting for the condition
  Warning  FailedMount             104s                  kubelet                  MountVolume.SetUp failed for volume "pvc-9784831a-3130-4377-9d44-7e7129473b90" : rpc error: code = Internal desc = mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o defaults /dev/longhorn/pvc-9784831a-3130-4377-9d44-7e7129473b90 /var/lib/kubelet/pods/8268934a-f1d9-4c14-ad4a-276d6986cee8/volumes/kubernetes.io~csi/pvc-9784831a-3130-4377-9d44-7e7129473b90/mount
Output: mount: /var/lib/kubelet/pods/8268934a-f1d9-4c14-ad4a-276d6986cee8/volumes/kubernetes.io~csi/pvc-9784831a-3130-4377-9d44-7e7129473b90/mount: /dev/longhorn/pvc-9784831a-3130-4377-9d44-7e7129473b90 already mounted or mount point busy.
  Warning  FailedMount             78s (x4 over 10m)     kubelet                  Unable to attach or mount volumes: unmounted volumes=[nexus-data], unattached volumes=[nexus-data default-token-dv7nx]: timed out waiting for the condition
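A hedged next step, not from the original writeup: "already mounted or mount point busy" usually points at a stale mount left behind on the node, so it is worth checking on node-02 and, if one shows up, clearing it by hand using the mount path from the event above:

mount | grep pvc-9784831a-3130-4377-9d44-7e7129473b90
# if a stale entry is listed, make sure nothing is still using it, then:
umount /var/lib/kubelet/pods/8268934a-f1d9-4c14-ad4a-276d6986cee8/volumes/kubernetes.io~csi/pvc-9784831a-3130-4377-9d44-7e7129473b90/mount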
The tenant definition, saved as demo.yaml:

## Secret to be used as MinIO Root Credentials
apiVersion: v1
kind: Secret
metadata:
  name: minio-creds-secret
type: Opaque
data:
  ## Access Key for MinIO Tenant, base64 encoded (echo -n 'minio' | base64)
  accesskey: bWluaW8=
  ## Secret Key for MinIO Tenant, base64 encoded (echo -n 'minio123' | base64)
  secretkey: bWluaW8xMjM=
---
## MinIO Tenant Definition
apiVersion: minio.min.io/v2
kind: Tenant
metadata:
  name: minio-demo
  ## Optionally pass labels to be applied to the statefulset pods
  labels:
    app: minio
  ## Annotations for MinIO Tenant Pods
  annotations:
    prometheus.io/path: /minio/v2/metrics/cluster
    prometheus.io/port: "9000"
    prometheus.io/scrape: "true"

## If a scheduler is specified here, Tenant pods will be dispatched by specified scheduler.
## If not specified, the Tenant pods will be dispatched by default scheduler.
# scheduler:
#  name: my-custom-scheduler

spec:
  ## Registry location and Tag to download MinIO Server image
  image: minio/minio:RELEASE.2021-08-25T00-41-18Z
  imagePullPolicy: IfNotPresent

  ## Secret with credentials to be used by MinIO Tenant.
  ## Refers to the secret object created above.
  credsSecret:
    name: minio-creds-secret

  ## Specification for MinIO Pool(s) in this Tenant.
  pools:
    ## Servers specifies the number of MinIO Tenant Pods / Servers in this pool.
    ## For standalone mode, supply 1. For distributed mode, supply 4 or more.
    ## Note that the operator does not support upgrading from standalone to distributed mode.
    - servers: 1

      ## volumesPerServer specifies the number of volumes attached per MinIO Tenant Pod / Server.
      volumesPerServer: 4

      ## This VolumeClaimTemplate is used across all the volumes provisioned for MinIO Tenant in this Pool.
      volumeClaimTemplate:
        metadata:
          name: data
        spec:
          storageClassName: longhorn
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi

  ## Mount path where PV will be mounted inside container(s).
  mountPath: /export
  ## Sub path inside Mount path where MinIO stores data.
  # subPath: /data

  ## Use this field to provide a list of Secrets with external certificates. This can be used to configure
  ## TLS for MinIO Tenant pods. Create secrets as explained here:
  ## https://github.com/minio/minio/tree/master/docs/tls/kubernetes#2-create-kubernetes-secret
  # externalCertSecret:
  #  - name: tls-ssl-minio
  #    type: kubernetes.io/tls

  ## Enable automatic Kubernetes based certificate generation and signing as explained in
  ## https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster
  requestAutoCert: false

  ## Enable S3 specific features such as Bucket DNS which would allow `buckets` to be
  ## accessible as DNS entries of form `<bucketname>.minio.default.svc.cluster.local`
  s3:
    ## This feature is turned off by default
    bucketDNS: false

  ## This field is used only when "requestAutoCert" is set to true. Use this field to set CommonName
  ## for the auto-generated certificate. Internal DNS name for the pod will be used if CommonName is
  ## not provided. DNS name format is *.minio.default.svc.cluster.local
  certConfig:
    commonName: ""
    organizationName: []
    dnsNames: []

  ## PodManagement policy for MinIO Tenant Pods. Can be "OrderedReady" or "Parallel"
  ## Refer https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#pod-management-policy
  ## for details.
  podManagementPolicy: Parallel
We now use this file to create a MinIO tenant:
[root@master-01 demo]# kubectl apply -f demo.yaml -n test
secret/minio-creds-secret created
tenant.minio.min.io/minio-demo created
[root@master-01 demo]# kubectl -n test get pod
NAME                READY   STATUS    RESTARTS   AGE
minio-demo-ss-0-0   0/1     Running   0          7s
[root@master-01 demo]# kubectl -n test get tenants
NAME         STATE                            AGE
minio-demo   Provisioning MinIO Statefulset   18s
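While the tenant is provisioning, a couple of follow-up checks can confirm the pieces landed (a sketch; exact resource names depend on the operator version):

[root@master-01 demo]# kubectl -n test get pvc    # expect 4 Longhorn PVCs: servers (1) x volumesPerServer (4)
[root@master-01 demo]# kubectl -n test get svc    # the service fronting the tenant's S3 endpoint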