The previous post covered the design principles of CSI plugins. This one walks through implementing a CSI driver of your own, using hands-on practice to deepen the understanding of how CSI plugins work.

Before we start, a word about my Kubernetes test environment: one physical machine runs several KVM virtual machines, and a Kubernetes cluster runs inside those VMs.

What does our CSI plugin need to do?

Let the Kubernetes cluster above use KVM-provided virtual disks as persistent storage for containers.

Let's build the plugin.

The plugin is developed against kubernetes-1.15.5.

Directory layout

First, create the directory structure:

➜  ~ mkdir -p {cmd,bin,pkg/kvm}  deploy/{kubernetes,examples}

Next we implement the two plugin programs defined by the CSI specification:

  • Node Plugin
  • Controller Plugin

The spec describes them as two separate programs, but for simplicity it is common practice to implement all of the spec's gRPC services in a single binary. There are three in total:

  • Identity Service
  • Controller Service
  • Node Service

As before, we create the files first and fill in the logic afterwards.

touch pkg/kvm/{controllerserver.go,indentityserver.go,nodeserver.go}

Let's also create the program entry point and the driver initialization file:

touch cmd/main.go pkg/kvm/driver.go

The directory tree now looks like this:

.
├── pkg
│   └── kvm
│       ├── nodeserver.go
│       ├── indentityserver.go
│       ├── driver.go
│       └── controllerserver.go
├── deploy
│   ├── kubernetes
│   └── examples
├── cmd
│   └── main.go
└── bin

Coding

Now for the code. Following a "make it work first, make it good later" approach, we start with the bare skeleton: just enough logic to get the program running quickly. Once it integrates successfully with Kubernetes, we will come back and flesh out the functionality.

So at this stage we only implement the interfaces; the functions do nothing yet, and wherever the CSI specification requires a response we return placeholder data.

indentityserver.go

pkg/kvm/indentityserver.go

// Both the Node Plugin and the Controller Plugin need this service
package kvm

import (
	"github.com/container-storage-interface/spec/lib/go/csi"
	"golang.org/x/net/context"
	"k8s.io/klog"
)

type IdentityServer struct{}

func NewIdentityServer() *IdentityServer {
	return &IdentityServer{}
}

// GetPluginInfo returns information about the plugin
func (ids *IdentityServer) GetPluginInfo(ctx context.Context, req *csi.GetPluginInfoRequest) (*csi.GetPluginInfoResponse, error) {
	klog.V(4).Infof("GetPluginInfo: called with args %+v", *req)

	return &csi.GetPluginInfoResponse{
		Name:          driverName,
		VendorVersion: version,
	}, nil
}

// GetPluginCapabilities returns the capabilities the plugin supports
func (ids *IdentityServer) GetPluginCapabilities(ctx context.Context, req *csi.GetPluginCapabilitiesRequest) (*csi.GetPluginCapabilitiesResponse, error) {
	klog.V(4).Infof("GetPluginCapabilities: called with args %+v", *req)
	resp := &csi.GetPluginCapabilitiesResponse{
		Capabilities: []*csi.PluginCapability{
			{
				Type: &csi.PluginCapability_Service_{
					Service: &csi.PluginCapability_Service{
						Type: csi.PluginCapability_Service_CONTROLLER_SERVICE,
					},
				},
			},
			{
				Type: &csi.PluginCapability_Service_{
					Service: &csi.PluginCapability_Service{
						Type: csi.PluginCapability_Service_VOLUME_ACCESSIBILITY_CONSTRAINTS,
					},
				},
			},
		},
	}

	return resp, nil
}

// Probe is the plugin's health check
func (ids *IdentityServer) Probe(ctx context.Context, req *csi.ProbeRequest) (*csi.ProbeResponse, error) {
	klog.V(4).Infof("Probe: called with args %+v", *req)
	return &csi.ProbeResponse{}, nil
}

controllerserver.go

pkg/kvm/controllerserver.go

package kvm

import (
	"github.com/container-storage-interface/spec/lib/go/csi"
	"golang.org/x/net/context"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	"k8s.io/klog"
)

var (
	// controllerCaps lists the capabilities the Controller Plugin supports; the full set is at https://github.com/container-storage-interface/spec/blob/4731db0e0bc53238b93850f43ab05d9355df0fd9/lib/go/csi/csi.pb.go#L181:6
	// We only implement volume create/delete and attach/detach here
	controllerCaps = []csi.ControllerServiceCapability_RPC_Type{
		csi.ControllerServiceCapability_RPC_CREATE_DELETE_VOLUME,
		csi.ControllerServiceCapability_RPC_PUBLISH_UNPUBLISH_VOLUME,
	}
)

type ControllerServer struct{}

func NewControllerServer() *ControllerServer {
	return &ControllerServer{}
}

// ControllerGetCapabilities returns the capabilities the Controller Plugin supports
func (cs *ControllerServer) ControllerGetCapabilities(ctx context.Context, req *csi.ControllerGetCapabilitiesRequest) (*csi.ControllerGetCapabilitiesResponse, error) {
	klog.V(4).Infof("ControllerGetCapabilities: called with args %+v", *req)

	var caps []*csi.ControllerServiceCapability
	for _, cap := range controllerCaps {
		c := &csi.ControllerServiceCapability{
			Type: &csi.ControllerServiceCapability_Rpc{
				Rpc: &csi.ControllerServiceCapability_RPC{
					Type: cap,
				},
			},
		}
		caps = append(caps, c)
	}
	return &csi.ControllerGetCapabilitiesResponse{Capabilities: caps}, nil
}

// CreateVolume creates a volume
func (cs *ControllerServer) CreateVolume(ctx context.Context, req *csi.CreateVolumeRequest) (*csi.CreateVolumeResponse, error) {
	klog.V(4).Infof("CreateVolume: called with args %+v", *req)

	// Return fake data for now, pretending we created a 20GiB disk with id "qcow-1234567"
	return &csi.CreateVolumeResponse{
		Volume: &csi.Volume{
			VolumeId:      "qcow-1234567",
			CapacityBytes: 20 * (1 << 30),
			VolumeContext: req.GetParameters(),
		},
	}, nil
}

// DeleteVolume deletes a volume
func (cs *ControllerServer) DeleteVolume(ctx context.Context, req *csi.DeleteVolumeRequest) (*csi.DeleteVolumeResponse, error) {
	klog.V(4).Infof("DeleteVolume: called with args: %+v", *req)
	return &csi.DeleteVolumeResponse{}, nil
}

// ControllerPublishVolume attaches a volume to a node
func (cs *ControllerServer) ControllerPublishVolume(ctx context.Context, req *csi.ControllerPublishVolumeRequest) (*csi.ControllerPublishVolumeResponse, error) {
	klog.V(4).Infof("ControllerPublishVolume: called with args %+v", *req)
	pvInfo := map[string]string{DevicePathKey: "/dev/sdb"}
	return &csi.ControllerPublishVolumeResponse{PublishContext: pvInfo}, nil
}

// ControllerUnpublishVolume detaches a volume from a node
func (cs *ControllerServer) ControllerUnpublishVolume(ctx context.Context, req *csi.ControllerUnpublishVolumeRequest) (*csi.ControllerUnpublishVolumeResponse, error) {
	klog.V(4).Infof("ControllerUnpublishVolume: called with args %+v", *req)
	return &csi.ControllerUnpublishVolumeResponse{}, nil
}

// TODO(xnile): implement this
func (cs *ControllerServer) ValidateVolumeCapabilities(ctx context.Context, req *csi.ValidateVolumeCapabilitiesRequest) (*csi.ValidateVolumeCapabilitiesResponse, error) {
	return nil, status.Error(codes.Unimplemented, "")
}

func (cs *ControllerServer) ListVolumes(ctx context.Context, req *csi.ListVolumesRequest) (*csi.ListVolumesResponse, error) {
	return nil, status.Error(codes.Unimplemented, "")
}

func (cs *ControllerServer) GetCapacity(ctx context.Context, req *csi.GetCapacityRequest) (*csi.GetCapacityResponse, error) {
	return nil, status.Error(codes.Unimplemented, "")
}

func (cs *ControllerServer) CreateSnapshot(ctx context.Context, req *csi.CreateSnapshotRequest) (*csi.CreateSnapshotResponse, error) {
	return nil, status.Error(codes.Unimplemented, "")
}

func (cs *ControllerServer) DeleteSnapshot(ctx context.Context, req *csi.DeleteSnapshotRequest) (*csi.DeleteSnapshotResponse, error) {
	return nil, status.Error(codes.Unimplemented, "")
}

func (cs *ControllerServer) ListSnapshots(ctx context.Context, req *csi.ListSnapshotsRequest) (*csi.ListSnapshotsResponse, error) {
	return nil, status.Error(codes.Unimplemented, "")
}

func (cs *ControllerServer) ControllerExpandVolume(ctx context.Context, req *csi.ControllerExpandVolumeRequest) (*csi.ControllerExpandVolumeResponse, error) {
	return nil, status.Error(codes.Unimplemented, "")
}
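The CreateVolume stub above ignores the requested capacity_range and always reports 20GiB. A real implementation would typically round required_bytes up to whole GiB before provisioning the backing disk; here is a minimal, self-contained sketch of that rounding (the helper name roundUpGiB is ours, not part of the CSI spec):

```go
package main

import "fmt"

const giB = 1 << 30

// roundUpGiB rounds a byte count up to the nearest whole GiB, mirroring
// what CreateVolume would do with capacity_range.required_bytes before
// asking the storage backend for a disk.
func roundUpGiB(bytes int64) int64 {
	return (bytes + giB - 1) / giB
}

func main() {
	fmt.Println(roundUpGiB(20 * giB))   // exactly 20GiB -> 20
	fmt.Println(roundUpGiB(20*giB + 1)) // one byte over -> 21
}
```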

nodeserver.go

pkg/kvm/nodeserver.go

// Node plugin
package kvm

import (
	// "fmt"
	// "os"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"golang.org/x/net/context"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	"k8s.io/klog"
	"k8s.io/kubernetes/pkg/util/mount"
)

type nodeServer struct {
	nodeID  string
	mounter mount.SafeFormatAndMount
}

func NewNodeServer(nodeid string) *nodeServer {
	return &nodeServer{
		nodeID: nodeid,
		mounter: mount.SafeFormatAndMount{
			Interface: mount.New(""),
			Exec:      mount.NewOsExec(),
		},
	}
}

// NodeStageVolume formats the disk and mounts it at the global (staging) directory
func (ns *nodeServer) NodeStageVolume(ctx context.Context, req *csi.NodeStageVolumeRequest) (*csi.NodeStageVolumeResponse, error) {
	klog.V(4).Infof("NodeStageVolume: called with args %+v", *req)

	return &csi.NodeStageVolumeResponse{}, nil
}

func (ns *nodeServer) NodeUnstageVolume(ctx context.Context, req *csi.NodeUnstageVolumeRequest) (*csi.NodeUnstageVolumeResponse, error) {
	klog.V(4).Infof("NodeUnstageVolume: called with args %+v", *req)

	return &csi.NodeUnstageVolumeResponse{}, nil
}

// NodePublishVolume bind-mounts the volume from the global directory to the target directory (which is then mapped into the Pod)
func (ns *nodeServer) NodePublishVolume(ctx context.Context, req *csi.NodePublishVolumeRequest) (*csi.NodePublishVolumeResponse, error) {
	klog.V(4).Infof("NodePublishVolume: called with args %+v", *req)

	return &csi.NodePublishVolumeResponse{}, nil
}

func (ns *nodeServer) NodeUnpublishVolume(ctx context.Context, req *csi.NodeUnpublishVolumeRequest) (*csi.NodeUnpublishVolumeResponse, error) {
	klog.V(4).Infof("NodeUnpublishVolume: called with args %+v", *req)

	return &csi.NodeUnpublishVolumeResponse{}, nil
}

// NodeGetInfo returns information about the node
func (ns *nodeServer) NodeGetInfo(ctx context.Context, req *csi.NodeGetInfoRequest) (*csi.NodeGetInfoResponse, error) {
	klog.V(4).Infof("NodeGetInfo: called with args %+v", *req)

	return &csi.NodeGetInfoResponse{
		NodeId: ns.nodeID,
	}, nil
}

// NodeGetCapabilities returns the capabilities the Node Plugin supports
func (ns *nodeServer) NodeGetCapabilities(ctx context.Context, req *csi.NodeGetCapabilitiesRequest) (*csi.NodeGetCapabilitiesResponse, error) {
	klog.V(4).Infof("NodeGetCapabilities: called with args %+v", *req)

	return &csi.NodeGetCapabilitiesResponse{
		Capabilities: []*csi.NodeServiceCapability{
			{
				Type: &csi.NodeServiceCapability_Rpc{
					Rpc: &csi.NodeServiceCapability_RPC{
						Type: csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME,
					},
				},
			},
		},
	}, nil
}

func (ns *nodeServer) NodeGetVolumeStats(ctx context.Context, in *csi.NodeGetVolumeStatsRequest) (*csi.NodeGetVolumeStatsResponse, error) {
	return nil, status.Error(codes.Unimplemented, "")
}

func (ns *nodeServer) NodeExpandVolume(ctx context.Context, req *csi.NodeExpandVolumeRequest) (*csi.NodeExpandVolumeResponse, error) {
	return nil, status.Error(codes.Unimplemented, "")
}

driver.go

pkg/kvm/driver.go

package kvm

import (
	"fmt"
	"github.com/container-storage-interface/spec/lib/go/csi"
	"github.com/kubernetes-csi/csi-lib-utils/protosanitizer"
	"golang.org/x/net/context"
	"google.golang.org/grpc"
	"k8s.io/klog"
	"net"
	"os"
	"strings"
)

type Driver struct {
	nodeID   string
	endpoint string
}

const (
	version       = "1.0.0"
	driverName    = "kvm.csi.dianduidian.com"
	DevicePathKey = "devicePath"
)

func NewDriver(nodeID, endpoint string) *Driver {
	klog.V(4).Infof("Driver: %v version: %v", driverName, version)

	n := &Driver{
		nodeID:   nodeID,
		endpoint: endpoint,
	}

	return n
}

func (d *Driver) Run() {

	ctl := NewControllerServer()
	identity := NewIdentityServer()
	node := NewNodeServer(d.nodeID)

	opts := []grpc.ServerOption{
		grpc.UnaryInterceptor(logGRPC),
	}

	srv := grpc.NewServer(opts...)

	csi.RegisterControllerServer(srv, ctl)
	csi.RegisterIdentityServer(srv, identity)
	csi.RegisterNodeServer(srv, node)

	proto, addr, err := ParseEndpoint(d.endpoint)
	klog.V(4).Infof("protocol: %s,addr: %s", proto, addr)
	if err != nil {
		klog.Fatal(err.Error())
	}

	if proto == "unix" {
		addr = "/" + addr
		if err := os.Remove(addr); err != nil && !os.IsNotExist(err) {
			klog.Fatalf("Failed to remove %s, error: %s", addr, err.Error())
		}
	}

	listener, err := net.Listen(proto, addr)
	if err != nil {
		klog.Fatalf("Failed to listen: %v", err)
	}

	srv.Serve(listener)
}

func logGRPC(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
	klog.V(4).Infof("GRPC call: %s", info.FullMethod)
	klog.V(4).Infof("GRPC request: %s", protosanitizer.StripSecrets(req))
	resp, err := handler(ctx, req)
	if err != nil {
		klog.Errorf("GRPC error: %v", err)
	} else {
		klog.V(4).Infof("GRPC response: %s", protosanitizer.StripSecrets(resp))
	}
	return resp, err
}

func ParseEndpoint(ep string) (string, string, error) {
	if strings.HasPrefix(strings.ToLower(ep), "unix://") || strings.HasPrefix(strings.ToLower(ep), "tcp://") {
		s := strings.SplitN(ep, "://", 2)
		if s[1] != "" {
			return s[0], s[1], nil
		}
	}
	return "", "", fmt.Errorf("Invalid endpoint: %v", ep)
}
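ParseEndpoint can be exercised on its own; the following self-contained copy shows how a unix:// endpoint splits into a scheme and an address, and how anything else is rejected:

```go
package main

import (
	"fmt"
	"strings"
)

// parseEndpoint mirrors ParseEndpoint from driver.go: it splits a CSI
// endpoint such as "unix:///csi/csi.sock" into a scheme and an address.
func parseEndpoint(ep string) (string, string, error) {
	low := strings.ToLower(ep)
	if strings.HasPrefix(low, "unix://") || strings.HasPrefix(low, "tcp://") {
		s := strings.SplitN(ep, "://", 2)
		if s[1] != "" {
			return s[0], s[1], nil
		}
	}
	return "", "", fmt.Errorf("invalid endpoint: %v", ep)
}

func main() {
	proto, addr, err := parseEndpoint("unix:///csi/csi.sock")
	fmt.Println(proto, addr, err) // unix /csi/csi.sock <nil>

	// Unsupported schemes produce an error.
	_, _, err = parseEndpoint("http://127.0.0.1:8080")
	fmt.Println(err != nil) // true
}
```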

main.go

cmd/main.go

package main

import (
	"flag"
	"k8s.io/klog"
	"kvm-csi-driver/pkg/kvm"
)

var (
	endpoint string
	nodeID   string
)

func main() {
	flag.StringVar(&endpoint, "endpoint", "", "CSI Endpoint")
	flag.StringVar(&nodeID, "nodeid", "", "node id")

	klog.InitFlags(nil)
	flag.Parse()

	d := kvm.NewDriver(nodeID, endpoint)
	d.Run()
}

Compiling

➜  ~ CGO_ENABLED=0 GOOS=linux go build -o ./bin/kvm-csi-driver ./cmd

Building the Docker image

Dockerfile

FROM alpine
LABEL maintainers="Xnile"
LABEL description="KVM CSI Driver"

RUN apk add util-linux e2fsprogs
COPY kvm-csi-driver /kvm-csi-driver
ENTRYPOINT ["/kvm-csi-driver"]

Build

➜  ~ docker build -t xnile/kvm-csi-driver:v0.1 ./
➜  ~ docker push xnile/kvm-csi-driver:v0.1

Deploying

RBAC

Grant the driver the permissions it needs on the relevant API resources.

deploy/kubernetes/rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kvm-csi-driver
  namespace: csi-dev

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kvm-csi-driver
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "update", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["csinodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "watch", "list", "delete", "update", "create"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "watch", "list", "delete", "update", "create"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["csi.storage.k8s.io"]
    resources: ["csinodeinfos"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotcontents"]
    verbs: ["create", "get", "list", "watch", "update", "delete"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshots"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["apiextensions.k8s.io"]
    resources: ["customresourcedefinitions"]
    verbs: ["create", "list", "watch", "delete"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kvm-csi-driver
subjects:
  - kind: ServiceAccount
    name: kvm-csi-driver
    namespace: csi-dev
roleRef:
  kind: ClusterRole
  name: kvm-csi-driver
  apiGroup: rbac.authorization.k8s.io
➜  ~ kubectl apply -f deploy/kubernetes/rbac.yaml

Deploying the driver

deploy/kubernetes/kvm-csi-driver.yaml

kind: Deployment
apiVersion: apps/v1
metadata:
  name: kvm-csi-driver
  namespace: csi-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kvm-csi-driver
  template:
    metadata:
      labels:
        app: kvm-csi-driver
    spec:
      nodeSelector:
        kubernetes.io/hostname: knode02
      serviceAccountName: kvm-csi-driver
      containers:
        #plugin
        - name: kvm-csi-driver
          image: xnile/kvm-csi-driver:v0.1
          args:
            - --endpoint=$(CSI_ENDPOINT)
            - --nodeid=$(KUBE_NODE_NAME)
            - --logtostderr
            - --v=5
          env:
            - name: CSI_ENDPOINT
              value: unix:///csi/csi.sock
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          securityContext:
            privileged: true
          volumeMounts:
            - name: kubelet-dir
              mountPath: /var/lib/kubelet
              mountPropagation: "Bidirectional"
            - name: plugin-dir
              mountPath: /csi
            - name: device-dir
              mountPath: /dev
        #Sidecar:node-driver-registrar
        - name: node-driver-registrar
          image: quay.io/k8scsi/csi-node-driver-registrar:v1.2.0
          args:
            - --csi-address=$(ADDRESS)
            - --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)
            - --v=5
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "rm -rf /registration/kvm.csi.dianduidian.com-reg.sock /csi/csi.sock"]
          env:
            - name: ADDRESS
              value: /csi/csi.sock
            - name: DRIVER_REG_SOCK_PATH
              value: /var/lib/kubelet/plugins/kvm.csi.dianduidian.com/csi.sock
          volumeMounts:
            - name: plugin-dir
              mountPath: /csi
            - name: registration-dir
              mountPath: /registration
        #Sidecar: livenessprobe
        - name: liveness-probe
          image: quay.io/k8scsi/livenessprobe:v1.1.0
          args:
            - "--csi-address=/csi/csi.sock"
            - "--v=5"
          volumeMounts:
            - name: plugin-dir
              mountPath: /csi
        #Sidecar: csi-provisioner
        - name: csi-provisioner
          image: quay.io/k8scsi/csi-provisioner:v1.3.1
          args:
            - "--csi-address=$(ADDRESS)"
            - "--v=5"
            - "--feature-gates=Topology=True"
          env:
            - name: ADDRESS
              value: unix:///csi/csi.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: plugin-dir
              mountPath: /csi
        #Sidecar: csi-attacher
        - name: csi-attacher
          image: quay.io/k8scsi/csi-attacher:v1.2.1
          args:
            - "--v=5"
            - "--csi-address=$(ADDRESS)"
          env:
            - name: ADDRESS
              value: /csi/csi.sock
          imagePullPolicy: "Always"
          volumeMounts:
            - name: plugin-dir
              mountPath: /csi
      volumes:
        - name: kubelet-dir
          hostPath:
            path: /var/lib/kubelet
            type: Directory
        - name: plugin-dir
          hostPath:
            path: /var/lib/kubelet/plugins/kvm.csi.dianduidian.com/
            type: DirectoryOrCreate
        - name: registration-dir
          hostPath:
            path: /var/lib/kubelet/plugins_registry/
            type: Directory
        - name: device-dir
          hostPath:
            path: /dev
            type: Directory
➜  ~ kubectl apply -f deploy/kubernetes/kvm-csi-driver.yaml

Verifying the plugin is running

Check the pod status

➜  ~ kubectl get pods -l app=kvm-csi-driver
NAME                              READY   STATUS    RESTARTS   AGE
kvm-csi-driver-77675b9d7b-db28j   5/5     Running   0          4h20m

Check the plugin logs

➜  ~ kubectl logs -f kvm-csi-driver-77675b9d7b-db28j -c kvm-csi-driver
I0110 03:15:13.125126       1 driver.go:27] Driver: kvm.csi.dianduidian.com version: 1.0.0
I0110 03:15:13.125350       1 mount_linux.go:160] Detected OS without systemd
I0110 03:15:13.125388       1 driver.go:54] protocol: unix,addr: /csi/csi.sock
I0110 03:15:13.259096       1 driver.go:75] GRPC call: /csi.v1.Identity/GetPluginInfo
I0110 03:15:13.259119       1 driver.go:76] GRPC request: {}
I0110 03:15:13.260501       1 indentityserver.go:18] Using default GetPluginInfo
I0110 03:15:13.260507       1 driver.go:81] GRPC response: {"name":"kvm.csi.dianduidian.com","vendor_version":"1.0.0"}
I0110 03:15:13.401214       1 driver.go:75] GRPC call: /csi.v1.Identity/GetPluginInfo
I0110 03:15:13.401232       1 driver.go:76] GRPC request: {}
I0110 03:15:13.401744       1 indentityserver.go:18] Using default GetPluginInfo
...

Check the CSINode object

➜  ~ kubectl get csinodes knode02 -o yaml
apiVersion: storage.k8s.io/v1beta1
kind: CSINode
metadata:
  name: knode02
...
spec:
  drivers:
  - name: kvm.csi.dianduidian.com
    nodeID: knode02
    topologyKeys: null

Testing

At this point the driver is up and running. Next we create a test Pod to verify that the driver can create, attach, and mount a volume for it. Of course, the create, attach, and mount operations here are all simulated. You could implement them as a webhook for the CSI driver to call, similar to the APIs offered by public clouds, to simulate the real operations; interested readers can implement this themselves.

StorageClass

deploy/examples/storageclass.yaml

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: kvm-csi
provisioner: kvm.csi.dianduidian.com
➜  ~ kubectl apply -f deploy/examples/storageclass.yaml
➜  ~ kubectl get sc
NAME      PROVISIONER               AGE
kvm-csi   kvm.csi.dianduidian.com   4h55m

PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kvm-csi-pvc-01
  namespace: csi-dev
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: kvm-csi
➜  ~ kubectl apply -f deploy/examples/pvc.yaml

Verify

➜  ~ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
pvc-828a361b-eab8-470c-bb39-73f6ba2bd5cc   20Gi       RWO            Delete           Bound    csi-dev/kvm-csi-pvc-01   kvm-csi                 4h43m
➜  ~ kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
kvm-csi-pvc-01   Bound    pvc-828a361b-eab8-470c-bb39-73f6ba2bd5cc   20Gi       RWO            kvm-csi        4h43m

Check the driver logs

I0110 09:04:55.039279       1 driver.go:75] GRPC call: /csi.v1.Controller/CreateVolume
I0110 09:04:55.039333       1 driver.go:76] GRPC request: {"capacity_range":{"required_bytes":21474836480},"name":"pvc-48a7358d-6f3b-45b8-989d-396640e8cbe1","volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]}
I0110 09:04:55.041979       1 controllerserver.go:53] CreateVolume: called with args {Name:pvc-48a7358d-6f3b-45b8-989d-396640e8cbe1 CapacityRange:required_bytes:21474836480  VolumeCapabilities:[mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > ] Parameters:map[] Secrets:map[] VolumeContentSource:<nil> AccessibilityRequirements:<nil> XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0110 09:04:55.042294       1 driver.go:81] GRPC response: {"volume":{"capacity_bytes":21474836480,"volume_id":"qcow-1234567"}}

Using the PVC from a Pod

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test-csi-pvc
  namespace: csi-dev
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        kubernetes.io/hostname: knode02
      containers:
      - name: nginx
        image: nginx:1.17
        ports:
        - containerPort: 80
        volumeMounts:
          - name: data
            mountPath: "/data"
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: kvm-csi-pvc-01
➜  ~ kubectl apply -f deploy/examples/nginx.yaml

Check the pod status

➜  ~ kubectl get pod -l app=nginx
NAME                                 READY   STATUS    RESTARTS   AGE
nginx-test-csi-pvc-6fc84ffff-ppnhc   1/1     Running   0          3m

Check the pod events

➜  ~ kubectl describe pod nginx-test-csi-pvc-6fc84ffff-ppnhc
...
Events:
  Type    Reason                  Age    From                     Message
  ----    ------                  ----   ----                     -------
  Normal  Scheduled               3m55s  default-scheduler        Successfully assigned csi-dev/nginx-test-csi-pvc-6fc84ffff-ppnhc to knode02
  Normal  SuccessfulAttachVolume  3m55s  attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-48a7358d-6f3b-45b8-989d-396640e8cbe1"
  Normal  Pulled                  3m52s  kubelet, knode02         Container image "nginx:1.17" already present on machine
  Normal  Created                 3m52s  kubelet, knode02         Created container nginx
  Normal  Started                 3m52s  kubelet, knode02         Started container nginx

Check the node object

➜  ~ kubectl get no knode02 -o yaml
apiVersion: v1
kind: Node
metadata:
  annotations:
    csi.volume.kubernetes.io/nodeid: '{"kvm.csi.dianduidian.com":"knode02",
...

  volumesAttached:
  - devicePath: ""
    name: kubernetes.io/csi/kvm.csi.dianduidian.com^qcow-1234567
  volumesInUse:
  - kubernetes.io/csi/kvm.csi.dianduidian.com^qcow-1234567

Check the plugin logs

I0110 09:06:01.908883       1 driver.go:75] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I0110 09:06:01.908992       1 driver.go:76] GRPC request: {"node_id":"knode02","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"storage.kubernetes.io/csiProvisionerIdentity":"1578647010808-8081-kvm.csi.dianduidian.com"},"volume_id":"qcow-1234567"}
I0110 09:06:01.910803       1 controllerserver.go:72] ControllerPublishVolume: called with args {VolumeId:qcow-1234567 NodeId:knode02 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1578647010808-8081-kvm.csi.dianduidian.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0110 09:06:01.910879       1 driver.go:81] GRPC response: {"publish_context":{"devicePath":"/dev/sdb"}}
I0110 09:06:03.701281       1 driver.go:75] GRPC call: /csi.v1.Node/NodeGetCapabilities
I0110 09:06:03.701322       1 driver.go:76] GRPC request: {}
I0110 09:06:03.702143       1 nodeserver.go:68] NodeGetCapabilities: called with args {XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0110 09:06:03.702164       1 driver.go:81] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}}]}
I0110 09:06:03.808833       1 driver.go:75] GRPC call: /csi.v1.Node/NodeStageVolume
I0110 09:06:03.808862       1 driver.go:76] GRPC request: {"publish_context":{"devicePath":"/dev/sdb"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-48a7358d-6f3b-45b8-989d-396640e8cbe1/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"storage.kubernetes.io/csiProvisionerIdentity":"1578647010808-8081-kvm.csi.dianduidian.com"},"volume_id":"qcow-1234567"}
I0110 09:06:03.810655       1 nodeserver.go:33] NodeStageVolume: called with args {VolumeId:qcow-1234567 PublishContext:map[devicePath:/dev/sdb] StagingTargetPath:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-48a7358d-6f3b-45b8-989d-396640e8cbe1/globalmount VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1578647010808-8081-kvm.csi.dianduidian.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0110 09:06:03.810723       1 driver.go:81] GRPC response: {}
I0110 09:06:03.811953       1 driver.go:75] GRPC call: /csi.v1.Node/NodeGetCapabilities
I0110 09:06:03.811985       1 driver.go:76] GRPC request: {}
I0110 09:06:03.812466       1 nodeserver.go:68] NodeGetCapabilities: called with args {XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0110 09:06:03.812479       1 driver.go:81] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}}]}
I0110 09:06:04.021107       1 driver.go:75] GRPC call: /csi.v1.Node/NodePublishVolume
I0110 09:06:04.021137       1 driver.go:76] GRPC request: {"publish_context":{"devicePath":"/dev/sdb"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-48a7358d-6f3b-45b8-989d-396640e8cbe1/globalmount","target_path":"/var/lib/kubelet/pods/dcc40f38-dc6d-4cfc-b375-e38b34fba858/volumes/kubernetes.io~csi/pvc-48a7358d-6f3b-45b8-989d-396640e8cbe1/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"storage.kubernetes.io/csiProvisionerIdentity":"1578647010808-8081-kvm.csi.dianduidian.com"},"volume_id":"qcow-1234567"}
I0110 09:06:04.022416       1 nodeserver.go:46] NodePublishVolume: called with args {VolumeId:qcow-1234567 PublishContext:map[devicePath:/dev/sdb] StagingTargetPath:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-48a7358d-6f3b-45b8-989d-396640e8cbe1/globalmount TargetPath:/var/lib/kubelet/pods/dcc40f38-dc6d-4cfc-b375-e38b34fba858/volumes/kubernetes.io~csi/pvc-48a7358d-6f3b-45b8-989d-396640e8cbe1/mount VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1578647010808-8081-kvm.csi.dianduidian.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0110 09:06:04.022458       1 driver.go:81] GRPC response: {}
I0110 09:06:33.378488       1 driver.go:75] GRPC call: /csi.v1.Node/NodeGetCapabilities
I0110 09:06:33.378551       1 driver.go:76] GRPC request: {}
I0110 09:06:33.379466       1 nodeserver.go:68] NodeGetCapabilities: called with args {XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0110 09:06:33.379505       1 driver.go:81] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}}]}

Source code

https://github.com/xnile/kvm-csi-driver

Debugging tools

csc

The Container Storage Client (csc) is a command line interface (CLI) tool that provides analogues for all of the CSI RPCs.

$ csc
NAME
    csc -- a command line container storage interface (CSI) client

SYNOPSIS
    csc [flags] CMD

AVAILABLE COMMANDS
    controller
    identity
    node

Use "csc -h,--help" for more information

Installation

$ GO111MODULE=off go get -u github.com/rexray/gocsi/csc