## Manual

To build and install a development version of the driver:

```
$ GCE_PD_CSI_STAGING_IMAGE=gcr.io/path/to/driver/image:dev # Location to push dev image to
$ make push-container
$ ./deploy/kubernetes/deploy-driver.sh
```
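
If you want to confirm the dev image actually made it to the registry before deploying, one quick check (a sketch, assuming a gcr.io registry and a configured gcloud CLI; the image path is the placeholder from the example above) is:

```sh
# List the tags pushed for the dev image repository
gcloud container images list-tags gcr.io/path/to/driver/image
```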

To bring down the driver:

```
$ ./deploy/kubernetes/delete-driver.sh
```

## Debugging

We use https://github.com/go-delve/delve and its remote debugging feature for debugging. This feature is only available in the PD CSI Controller (which runs on a Linux node).

Requirements:

- https://github.com/go-delve/delve
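
If you do not already have the `dlv` client on your workstation, it can typically be installed with delve's standard install command (assuming a recent Go toolchain):

```sh
# Install the delve client used to connect to the remote debug server
go install github.com/go-delve/delve/cmd/dlv@latest
```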

Steps:

- Build the PD CSI driver with additional compiler flags

  ```
  export GCE_PD_CSI_STAGING_VERSION=latest
  export GCE_PD_CSI_STAGING_IMAGE=image/repo/gcp-compute-persistent-disk-csi-driver
  make build-and-push-multi-arch-dev
  ```
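
  The extra flags are presumably applied inside the dev make target. As a rough sketch of what a delve-friendly build looks like if you were to build the binary by hand (assumptions: the main package lives under `./cmd/gce-pd-csi-driver`, and the output path mirrors the one seen in the controller logs below):

  ```sh
  # -N disables optimizations and -l disables inlining so delve can map
  # source lines and variables accurately (hypothetical manual build)
  go build -gcflags "all=-N -l" -o bin/gce-pd-csi-driver ./cmd/gce-pd-csi-driver
  ```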

- Update `deploy/kubernetes/overlays/noauth-dev/kustomization.yaml` to match the repo you wrote above, e.g.

  ```yaml
  images:
  - name: gke.gcr.io/gcp-compute-persistent-disk-csi-driver
    newName: image/repo/gcp-compute-persistent-disk-csi-driver
    newTag: latest
  ```
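
  To double-check that the override is picked up, you can render the overlay locally and inspect the image references (a sketch, assuming `kubectl` with built-in kustomize support):

  ```sh
  # Render the noauth-dev overlay and show the images the manifests will use
  kubectl kustomize deploy/kubernetes/overlays/noauth-dev | grep "image:"
  ```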

- Delete and deploy the driver with this overlay

  ```sh
  ./deploy/kubernetes/delete-driver.sh && \
    GCE_PD_DRIVER_VERSION=noauth-dev ./deploy/kubernetes/deploy-driver.sh
  ```

  At this point you can verify that delve is running in the controller logs:

  ```text
  API server listening at: [::]:2345
  2021-04-15T18:28:51Z info layer=debugger launching process with args: [/go/src/sigs.k8s.io/gcp-compute-persistent-disk-csi-driver/bin/gce-pd-csi-driver --v=5 --endpoint=unix:/csi/csi.sock]
  2021-04-15T18:28:53Z debug layer=debugger continuing
  ```
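
  One way to pull those logs without knowing the individual container names (a sketch reusing the pod-name pipeline from the next step) is:

  ```sh
  # Dump logs from every container in the controller pod
  kubectl -n gce-pd-csi-driver get pods | grep controller | awk '{print $1}' | \
    xargs -I % kubectl -n gce-pd-csi-driver logs % --all-containers=true
  ```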

- Enable port forwarding of port 2345 on the PD CSI controller

  ```sh
  kubectl -n gce-pd-csi-driver get pods | grep controller | awk '{print $1}' | xargs -I % kubectl -n gce-pd-csi-driver port-forward % 2345:2345
  ```
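
  If you already know the controller pod name you can port-forward it directly, and you can sanity-check that the debug port is reachable (the pod name below is a placeholder; the check assumes `nc` is available locally):

  ```sh
  kubectl -n gce-pd-csi-driver port-forward <controller-pod-name> 2345:2345
  # In another terminal, confirm the tunnel is up
  nc -vz localhost 2345
  ```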

- Connect to the headless server and issue commands

  ```sh
  dlv connect localhost:2345
  Type 'help' for list of commands.
  (dlv) clearall
  (dlv) break pkg/gce-pd-csi-driver/controller.go:509
  Breakpoint 1 set at 0x159ba32 for sigs.k8s.io/gcp-compute-persistent-disk-csi-driver/pkg/gce-pd-csi-driver.(*GCEControllerServer).ListVolumes() ./pkg/gce-pd-csi-driver/controller.go:509
  (dlv) c
  > sigs.k8s.io/gcp-compute-persistent-disk-csi-driver/pkg/gce-pd-csi-driver.(*GCEControllerServer).ListVolumes() ./pkg/gce-pd-csi-driver/controller.go:509 (hits goroutine(69):1 total:1) (PC: 0x159ba32)
  Warning: debugging optimized function
     504: }
     505: }
     506:
     507: func (gceCS *GCEControllerServer) ListVolumes(ctx context.Context, req *csi.ListVolumesRequest) (*csi.ListVolumesResponse, error) {
     508:   // https//cloud.google.com/compute/docs/reference/beta/disks/list
  => 509:   if req.MaxEntries < 0 {
     510:     return nil, status.Error(codes.InvalidArgument, fmt.Sprintf(
     511:       "ListVolumes got max entries request %v. GCE only supports values between 0-500", req.MaxEntries))
     512:   }
     513:   var maxEntries int64 = int64(req.MaxEntries)
     514:   if maxEntries > 500 {
  (dlv) req
  Command failed: command not available
  (dlv) p req
  *github.com/container-storage-interface/spec/lib/go/csi.ListVolumesRequest {
  	MaxEntries: 0,
  	StartingToken: "",
  	XXX_NoUnkeyedLiteral: struct {}{},
  	XXX_unrecognized: []uint8 len: 0, cap: 0, nil,
  	XXX_sizecache: 0,}
  (dlv)
  ```

See https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/pull/742 for the implementation details.