
Move images from docker hub to quay #75

Closed
gyliu513 opened this issue May 24, 2021 · 5 comments


gyliu513 (Contributor) commented May 24, 2021

Currently, all of the VirtualCluster (VC) images are hosted on Docker Hub.

The following command will create a ClusterVersion named cv-sample-np, which specifies the tenant master components as follows (a quick way to confirm the image registries is shown after the list):

  • etcd: a StatefulSet with the virtualcluster/etcd-v3.4.0 image, 1 replica;
  • apiServer: a StatefulSet with the virtualcluster/apiserver-v1.16.2 image, 1 replica;
  • controllerManager: a StatefulSet with the virtualcluster/controller-manager-v1.16.2 image, 1 replica.
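
For reference, one quick way to confirm which registry each component will pull from is to read the image fields off the ClusterVersion object once it exists (assuming the clusterversion CRD is installed as usual):

kubectl get clusterversion cv-sample-np -o yaml | grep "image:"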

When testing with Kind, I often run into the Docker Hub rate limit and cannot pull the images.

Is it possible to move the images from Docker Hub to quay.io? Then we would no longer have this problem. Here is what I see when the pulls fail:

root@gyliu-dev21:~/.docker# kubectl get pods
NAME                                  READY   STATUS             RESTARTS   AGE
cluster-sample-apiserver-0            0/1     ImagePullBackOff   0          9m43s
cluster-sample-controller-manager-0   0/1     ImagePullBackOff   0          9m55s
cluster-sample-etcd-0                 1/1     Running            0          9m48s
root@gyliu-dev21:~/.docker# kubectl describe po cluster-sample-apiserver-0
Name:         cluster-sample-apiserver-0
Namespace:    default
Priority:     0
Node:         capn-control-plane/172.18.0.2
Start Time:   Sun, 23 May 2021 18:53:43 -0700
Labels:       component-name=nestedapiserver-sample
              controller-revision-hash=cluster-sample-apiserver-7bff79549
              statefulset.kubernetes.io/pod-name=cluster-sample-apiserver-0
Annotations:  <none>
Status:       Pending
IP:           10.244.0.12
IPs:
  IP:           10.244.0.12
Controlled By:  StatefulSet/cluster-sample-apiserver
Containers:
  nestedapiserver-sample:
    Container ID:
    Image:         virtualcluster/apiserver-v1.16.2
    Image ID:
    Port:          6443/TCP
    Host Port:     0/TCP
    Command:
      kube-apiserver
    Args:
      --bind-address=0.0.0.0
      --allow-privileged=true
      --anonymous-auth=true
      --client-ca-file=/etc/kubernetes/pki/apiserver/ca/tls.crt
      --tls-cert-file=/etc/kubernetes/pki/apiserver/tls.crt
      --tls-private-key-file=/etc/kubernetes/pki/apiserver/tls.key
      --kubelet-https=true
      --kubelet-certificate-authority=/etc/kubernetes/pki/apiserver/ca/tls.crt
      --kubelet-client-certificate=/etc/kubernetes/pki/kubelet/tls.crt
      --kubelet-client-key=/etc/kubernetes/pki/kubelet/tls.key
      --kubelet-preferred-address-types=InternalIP,ExternalIP
      --enable-bootstrap-token-auth=true
      --etcd-servers=https://cluster-sample-etcd-0.cluster-sample-etcd.$(NAMESPACE):2379
      --etcd-cafile=/etc/kubernetes/pki/etcd/ca/tls.crt
      --etcd-certfile=/etc/kubernetes/pki/etcd/tls.crt
      --etcd-keyfile=/etc/kubernetes/pki/etcd/tls.key
      --service-account-key-file=/etc/kubernetes/pki/service-account/tls.key
      --service-cluster-ip-range=10.32.0.0/16
      --service-node-port-range=30000-32767
      --authorization-mode=Node,RBAC
      --runtime-config=api/all
      --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
      --apiserver-count=1
      --endpoint-reconciler-type=master-count
      --v=2
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       tcp-socket :6443 delay=15s timeout=15s period=10s #success=1 #failure=8
    Readiness:      http-get https://:6443/healthz delay=5s timeout=30s period=2s #success=1 #failure=8
    Environment:
      NAMESPACE:  default (v1:metadata.namespace)
    Mounts:
      /etc/kubernetes/pki/apiserver from cluster-sample-apiserver-client (ro)
      /etc/kubernetes/pki/apiserver/ca from cluster-sample-ca (ro)
      /etc/kubernetes/pki/etcd from cluster-sample-etcd-client (ro)
      /etc/kubernetes/pki/etcd/ca from cluster-sample-etcd-ca (ro)
      /etc/kubernetes/pki/kubelet from cluster-sample-kubelet-client (ro)
      /etc/kubernetes/pki/service-account from cluster-sample-sa (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-fltrm (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  cluster-sample-apiserver-client:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  cluster-sample-apiserver-client
    Optional:    false
  cluster-sample-etcd-ca:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  cluster-sample-etcd
    Optional:    false
  cluster-sample-etcd-client:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  cluster-sample-etcd-client
    Optional:    false
  cluster-sample-kubelet-client:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  cluster-sample-kubelet-client
    Optional:    false
  cluster-sample-ca:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  cluster-sample-ca
    Optional:    false
  cluster-sample-sa:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  cluster-sample-sa
    Optional:    false
  default-token-fltrm:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-fltrm
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                     From                         Message
  ----     ------     ----                    ----                         -------
  Normal   Scheduled  9m51s                   default-scheduler            Successfully assigned default/cluster-sample-apiserver-0 to capn-control-plane
  Normal   Pulling    8m (x4 over 9m50s)      kubelet, capn-control-plane  Pulling image "virtualcluster/apiserver-v1.16.2"
  Warning  Failed     7m55s (x4 over 9m44s)   kubelet, capn-control-plane  Failed to pull image "virtualcluster/apiserver-v1.16.2": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/virtualcluster/apiserver-v1.16.2:latest": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/virtualcluster/apiserver-v1.16.2/manifests/sha256:81fc8bb510b07535525413b725aed05765b56961c1f4ed28b92ba30acd65f6fb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     7m55s (x4 over 9m44s)   kubelet, capn-control-plane  Error: ErrImagePull
  Warning  Failed     7m41s (x6 over 9m43s)   kubelet, capn-control-plane  Error: ImagePullBackOff
  Normal   BackOff    4m38s (x19 over 9m43s)  kubelet, capn-control-plane  Back-off pulling image "virtualcluster/apiserver-v1.16.2"
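
As a stopgap, the images could be mirrored to quay.io manually with standard Docker tooling; the quay.io organization below is a placeholder, not an existing repository:

docker pull virtualcluster/apiserver-v1.16.2
docker tag virtualcluster/apiserver-v1.16.2 quay.io/<your-org>/apiserver-v1.16.2
docker push quay.io/<your-org>/apiserver-v1.16.2

(Authenticated pulls also get a higher rate limit, so attaching Docker Hub credentials to the cluster via an imagePullSecret is another possible workaround for the 429s.)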

@Fei-Guo ^^

christopherhein (Contributor) commented May 24, 2021

As part of #61 and #60, the images for both CAPN and VC will be pushed to GCP and k8s-owned orgs. What do you think about closing this in favor of those issues?

Fei-Guo commented May 24, 2021

If you are in mainland China, try adding the env var GOPROXY=https://goproxy.cn and see whether that resolves the problem.
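
For completeness, that would typically be exported in the shell before running the build/deploy tooling; a minimal sketch:

export GOPROXY=https://goproxy.cn,direct

(Note that GOPROXY only affects Go module downloads, so it may not help with container image pulls.)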

gyliu513 (Contributor, Author) commented

@Fei-Guo Thanks, but I am using Kind, and the Kind cluster failed to pull those images. How can I set this proxy so that the Kind cluster picks it up?
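
For reference, Kind can route docker.io pulls through a registry mirror via a containerd config patch in the cluster config; the mirror endpoint below is a placeholder:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
    endpoint = ["https://<mirror-host>"]

Alternatively, the images can be pulled on the host (where Docker Hub credentials apply) and side-loaded into the Kind nodes, e.g. kind load docker-image virtualcluster/apiserver-v1.16.2.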

gyliu513 (Contributor, Author) commented

@christopherhein I want to find a workaround for this issue before closing it; hope that is OK :)

gyliu513 (Contributor, Author) commented

Let me close it. It seems GOPROXY does not work, but I can update the StatefulSets to reference a quay.io repository that I manage myself.
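
For anyone else who hits this: after pushing mirrors, repointing the pods is a one-liner per component; the quay.io organization below is a placeholder:

kubectl set image statefulset/cluster-sample-apiserver nestedapiserver-sample=quay.io/<my-org>/apiserver-v1.16.2

(The container name nestedapiserver-sample comes from the describe output above.)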

Thanks @christopherhein and @Fei-Guo !
