Help with "SSL: CERTIFICATE_VERIFY_FAILED" error. #198
It looks like it's improperly parsing the host (the host in your error is …).
Thanks for the reply. I apologize; I should have mentioned earlier that I purposely masked the host and port values when I posted this issue. The kubeconfig does load, since I can see the contents of the k8s_client variable at debug time. I was able to resolve the issue by adding the line below to the cluster section of my ~/.kube/config file:
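The exact line was not captured above. Based on the workaround described later in this thread (pointing the kubeconfig at the system CA PEM file), the cluster entry likely looks something like this sketch; the server URL is a placeholder and the bundle path is an assumption (RHEL's default system CA bundle):

```yaml
clusters:
- cluster:
    # Placeholder server URL; the certificate-authority path below is
    # RHEL's default system CA bundle, per the workaround in this thread.
    server: https://api.example.com:8443
    certificate-authority: /etc/pki/tls/certs/ca-bundle.crt
  name: example-cluster
```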
Thanks for directing me to this file; just looking into it more helped resolve my issue. Thank you so much. I think this issue can now be closed.
Hmm, but you were able to access the cluster without skipping TLS when using oc? We should be able to properly load any configuration object that oc/kubectl can, so I don't want to close the issue out until we figure out what caused the discrepancy.
Hi, thanks again for the reply. Yes, that's correct. I was able to access the cluster without skipping TLS. Here are the steps that I was doing:
Results in error below (host info and port purposely masked):
Here is the complete test/sample program I am running:
OpenShift and Kubernetes version:
Interesting, I'm not able to reproduce this against OpenShift 3.10. Does the cluster section of your kubeconfig have a … key? I'll try to spin up an environment that more closely matches yours to see if something changed in the underlying kubernetes client, though it seems the configuration logic is largely unchanged since May 2017.
It does not have the … key. This is the current structure of my kubeconfig file:
Hmm, I wonder if there's a default certificate location that's not being set in the configuration object.
Also having this issue:
OS: RHEL 7.6. Testing done:
The mentioned workaround with …
@ghost @fabianvf I think I've located the root cause: https://github.com/kubernetes-client/python/blob/master/kubernetes/client/rest.py#L77 The kubernetes client calls certifi and passes its bundle as the CA file, overriding the good system CA config. :-(
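A quick way to see the effect described above: certifi ships its own CA bundle, and a client that defaults to certifi.where() will use that file instead of the system trust store. A minimal sketch:

```python
# certifi bundles its own CA file; libraries that fall back to
# certifi.where() will ignore CAs installed in the system trust store
# (e.g. via update-ca-trust on RHEL), which explains the verify failure
# even though oc trusts the cluster's CA.
import certifi

bundle = certifi.where()
print(bundle)  # typically .../site-packages/certifi/cacert.pem
```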
Workaround: set the system CA PEM file in .kube/config.
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now please do so with /close. /lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle rotten. If this issue is safe to close now please do so with /close. /lifecycle rotten
Rotten issues close after 30d of inactivity. Reopen the issue by commenting /reopen. /close
@openshift-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Hello, I need help resolving a CERTIFICATE_VERIFY_FAILED error. A simple test program errors out at DynamicClient. Prior to running it, I have already done oc login and can see my namespaces via OpenShift CLI commands, but running the program below results in an error. Is there a configuration step that I missed after installing the rest client? The modules listed below are what I have installed, and I'm using Python 2.7. Sample code:
Error:
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='*.com', port=****): Max retries exceeded with url: /version (Caused by SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:726)'),))