
Help with "SSL: CERTIFICATE_VERIFY_FAILED" error. #198


Closed
ghost opened this issue Aug 28, 2018 · 14 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

@ghost

ghost commented Aug 28, 2018

Hello, I need help resolving a CERTIFICATE_VERIFY_FAILED error. A simple test program (below) errors out at DynamicClient. Prior to running this program, I have already done oc login and can see my namespaces via the OpenShift CLI, but running the program below results in an error. Is there a configuration step I missed somewhere after installing the REST client? The modules below are what I have installed, and I'm using Python 2.7.

dictdiffer 0.7.1
openshift 0.6.3
kubernetes 6.0.0
Jinja2 2.10
python-string-utils 0.6.0
ruamel.yaml 0.15.61
six 1.11.0

Sample code:

from kubernetes import client, config
from openshift.dynamic import DynamicClient

k8s_client = config.new_client_from_config()
dyn_client = DynamicClient(k8s_client)

Error:
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='*.com', port=****): Max retries exceeded with url: /version (Caused by SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:726)'),))

@fabianvf
Member

It looks like it's improperly parsing the host (host in your error is .com), you could try something like k8s_client.configuration.host = $REAL_HOST_VALUE. I'm not sure why it wouldn't be picking up the kubeconfig properly though, can you paste it here? I would also explore the rest of the values in the kubernetes.configuration object, and see if the rest of them look sane.
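For reference, a rough sketch of inspecting and overriding the loaded configuration along these lines (the host value below is a placeholder, not a real cluster):

from kubernetes import config

# Build an ApiClient from ~/.kube/config
k8s_client = config.new_client_from_config()

# Inspect the values that were actually loaded from the kubeconfig
cfg = k8s_client.configuration
print(cfg.host)         # API server URL
print(cfg.ssl_ca_cert)  # path to the CA bundle, if any
print(cfg.verify_ssl)   # whether TLS verification is enabled

# Override the host manually if it was parsed incorrectly (placeholder URL)
cfg.host = 'https://openshift.example.com:8443'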

@ghost
Author

ghost commented Aug 30, 2018

Thanks for the reply. I apologize, I should have mentioned earlier that I purposely masked the host and port values when I posted this issue. The kubeconfig does load, as I can see the contents of the k8s_client variable at debug time.

I was able to resolve the issue by adding the line below to the ~/.kube/config file under the cluster section:

- cluster:
    insecure-skip-tls-verify: true

Thanks for directing me to this file; looking into it more helped resolve my issue. Thank you so much, I think this issue can now be closed.

@fabianvf
Member

Hmm, but you were able to access the cluster without skipping TLS when using oc? We should be able to properly load any configuration object that oc/kubectl can, so I don't want to close the issue out until we figure out what caused the discrepancy.

@ghost
Author

ghost commented Sep 5, 2018

Hi, thanks again for the reply. Yes, that's correct. I was able to access the cluster without skipping TLS.

Here are the steps I was following:

  1. Run 'oc login ...' from the command line in order to log in. This command uses the https server URL as well as the login token.

  2. Run 'oc projects' from the command line to view all OpenShift namespaces; this works fine, as I can see all my namespaces.

  3. Run the test / sample program (see above).

This results in the error below (host info and port purposely masked):

urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='******.ocp.***.com', port=***): Max retries exceeded with url: /version (Caused by SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:726)'),))
Workaround:

  1. The file ~/.kube/config is automatically rewritten immediately after the oc login in step 1 above, removing any prior changes I made, so I need to re-insert insecure-skip-tls-verify: true in the 'cluster' section every time before running the simple test program.

  2. Run 'oc projects' again to test whether the oc CLI still works with the skip-tls setting added to the ~/.kube/config file. It still works and I can still see all my namespaces.

  3. Run the simple test program; this time it works without the error and lists all the OpenShift namespaces I have access to.

Here is the complete test/sample program I am running:

from kubernetes import client, config
from openshift.dynamic import DynamicClient

# Load ~/.kube/config into an ApiClient
k8s_client = config.new_client_from_config()

# Wrap it in the OpenShift dynamic client
dyn_client = DynamicClient(k8s_client)

# List all projects (namespaces) the current user can see
v1_projects = dyn_client.resources.get(api_version='project.openshift.io/v1', kind='Project')
project_list = v1_projects.get()
for project in project_list.items:
    print(project.metadata.name)

OpenShift and Kubernetes version:

  1. OpenShift Master: v3.7.23
  2. Kubernetes Master: v1.7.6+a08f5eeb62

@fabianvf
Member

fabianvf commented Sep 5, 2018

Interesting, I'm not able to reproduce this against OpenShift 3.10. Does the cluster section of your kubeconfig have a certificate-authority or certificate-authority-data entry? I'm also running newer versions of the openshift client (0.7.1) and kubernetes client (7.0.0).

I'll try to spin up an environment that more closely matches yours to see if something changed in the underlying kubernetes client, though it seems the configuration logic is largely unchanged since May 2017.

@ghost
Author

ghost commented Sep 5, 2018

It does not have the certificate-authority or certificate-authority-data section.

This is the current structure of my kubeconfig file:

apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: 
  name: 
contexts:
- context:
    cluster: 
    user: 
  name: 
current-context: 
kind: Config
preferences: {}
users:
- name: 
  user:
    token: 

@fabianvf
Member

fabianvf commented Sep 5, 2018

Hmm, I wonder if there's a default certificate location that's not being set in the configuration object.

@vinzent

vinzent commented Jun 27, 2019

Also having this issue:

  • oc login - works fine
  • python openshift: CERTIFICATE_VERIFY_FAILED error

OS: RHEL 7.6
python: 2.7.5
openshift: 0.9.0
urllib3: 1.25.3

testing done:

  • using a simple urllib3 request works fine:

import urllib3

# plain urllib3 GET against the cluster
http = urllib3.PoolManager()
r = http.request('GET', 'https://openshift.cluster')
print(r.data)
  • curl: works fine
  • wget: works fine

The workaround mentioned above with insecure-skip-tls-verify allows the client to connect.

@vinzent

vinzent commented Jun 27, 2019

@ghost @fabianvf I think I've located the root cause:

https://github.com/kubernetes-client/python/blob/master/kubernetes/client/rest.py#L77

The kubernetes client calls certifi and passes its CA bundle, overriding the good system CA config. :-(
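For context, the behaviour at that line can be paraphrased as the sketch below; the pick_ca_bundle name is only for illustration and is not part of the library. When no CA file is configured, the client falls back to certifi's bundled Mozilla roots rather than the operating system's trust store, which is why site-local CAs end up not being trusted.

import certifi

def pick_ca_bundle(ssl_ca_cert=None):
    # Paraphrase of the CA selection in kubernetes/client/rest.py:
    # an explicitly configured CA file (e.g. certificate-authority in the
    # kubeconfig) wins; otherwise certifi's bundled roots are used, which
    # do not include site-local or corporate CAs trusted by the OS.
    if ssl_ca_cert:
        return ssl_ca_cert
    return certifi.where()

print(pick_ca_bundle())  # with nothing configured, prints certifi's cacert.pem path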

@vinzent

vinzent commented Jun 27, 2019

Workaround: set the system CA PEM file in .kube/config:

...
clusters:
- cluster:
    certificate-authority: /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
    server: https://openshift.cluster
....
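For anyone who would rather not edit the kubeconfig, a rough programmatic equivalent is sketched below (assuming the RHEL bundle path from the snippet above; the CA path has to be set before the ApiClient is constructed so the underlying REST client picks it up):

from kubernetes import client, config
from openshift.dynamic import DynamicClient

# Load the kubeconfig into a Configuration object we can adjust before use
configuration = client.Configuration()
config.load_kube_config(client_configuration=configuration)

# Point TLS verification at the system CA bundle instead of certifi's
# bundled certificates (path shown is the RHEL/CentOS system bundle)
configuration.ssl_ca_cert = '/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem'

k8s_client = client.ApiClient(configuration)
dyn_client = DynamicClient(k8s_client)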

@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-ci bot added the lifecycle/stale label on Jun 1, 2021
@openshift-bot

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

openshift-ci bot added the lifecycle/rotten label and removed the lifecycle/stale label on Jul 1, 2021
@openshift-bot

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

openshift-ci bot closed this as completed on Aug 1, 2021
@openshift-ci

openshift-ci bot commented Aug 1, 2021

@openshift-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
