
[WIP] Docker in Docker support #484


Closed
wants to merge 6 commits

Conversation

AndrewFarley
Contributor

AndrewFarley commented Mar 3, 2020

DO NOT MERGE, STILL WORK IN PROGRESS

Replacing/deprecating: #198 from @sukovanej

Purpose

When running Docker-in-Docker, this plugin fails because the volume mount path it passes to docker is a path inside the current container, not a path on the host where the Docker daemon actually resolves mounts.
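
A rough illustration of the failure mode, with hypothetical paths (this is not the plugin's actual code): the CI job's container sees the project at a path that only exists inside that container, so the inner container's mount comes up empty.

# Run from inside a CI container whose project lives at /builds/project.
# The daemon resolves -v paths on the HOST, where /builds/project/.serverless
# does not exist, so /var/task ends up empty inside the build container.
docker run --rm -v /builds/project/.serverless:/var/task lambci/lambda:build-python3.8 \
  python3.8 -m pip install -r /var/task/requirements.txt
# -> ERROR: Could not open requirements file: [Errno 2] No such file or directory: '/var/task/requirements.txt'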

Reasoning

To increase the number of CI/CD systems and configurations this plugin can run on, notably GitLab autoscaling runners and Jenkins runners.

Result

To implement this I did the following...

  1. When the plugin runs, it detects whether it is inside Docker by checking for the /.dockerenv file
  2. If it is inside Docker AND dockerizePip is set to true, it switches into docker-in-docker mode
  3. In this mode, it uses docker to query the host for the overlay mount of the current container, and uses that host-side path as a prefix for any volume mount it requests (see the sketch after this list)
  4. Additionally, in this mode it intentionally does NOT use the automatic appdir-based cache location, because that can produce an invalid volume mount. This edge case happened on two of the systems I manage, so instead it puts a folder at the root of the volume to use for caching. Yes, I know... another folder in a random place, but I don't have a reliable alternative.
  5. Finally, to cover unknown edge cases, I added a new configuration option called dockerInDockerPath that lets the user specify which volume path to pass into docker, bypassing the automatic lookup. A few authors in WIP: docker in docker fix #198 said they have "fixed/known" mount points, and although I believe the automatic detection in this plugin will handle those, this option is a fallback in case it doesn't.
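
A minimal sketch of steps 1-3 above, assuming a cgroup v1 layout and a docker CLI that can reach the host daemon (the plugin's real implementation lives in lib/docker.js and may differ):

# 1. Are we inside a container at all?
[ -f /.dockerenv ] || exit 0

# 2. Derive the current container's id from the cgroup hierarchy
#    (no docker CLI needed for this step).
CONTAINER_ID=$(sed -n 's|.*/docker/\([0-9a-f]\{64\}\).*|\1|p' /proc/self/cgroup | head -n 1)

# 3. Ask the host's daemon where this container's mounts really live, so the
#    host-side Source path can prefix the -v argument of the inner docker run.
docker inspect --format '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ "\n" }}{{ end }}' "$CONTAINER_ID"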

Related to:

#198

Instructions:

To try this PR, run the following commands from the root of your project. Presumably you'll run these from within your first layer of Docker, in a container with a functioning docker command (e.g. docker ps runs successfully). Also, ensure your serverless.yaml has dockerizePip: true. It shouldn't matter whether your docker command works via a bind-mounted docker.sock or via the sidecar mechanism that exposes Docker on a TCP port routed through the sidecar; I confirmed both setups work.

npm remove --save serverless-python-requirements
npm install --save github:andrewfarley/serverless-python-requirements#dockerindocker
serverless package

TODO:

  • Have people test it and give feedback, iterate
  • Add tests (if possible? Does our CI support DinD? Or maybe these tests we run locally/manually for now)
  • Test/validate further on Kubernetes
  • Add documentation to the homepage about DinD support
  • Add example for GitlabCI for people to use this easily
  • Wait for upstream tests to work/pass properly and rebase this so we can pass as well

@AndrewFarley
Contributor Author

Pinging for visibility from #198 - Looking for reviewers/testers...
@ccampell / @brettdh / @marlonchalegre / @sukovanej / @heri16

@campellcl

campellcl commented Mar 6, 2020

@AndrewFarley I'll have a chance to run it through our CI/CD Sunday to test it out. Thanks again!

Edit (3/8/2020):
@AndrewFarley It does not appear to be working in our Jenkins CI/CD workflow. Here is the truncated output from attempting an sls deploy --stage jenkins -v from within the Service:

+ cd JobManagementService
+ npm install
+ sls deploy --stage jenkins -v
Serverless: Generated requirements from /var/jenkins_home/workspace/EP_test-docker-in-docker-support/JobManagementService/requirements.txt in /var/jenkins_home/workspace/EP_test-docker-in-docker-support/JobManagementService/.serverless/requirements.txt...

Serverless: Installing requirements from .cache/serverless-python-requirements/717d2b3fe394a147791937181371c1a35c16a04363f8a54e9abc8da17b26380f_slspyc/requirements.txt ...

Serverless: Docker Image: lambci/lambda:build-python3.8
Serverless: Docker-In-Docker: servicePath: .cache/serverless-python-requirements/717d2b3fe394a147791937181371c1a35c16a04363f8a54e9abc8da17b26380f_slspyc
Serverless: Docker-In-Docker: We have detected an docker-in-docker configuration.  NOTE: This feature is in beta for this plugin, verbose output for now
Serverless: Docker-In-Docker: Detected container: 91cad0f14d0a061338c37a664f58fd10f2d6336d9c32f6ca9a1113ec192cb2d6

  Error --------------------------------------------------

  Error: docker not found! Please install it.
      at dockerCommand (/var/jenkins_home/workspace/EP_test-docker-in-docker-support/JobManagementService/node_modules/serverless-python-requirements/lib/docker.js:45:13)
      at getBindPath (/var/jenkins_home/workspace/EP_test-docker-in-docker-support/JobManagementService/node_modules/serverless-python-requirements/lib/docker.js:153:14)
      at installRequirements (/var/jenkins_home/workspace/EP_test-docker-in-docker-support/JobManagementService/node_modules/serverless-python-requirements/lib/pip.js:198:37)
      at installRequirementsIfNeeded (/var/jenkins_home/workspace/EP_test-docker-in-docker-support/JobManagementService/node_modules/serverless-python-requirements/lib/pip.js:555:3)
      at ServerlessPythonRequirements.installAllRequirements (/var/jenkins_home/workspace/EP_test-docker-in-docker-support/JobManagementService/node_modules/serverless-python-requirements/lib/pip.js:634:29)
      at ServerlessPythonRequirements.tryCatcher (/var/jenkins_home/workspace/EP_test-docker-in-docker-support/JobManagementService/node_modules/bluebird/js/release/util.js:16:23)
      at Promise._settlePromiseFromHandler (/var/jenkins_home/workspace/EP_test-docker-in-docker-support/JobManagementService/node_modules/bluebird/js/release/promise.js:547:31)
      at Promise._settlePromise (/var/jenkins_home/workspace/EP_test-docker-in-docker-support/JobManagementService/node_modules/bluebird/js/release/promise.js:604:18)
      at Promise._settlePromise0 (/var/jenkins_home/workspace/EP_test-docker-in-docker-support/JobManagementService/node_modules/bluebird/js/release/promise.js:649:10)
      at Promise._settlePromises (/var/jenkins_home/workspace/EP_test-docker-in-docker-support/JobManagementService/node_modules/bluebird/js/release/promise.js:729:18)
      at _drainQueueStep (/var/jenkins_home/workspace/EP_test-docker-in-docker-support/JobManagementService/node_modules/bluebird/js/release/async.js:93:12)
      at _drainQueue (/var/jenkins_home/workspace/EP_test-docker-in-docker-support/JobManagementService/node_modules/bluebird/js/release/async.js:86:9)
      at Async._drainQueues (/var/jenkins_home/workspace/EP_test-docker-in-docker-support/JobManagementService/node_modules/bluebird/js/release/async.js:102:5)
      at Immediate.Async.drainQueues [as _onImmediate] (/var/jenkins_home/workspace/EP_test-docker-in-docker-support/JobManagementService/node_modules/bluebird/js/release/async.js:15:14)
      at processImmediate (internal/timers.js:456:21)
      at process.topLevelDomainCallback (domain.js:137:15)

I noticed that the cached container id didn't match up. So I added the following additional lines to this Service's serverless.yml:

custom:
    pythonRequirements:
        dockerizePip: true
        useDownloadCache: false
        useStaticCache: false

I re-ran it and received a different error (which may not be related to your changes):

+ cd JobManagementService
+ npm install
+ sls deploy --stage jenkins -v

Serverless: Generated requirements from /var/jenkins_home/workspace/EP_test-docker-in-docker-support/JobManagementService/requirements.txt in /var/jenkins_home/workspace/EP_test-docker-in-docker-support/JobManagementService/.serverless/requirements.txt...

  Error --------------------------------------------------

  Error: EACCES: permission denied, mkdir '/_slspyreqs'
      at Object.mkdirSync (fs.js:840:3)
      at Object.mkdirsSync (/var/jenkins_home/workspace/EP_test-docker-in-docker-support/JobManagementService/node_modules/serverless-python-requirements/node_modules/fs-extra/lib/mkdirs/mkdirs-sync.js:31:9)
      at installRequirementsIfNeeded (/var/jenkins_home/workspace/EP_test-docker-in-docker-support/JobManagementService/node_modules/serverless-python-requirements/lib/pip.js:549:7)
      at ServerlessPythonRequirements.installAllRequirements (/var/jenkins_home/workspace/EP_test-docker-in-docker-support/JobManagementService/node_modules/serverless-python-requirements/lib/pip.js:634:29)
      at ServerlessPythonRequirements.tryCatcher (/var/jenkins_home/workspace/EP_test-docker-in-docker-support/JobManagementService/node_modules/bluebird/js/release/util.js:16:23)
      at Promise._settlePromiseFromHandler (/var/jenkins_home/workspace/EP_test-docker-in-docker-support/JobManagementService/node_modules/bluebird/js/release/promise.js:547:31)
      at Promise._settlePromise (/var/jenkins_home/workspace/EP_test-docker-in-docker-support/JobManagementService/node_modules/bluebird/js/release/promise.js:604:18)
      at Promise._settlePromise0 (/var/jenkins_home/workspace/EP_test-docker-in-docker-support/JobManagementService/node_modules/bluebird/js/release/promise.js:649:10)
      at Promise._settlePromises (/var/jenkins_home/workspace/EP_test-docker-in-docker-support/JobManagementService/node_modules/bluebird/js/release/promise.js:729:18)
      at _drainQueueStep (/var/jenkins_home/workspace/EP_test-docker-in-docker-support/JobManagementService/node_modules/bluebird/js/release/async.js:93:12)
      at _drainQueue (/var/jenkins_home/workspace/EP_test-docker-in-docker-support/JobManagementService/node_modules/bluebird/js/release/async.js:86:9)
      at Async._drainQueues (/var/jenkins_home/workspace/EP_test-docker-in-docker-support/JobManagementService/node_modules/bluebird/js/release/async.js:102:5)
      at Immediate.Async.drainQueues [as _onImmediate] (/var/jenkins_home/workspace/EP_test-docker-in-docker-support/JobManagementService/node_modules/bluebird/js/release/async.js:15:14)
      at processImmediate (internal/timers.js:456:21)
      at process.topLevelDomainCallback (domain.js:137:15)

     For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.

Note: Non-truncated logs available here

@AndrewFarley Any other logs I can produce to help with this?

@campellcl

campellcl commented Mar 9, 2020

@AndrewFarley By design the Jenkins Agents are running as non-root users, which may explain the permission denied. However, I am not sure of a graceful workaround for this. The deployment happens from the Jenkins Pipeline via sh, so doing this from within the context of a Dockerfile (and therefore root) is not really an option. One possible workaround on my end could be to add the Jenkins user to the sudoers group in the Docker container prior to deployment. This is not considered best practice however.

Regarding the cache, you may wish to add a check that the cache matches the currently running Docker container instance, and forcefully invalidate the caches if they do not match. Then again, this may not be an issue when running from a clean install with no previously cached files.

@AndrewFarley
Contributor Author

AndrewFarley commented Mar 9, 2020

@ccampbell if it says the docker command is not found, then either you do not have docker-in-docker support, or the image you are using doesn't have the docker command installed, which is a prerequisite for this plugin to work at all with docker or DinD support.

Are you sure you need docker in docker support? It sounds to me like you can make the Jenkins runner use whatever image you want, correct? If so you can have it run the requisite lambci image and disable the unnecessary docker support.

@campellcl

campellcl commented Mar 9, 2020

@ccampbell if it says the docker command is not found, then either you do not have docker-in-docker support, or the image you are using doesn't have the docker command installed, which is a prerequisite for this plugin to work at all with docker-in-docker support.

@AndrewFarley I know that is what the error usually means, but in this context docker is installed; just not in the Agent container. How else could you detect the docker container the process was running in? Your output here:

Serverless: Docker-In-Docker: We have detected an docker-in-docker configuration.  NOTE: This feature is in beta for this plugin, verbose output for now
Serverless: Docker-In-Docker: Detected container: 91cad0f14d0a061338c37a664f58fd10f2d6336d9c32f6ca9a1113ec192cb2d6

To my knowledge, you wouldn't be able to pull the container Id (via the method you previously linked in #198 ) without Docker installed.

Are you sure you need docker in docker support? It sounds to me like you can make the Jenkins runner use whatever image you want, correct? If so you can have it run the requisite lambci image and disable the unnecessary docker support.

The Jenkins CI/CD setup with DinD is a bit convoluted. I have created an Ubuntu 18.04 Docker image with both NodeJS and Python installed and configured. The Jenkins tutorials instruct the user to run the DinD image, and run the jenkinsci/blueocean image as a container in Docker.

  • The DinD image is supposed to provide the ability for the jenkinsci/blueocean container to be able to execute Docker commands.

The Ubuntu 18.04 image I created serves to mirror the development environment on my local machine for testing. The jenkinsci/blueocean image utilizes the DinD image to deploy Jenkins Agents. We utilize the jenkinsci/blueocean image with an Agent container that is the Ubuntu 18.04 image. From here, we execute normal Serverless commands from inside the Agent/Ubuntu 18.04 container. So, utilizing the Jenkins CI/CD pipeline by running Jenkins Dockerized, does require Docker-in-Docker; or it requires bind-mounting the Docker socket as discussed briefly in #198. In our use case, the Ubuntu 18.04/Agent image is deployed by the jenkinsci/blueocean container. We would like the serverless-python-requirements plugin to still function from within the Agent/Ubuntu 18.04 Docker container which is deployed by the jenkinsci/blueocean Docker container.

Of course there are alternatives to this, such as not running Dockerized Jenkins, bind-mounting the Docker socket and using the sidecar pattern, or creating your own container with Docker installed and then installing and configuring Jenkins within the Dockerfile somehow. So do we really need docker-in-docker? No, but the Jenkins CI/CD folks recommend the Dockerized Jenkins approach for getting up and running quickly. So it's a pretty common use case to have a Jenkins container deploying another Agent container, which would be executing the actual Serverless deployment commands. I can try installing Docker in the Ubuntu 18.04 Agent container, but this is redundant. The Agent doesn't need to run Docker commands, however the parent Jenkins container does.

Does that clarify our intended use case?

@AndrewFarley
Contributor Author

AndrewFarley commented Mar 9, 2020

Detecting the container id doesn’t relate to the ability to run docker or the docker command. It honestly sounds like all you need to do is install docker in your pipeline before running serverless if you are sure it is bind mounted to make dind work.
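
For example (assuming a cgroup v1 layout), the container id can be read from inside the container without any docker CLI at all:

# no docker binary needed; just parse the cgroup hierarchy
grep -oE '[0-9a-f]{64}' /proc/self/cgroup | head -n 1

Actually running docker afterwards, to inspect that container and do the pip install, is a separate requirement.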

Could you try installing docker? Or use a different image with it preinstalled

@campellcl

campellcl commented Mar 9, 2020

Detecting the container id doesn’t relate to the ability to run docker or the docker command. It honestly sounds like all you need to do is install docker in your pipeline if you are sure it is bind mounted to make dind work.

Could you try installing docker?

You're right, sorry; Docker isn't installed in the Ubuntu 18.04/Agent container. I should have made that a bit more explicit. But my point is that it really doesn't need to be. The Jenkins Agent container doesn't need to execute Docker commands, only the Jenkins jenkinsci/blueocean parent container needs to execute the Docker commands in order to deploy the child Agent container.

If it would be helpful for you, I can try installing Docker on the Ubuntu 18.04/Agent container. But again, we wouldn't be utilizing Docker commands inside the Agent container, so this would be a step we would only undergo if it made the plugin function correctly. In some use cases, maybe there would be a need to run Docker commands inside the child Agent container, but I don't think that is nearly as common of a use case.

We want Docker-in-Docker support in the sense that the plugin is supported by the child Agent container that the Jenkins container deploys (via the DinD image). Does that make sense?

@AndrewFarley Would you like me to try again with Docker installed in the Agent container to aid in debugging? Also, I'm not sure it will work without invalidating the caches, which appears to cause that second error, for which I do not have a solution.

@AndrewFarley
Contributor Author

AndrewFarley commented Mar 9, 2020

I'm not sure I entirely understand. It sounds like you "think" you want docker-in-docker support but you don't really, and should just use a lambci image as the Jenkins "agent" runner. Or, if your setup is configured so DinD should work, you don't have docker installed to confirm whether it actually works. I would first install docker in your agent container and see if it works. If it does, then re-try this fork/branch. If it doesn't, see my next comment below.

If that doesn't work, let's talk this through interactively. @ccampell can you jump on the Serverless Slack team by visiting this url: https://serverless.com/slack and chat with me there... I'm @farley on it.

@AndrewFarley
Contributor Author

AndrewFarley commented Mar 9, 2020

@ccampell If your parent container supports running commands to initialise Jenkins runners, then a simple solution would be to make a runner meant for Serverless that uses the lambci images, onto which you also install a Jenkins runner. Then you can start an agent that already has the right environment and doesn't require docker inside it, instead of using a generic Ubuntu 18.04 container and trying to hodge-podge everything onto it. This is what I mentioned already in the other post where you commented. This is generally how I do CI/CD for Serverless with this plugin: I use lambci/lambda:build-python3.6 as the image, install the requisite runner software, and then typically tag it accordingly so it can be used when needed. Then when it grabs jobs it doesn't need to use docker; it can just install the requirements and deploy.

@campellcl

@AndrewFarley Yeah, it's not the most graceful setup, but it's what the Jenkins folks recommended. We have DinD running per the Jenkins tutorials:

docker container run --name jenkins-docker --rm --detach \
  --privileged --network jenkins --network-alias docker \
  --env DOCKER_TLS_CERTDIR=/certs \
  --volume jenkins-docker-certs:/certs/client \
  --volume jenkins-data:/var/jenkins_home \
  --volume "$HOME":/home docker:dind

And we have the parent Jenkins container running:

docker container run --name jenkins-tutorial --rm --detach \
  --network jenkins --env DOCKER_HOST=tcp://docker:2376 \
  --env DOCKER_CERT_PATH=/certs/client --env DOCKER_TLS_VERIFY=1 \
  --volume jenkins-data:/var/jenkins_home \ 
  --volume jenkins-docker-certs:/certs/client:ro \
  --volume "$HOME":/home \ 
  --publish 8080:8080 jenkinsci/blueocean

Then for our Jenkins pipeline (Jenkinsfile), we have our Ubuntu 18.04/agent Dockerfile running:

pipeline {
    agent {
        dockerfile {
            filename 'Dockerfile'
        }
    }
    ...
    stages {
        stage('Deploy some example Serverless service') {
            steps {
                sh '''
                    cd someDir
                    npm install
                    sls deploy --stage jenkins -v
                 '''
            }
        }
        stage('Test the deployed Serverless service') {
            steps {
                sh '''
                 pytest --verbose --junit-xml ./test-reports/results.xml test/IntegrationTests/TestService.py
                '''
            }
        }
    }
}

And to be complete, here is some of what our Dockerfile looks like:

FROM ubuntu:18.04
...
RUN apt-get install -y nodejs
...
RUN npm install -g serverless
...

We don't have Docker installed in the child "agent" container that the parent Jenkins container spawns (i.e. the Dockerfile above), and it is this "agent" container that we sls deploy from (filename Dockerfile in the above Jenkins pipeline code). We do have access to normal Docker commands inside the parent Jenkins container, presumably via the DinD image. As the Jenkins tutorials state:

In order to execute Docker commands inside Jenkins nodes, download and run the docker:dind Docker image using the following docker container run command:

docker container run --name jenkins-docker --rm --detach \
 --privileged --network jenkins --network-alias docker \
 --env DOCKER_TLS_CERTDIR=/certs \
 --volume jenkins-docker-certs:/certs/client \
 --volume jenkins-data:/var/jenkins_home \
 --volume "$HOME":/home docker:dind

It is the above sh command block which performs the sls deploy that does not work with the serverless-python-requirements plugin in its current state. And that is what we want supported; to be able to run the plugin from within the "agent" container (defined by our Dockerfile which Jenkins deploys). We can run Serverless just fine from our Jenkins pipeline, but not with the serverless-python-requirements plugin. Presumably since it is executing from within a child "agent" container, hence the request for nested docker support.

@AndrewFarley As per your suggestion, that is a good idea and would simplify our CI/CD setup. However, we don't really want to "very closely mimic the live AWS Lambda environment" as in lambci. Our CI/CD pipeline does a full live deployment via our agent container, and we run tests in the live cloud. We have had discrepancies simulating the cloud locally via plugins, so we prefer to test against a live deployment under a unique stage name. I haven't used lambci before, so I can't speak to its ability to replicate the cloud offline.

I'll hop on the serverless chat and we can discuss tomorrow, if you are available? I wish I could make this more clear, sorry for being so verbose! Again, thank you for your time on this! The plugin works great when we deploy from the host machine. It does not work at all when we have Jenkins run the deployment command from inside the "agent" container as I have demonstrated above.

@AndrewFarley
Contributor Author

AndrewFarley commented Mar 9, 2020

@ccampell I think I am getting a better picture of what you have in place, but still not a full picture. I think you have the wrong idea of what "mimicking the Lambda environment" means here: in this scenario all it means is that you are using a different container. From what I see above, it sounds like you could just change your Dockerfile to FROM lambci/lambda:build-python3.6 and your problem would go away. Or alternatively, add RUN apt-get install docker-ce (or whatever) to that Dockerfile. Nowhere did I suggest changing your testing to run locally; we aren't trying to "clone" or "replicate" anything on AWS locally. This is merely about getting the serverless-python-requirements plugin functioning: to generate its output reliably so it works on AWS, it needs a very specific set of software libraries and environment, which is what those lambci images provide.

If you made this plugin work with Docker support enabled, that's all it is doing for you: starting up another docker container with this exact image. You would just be simplifying things, removing the unnecessary dependency of starting yet another docker container for no reason, is all I'm saying.

Finally, I must say, if you won't budge on any of the items I've suggested above, then I'm not sure I can help you get this plugin working for you. You have to make your environment work with DinD properly before this plugin can work, or you need to run this plugin in an environment comparable to Lambda so it can generate compatible requirements. This is "your" problem to solve, not something this software can magically fix.

Anyways, Slack me when you get a chance. I'm in a weird timezone in the future (Auckland, NZ), but ping me on that Serverless Slack and we can find a time that we're both online and see if I can get a better understanding.

@campellcl

campellcl commented Mar 9, 2020

@AndrewFarley I see what you're saying! I thought you were advocating to use lambci:

... for running your functions in the same strict Lambda environment, knowing that they'll exhibit the same behavior when deployed live. -lambci

I misunderstood, and thought you were pitching this as a workaround for not being able to sls deploy live with the plugin. I hope you can understand my confusion, seeing as this is a touted use case of lambci.

I did not realize that there were additional dependencies for the plugin besides those required for the Serverless framework itself, and those brought in by npm during the plugin's installation. I'm not trying to be obstinate, I will definitely try your suggestion of using the lambci image to get the plugin to work. And I would be happy to install Docker on the agent image if it helps with your debugging for #484.

I don't expect the software to magically solve any problems. I am aware that it is my problem to get the software working. I had just hoped that it would still function in a child Docker container without additional modifications, like it does on the host OS. I am definitely guilty of (perhaps incorrectly) thinking that #484 would solve this problem.

I have spent valuable time explaining my organization's use case above in order to draw attention to a fairly common CI/CD setup that is not currently supported out-of-the-box by the plugin. I did not go through this trouble with the expectation that you would solve this problem for us. As I have tried to reiterate above, I appreciate you spending your valuable time both responding and considering our use case as it pertains to #484 if/where applicable. I am working from the United States' East Coast (EST). I'll try to ping you on Slack at a time that is reasonable for Auckland, NZ. Thank you again for all your efforts on this!

@robertmarkevans

Used the temporary fix with VS Code's dev container, running the serverless CLI. I was getting the /var/task/requirements.txt not found error; after installing the fix it worked fine and deployed to AWS OK.
Thanks for the fix!

@felschr

felschr commented May 13, 2020

This fails for me on GitLab CI with:

 Serverless: Docker-In-Docker: servicePath: /root/.cache/serverless-python-requirements/508f2850e7c1fa6ddeef9255f33ec8f2ae6c14a09ea5b1731da87214b4de00c2_slspyc
 Serverless: Docker-In-Docker: We have detected an docker-in-docker configuration.  NOTE: This feature is in beta for this plugin, verbose output for now
 Serverless: Docker-In-Docker: Detected container: 0482597c5e09e4565a2d9c9425fadaf4a76604da5fdb46cf776e432199e72295
  
   Error --------------------------------------------------
  
   Error: Error: No such object: 0482597c5e09e4565a2d9c9425fadaf4a76604da5fdb46cf776e432199e72295
   
       at dockerCommand (/builds/user/project/node_modules/serverless-python-requirements/lib/docker.js:49:11)
       at getBindPath (/builds/user/project/node_modules/serverless-python-requirements/lib/docker.js:153:14)
       at installRequirements (/builds/user/project/node_modules/serverless-python-requirements/lib/pip.js:198:37)
       at installRequirementsIfNeeded (/builds/user/project/node_modules/serverless-python-requirements/lib/pip.js:555:3)
       at targetFuncs.filter.map.f (/builds/user/project/node_modules/serverless-python-requirements/lib/pip.js:598:35)
       at Array.map (<anonymous>)
       at ServerlessPythonRequirements.installAllRequirements (/builds/user/project/node_modules/serverless-python-requirements/lib/pip.js:592:8)
       at ServerlessPythonRequirements.tryCatcher (/builds/user/project/node_modules/bluebird/js/release/util.js:16:23)
       at Promise._settlePromiseFromHandler (/builds/user/project/node_modules/bluebird/js/release/promise.js:512:31)
       at Promise._settlePromise (/builds/user/project/node_modules/bluebird/js/release/promise.js:569:18)
       at Promise._settlePromise0 (/builds/user/project/node_modules/bluebird/js/release/promise.js:614:10)
       at Promise._settlePromises (/builds/user/project/node_modules/bluebird/js/release/promise.js:693:18)
       at Async._drainQueue (/builds/user/project/node_modules/bluebird/js/release/async.js:133:16)
       at Async._drainQueues (/builds/user/project/node_modules/bluebird/js/release/async.js:143:10)
       at Immediate.Async.drainQueues [as _onImmediate] (/builds/user/project/node_modules/bluebird/js/release/async.js:17:14)
       at runCallback (timers.js:705:18)
       at tryOnImmediate (timers.js:676:5)
       at processImmediate (timers.js:658:5)
       at process.topLevelDomainCallback (domain.js:126:23)

@felschr

felschr commented May 14, 2020

I've tried setting dockerInDockerPath to ${env:CI_PROJECT_DIR}, but then I get this error again:

ERROR: Could not open requirements file: [Errno 2] No such file or directory: '/var/task/requirements.txt'

The logged docker run command is:

Serverless: Running docker run --rm -v /builds/user/project/.serverless\:/var/task\:z -v /builds/user/project/.serverless\:/var/useDownloadCache\:z lambci/lambda\:build-python3.7 /bin/sh -c 'chown -R 0\\:0 /var/useDownloadCache && python3.7 -m pip install -t /var/task/ -r /var/task/requirements.txt --cache-dir /var/useDownloadCache && chown -R 0\\:0 /var/task && chown -R 0\\:0 /var/useDownloadCache'...

I'm using package.individually, not sure if that's relevant, though.

@felschr

felschr commented May 15, 2020

The only thing that worked for me was reverting to the 5.1.0 release and setting cacheLocation: ./.serverless/.requirements_cache as mentioned in #106 (comment).

It seems that this PR mainly attempts to support the bind-mounted /var/run/docker.sock approach rather than docker-in-docker, since with docker-in-docker the volume mounts actually work (as far as I could tell).

UPDATE:
Unfortunately even setting the cacheLocation didn't work. The build finished without any errors but the lambda functions were deployed without any dependencies and an empty requirements.txt.
I tried setting it to a different location (./.cache/.requirements_cache) but that didn't work either.

@jack1902

jack1902 commented Sep 3, 2020

Just tested this branch locally and it worked nicely. I would love to be able to do the following:

  • Node Container - Yarn installs serverless for use in CI
  • Python Container - Used to package the lambdas

Currently I have a bloated python-lambci container which also has Node in order to run the Serverless framework and package. This works, but I much prefer the Docker-in-Docker method.

What's left in order to get this PR moving again?

@bsamuel-ui
Contributor

Please see #550

@AndrewFarley
Contributor Author

Since it's my baby, I'll help this through to the finish line. I'm glad this has helped some people; I can confirm it makes this plugin work for me in some previously challenging environments. I think what's left really is the checklist above...

Rebase
Add tests (if possible? Does our CI support DinD? Or maybe these tests we run locally/manually for now, will confirm)
Add documentation to the README about DinD support and how to use it
(optional) Add example for GitlabCI / CircleCI for people to use this easily (this might be good for this plugin overall)

@alecgerona

Any updates on this @AndrewFarley? Unfortunately GitLab DinD still doesn't work for me on your branch.

@AndrewFarley
Contributor Author

AndrewFarley commented May 17, 2021

@alecgerona I'm not sure exactly what's missing or needed for this branch to be able to merge; its usefulness seems questionable at best, and it doesn't seem to have solved an actual problem.

At arm's length, when I heard of this desire/interest and saw the original MR, I thought "yeah, I have had this problem, so let me implement this feature". And I did, but especially having gone through it and thought about its actual uses, every alternative way of using this plugin is better and in some ways simpler than using DinD.

As I said above (I believe), I've used this branch successfully with GitLab (self-hosted) DinD runners on Kubernetes, and locally on Linux and Mac without issue. That said, on that GitLab cluster I ended up removing DinD support because it's insecure as hell. Generally, in my experience, anyone doing DinD is often a sign of "doing something wrong" and there's usually a better solution, as my thread above with someone highlighted. There seems to be a big lack of understanding of how "else" this plugin can be used besides DinD. If you're already "in" Docker, you can just make sure you run in a Linux container with Python support so the plugin already works without requiring DinD. If you need to run this alongside whatever native tools you have in a docker container, just put this in your docker container, add NodeJS, and then run your serverless commands in that container. You've just bypassed the need for DinD and the bad/insecure practice of it.
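
As a rough sketch of that alternative (assuming a CI job image that is already Lambda-compatible, e.g. built from lambci/lambda:build-python3.8 with Node.js added, so the plugin needs no docker at all):

# Inside the CI job's own container -- no docker CLI, no DinD required.
# serverless.yml would have dockerizePip: false (or the option omitted).
npm ci
npx serverless package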

I'd love to hear if there's anyone subscribed to this thread who can really justify continuing work on this; otherwise I'll likely just close and abandon the PR. Almost every use-case I've heard can be done differently, and better from a security point of view. Thoughts?

@pgrzesik
Contributor

Hey @AndrewFarley - it's been a long time since this PR was proposed. I'm going to close it; if you feel the issue is still valid, please open a new issue or a new PR against the latest main branch. Thanks 🙇

pgrzesik closed this Sep 27, 2022