Setting up code-server for multi-tenancy #792
This would really be an awesome feature. I assume it would let you monitor connections to the server, so it would be easy to implement an 'auto shutdown' feature for the server.
It's unlikely that we'll implement support for Google IAP in code-server. It should be relatively easy to write your own proxy that can handle this.
Yes, and equally I'd happily have a Kube cluster running at all times so startup was instant.
Yes, the best implementation would be a proxy, and we're going to test that theory next week, but respecting the passed-through header (email address and user ID) would be useful for attaching persistent disk claims, keys, and a bunch of other stuff, right?
Hey @asomervell, any update on your experiments?
code-server doesn't have any functionality that lets you attach disks/keys; it just provides full access to the computer/container it's running on. If you wanted to programmatically attach disks and keys, you'd have to make your proxy work that out and send the relevant commands to the kube cluster.
One container per person in a Docker swarm... This is already being done to isolate students in individual software development environments for 200+ students in a university web development course.

Each student gets their own single-container Docker service (php:7.3-apache) with code-server installed and running on port 8443 (Apache runs on port 80). An nginx server connects the student's web browser (via a wildcard DNS entry) to that student's service. code-server itself runs as www-data in the container, with /var/www as a working home mount.

Currently, special remote SSH commands are used to allow students to start/stop their containers and get the code-server password (via "docker service logs"). This will eventually become a web-based interface, but for now SSH login provides all the authentication and authorization to the system. SSH is also used to provide file transfer to/from their working directory, though it is not used for CLI access (that is provided by code-server). Git is also installed inside the container, so students can use a Git repository as an alternative file transfer method for their project work.

Basically it can be done, and is being done, and was put together in under a month by one person (me).
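For concreteness, a per-student swarm service along the lines described above could be sketched as the stack file below. This is a hypothetical illustration, not the actual course setup: the service, network, and volume names are invented, and the base image would need code-server layered on top.

```yaml
# Hypothetical per-student swarm stack (names illustrative); deploy with:
#   docker stack deploy -c student.yml sdev
version: "3.7"
services:
  u1234567:                      # one service per student
    image: php:7.3-apache        # Apache on port 80; code-server assumed added on 8443
    user: www-data
    networks:
      - sdev-net                 # shared with the nginx wildcard proxy
    volumes:
      - u1234567-home:/var/www   # persistent working home mount
networks:
  sdev-net:
    external: true
volumes:
  u1234567-home:
```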
To provide another perspective, I'm operating my own dev environment using Cloudflare Access (for user auth), Cloudflare Argo (so that the backend instances aren't exposed to the internet), and GCP Compute with MicroK8s installed. I have a small portal that allows me to define container templates/projects (that use a template) that bring along with them disk configurations. I have a set of container images at https://github.com/davefinster/coder that I use for various languages.

At the moment, the portal allows me to manually start/stop my instances, and I assign a project to one instance at a time. This configures DNS records such as -ide. and, say it's a web-based project, also -ui.. It all gets trunked over the Argo tunnel, so despite the GCE instances having public IPs, their firewalls block all HTTP/S traffic and the only reachable path is via Cloudflare Access. I then have Cloudflare Access tied to my personal G Suite, which is secured via YubiKey.

Ideally one day I'd like to somehow monitor the websocket connectivity, automatically idle out the machine, and get compute costs to $0 when not in use. One aspect I don't have a good answer for is interacting with private Git credentials, given the remote nature and trying to stay away from SSH and having credentials stored on the server.
I'll pin this one so people have an idea of what to do. I recommend everyone pool your ideas and have them documented in
@davefinster Ideally one day I'd like to somehow monitor the websocket connectivity and automatically idle out the machine and would like it to get compute costs to $0 when not in use. In code-server version 1, I have been doing idle testing by checking the timestamp of one of the log files (file metadata, not its contents). The file is the latest file (sorted by name, or by date) with this name
This file was updated every 5 minutes by code-server while the user had it open in the browser. Once the browser was closed, the file was no longer updated, and an hour later I automatically shut down that user's Docker environment. As it is a file, I could do the test outside Docker, or even from a different node to where the Docker container is running. Easy-peasy... I have not found a similarly easy solution for this in code-server v2, but have an active issue logged for it: #1050
Update: a heartbeat file has been added... So, once code-server has started, idle checking is now just a matter of checking when that file was last updated.
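The heartbeat-based idle check could be sketched as below. This is a minimal illustration, assuming the heartbeat file's default location and a one-hour cutoff; the path and threshold are assumptions, and the actual stop command (e.g. scaling the Docker service to zero) is left to the caller.

```python
# Hypothetical idle check: code-server touches a heartbeat file while a
# browser session is active, so the file's mtime tells us when the last
# activity happened. Path and one-hour limit are assumptions.
import os
import time

def seconds_idle(heartbeat_path, now=None):
    """Seconds since the heartbeat file was last touched (inf if missing)."""
    now = time.time() if now is None else now
    try:
        return now - os.path.getmtime(heartbeat_path)
    except OSError:
        # No heartbeat file at all: treat as idle forever
        return float("inf")

def should_stop(heartbeat_path, max_idle=3600, now=None):
    """True when the environment has been idle longer than max_idle seconds."""
    return seconds_idle(heartbeat_path, now) > max_idle
```

Because this is just a file-metadata check, it can run outside the container, or even on a different node, exactly as described for the v1 log-file approach.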
Hey folks,
@rafket Hey, I started working on a similar solution a couple of months ago, named multiverse, but I've since archived it because I thought things got a bit too messy. I got as far as having username/password authentication, with a reverse proxy (Traefik) to lock paths. The entire plan was to have it kind of template-based so dev teams could use the same template for consistency. I'd love to collaborate on a new solution, however.
I am trying something with Traefik + Docker and some magic with labels, to match https://github.com/dexidp/dex with https://github.com/mesosphere/traefik-forward-auth for authentication. The big issue I have right now is that for some reason the websockets keep getting lost (#1161), but that might be Traefik's fault (traefik/traefik#5533). I have something written with node-proxy and Passport, but I would rather use Traefik in the end.
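The label-matching approach could look roughly like the Compose fragment below. This is a hedged sketch, not the poster's actual setup: it assumes Traefik v2 with the Docker provider enabled, and the hostnames, container names, and forward-auth address are all invented for illustration.

```yaml
# Hypothetical docker-compose fragment: route a user's subdomain to their
# code-server container, requiring forward-auth on the way in.
services:
  alice-ide:
    image: codercom/code-server
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.alice.rule=Host(`alice.ide.example.com`)"
      - "traefik.http.routers.alice.middlewares=forward-auth"
      # traefik-forward-auth instance assumed to be reachable at this address
      - "traefik.http.middlewares.forward-auth.forwardauth.address=http://traefik-forward-auth:4181"
      - "traefik.http.services.alice.loadbalancer.server.port=8080"
```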
@geiseri I have an ldap/traefik-forward-auth setup up and running but need help with the multi-tenancy part. How are you attaching users to separate drives?
@sr229 I have built a basic multi-tenant solution with Traefik, Authelia, OpenLDAP, and a small Starlette server I wrote to manage the spin-up of user containers. Would this interest anyone, and would I infringe on any licenses by posting a gist of my solution? It should also take care of automatic SSL with Let's Encrypt.
Do you have Traefik on the same server as code-server? I have them on different servers.
This is going to be discussed in detail in the FAQ I'm writing. Thank you all for your comments.
@nhooyr Would you post a link to the FAQ?
This sounds really interesting and similar to what I'm trying to achieve. Would you mind sharing your nginx.conf that's doing the wildcard proxying to the Docker containers?
Sure... It is run in a Docker container, with a wildcard domain. Without a username, it serves the top-level website; with a username in that domain, it proxies the request on the given ports to that user's service via a Docker network. It was designed for version 1 of code-server and still works fine with version 3.4. Yes, each of the Docker environments has an Apache as well as a code-server. E.g.: ssh [email protected] start
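The wildcard-proxying idea could be sketched in nginx along the lines below. To be clear, this is a hypothetical reconstruction, not the actual nginx.conf from that setup: the domain, port, and the convention that the container name matches the subdomain's username are all assumptions.

```nginx
# Hypothetical wildcard proxy: treat the first subdomain label as the
# username and proxy to a matching container on the Docker network.
server {
    listen 80;
    server_name ~^(?<user>[a-z0-9]+)\.sdev\.example\.edu$;

    location / {
        # Resolve container names via Docker's embedded DNS
        resolver 127.0.0.11;
        # Container/service name assumed to match the username
        proxy_pass http://sdev-$user:8443;
        proxy_set_header Host $host;
        # Required for code-server's websocket connection
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection upgrade;
    }
}
```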
We are working on a web interface to start/stop user containers instead. |
That's awesome, @antofthy! Thanks so much for sharing. I'm sure this will be helpful for others.
@antofthy, I'm wondering if there was any progress on the web interface you were talking about, or if you stumbled upon anything in the wild that would start/serve/kill the server on demand for the user?
Yes indeed. There was a major delay, as I personally had no experience programming in PHP, but after some help and a number of weeks of development and testing, it is now working. Users no longer need to use SSH remote commands to control their software development environments (Docker services).

It is a PHP front-end with a Python back-end that can control Docker itself. The back-end does all the high-security aspects like user authentication and starting/stopping/wiping the Docker service (the user's environment). Users of the "Software Development Environments" can use the PHP front-end control panel to select/wipe/start/stop various prepared software environments. Each environment has a code-server they can connect to, providing an IDE and a terminal (command line) into the Debian UNIX system of their environment. Most environments have compilers as well as their own Apache-PHP server. A separate Docker 'ingress' service links user web requests (using a wildcard proxy) to the appropriate software development environment, for either code-server, Apache, or even a NodeJS server they have created.

The ONLY troublesome aspect of this system is that a random password is generated for code-server each time a user starts their own environment (Docker service). They need to copy it (a button press) before clicking the 'connect to code-server' link. The code-server in their environment then asks for that password to verify the user, which they paste in before access is granted. Ideally we would love to be able to set a code-server authentication cookie, both in the starting code-server inside the Docker service and on the 'started' control panel display. That way the already-authenticated user could just click the code-server link without needing an extra code-server authentication step (copying and pasting a random password).

An academic paper is being prepared by the lecturer who instigated the project originally.
code-server seems particularly close to being able to run for a team of engineers on a single Kubernetes cluster, each with their own container and persistent data store. That would be incredibly efficient, secure, and highly available.
I don't think server-side collaboration is necessary; that's what GitHub is for. I'd prefer each engineer be sandboxed, the key utility of code-server being in-browser and consistent.
A generic OAuth implementation as described in other feature requests might work agnostic of cloud providers... and be a first step.
But I'd suggest a well-formed Kubernetes deployment with Google Identity-Aware Proxy in front of it would be epic. It brings with it a host of benefits, not the least of which is their zero-trust corp security.
IAP is easy to attach to a GCP load balancer, and AFAICT the server would just need to understand the identity asserted in the headers and route to an appropriate container.
Thoughts? How would I go about resourcing that?