[content-service] cannot restart stopped workspace #11183
Comments
@utam0k is this something you could look at next, once you are free? 🙏 It appears to be impacting many users.
There really do seem to be quite a few more users hitting the same error.
I removed myself as assignee because I'll be on vacation next week.
I have tried everything but could not reproduce it with only the information currently available. Someone else encountered the same error in our repository below, and there I was able to reproduce it. Maybe some manipulation inside the container is needed.
I encounter the same issue. I cannot start/restart a workspace, so I need to create a new one every time. Work not pushed is work lost.
Decided to try it out again today. It worked fine for an hour until the container stopped while I was working. Afterwards, I cannot start the workspace. An hour's worth of work is gone, so annoying. Come on, Gitpod, get it together and fix it.
Hi, @filipjnc. Thanks for your report. This issue is already scheduled but has not yet been worked on due to priorities.
Hi @utam0k, I can reproduce it as follows:
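The formatted reproduction steps did not survive in this copy of the thread; what follows is a hedged sketch of the gist, reconstructed from details elsewhere in the thread (the `supertokens` service name comes from the "How to test" section below):

```sh
# In a fresh workspace whose project runs a supertokens service via docker-compose:
docker-compose up -d        # start the containers and leave them running

# Stop the workspace (from the dashboard, or with the Gitpod CLI's `gp stop`)
# WITHOUT stopping the containers first, then try to restart it.
# The restart fails with the content-service error from the issue title.
```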
@filipjnc Thanks for your information. It helps us to resolve it. Is the reproduction rate 100%?
Yes, I can reproduce it every time on my end. All the old (damaged) workspaces could never be started again.
Sorry for the trouble. Thanks for your help.
As a way of possibly testing, I'm using a skeleton of this project: https://github.com/sprintcube/docker-compose-lamp
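For anyone trying that skeleton, the usual bring-up would be something like the following sketch (the clone URL is taken from the comment above; the compose file itself is not verified here):

```sh
git clone https://github.com/sprintcube/docker-compose-lamp.git
cd docker-compose-lamp
docker-compose up -d   # start the LAMP containers, then stop the workspace while they run
```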
@semiautomatix Sorry for the late reply 🙏 I tried the repo you provided (https://github.com/sprintcube/docker-compose-lamp), however I can't reproduce it. Could you please provide more detailed steps? Thank you.
@filipjnc Thanks for providing the information. I tried to access your repo to reproduce it; however, since it's private, I can't do any further testing.
Thanks for the update. I was hoping a clean pull would recreate the error. Additionally, I've cloned the project, added code to the www folder, started docker-compose, and imported an SQL file into the database. I'll attempt to recreate the error and provide access to the repo.
@6uliver sorry for the trouble, could you please email us at [email protected] with your workspace ID? Our support team could try getting a backup for you.
Sure, thank you for your help, maybe I will write. But asking support to restore my workspace every time this problem happens is only a temporary solution :)
I'm sorry we haven't solved this yet, @6uliver and @YoungElPaso. I just tested the repository linked above (thanks, @6uliver!) and the error continues to happen. @kylos101 given the huge impact of this error, I will add this to the breakdown. Btw, was it marked as blocked because of PVC?
@atduarte I removed the block, as we've stopped the PVC work. In other words, we blocked this issue to focus effort (time) on PVC. Now that that's stopped, we should resume resolving this issue. As a 🛹, perhaps on stop we can issue
@kylos101 👍 If that doesn't work for some reason, another (not great, but pragmatic and temporary) option might be allowing users to define shutdown tasks. Hoping we can fix the core issue though 🙏
@kylos101 @svenefftinge was working on a more generic approach to this problem, allowing users to specify shutdown tasks. Here is that PR: #11287
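Until shutdown tasks exist, the manual equivalent is to stop the containers yourself before the workspace stops. A minimal sketch, assuming a docker-compose project and the Gitpod CLI (`gp stop` stops the current workspace from inside it):

```sh
# Stop all compose-managed services so no container is still running
# when the workspace backup is taken.
docker-compose stop

# Then stop the workspace itself.
gp stop
```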
Added steps to recreate and how to test (by inspecting prior PRs); moving to Scheduled.
@utam0k can you move this issue to In Validation? Its PR is deployed.
👋 @6uliver @YoungElPaso @filipjnc @semiautomatix @nisan1337 @Nishchit14, we wanted to reach out and let you know that this issue is in fact resolved as of gen79 (us79 or eu79). I just double-checked the logs, and there are no traces of this error as of gen79 (which contains the fix). Let us know if you continue to have any trouble restarting stopped workspaces.
Great news! We tried it out for our project and it's working fine! Thank you very much, this has a big impact for us!
I'm able to restart the pods without issues now too. Loving Gitpod again :)
Having this issue now with a workspace. See also #16660. Opening a new workspace with the same prebuild does not help. Rebuilding the prebuild also does not help. This is serious.
👋 @srgwsrgwetgethg this was an incident (https://www.gitpodstatus.com/incidents/rs838czq8clg) and was resolved by #18236
Bug description
The full error on start is:
Logs:
https://cloudlogging.app.goo.gl/82CpwrNzW9oJWX1Q8
Log entry for the error:
https://console.cloud.google.com/logs/query;cursorTimestamp=2022-07-06T11:31:39Z;query=resource.labels.cluster_name%20:%20%2528%22eu51%22%2529%0A%22eb595f2b-7de4-4248-800d-7dfe0280f802%22%0Atimestamp%3D%222022-07-06T11:31:39Z%22%0AinsertId%3D%227hkbcwop4k5qd0t4%22;summaryFields=:false:32:beginning:false;timeRange=P1D?project=workspace-clusters
Trend over last 7 days:
https://cloudlogging.app.goo.gl/qdGdJ7CrD1zmLyrQ8
Steps to reproduce
Expected result
The workspace can be started successfully and Supertokens is running.
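One way to verify the expected state, as a hedged sketch (it assumes the SuperTokens core container exposes its default port 3567 and its standard `/hello` health endpoint):

```sh
# Confirm the supertokens container is up after the workspace starts.
docker ps --filter "name=supertokens"

# SuperTokens core answers on its /hello health endpoint by default.
curl -s http://localhost:3567/hello
```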
Actual result
You can see a Gitpod error page with the text "Oh, no! Something went wrong!" and the following long error message:
Workspace affected
https://prlct-shipapp-5830i5ovwcb.ws-eu51.gitpod.io/
Expected behavior
The ability to restart a stopped workspace
Example repository
No response
Anything else?
How to test? From the steps below.
How to test
Run `ls -lat` in the workspace; the output should be similar to the expected `ls -lat` listing.
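A sketch of that check, assuming the project checkout lives under `/workspace` (Gitpod's default persistent directory):

```sh
# Before stopping: record the listing (kept under /workspace so it survives the restart).
ls -lat /workspace > /workspace/ls-before.txt

# After restarting: the fresh listing should match the saved one,
# apart from the snapshot file itself and anything changed since.
ls -lat /workspace
cat /workspace/ls-before.txt
```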
Assert that the steps to recreate in this issue do not fail.
Follow the steps to recreate in this issue, but before stopping your workspace, stop the `supertokens` container, and then stop the workspace. Restart the workspace; it should avoid this error.
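Putting those checks together, a minimal end-to-end sketch of the regression test (assuming a docker-compose project with a `supertokens` service and the Gitpod CLI's `gp stop`):

```sh
# 1. Bring the stack up and do some work.
docker-compose up -d

# 2a. Previously failing path: stop the workspace while containers are running.
gp stop

# 2b. Workaround path: stop the container first, then the workspace.
docker-compose stop supertokens
gp stop

# 3. Restart the workspace from the dashboard; with the fix deployed,
#    both paths should restore the workspace without the content-service error.
```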
container, and then stop your workspace. Restart your workspace, it should avoid this error.The text was updated successfully, but these errors were encountered: