How to make a Docker container running Judge0 use cgroup v1 in Ubuntu?

I have installed Judge0, an online IDE backend, for a coding web app, and I am running it in a Docker container. The problem is that Judge0 works only with cgroup v1, not with cgroup v2, so when I run the app I get an internal error.
Researching the Judge0 GitHub repo, I found the issue is due to cgroup v2 being used by default.
On running command
grep cgroup /proc/mounts
I get the following output:
cgroup2 /sys/fs/cgroup cgroup2 rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot 0 0
I need to switch the whole system to cgroup v1, not cgroup v2. Even after editing the GRUB_CMDLINE_LINUX line in /etc/default/grub and running
sudo update-grub
I am not able to resolve the issue. I need help on how to switch to cgroup v1 on Ubuntu 22.04.
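For reference, the boot parameter that usually forces systemd back onto the legacy cgroup v1 hierarchy is systemd.unified_cgroup_hierarchy=0; this is the standard systemd switch, not anything Judge0-specific. A minimal sketch of the edit, assuming a stock Ubuntu 22.04 GRUB setup:

# /etc/default/grub -- append to any existing parameters rather than replacing them
GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=0"

sudo update-grub
sudo reboot

# after the reboot this should list v1 cgroup controllers instead of a single cgroup2 mount
grep cgroup /proc/mounts

If the output still shows only cgroup2 after a reboot, it is worth confirming that the machine actually boots through GRUB, since cloud images and some VMs use a different bootloader configuration.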

Related

Docker Desktop (Windows) is not starting after adding ".wslconfig" file

Docker Desktop is not starting after adding a ".wslconfig" file.
I followed the steps given in these links:
https://mrakelinggar.medium.com/set-up-configs-like-memory-limits-in-docker-for-windows-and-wsl2-80689997309c
https://itnext.io/wsl2-tips-limit-cpu-memory-when-using-docker-c022535faf6f
When starting Docker Desktop I get the error message:
"It looks like there is an error with Docker Desktop, restart it to fix it."
.wslconfig file:
[wsl2]
memory=3GB # Limits VM memory in WSL 2 to 3GB
processors=4 # Makes the WSL 2 VM use four virtual processors
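One gotcha worth noting: WSL 2 only re-reads .wslconfig when the VM is restarted, so changes to the file have no effect until WSL is shut down. This is standard WSL behavior, not specific to this setup:

wsl --shutdown
wsl -l -v   # confirm every distro shows "Stopped" before relaunching Docker Desktop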
Question: does the error go away after removing the .wslconfig file?
Answer: No.
Then I installed Docker Desktop using Hyper-V instead of WSL 2.
But when I restart the laptop, Docker Desktop again fails to start with this method as well.
GitHub issue: https://github.com/docker/for-win/issues/11822
C:\Users\[USER]\AppData\Local\Docker
C:\Users\[USER]\AppData\Roaming\Docker
C:\Users\[USER]\AppData\Roaming\Docker Desktop
"Once deleted, I didn't have to do anything else, Docker Desktop started booting up as normal."
After deleting the above folders Docker starts, but it is effectively a reset of Docker.
After that I restarted my laptop to check whether Docker still runs after a reboot.
But I am still facing the same issue: Docker Desktop is not starting.
Current Docker Version - 4.9.0
Finally, Docker Desktop started running after downgrading it from version 4.9.0 to 4.8.0, which is one of the solutions shared in the post below.
The issue was not with ".wslconfig"; the issue was the Docker Desktop version.
Docker Desktop 4.8.0 is working with both Hyper-V and WSL 2.
http://www.dev.fyicenter.com/1001459_Docker_failed_to_initialize_Error.html

Docker container and FIPS mode enabled

We're trying to get a SAP HanaExpress container running on a VM that has FIPS mode enabled, but it will not start up due to a FATAL FIPS SELFTEST FAILURE error. The VM is running CentOS 7, though I'm not sure that matters. I've read several articles/posts and it appears the running container does recognize that FIPS mode is enabled, but the service still doesn't work. Both of these checks work:
cat /proc/sys/crypto/fips_enabled
sysctl crypto.fips_enabled
The container is running in privileged mode and the /etc/system-fips file is mounted into the container as well. Is there anything else I need to check to make this work or at least debug the issue? I have a feeling there's something small, and not HanaExpress specific, that I'm missing, but I just haven't found it yet.
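For anyone comparing setups, the configuration described above boils down to something like the following; the image name and read-only mount are illustrative assumptions, not the poster's exact command:

# run privileged with the FIPS marker file mounted, then check the kernel flag
docker run --rm --privileged \
  -v /etc/system-fips:/etc/system-fips:ro \
  centos:7 cat /proc/sys/crypto/fips_enabled
# prints 1 when the host kernel is in FIPS mode, since /proc/sys reflects the shared host kernel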
Edit 1: It looks like the issue is Docker and not the container. I finally found information that FIPS mode should be reported as active when you run the docker info command, but it wasn't showing up. CentOS 7 installs Docker 1.13.1, which appears to be too old, so I downloaded and installed 20.10.9. However, even following the instructions provided by Docker, it's not reporting FIPS mode as active. If anyone else has had this issue and solved it, any advice would be great.
/proc/... files are read-only.
If you don't need FIPS enabled, then try disabling it and see if the error goes away: https://www.thegeekdiary.com/how-to-disable-fips-mode-on-centos-rhel-7/
Then test whether FIPS is enabled by running sysctl -a 2>/dev/null | grep fips_enabled, and check whether dracut-fips is installed with sudo yum list installed | grep "dracut.fips".
I believe that if the host OS where you are running the Docker container has FIPS disabled, then you may not see this error.
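For convenience, the procedure in the linked article is roughly the following; this is a sketch, so double-check the paths against your own GRUB/boot layout (EFI systems keep grub.cfg elsewhere) before running it:

# remove the FIPS dracut module and rebuild the initramfs
sudo yum remove dracut-fips\*
sudo dracut --force
# delete fips=1 from GRUB_CMDLINE_LINUX in /etc/default/grub, then regenerate the config
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
sudo reboot
# afterwards this should print 0
sysctl crypto.fips_enabled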

Cannot create container for service xxx: open /var/lib/docker/overlay2/969rf5...fdf-init/merged/etc/resolv.conf: Operation not permitted

While deploying Docker images on the production server using docker-compose, I got the following error.
Cannot create container for service xxx: open /var/lib/docker/overlay2/969rf5...fdf-init/merged/etc/resolv.conf: Operation not permitted
What I tried:
Changed permissions (went as far as giving 777 to all the directories involved)
Upgraded the kernel (saw somewhere that it could be a kernel issue)
Removed ACLs using setfacl -Rb /var/lib/docker
Added the "graph": "/var/lib/docker" line to daemon.json (see the sketch after this list)
Restarted the docker service a couple of times
Tried running the images individually instead of via docker-compose
Tried running with sudo
Set --storage-opt overlay2.override_kernel_check=1 (since it is RHEL, and the kernel version supported for Docker overlay2 is >4.0, it was suggested to override the check)
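For reference, the daemon.json tweaks from the list above, combined, would look roughly like this in /etc/docker/daemon.json; a sketch only, and note that "graph" is the legacy key that newer Docker versions spell "data-root":

{
  "graph": "/var/lib/docker",
  "storage-driver": "overlay2",
  "storage-opts": ["overlay2.override_kernel_check=true"]
}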
I failed every time!
I have no idea what the issue is or what it could be. Every time I run the docker-compose file, it creates a new directory in /var/lib/docker/overlay2 with the old permissions, even though I changed the permissions on all the other directories in it. At this point I'm not even sure it is a permissions issue.
Any help would be appreciated! Thank you!
Finally, after trying everything on the internet with nothing working, the issue was resolved. It was due to the antivirus that was installed on the server. The client had it removed, and docker/docker-compose started working absolutely fine.
To give a little more information on the antivirus: it was a FIM (file integrity monitoring) product installed on a RHEL 3.10-xxx kernel that was blocking Docker. It was an older version; thankfully newer versions are compatible with Docker. So that's a wrap, thank you all!

Docker commands and setup that ran successfully yesterday suddenly started giving an "ENOENT" error today on Windows 10

I am relatively new to Docker.
Yesterday I set up Docker on a Windows 10 machine and ran a few containers.
Today I first ran "docker system prune" so that I could run all of them once again without any conflicts.
But now when I run the command below:
docker run --name DockerName -v /c/collections:/etc/newman -t postman/newman:ubuntu run "MyAPITestCollection.postman_collection.json" --environment="MyAPITestEnvironment.postman_environment.json" --reporters="json,cli" --reporter-json-export="reports/MyAPITestReport.json"
I get the error below:
error: ENOENT: no such file or directory, open 'MyAPITestEnvironment.postman_environment.json'
I haven't made any changes to the directories or anything else.
I checked the Docker Desktop settings and found that the drive on which the file is located is still shown as a shared drive.
I tried restarting Docker Desktop several times, and once rebooted the machine as well, but still got the same error.
I kindly request help figuring out the root cause of this issue, and the solution as well.
My Docker network had once again become public, so the firewall might have been preventing file access.
I changed it to private using the command below, and it ran successfully:
Set-NetConnectionProfile -InterfaceAlias "vEthernet (DockerNAT)" -NetworkCategory Private
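To verify the change afterwards, the matching read cmdlet can be used; this is a standard PowerShell cmdlet, nothing Docker-specific:

Get-NetConnectionProfile -InterfaceAlias "vEthernet (DockerNAT)"
# the NetworkCategory field should now report Private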

gcloud docker pull fails with Untar exit status 2 unexpected fault address

EDIT: A huge thank you to @mattmoor for helping me debug the issue. In the end I had to create a new docker-machine; there was a problem with the docker daemon that must have arisen from the first machine not being created correctly.
I am having trouble pulling images on a second computer; both machines are running OS X Yosemite. Both machines have the docker daemon running, and both have successfully authenticated with the desired project to pull from using
gcloud auth login
On my computer I am able to run
gcloud docker pull gcr.io/projectid/image-tag
without any issues.
However, when I try to repeat this on another machine, I get a large error message that begins with:
Error pulling image (tag-here) from gcr.io/projectid/image-tag, endpoint: https://gcr.io/v1/, Untar exit status 2 unexpected fault address 0xc208ce5d04
fatal error: faultr downloading dependent layers
[signal 0xb code=0x1 addr=0xc208ce5d04 pc=0x94109e]
Followed by a goroutine 1 stack trace.
The Docker version on both machines is 1.6.2, the client and server API versions are 1.18, and both Go versions are go1.4.2.
The Google Cloud SDK version on both machines is 0.9.67, and both have the following components installed:
bq 2.0.18
bq-nix 2.0.18
core 2015.06.30
core-nix 2015.06.02
gcloud 2015.06.30
gcutil-msg 2015.06.09
gsutil 4.13
gsutil-nix 4.12
preview 2015.06.30
and the machine that works also has these extra components installed:
alpha 2015.06.30
beta 2015.06.30
kubectl
kubectl-darwin-x86_64 0.18.1
Any help would be greatly appreciated; I'm truly baffled as to why I can't pull from the gcr.io registry on the other machine.
I'm baffled too; this looks like Docker dying while trying to untar the blob, and I haven't seen that before.
Would you mind starting a thread with gcr-contact@google.com, as this may take a little debugging, and email will be a bit easier for the back-and-forth?
We can update this with what we find, if that works for you.
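For completeness, the machine recreation mentioned in the EDIT at the top looks roughly like this with the docker-machine tooling of that era; the machine name "default" and the virtualbox driver are illustrative assumptions:

docker-machine rm default
docker-machine create --driver virtualbox default
eval "$(docker-machine env default)"   # point the docker client at the new daemon
gcloud docker pull gcr.io/projectid/image-tag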
