CentOS issue with Docker sysctl changes specific to net.ipv4.tcp_timestamps

The goal is to launch the container with specific tcp_timestamps and tcp_sack flags.
The following command works as expected on Ubuntu (22.04) and Fedora (36) hosts:
docker run --privileged --rm -dt --name ubuntu -p 8080:80 ubuntu /bin/bash -c "sysctl -w net.ipv4.tcp_timestamps=0 && sysctl -w net.ipv4.tcp_sack=0 && sleep 15"
A container shell on the Fedora host, where I launched the Ubuntu container with this command; it worked as expected:
root@3c7583143b0d:/# sysctl -a | grep -E "tcp_timestamps|tcp_sack"
net.ipv4.tcp_sack = 0
net.ipv4.tcp_timestamps = 0
root@3c7583143b0d:/#
However, on a CentOS (7.9.2009) host it doesn't work for either timestamps or sack. The container exits immediately with an error, and a container shell shows that neither setting is present inside the container:
root@fa306af3eb37:/# sysctl -a | grep -i "timestamp"
net.netfilter.nf_conntrack_timestamp = 0
root@fa306af3eb37:/#
I can apply the changes to both timestamps and sack on the CentOS host itself, but somehow they are not exposed to the container.
Any idea why this fails only on the CentOS host?
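As an aside, on Docker versions that support per-container sysctls, namespaced net.* values like these can also be set at container start with --sysctl rather than by running sysctl -w inside a privileged container; a minimal sketch, assuming the same image and port mapping as above:
docker run --rm -dt --name ubuntu -p 8080:80 --sysctl net.ipv4.tcp_timestamps=0 --sysctl net.ipv4.tcp_sack=0 ubuntu sleep 15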

Right, so after more than 12 hours on this I was finally able to figure it out. The behaviour was due to the kernel version. I had taken the latest CentOS 7 ISO (July 2022) from the official website, and it installed kernel 3.10.0. I started my Docker work right after the installation and ran into the issue. I should have taken a break earlier and come back to it with a relaxed mind.
I finally decided to upgrade the kernel from 3.10.0 to the latest stable version, 6.0.2, and that fixed it:
root@71e6bd7e3088:/# sysctl -a | grep -E "tcp_timestamps|tcp_sack"
net.ipv4.tcp_sack = 0
net.ipv4.tcp_timestamps = 0
root@71e6bd7e3088:/#
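For reference, a sketch of one way to do that kernel upgrade on CentOS 7, assuming the ELRepo repository and its mainline kernel-ml package are used (the exact kernel version installed depends on when you run it):
# Import the ELRepo signing key and enable the repository
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
# Install the mainline kernel, make it the default boot entry, and reboot
yum --enablerepo=elrepo-kernel install kernel-ml
grub2-set-default 0
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot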

Related

docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]]

I saw similar threads, but they are different because I am using WSL 2 with Docker and GPU-aware Docker.
I have Windows 10 version 2004 (build 20161.1000).
I have installed WSL 2 and have Docker Desktop 2.3.0.3 running on my Windows system.
I have Ubuntu 18.04 LTS installed in WSL 2 too.
I have installed the NVIDIA driver
The Linux kernel version is 4.19.121-microsoft-standard.
The NVIDIA driver version is 455.41 for my laptop GPU, a Quadro M2000M.
Actually I followed all the steps described in https://ubuntu.com/blog/getting-started-with-cuda-on-ubuntu-on-wsl-2 until the step where I have to run "sudo service docker stop" in an Ubuntu terminal.
This results in the message docker: unrecognized service.
I have to restart Docker Desktop in Windows 10 in order to get the daemon running.
I then test in the Ubuntu terminal: docker run hello-world ==> this runs fine.
The command docker run -it ubuntu bash ==> also runs fine in the Ubuntu terminal of WSL 2.
BUT when I run:
docker run -u $(id -u):$(id -g) -it --gpus all -p 8888:8888 tensorflow/tensorflow:latest-gpu-py3-jupyter
then I get the error: docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]]
This involves Microsoft, Ubuntu, and NVIDIA. I have searched the support sites but could not find anything that solves my problem.
Can anyone help me here?
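A smaller reproduction that exercises only the --gpus path might help to separate the runtime problem from the TensorFlow image; a sketch, with the CUDA image tag chosen purely as an example:
docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi
If this already fails with the same could not select device driver error, the issue is in the Docker/NVIDIA runtime wiring rather than in the TensorFlow image.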
There is this strange answer mentioned here and here:
sudo service docker start
sudo mkdir /sys/fs/cgroup/systemd
sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd
This worked for me on WSL (Ubuntu 20.04), so I added it to the ~/.bashrc script.
Note that the first step may need to be restarting Docker rather than just starting it!
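For reference, a sketch of how that can look in ~/.bashrc (this assumes the sudo calls run without a password prompt):
# Workaround from the answer above: start Docker and make sure the systemd cgroup hierarchy is mounted
sudo service docker start
if ! mountpoint -q /sys/fs/cgroup/systemd; then
    sudo mkdir -p /sys/fs/cgroup/systemd
    sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd
fi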

Teamcity Build won't run until Build Agents is configured with Docker?

I created a new build for my TeamCity pipeline. For the first time I am using the Docker build step. After I set everything up, I realized the build agent does not seem to be ready for it.
I understand that my agent does not seem to be ready for building with Docker, but nobody actually explains how to get it ready. I read the official guides, but there is no word about how to actually install Docker on my agent (if that's the way to solve the problem).
Can someone tell me what I have to do to get it to work?
EDIT
@Senior Pomidor helped me get one step closer. I applied his first example to the docker run command:
docker run -it -e SERVER_URL="<url to TeamCity server>" \
--privileged -e DOCKER_IN_DOCKER=start \
jetbrains/teamcity-agent
After doing so I got rid of the messages mentioned in the screenshot. My agent's configuration now shows the following:
docker.server.osType linux
docker.server.version 18.06.1
docker.version 18.06.1
But TeamCity is still complaining with another message, which kind of leaves me clueless again.
Final Solution:
The issue described in EDIT2 below could be resolved by simply restarting the TeamCity server instance. The agent was actually able to run the build, but TeamCity was not able to recognise that without a restart.
EDIT2
Request Information:
My CI Server OS:
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
Running Container:
CONTAINER ID   IMAGE                       COMMAND              CREATED        STATUS        PORTS                  NAMES
0f8e0b04d6a6   jetbrains/teamcity-agent    "/run-services.sh"   19 hours ago   Up 19 hours   9090/tcp               teamcity-agent
20964c22b2d9   jetbrains/teamcity-server   "/run-services.sh"   37 hours ago   Up 37 hours   0.0.0.0:80->8111/tcp   teamcity-server-instance
Container run by:
## Server
docker run -dit --name teamcity-server-instance -v /data/teamcity:/data/teamcity_server/datadir -v /var/log/teamcity:/opt/teamcity/logs -p 80:8111 jetbrains/teamcity-server
## Agent
docker run -itd --name teamcity-agent -e SERVER_URL="XXX.XXX.XXX.XXX:80" --privileged -e DOCKER_IN_DOCKER=start -v /etc/teamcity/agent/conf:/data/teamcity_agent/conf jetbrains/teamcity-agent
Build Step Information:
TeamCity restricted the configuration because the agent does not start a Docker daemon.
You should pass -e DOCKER_IN_DOCKER=start to start the Docker daemon automatically in the container. The daemon also needs the Docker socket. In a Linux container, if you need a Docker daemon available inside your builds, you have two options:
--privileged flag: a new Docker daemon running within your container
-v /var/run/docker.sock:/var/run/docker.sock: Docker from the host (in this case you will benefit from the caches shared between the host and all your containers, but there is a security concern: your build may actually harm the host Docker, so use it at your own risk)
Examples:
docker run -it -e SERVER_URL="<url to TeamCity server>" \
--privileged -e DOCKER_IN_DOCKER=start \
jetbrains/teamcity-agent
docker run -it -e SERVER_URL="<url to TeamCity server>" \
-v /var/run/docker.sock:/var/run/docker.sock \
jetbrains/teamcity-agent
UPD
docker.server.osType is required because the build step sets linux as the OS type.
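To double-check that the daemon is reachable from inside the agent once it is up, something like this can be run from the host (the container name teamcity-agent matches the run command above):
docker exec -it teamcity-agent docker info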
What worked for me was changing the permissions of /var/run/docker.sock inside the agent container.
Run a shell inside the container:
docker exec -u 0 -it <CONTAINER_ID> bash
Change permissions of the docker socket:
chmod 666 /var/run/docker.sock
Verify that the container can use the Docker socket:
docker version
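For reference, the same fix can be applied in one step from the host, without an interactive shell (same <CONTAINER_ID> as above):
docker exec -u 0 <CONTAINER_ID> chmod 666 /var/run/docker.sock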

How to run interactive Centos 6 within docker

I'm unable to run an interactive session with centos:6 in Docker. It works perfectly with centos:7.
>docker -v
Docker version 18.03.0-ce, build 0520e24302
>docker pull centos:6
...
>docker run -it centos:6
[just returns to my terminal]
>docker pull centos:7
...
>docker run -it centos:7
[root@f8c0430ed2ba /]# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
Any idea what is going on ?
I tried with an older version of Docker, and I pulled all the newer images (centos:6/6.6/6.7/6.8/6.9); they all show the same problem. I also tried with /bin/bash or sh at the end.
Also, I'm sure that it used to work more or less one year ago.
I'm using Arch Linux.
It is a known issue that seems to be linked to the Spectre patch:
issue 103 says:
Running a docker run --rm -it centos:6 bash fails with exit status 139 (i.e. bash exits with SIGSEGV) on Linux kernel 4.15.9. Downgrading to 4.14.15 (which is vulnerable to Spectre V1) gets rid of the segfault.
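A quick way to confirm you are hitting that same crash is to check the exit status and the host kernel version; a sketch:
docker run --rm -it centos:6 bash
echo $?    # 139 = 128 + 11, i.e. bash was killed by SIGSEGV
uname -r   # host kernel version, to compare with the versions mentioned in the issue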

Error running openshift/origin docker: Error running 'chcon' to set the kubelet volume root directory SELinux context

When I run the openshift/origin Docker image, I see this error in the logs of the container ($ docker logs origin):
Error running 'chcon' to set the kubelet volume root directory SELinux context
Is this a known issue, can it be ignored, or did I miss anything?
The command line I used is:
docker run -d --name "origin" -e "http_proxy=$http_proxy" -e "https_proxy=$https_proxy" -e "no_proxy=$no_proxy" --privileged --pid=host --net=host -v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys openshift/origin start --cors-allowed-origins='.*'
Some information about my OS and environment:
3.12.28-4-default
SUSE Linux Enterprise Server 12 (x86_64)
VERSION = 12
PATCHLEVEL = 0
NAME="SLES"
VERSION="12"
VERSION_ID="12"
PRETTY_NAME="SUSE Linux Enterprise Server 12"
ID="sles"
That error can be ignored unless you are trying to run with SELinux enabled. The container is trying to set the label on the volumes directory to ensure that labels are properly inherited.
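If you are unsure whether SELinux is actually enabled on the host, a quick check (getenforce is part of the SELinux userland tools and may not be installed by default on SLES):
getenforce    # prints Enforcing, Permissive, or Disabled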

Docker daemon fails

I am using Ubuntu 14.04 on a Dell Latitude E6430 and have installed the docker.io package. Now, whenever I restart my system, the first docker command fails with the message
Cannot connect to the Docker daemon. Is 'docker -d' running on this host?
Hence, to fix this I run the commands below and the daemon works like a charm:
ps -ef | grep docker
kill -9 <pid found> && rm /var/run/docker.pid
I have checked previous questions and none of them answers this behaviour.
Any idea why it is breaking?
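For reference, the workaround described above as a single sketch, assuming a stale /var/run/docker.pid file is what blocks the daemon (the service name may be docker or docker.io depending on the package version):
pid=$(pgrep -o -f 'docker -d') && sudo kill -9 "$pid"   # oldest matching daemon process, if any
sudo rm -f /var/run/docker.pid
sudo service docker start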
