docker service inside an LXC container: unable to apply RC_ULIMIT settings

I have a Debian hypervisor on which I run an LXC Alpine 3.14 container. In the Alpine container, I would like to install a docker service. Alpine provides a docker package, but starting the docker service raises this error:
$ sudo service docker start
sh: error setting limit: Operation not permitted
* docker: unable to apply RC_ULIMIT settings
* Starting Docker Daemon ...
Is the problem on the hypervisor or the container? How can I solve this?

As the FAQ mentions, I had to enable container nesting:
lxc config set <container> security.nesting true
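For completeness, the full sequence might look like this (a sketch; <container> is a placeholder as above, and restarting the container before starting the service is my assumption, not part of the original answer):
lxc config set <container> security.nesting true
lxc restart <container>
# then, inside the Alpine container:
sudo service docker start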

Related

Why do I get an error while trying to use Minikube as a Docker Desktop replacement on Windows?

I have Windows.
I have tried to set up minikube as a Docker Desktop replacement and followed these instructions: https://minikube.sigs.k8s.io/docs/tutorials/docker_desktop_replacement/
I do not see any errors in cmd.
C:\WINDOWS\system32>minikube start --container-runtime=docker --vm=true
* minikube v1.28.0 on Microsoft Windows 10 Enterprise 10.0.18363 Build 18363
* Using the hyperv driver based on existing profile
* Starting control plane node minikube in cluster minikube
* Restarting existing hyperv VM for "minikube" ...
* Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
docker --version also works fine.
C:\WINDOWS\system32>docker --version
Docker version 20.10.21, build baeda1f
However, when I try to run docker compose up I get an error: error during connect
d:\projects\ui-tests>docker compose up
error during connect: this error may indicate that the docker daemon is not running: Get "http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/containers/json?all=1&filters=%7B%22label%22%3A%7B%22com.docker.compose.project%3Dui-tests%22%3Atrue%7D%7D": open //./pipe/docker_engine: The system cannot find the file specified.
Can someone help with that?
UPD1:
I have run this command in cmd, but there was no output (I do not know if that is OK):
C:\WINDOWS\system32>#FOR /f "tokens=*" %i IN ('minikube -p minikube docker-env --shell cmd') DO #%i
C:\WINDOWS\system32>
The error might have been caused by using a Hyper-V-based Docker deployment or by exporting the variables incorrectly. I tried a different approach and it worked for me.
Instead of using Hyper-V I used WSL 2 for Docker. First you need to install WSL; refer to this doc for more information.
Next you need to download the Docker Engine binary files from the official Docker site and make the configuration changes mentioned in this doc.
Now you can install minikube for Windows and select docker as the driver; as the Docker path is now accessible across the system, docker-compose will work fine.
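A rough sketch of the key commands for that approach (assuming Windows 10 version 2004 or later for wsl --install, and that the Docker Engine binaries have already been set up as described in the linked doc):
C:\WINDOWS\system32>wsl --install
C:\WINDOWS\system32>minikube start --driver=docker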
Hope this helps you resolve your issue.
I was not able to resolve this problem. Rancher Desktop was chosen instead.

Run docker inside an ubuntu container

For 2 days I have been trying to run docker inside an ubuntu container:
docker run -it ubuntu bash
I installed docker following the instructions at https://docs.docker.com/engine/install/ubuntu/ and/or https://phoenixnap.com/kb/how-to-install-docker-on-ubuntu-18-04
Finally I have docker installed:
root@e65411d2b70a:/# docker -v
Docker version 19.03.6, build 369ce74a3c
But when I try to run docker run hello-world I have a problem:
root@5ac21097b6f6:/# docker run hello-world
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
See 'docker run --help'.
docker is not in the service list:
root@5ac21097b6f6:/# service docker start
docker: unrecognized service
root@5ac21097b6f6:/# service --status-all
[ - ] apparmor
[ + ] cgroupfs-mount
[ - ] dbus
[ ? ] hwclock.sh
[ - ] procps
[ ? ] ubuntu-fan
When I try to run dockerd:
root@5ac21097b6f6:/# dockerd
INFO[2020-04-23T07:01:11.622627006Z] Starting up
INFO[2020-04-23T07:01:11.624389266Z] libcontainerd: started new containerd process pid=154
INFO[2020-04-23T07:01:11.624460438Z] parsed scheme: "unix" module=grpc
INFO[2020-04-23T07:01:11.624477203Z] scheme "unix" not registered, fallback to default scheme module=grpc
INFO[2020-04-23T07:01:11.624532871Z] ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>} module=grpc
INFO[2020-04-23T07:01:11.624560679Z] ClientConn switching balancer to "pick_first" module=grpc
INFO[2020-04-23T07:01:11.664827037Z] starting containerd revision= version="1.3.3-0ubuntu1~18.04.2"
ERRO[2020-04-23T07:01:11.664943052Z] failed to change OOM score to -500 error="write /proc/154/oom_score_adj: permission denied"
...
INFO[2020-04-23T07:01:11.816951247Z] stopping event stream following graceful shutdown error="context canceled" module=libcontainerd namespace=plugins.moby
failed to start daemon: Error initializing network controller: error obtaining controller instance: failed to create NAT chain DOCKER: iptables failed: iptables -t nat -N DOCKER: iptables v1.6.1: can't initialize iptables table `nat': Permission denied (you must be root)
Perhaps iptables or your kernel needs to be upgraded.
(exit status 3)
I do not understand why Permission denied occurs if the user is root.
I installed sudo and added root to the group, but it does not help:
apt-get install sudo
usermod -a -G sudo root
sudo dockerd has the same problem.
How can I make docker work inside an ubuntu container? Do you have any ideas?
P.S. I know about docker-in-docker; I need exactly docker inside an ubuntu container.
P.P.S. I know about -v /var/run/docker.sock:/var/run/docker.sock - but I need an independent docker service inside the ubuntu container.
When running docker in docker, the container must use the docker engine on your host.
Here is a simple working setup:
1) Create a Dockerfile with the docker CLI installed. I am using the official Compose image, so you also get docker-compose:
FROM docker/compose:1.25.5
WORKDIR /app
ENTRYPOINT ["/bin/sh"]
2) When running it, mount the docker socket:
$ docker build -t dind .
$ docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock dind
From within the container, you now have docker. Try running docker ps.
If you want to do docker in docker without -v /var/run/docker.sock:/var/run/docker.sock then I am afraid that there is no good way to do this.
Sharing the docker socket from the host is the classic way to make docker containers run within another docker container.
I was trying my best to run containers within containers just like you for the past few days, and wasted many hours. So far most people advised me to do things like using Docker's DinD image, which is not applicable for my case as I need the main container to be Ubuntu OS, or to run some privileged command and map the daemon socket into the container, like -v /var/run/docker.sock:/var/run/docker.sock
(which never worked for me on any Ubuntu OS I tried, the reason being that the main container based on Ubuntu OS does not come with systemd, which is important for running docker containers conveniently as on a usual local machine).
The solution I found was to use Nestybox on my Ubuntu 20.04 system, and it works best. It is also extremely simple to set up, provided your local system is Ubuntu (which they support best), as the container runtime is specifically designed for such applications. It also has the most flexible options.
The free edition of Nestybox is perhaps the best method as of Nov 2022. I highly recommend trying it before bothering with all the tedious setup other people suggest. They have many pre-built solutions that address such specific needs with a simple command line.
Nestybox provides a special runtime environment for newly created docker containers, and they also provide some Ubuntu/common OS images with docker and systemd built in.
Their goal is to make the main container function exactly like a virtual machine, securely. You can literally ssh into your Ubuntu main container without being able to access anything on the main machine. From your main container you can create all kinds of containers just like a normal local system does. That systemd is very important for setting up docker conveniently inside the container.
One simple, common command to execute sysbox:
docker run --runtime=sysbox-runc -it any_image
If you think that's what you are looking for, you can find out more on their GitHub:
https://github.com/nestybox/sysbox
Quick link to the instructions on how to deploy a simple sysbox runtime environment container:
https://github.com/nestybox/sysbox/blob/master/docs/quickstart/README.md
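To illustrate the systemd-enabled images mentioned above, launching one might look like this (a sketch based on the sysbox quickstart; the image name and hostname are my assumptions, not from the original answer):
docker run --runtime=sysbox-runc -it --rm --hostname=syscont nestybox/ubuntu-bionic-systemd-docker
# the system container boots systemd; after logging in, docker can be used inside it as on a normal machine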

Running an image created through Docker in LXC

I want to run an image which I have already created and uploaded to Docker Hub. Is it possible to run that image on lxc/lxd? Basically I want to do a performance comparison between docker and lxc.
I have installed skopeo, umoci, go-md2man and jq.
Now, when I try to run the command lxc-create c1 -t oci -- --url docker://awaisaz/test:part2
it gives a trust policy error: /etc/containers/policy.json: no such file or directory
Can anyone suggest a solution or an alternative way to do this?
So, you want to run a docker container inside an LXC container.
First, you need to get the docker daemon up and running inside the LXC container.
sudo lxc config edit <lxc-container-name>
In the config section, add:
linux.kernel_modules: overlay,ip_tables
security.nesting: true
security.privileged: true
Then exit the YAML file and restart the LXC container:
sudo lxc restart <container_name>
After a successful restart of the LXC container, exec into it:
sudo lxc exec <container_name> /bin/bash
Then:
sudo rm /var/lib/docker/network/files/local-kv.db
Restart the docker service (in the LXC container):
service docker restart
Then you can use docker in the LXC container as if you were in a VM.
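Equivalently, the same keys can be set from the host without opening the editor (a sketch; <container_name> is a placeholder, as above):
sudo lxc config set <container_name> linux.kernel_modules overlay,ip_tables
sudo lxc config set <container_name> security.nesting true
sudo lxc config set <container_name> security.privileged true
sudo lxc restart <container_name>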

Packer Docker Builder with remote docker daemon

I'm using packer docker builder with ansible to create docker image (https://www.packer.io/docs/builders/docker.html)
I have a machine (client) which is meant to run build scripts. The packer docker builder is executed with ansible from this machine. This machine has the docker client and is connected to a remote docker daemon. The environment variable DOCKER_HOST is set to point to the remote docker host. I'm able to test the connectivity and things are working fine.
Now the problem is, when I execute the packer docker builder to build the image, it errors out saying:
docker: Run command: docker run -v /root/.packer.d/tmp/packer-docker612435850:/packer-files -d -i -t ubuntu:latest /bin/bash
==> docker: Error running container: Docker exited with a non-zero exit status.
==> docker: Stderr: docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
==> docker: See 'docker run --help'.
It seems the packer docker builder is stuck looking at the local daemon.
Workaround: I renamed the docker binary and introduced a script called "docker" which sets DOCKER_HOST and invokes the original docker binary with the parameters passed on.
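Such a wrapper might look roughly like this (a sketch; the remote daemon address and the renamed binary path are placeholders, not from the original post):
#!/bin/sh
# wrapper installed as "docker": point the CLI at the remote daemon,
# then hand off to the renamed original binary with all arguments
export DOCKER_HOST=tcp://remote-docker-host:2376
exec /usr/local/bin/docker-original "$@"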
Is there a better way to deal with this?
Packer's Docker builder doesn't work with remote hosts, since packer uses the /packer-files volume mount to communicate with the container. This is vaguely expressed in the docs with:
The Docker builder must run on a machine that has Docker installed.
And explained in Overriding the host directory.

How do I specify nvidia runtime from docker-compose.yml?

I am able to run a tensorflow container with access to the GPU from the command line with the following command:
$ sudo docker run --runtime=nvidia --rm gcr.io/tensorflow/tensorflow:latest-gpu
I would like to be able to run this container from docker-compose. Is it possible to specify the --runtime flag from docker-compose.yml?
Currently (Aug 2018), the NVIDIA container runtime for Docker (nvidia-docker2) supports Docker Compose.
Yes, use Compose file format 2.3 and add runtime: nvidia to your GPU service. Docker Compose must be version 1.19.0 or higher.
Example docker-compose.yml:
version: '2.3'
services:
  nvsmi:
    image: ubuntu:16.04
    runtime: nvidia
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
    command: nvidia-smi
More examples from the NVIDIA blog use Docker Compose to show how to launch multiple GPU containers with the NVIDIA Container Runtime.
You should edit /etc/docker/daemon.json, adding the first-level key "default-runtime": "nvidia", then restart the docker daemon (e.g. sudo service docker restart), and then all containers on that host will run with the nvidia runtime.
More info on daemon.json here
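For illustration, the resulting daemon.json might look roughly like this (a sketch; the runtimes entry is assumed to have been registered already by the nvidia-docker2 package, and only the default-runtime key is the change described above):
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}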
Or better: using systemd, and assuming the path is /usr/libexec/oci/hooks.d/nvidia:
Configure
mkdir -p /etc/systemd/system/docker.service.d/
cat > /etc/systemd/system/docker.service.d/nvidia-containers.conf <<EOF
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -D --add-runtime nvidia=/usr/libexec/oci/hooks.d/nvidia --default-runtime=nvidia
EOF
Restart
systemctl daemon-reload
systemctl restart docker
Demo
There is no need to specify --runtime=nvidia, since we set default-runtime=nvidia in the configuration step.
docker run --rm gcr.io/tensorflow/tensorflow:latest-gpu
Solution inspired by my tutorial about the KATA runtime.
