I installed Docker on NixOS using:
nix-env -i docker
after that, dockerd was not running, so I started the daemon manually with:
dockerd
and in the logs, I see:
WARN[2019-06-26T01:02:31.784701442Z] could not change group /var/run/docker.sock to docker: group docker not found
should I care about this warning?
When installing docker on NixOS, it's best to enable it in the NixOS configuration. Doing so will install docker as a system service.
Snippet for /etc/nixos/configuration.nix:
virtualisation.docker.enable = true;
# ...
users.users.YOU = { # merge this with your unix user definition; "YOU" is for illustration
  isNormalUser = true;
  # ...
  extraGroups = [
    # ...
    "docker"
  ];
};
Enabling the service also creates the docker group; dockerd uses that group to set the group ownership of /var/run/docker.sock, which is exactly what the warning in your logs is complaining about.
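After changing /etc/nixos/configuration.nix, apply it and verify; a minimal sketch using standard NixOS/systemd tooling:
sudo nixos-rebuild switch    # rebuild and activate the new configuration
getent group docker          # the docker group should now exist
systemctl status docker      # dockerd should be running as a system service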
Related
I am trying to deploy a logspout container in Docker, but I keep running into an issue; I have searched this website and GitHub to no avail, so I'm hoping someone knows the answer.
I ran the following commands as per the README here: https://github.com/gliderlabs/logspout
(1) docker pull gliderlabs/logspout:latest (also tried with logspout:master, same results)
(2) docker run -d --name="logspout" --volume=/var/run/docker.sock:/var/run/docker.sock --publish=127.0.0.1:8000:80 gliderlabs/logspout (also tried with -v /var/run/docker.sock:/var/run/docker.sock, same results)
The container gets created but stops immediately. When I check the container logs (docker container logs logspout), I only see the following entries:
2021/12/19 06:37:12 # logspout v3.2.14 by gliderlabs
2021/12/19 06:37:12 # adapters: raw syslog tcp tls udp multiline
2021/12/19 06:37:12 # options :
2021/12/19 06:37:12 persist:/mnt/routes
2021/12/19 06:37:12 # jobs : pump routes http[health,logs,routes]:80
2021/12/19 06:37:12 # routes : none
2021/12/19 06:37:12 pump ended: Get http://unix.sock/containers/json?: dial unix /var/run/docker.sock: connect: no such file or directory
I checked docker.sock: ls -la /var/run/docker.sock results in srw-rw---- 1 root docker 0 Dec 12 09:49 /var/run/docker.sock. So docker.sock does exist, which adds to the confusion as to why the container can't find it.
I am new to linux/docker, but my understanding is that using -v or --volume would automatically mount the location into the container, yet that does not seem to be happening here. So I am wondering if anyone has any suggestions on what needs to be done so that the logspout container can find the docker.sock.
System Info: Docker version 20.10.11, build dea9396; Raspberry Pi 4 ARM 64, OS: Debian GNU/Linux 11 (bullseye)
EDIT: added comment about -v tag in step (2) above
The container must be able to access the Docker Unix socket in order to mount it. This is typically a problem when user namespace remapping is enabled. To disable remapping for the logspout container, pass the --userns=host flag to docker run, docker create, etc.
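Combined with the run command from the question, the invocation would look roughly like this (a sketch, not verified on the asker's Pi):
docker run -d --name="logspout" --userns=host --volume=/var/run/docker.sock:/var/run/docker.sock --publish=127.0.0.1:8000:80 gliderlabs/logspout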
In my Docker (Spring Boot) application I would like to execute Docker commands. I use the docker-spotify-api (client).
I get different connection errors. I start the application as part of a docker-compose.yml.
This is what I tried so far on an EC2 AWS VPS:
docker = DefaultDockerClient.builder()
.uri(URI.create("tcp://localhost:2376"))
.build();
=> TCP protocol not supported.
docker = DefaultDockerClient.builder()
.uri(URI.create("tcp://localhost:2375"))
.build();
=> TCP protocol not supported.
docker = new DefaultDockerClient("unix:///var/run/docker.sock");
==> No such file
docker = DefaultDockerClient.builder()
.uri("unix:///var/run/docker.sock")
.build();
==> No such file
docker = DefaultDockerClient.builder()
.uri(URI.create("http://localhost:2375")).build();
or
docker = DefaultDockerClient.builder()
.uri(URI.create("http://localhost:2376")).build();
or
docker = DefaultDockerClient.builder()
.uri(URI.create("https://localhost:2376"))
.build();
==> Connect to localhost:2376 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
This is my environment on the EC2 VPS:
$ ls -l /var/run
lrwxrwxrwx 1 root root 6 Nov 14 07:23 /var/run -> ../run
$ groups ec2-user
ec2-user : ec2-user adm wheel systemd-journal docker
$ ls -l /run/docker.sock
srw-rw---- 1 root docker 0 Feb 14 17:16 /run/docker.sock
echo $DOCKER_HOST $DOCKER_CERT_PATH
(empty)
This situation is similar to https://github.com/spotify/docker-client/issues/838#issuecomment-318261710.
You use docker-compose on the host to start up your application; within the container, the Spring Boot application is using docker-spotify-api.
What you can try is to mount /var/run/docker.sock:/var/run/docker.sock in your compose file.
As @Benjah1 indicated, /var/run/docker.sock had to be mounted first.
To do so in a docker-compose / Docker Swarm environment you can do:
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
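For context, a minimal sketch of a full compose stanza (the service name myapp and its image are hypothetical):
services:
  myapp:
    image: my-spring-boot-app
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock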
Furthermore, the other options resulted in errors because, by default, the Docker daemon does not accept tcp/http connections. You can change this, of course, at a small security risk.
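If you do want the TCP endpoint, one common approach is a hosts entry in the daemon configuration; a sketch for /etc/docker/daemon.json (keep it bound to localhost, since port 2375 is unauthenticated; on systemd hosts you may also need to remove the -H flag from the unit file so the two settings don't conflict):
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://127.0.0.1:2375"]
}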
What are your DOCKER_HOST and DOCKER_CERT_PATH env var values?
Try the below, as docker-client communicates with your local Docker daemon using the HTTP Remote API:
final DockerClient docker = DefaultDockerClient.builder()
.uri(URI.create("https://localhost:2376"))
.build();
Please also verify the permissions on docker.sock (is it visible to your app?) and check whether your Docker service is running or not; from the listing above your docker.sock looks empty, but if the service is running the daemon should be holding it open.
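A quick sketch of those checks (sudo service fits the EC2 setup above; use systemctl status docker on systemd hosts):
ls -l /var/run/docker.sock
sudo service docker status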
It took me some time to figure out, but I was running https://hub.docker.com/r/alpine/socat/ locally and also wanted to connect to my Docker daemon, and couldn't (same errors). Then it struck me: the solution on that webpage uses 127.0.0.1 as the IP address to bind to. Instead, start that container bound to 0.0.0.0, and then inside your container you can do this: DockerClient dockerClient = new DefaultDockerClient("http://192.168.1.215:2376"); (use your own IP address, of course).
This worked for me.
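For reference, a rough sketch of running that image bound to all interfaces (the container name is illustrative; note that publishing the socket on 0.0.0.0 exposes an unauthenticated Docker API to your network, so firewall it accordingly):
docker run -d --name docker-api-proxy -p 0.0.0.0:2376:2375 -v /var/run/docker.sock:/var/run/docker.sock alpine/socat tcp-listen:2375,fork,reuseaddr unix-connect:/var/run/docker.sock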
I have a custom boot2docker .iso, attached with this configuration:
config.vm.provider "virtualbox" do |v|
  v.customize ['storageattach', :id, '--storagectl', 'SATA', '--port', 0, '--device', 0, '--type', 'dvddrive', '--medium', File.expand_path("../boot2docker.iso", __FILE__)]
  v.customize ['modifyvm', :id, '--nictype1', 'virtio']
end
config.vm.network "private_network", ip: "192.168.10.10", id: "default-network", nic_type: "virtio"
My files in the directory:
mycompany/
  dockerhost/
    Vagrantfile
  Vagrantfile
The Vagrantfile of the dockerhost, which replaces the configuration above:
## This is required with the plugin winnfsd
config.vm.network "private_network", type: "dhcp"
config.vm.synced_folder "../", "/vagrant", type: "nfs"
When I start up the dockerhost with vagrant up it works well and NFS is set up correctly, but when I start a service container with vagrant up myservice, it shows the error:
==> myservice: Docker host is required. One will be created if necessary...
myservice: Docker host VM is already ready.
==> myservice: Syncing folders to the host VM...
dockerhost: Mounting shared folders...
dockerhost: /var/lib/docker/docker_1472079332_51007 => C:/Users/myuser/Desktop/mycompany
Vagrant was unable to mount VirtualBox shared folders. This is usually because the filesystem "vboxsf" is not available. This filesystem is made available via the VirtualBox Guest Additions and kernel module.
Please verify that these guest additions are properly installed in the guest. This is not a bug in Vagrant and is usually caused by a faulty
Vagrant box. For context, the command attempted was:
set -e
mount -t vboxsf -o uid=`id -u docker`,gid=`getent group docker | cut -d: -f3` b5973a5087 /var/lib/docker/docker_1472079332_51007
mount -t vboxsf -o uid=`id -u docker`,gid=`id -g docker` b5973a5087 /var/lib/docker/docker_1472079332_51007
The error output from the command was:
mount: mounting b5973a5087 on /var/lib/docker/docker_1472079332_51007 failed: Protocol error
It looks like Vagrant mounts another volume when it starts a container.
Any idea how to fix this, or why Vagrant mounts another folder for me?
Thanks
Vagrant and VirtualBox version incompatibility problems
This fixed my problem:
Vagrant 1.8.5 should be compatible with the latest VirtualBox 5.1.2 release, however VirtualBox Guest Additions version we currently have is 5.0.20 and the latest boot2docker release is at 5.0.24. Both will most likely have issues with VirtualBox 5.1.2.
So for now avoid VirtualBox 5.1.x and stick with:
Vagrant 1.7.4 - 1.8.4
VirtualBox 5.0.x
Reference:
https://github.com/blinkreaction/boot2docker-vagrant/issues/83
I am running Vagrant on Mac OS X. I have created the following Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.synced_folder ".", "/vagrant", disabled: true
  config.ssh.insert_key = true
  config.vm.provider "docker" do |doc|
    doc.image = "httpd"
    doc.ports = ["80:80"]
    doc.name = 'apache'
    doc.remains_running = true
    doc.has_ssh = false
  end
end
It starts; however, I can't execute the following command:
vagrant docker-exec -dt apache -- /bin/bash
I have also tried replacing apache with the container ID, but that failed too.
The container is running, as I can see it in VirtualBox.
I can only see that I have vagrant docker-logs and vagrant docker-run, but the Vagrant documentation says that there should be a docker-exec.
Any ideas?
-i / --interactive is required if you want a bash shell you can type into.
-d / --detach will not work for typing either, as the process will be started in the background.
Use vagrant docker-exec -it apache -- /bin/bash
Yep, that is correct. I also know now that you need to run vagrant list-commands to see this docker-exec command.
thank you
I'm trying to get my head around something that's been working on CentOS+Vagrant, but not on our provider's RHEL (Red Hat Enterprise Linux Server release 6.5 (Santiago)). A sudo service docker restart gives this:
Stopping docker: [ OK ]
Starting cgconfig service: Error: cannot mount cpuset to /cgroup/cpuset: Device or resource busy
/sbin/cgconfigparser; error loading /etc/cgconfig.conf: Cgroup mounting failed
Failed to parse /etc/cgconfig.conf [FAILED]
Starting docker: [ OK ]
The service starts okay enough, but images cannot run; a mounting-failed error is shown when I try, and the startup log also gives a warning or two. Regarding the kernel warning, CentOS gives the same one and has no problems, as EPEL should resolve this:
WARNING: You are running linux kernel version 2.6.32-431.17.1.el6.x86_64, which might be unstable running docker. Please upgrade your kernel to 3.8.0.
2014/08/07 08:58:29 docker daemon: 1.1.2 d84a070; execdriver: native; graphdriver:
[1233d0af] +job serveapi(unix:///var/run/docker.sock)
[1233d0af] +job initserver()
[1233d0af.initserver()] Creating server
2014/08/07 08:58:29 Listening for HTTP on unix (/var/run/docker.sock)
[1233d0af] +job init_networkdriver()
[1233d0af] -job init_networkdriver() = OK (0)
2014/08/07 08:58:29 WARNING: mountpoint not found
Has anyone had any success overcoming this problem, or should I throw in the towel and wait for the provider to update to RHEL 7?
I have the same issue.
(1) check cgconfig status
# /etc/init.d/cgconfig status
if it has stopped, restart it:
# /etc/init.d/cgconfig restart
and check that cgconfig is running
(2) check cgconfig is on
# chkconfig --list cgconfig
cgconfig 0:off 1:off 2:off 3:off 4:off 5:off 6:off
if cgconfig is off, turn it on
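A sketch of turning it on and starting it with the standard SysV tooling:
# chkconfig cgconfig on
# /etc/init.d/cgconfig start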
(3) if it still does not work, maybe some cgroup modules are missing. In the kernel .config (via make menuconfig), add those modules to the kernel, then recompile and reboot.
After that, it should be OK.
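For illustration, these are roughly the cgroup-related options to look for in the kernel .config (names from 2.6.32-era kernels; verify against your own kernel, e.g. the memory controller option was later renamed CONFIG_MEMCG):
CONFIG_CGROUPS=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CPUSETS=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_CGROUP_MEM_RES_CTLR=y
CONFIG_CGROUP_SCHED=y
CONFIG_BLK_CGROUP=y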
I ended up asking the same question at Google Groups and in the end finding a solution with some help. What worked for me was this:
umount cgroup
sudo service cgconfig start
The project of making Docker work was put on hold all the same. Later, a problem with network connectivity for the containers came up; it took too much time to solve and I had to give up.
So I spent the whole day trying to rig Docker to work on my VPS, and I was running into this same error. Basically, what it came down to was the fact that OpenVZ didn't support Docker containers until a couple of months ago, specifically this RHEL update:
https://openvz.org/Download/kernel/rhel6/042stab105.14
Assuming this is your problem, or some variation of it, the burden of solving it is on your host. They will need to follow these steps:
https://openvz.org/Docker_inside_CT
In my case
/etc/rc.d/rc.cgconfig start
was generating
Starting cgconfig service: Error: cannot mount cpu,cpuacct,memory to /cgroup/cpu_and_mem: Device or resource busy
/usr/sbin/cgconfigparser; error loading /etc/cgconfig.conf: Cgroup mounting failed
Failed to parse /etc/cgconfig.conf
I had to use:
/etc/rc.d/rc.cgconfig restart
and it automagically unmounted and remounted the groups:
Stopping cgconfig service: Starting cgconfig service:
It seems like the cgconfig service is not running, so check it!
# /etc/init.d/cgconfig status
# mkdir -p /cgroup/cpuacct /cgroup/memory /cgroup/devices /cgroup/freezer /cgroup/net_cls /cgroup/blkio
# cat /etc/cgconfig.conf |tail|grep "="|awk '{print "mount -t cgroup -o",$1,$1,$NF}'>cgroup_mount.sh
# sh ./cgroup_mount.sh
# /etc/init.d/cgconfig restart
# /etc/init.d/docker restart
This situation occurs when the kernel is booted with cgroup_disable=memory and /etc/cgconfig.conf contains memory = /cgroup/memory;
This causes only /cgroup/cpuset to be mounted instead of the full set.
Solution: either remove cgroup_disable=memory from your kernel boot options or comment out memory = /cgroup/memory; from cgconfig.conf.
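A quick sketch of checking which of the two applies:
cat /proc/cmdline | grep cgroup_disable    # was the kernel booted with cgroup_disable=memory?
grep memory /etc/cgconfig.conf             # is the memory controller configured?
Then either edit the kernel boot line (in /boot/grub/grub.conf on RHEL 6) or comment out the memory entry, and restart cgconfig.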
The cgconfig service startup uses mount and umount, which requires an extra privilege bump from Docker.
See the --privileged=true flag here for more info.
I was able to overcome this issue by starting my container with:
docker run -it --privileged=true my-image.
Tested on CentOS 6 and CentOS 6.5.