I found a video by Packt Publishing about setting up the Docker remote API.
In the video we are told to edit the /etc/init/docker.conf file by adding "-H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock" to DOCKER_OPTS=, and then restart Docker for the changes to take effect.
However, after doing all that, I still can't curl localhost on that port. Doing so returns:
vagrant@vagrant-ubuntu-trusty-64:~$ curl localhost:4243/_ping
curl: (7) Failed to connect to localhost port 4243: Connection refused
I'm relatively new to Docker, so if somebody could help me out here I'd be very grateful.
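One quick sanity check (not from the video, just a diagnostic sketch) is to confirm whether anything is actually listening on that port after the restart:
sudo service docker restart
sudo netstat -tlnp | grep 4243    # should show the docker daemon bound to 127.0.0.1:4243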
Edit:
docker.conf
description "Docker daemon"
start on (filesystem and net-device-up IFACE!=lo)
stop on runlevel [!2345]
limit nofile 524288 1048576
limit nproc 524288 1048576
respawn
kill timeout 20
pre-start script
# see also https://github.com/tianon/cgroupfs-mount/blob/master/cgroupfs-mount
if grep -v '^#' /etc/fstab | grep -q cgroup \
|| [ ! -e /proc/cgroups ] \
|| [ ! -d /sys/fs/cgroup ]; then
exit 0
fi
if ! mountpoint -q /sys/fs/cgroup; then
mount -t tmpfs -o uid=0,gid=0,mode=0755 cgroup /sys/fs/cgroup
fi
(
cd /sys/fs/cgroup
for sys in $(awk '!/^#/ { if ($4 == 1) print $1 }' /proc/cgroups); do
mkdir -p $sys
if ! mountpoint -q $sys; then
if ! mount -n -t cgroup -o $sys cgroup $sys; then
rmdir $sys || true
fi
fi
done
)
end script
script
# modify these in /etc/default/$UPSTART_JOB (/etc/default/docker)
DOCKER=/usr/bin/$UPSTART_JOB
DOCKER_OPTS="-H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock"
if [ -f /etc/default/$UPSTART_JOB ]; then
. /etc/default/$UPSTART_JOB
fi
exec "$DOCKER" daemon $DOCKER_OPTS
end script
# Don't emit "started" event until docker.sock is ready.
# See https://github.com/docker/docker/issues/6647
post-start script
DOCKER_OPTS=
if [ -f /etc/default/$UPSTART_JOB ]; then
"/etc/init/docker.conf" 60L, 1582C
EDIT2: Output of ps aux | grep docker
vagrant@vagrant-ubuntu-trusty-64:~$ ps aux | grep docker
root 858 0.2 4.2 401836 21504 ? Ssl 06:12 0:00 /usr/bin/docker daemon --insecure-registry 11.22.33.44:5000
vagrant 1694 0.0 0.1 10460 936 pts/0 S+ 06:15 0:00 grep --color=auto docker
The problem
The output of ps aux | grep docker shows that the options the daemon was started with do not match the ones in the docker.conf file, so another file is being used to start the docker daemon service.
Solution
To solve this, track down the file that contains the option "--insecure-registry 11.22.33.44:5000" (it may be /etc/default/docker, /etc/init/docker.conf, /etc/systemd/system/docker.service, or somewhere else depending on the setup) and modify it accordingly with the needed options.
Then restart the daemon and you're good to go!
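For example, on an Upstart-based Ubuntu 14.04 box the overriding options usually live in /etc/default/docker; a minimal sketch of the edit and verification (the registry address is the placeholder from the question):
# /etc/default/docker (assumed location of the DOCKER_OPTS that take effect)
DOCKER_OPTS="--insecure-registry 11.22.33.44:5000 -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock"
# then restart and verify
sudo service docker restart
curl localhost:4243/_ping    # should return OK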
Related
My goal is to query the HAProxy Runtime API using dockerized socat.
The command below returns an empty result (/var/run/haproxy.stat is the HAProxy socket located on the docker host):
echo "-h" | docker run -a stdin -a stderr alpine/socat stdio /var/run/haproxy.stat
I've tried to add the haproxy socket via a volume, but the result is still empty.
echo "-h" | docker run -a stdin -a stderr -v /var/run/haproxy.stat:/var/run/haproxy.stat alpine/socat stdio /var/run/haproxy.stat
The command that worked is:
echo "-h" | docker run -i -a stdin -a stderr -a stdout -v /var/run/haproxy.stat:/var/run/haproxy.stat alpine/socat stdio /var/run/haproxy.stat
I needed to add -a stdout and -i to docker run.
Following the suggestion by BMitch, I tried the command below and it worked as well:
echo "-h" | docker run -i -v /var/run/haproxy.stat:/var/run/haproxy.stat alpine/socat stdio /var/run/haproxy.stat
This command
echo 1 | sudo tee /proc/sys/net/ipv6/conf/all/disable_ipv6
when run inside a CentOS docker container (running on Mac), gives:
echo 1 | sudo tee /proc/sys/net/ipv6/conf/all/disable_ipv6
tee: /proc/sys/net/ipv6/conf/all/disable_ipv6: Read-only file system
1
When run inside a CentOS virtual machine, it succeeds and gives no error.
The permissions inside the docker container and the VM are exactly the same:
VM:
$ ls -ld /proc/sys/net/ipv6/conf/all/disable_ipv6
-rw-r--r-- 1 root root 0 Jan 4 21:09 /proc/sys/net/ipv6/conf/all/disable_ipv6
docker:
$ ls -ld /proc/sys/net/ipv6/conf/all/disable_ipv6
-rw-r--r-- 1 root root 0 Jan 5 05:05 /proc/sys/net/ipv6/conf/all/disable_ipv6
This is a fresh, brand new container.
Docker version:
$ docker --version
Docker version 18.09.0, build 4d60db4
What am I missing?
A hackish solution is to add extended privileges to the container with --privileged:
$ docker run --rm -ti centos \
bash -c "echo 1 | tee /proc/sys/net/ipv6/conf/all/disable_ipv6"
tee: /proc/sys/net/ipv6/conf/all/disable_ipv6: Read-only file system
1
vs
$ docker run --privileged --rm -ti centos \
bash -c "echo 1 | tee /proc/sys/net/ipv6/conf/all/disable_ipv6"
1
You can use --cap-add to grant a precise privilege instead of --privileged.
However, --sysctl looks like the best solution, rather than hacking networking in the container with --privileged:
$ docker run --sysctl net.ipv6.conf.all.disable_ipv6=1 \
--rm -ti centos bash -c "cat /proc/sys/net/ipv6/conf/all/disable_ipv6"
1
I'm running two docker containers, vm1 and vm2, from the same docker image. Both run successfully when started separately, but they cannot run at the same time. I've checked the CPU and memory. What other resource limits Docker from running multiple containers?
systemctl status docker result when running 'vm1' only:
Tasks: 148
Memory: 357.9M
systemctl status docker result when running 'vm2' only:
Tasks: 140
Memory: 360.0M
My system still has about 4 GB of free RAM, and the CPUs are idle too.
When I run vm1 and then vm2, vm2 fails with log messages like:
[17:55:50.504452] connect(172.17.0.2, 2889) failed: Operation now in progress
And another log line like:
/etc/bashrc: fork: retry: Resource temporarily unavailable
systemctl status docker result when running 'vm1' and then 'vm2':
Tasks: 244
Memory: 372.2M
vm1 docker run command:
exec docker run --rm -it --name vm1 \
-e OUT_IP="$MYIP" \
-h vm1 \
-v /MyLib/opt:/opt:ro \
-v /home/myid:/home/guest \
-v /sybase:/sybase \
-v /sybaseDB:/sybaseDB \
run-image $*
vm2 docker run command:
exec docker run --rm -it --name vm2 \
-e OUT_IP="$MYIP" \
-h vm2 \
-v /MyLib/opt:/opt:ro \
-v /home/myid:/home/guest \
-v /sybase2:/sybase \
-v /sybaseDB2:/sybaseDB2 \
run-image $*
Some command results related to the error fork: retry: Resource temporarily unavailable:
# in host os
$ sysctl fs.file-nr
fs.file-nr = 4064 0 814022
# in docker container (vm2)
$ sudo su - guest
$ ulimit -Hn
1048576
$ sudo lsof -u guest 2>/dev/null | wc -l
230
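Another check worth running (a diagnostic sketch, not one of the original commands) is a per-user process count on the host, since the nproc limit is enforced per user:
ps --no-headers -eo user | sort | uniq -c | sort -rn    # compare the 'ap' count with its ulimit -u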
The docker run user is 'guest', but I run the program under the 'ap' user account through sudo. I found that the 'ulimit -u' result differs inside the container; the run-image is based on centos:6.
$ sudo su - guest
$ ulimit -u
unlimited
$ sudo su - ap
$ ulimit -u
1024
In my case, the problem is caused by the 'ap' user's default ulimit -u being only 1024. When only vm1 or vm2 is running, the 'ap' user's process/thread count stays under 1024; when I run both vm1 and vm2, the total process count exceeds 1024.
The solution is to enlarge the default user nproc limit for CentOS 6:
sudo sed -i 's/1024/4096/' /etc/security/limits.d/90-nproc.conf
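As an alternative (a sketch, not part of the original fix), the limit can also be raised per container with docker run's --ulimit flag; note that nproc is counted per user, so the PAM limits file above is what actually governs the sudo'ed 'ap' user:
docker run --rm -it --ulimit nproc=4096:4096 centos:6 bash -c 'ulimit -u'    # should print 4096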
File name: dockerHandler.sh
#!/bin/bash
set -e
to=$1
shift
cont=$(docker run -d "$@")
code=$(timeout "$to" docker wait "$cont" || true)
docker kill $cont &> /dev/null
docker rm $cont
echo -n 'status: '
if [ -z "$code" ]; then
echo timeout
else
echo exited: $code
fi
echo output:
# pipe to sed simply for pretty nice indentation
docker logs $cont | sed 's/^/\t/'
docker rm $cont &> /dev/null
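For reference, a hypothetical invocation (the timeout in seconds comes first, everything after it is passed straight to docker run -d; the image and command here are just placeholders):
./dockerHandler.sh 10 ubuntu sleep 60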
But whenever I check the docker container status after running the docker image, it gives a list of exited docker containers.
command: docker ps -as
Hence, to delete those exited containers I manually run the command below:
docker rm $(docker ps -a -f status=exited -q)
You should add the flag --rm to your docker command:
From the docker run --help output:
➜ ~ docker run --help | grep rm
--rm Automatically remove the container when it exits
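Applied to the script above, that means adding --rm to the docker run line (a sketch; note this assumes a Docker version where --rm can be combined with -d, which older releases rejected):
cont=$(docker run -d --rm "$@")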
I removed these lines:
docker kill $cont &> /dev/null
docker rm $cont
docker logs $cont | sed 's/^/\t/'
and used gtimeout instead of timeout on Mac; it works fine.
To install gtimeout on Mac:
Installing CoreUtils
brew install coreutils
In dockerHandler.sh, change timeout to gtimeout on the code=$(timeout "$to" docker wait "$cont" || true) line.
I am attempting to add an insecure docker registry to a dind image that I run in a Concourse task.
I tried beginning my task by running:
export DOCKER_OPTS="$DOCKER_OPTS --insecure-registry=${INSECURE_REG}"
and tried spinning up the daemon and compose:
docker daemon --insecure-registry=${INSECURE_REG} &
docker-compose up
However, the task errors with "server gave http response to https client" and "no such image".
The whole task looks like this (basically it is a shell script executed in the dind container that ends with a docker-compose):
# Connect to insecure docker registry:
export DOCKER_OPTS="$DOCKER_OPTS --insecure-registry=${INSECURE_REG}"
# Install docker-compose:
apk add --no-cache py-pip curl
pip install docker-compose
# Verify docker registry:
curl http://${INSECURE_REG}/v2/_catalog #curl does return the expected json
sanitize_cgroups() {
mkdir -p /sys/fs/cgroup
mountpoint -q /sys/fs/cgroup || \
mount -t tmpfs -o uid=0,gid=0,mode=0755 cgroup /sys/fs/cgroup
mount -o remount,rw /sys/fs/cgroup
sed -e 1d /proc/cgroups | while read sys hierarchy num enabled; do
if [ "$enabled" != "1" ]; then
# subsystem disabled; skip
continue
fi
grouping="$(cat /proc/self/cgroup | cut -d: -f2 | grep "\\<$sys\\>")"
if [ -z "$grouping" ]; then
# subsystem not mounted anywhere; mount it on its own
grouping="$sys"
fi
mountpoint="/sys/fs/cgroup/$grouping"
mkdir -p "$mountpoint"
# clear out existing mount to make sure new one is read-write
if mountpoint -q "$mountpoint"; then
umount "$mountpoint"
fi
mount -n -t cgroup -o "$grouping" cgroup "$mountpoint"
if [ "$grouping" != "$sys" ]; then
if [ -L "/sys/fs/cgroup/$sys" ]; then
rm "/sys/fs/cgroup/$sys"
fi
ln -s "$mountpoint" "/sys/fs/cgroup/$sys"
fi
done
}
# https://github.com/concourse/concourse/issues/324
sanitize_cgroups
# Spin up the stack as described in docker-compose:
docker daemon --insecure-registry=${INSECURE_REG} &
docker-compose up
dockerd --insecure-registry=${INSECURE_REG}
This is the correct way of starting the docker daemon with an insecure registry. Even though it reported errors, it got the images and started them successfully.
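A minimal sketch of that working sequence inside the task, with an added wait so docker-compose does not race the daemon startup (the wait loop is an assumption, not from the original task):
dockerd --insecure-registry=${INSECURE_REG} &
# wait until the daemon answers before talking to it
until docker info >/dev/null 2>&1; do sleep 1; done
docker-compose up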