Attempting to add an insecure Docker registry to a dind image that I run in a Concourse task:
I tried beginning my task by running:
export DOCKER_OPTS="$DOCKER_OPTS --insecure-registry=${INSECURE_REG}"
and tried spinning up the daemon and compose:
docker daemon --insecure-registry=${INSECURE_REG} &
docker-compose up
However, the task errors with "server gave HTTP response to HTTPS client" and "no such image".
The whole task looks like this (basically it is a shell script executed in the dind container that ends with a docker-compose up):
# Connect to insecure docker registry:
export DOCKER_OPTS="$DOCKER_OPTS --insecure-registry=${INSECURE_REG}"
# Install docker-compose:
apk add --no-cache py-pip curl
pip install docker-compose
# Verify docker registry:
curl http://${INSECURE_REG}/v2/_catalog #curl does return the expected json
sanitize_cgroups() {
  mkdir -p /sys/fs/cgroup
  mountpoint -q /sys/fs/cgroup || \
    mount -t tmpfs -o uid=0,gid=0,mode=0755 cgroup /sys/fs/cgroup
  mount -o remount,rw /sys/fs/cgroup
  sed -e 1d /proc/cgroups | while read sys hierarchy num enabled; do
    if [ "$enabled" != "1" ]; then
      # subsystem disabled; skip
      continue
    fi
    grouping="$(cat /proc/self/cgroup | cut -d: -f2 | grep "\\<$sys\\>")"
    if [ -z "$grouping" ]; then
      # subsystem not mounted anywhere; mount it on its own
      grouping="$sys"
    fi
    mountpoint="/sys/fs/cgroup/$grouping"
    mkdir -p "$mountpoint"
    # clear out existing mount to make sure new one is read-write
    if mountpoint -q "$mountpoint"; then
      umount "$mountpoint"
    fi
    mount -n -t cgroup -o "$grouping" cgroup "$mountpoint"
    if [ "$grouping" != "$sys" ]; then
      if [ -L "/sys/fs/cgroup/$sys" ]; then
        rm "/sys/fs/cgroup/$sys"
      fi
      ln -s "$mountpoint" "/sys/fs/cgroup/$sys"
    fi
  done
}
# https://github.com/concourse/concourse/issues/324
sanitize_cgroups
# Spin up the stack as described in docker-compose:
docker daemon --insecure-registry=${INSECURE_REG} &
docker-compose up
dockerd --insecure-registry=${INSECURE_REG}
is the correct way of starting the Docker daemon with an insecure registry. Even though it reported errors, it pulled the images and started them successfully.
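For what it's worth, exporting DOCKER_OPTS inside the task has no effect: that variable is only read by init scripts, and none run inside the task container, so the daemon only honors flags passed to it directly. Also, docker-compose must not start before the daemon is ready, or you get errors like "no such image". A minimal sketch of the working order, assuming INSECURE_REG is set by the task:
# start the daemon with the flag, in the background
dockerd --insecure-registry=${INSECURE_REG} &
# wait until the daemon actually answers before bringing up the stack
until docker info >/dev/null 2>&1; do
  sleep 1
done
docker-compose up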
How do I move volumes from docker-for-mac into colima?
This will copy all the volumes from docker-for-mac and move them to colima.
Note: there will be a lot of volumes you may not want to copy over, since they're temporary ones; you can skip them by simply adding a | grep "YOUR FILTER" to the for loop, either before or after the awk.
The following code makes two assumptions:
you have docker-for-mac installed and running
you have colima running
That is all you need. Now copy and paste this into your terminal; no need to touch anything.
(
# set -x # uncomment to debug
set -e
# ssh doesn't like file descriptor piping, we need to write the configuration into someplace real
tmpconfig=$(mktemp);
# Need to have permissions to copy the volumes, and need to remove the ControlPath and add ForwardAgent
(limactl show-ssh --format config colima | grep -v "^ ControlPath\| ^User"; echo " ForwardAgent=yes") > $tmpconfig;
# Setup root account
ssh -F $tmpconfig $USER@lima-colima "sudo mkdir -p /root/.ssh/; sudo cp ~/.ssh/authorized_keys /root/.ssh/authorized_keys"
# Loop over each volume inside docker-for-mac
for volume_name in $(DOCKER_CONTEXT=desktop-linux docker volume ls | awk 'NR>1 {print $2}'); do
echo $volume_name;
# Make the volume backup
DOCKER_CONTEXT=desktop-linux docker run -d --rm --mount source=$volume_name,target=/volume --name copy-instance busybox sleep infinity;
DOCKER_CONTEXT=desktop-linux docker exec copy-instance sh -c "tar czf /$volume_name.tar /volume";
DOCKER_CONTEXT=desktop-linux docker cp copy-instance:/$volume_name.tar /tmp/$volume_name.tar;
DOCKER_CONTEXT=desktop-linux docker kill copy-instance;
# Restore the backup inside colima
DOCKER_CONTEXT=colima docker volume create $volume_name;
ssh -F $tmpconfig root@lima-colima "rm -rf /var/lib/docker/volumes/$volume_name; mkdir -p /var/lib/docker/volumes/$volume_name/_data";
scp -r -F $tmpconfig /tmp/$volume_name.tar root@lima-colima:/tmp/$volume_name.tar;
ssh -F $tmpconfig root@lima-colima "tar -xf /tmp/$volume_name.tar --strip-components=1 --directory /var/lib/docker/volumes/$volume_name/_data";
done
)
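To verify the migration afterwards, you can list the volumes on the colima side and peek inside one of them; "my-volume" below is a placeholder name:
DOCKER_CONTEXT=colima docker volume ls
DOCKER_CONTEXT=colima docker run --rm --mount source=my-volume,target=/volume busybox ls -la /volume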
I want to use an image with systemctl in it in docker-compose. I found this image on the internet: https://github.com/solita/docker-systemd. It works well, but when I tried to use it with docker-compose it doesn't work (the container starts, but systemctl doesn't; it gives this error: "System has not been booted with systemd as init system (PID 1). Can't operate.").
test1:
  container_name: 'test1'
  build: './test'
  volumes:
    - /:/host
    - /sys/fs/cgroup:/sys/fs/cgroup:ro
  security_opt:
    - "seccomp=unconfined"
  tmpfs:
    - /run
    - /run/lock
  privileged: true
and the build file is test.sh:
#!/bin/sh
set -eu
if nsenter --mount=/host/proc/1/ns/mnt -- mount | grep /sys/fs/cgroup/systemd >/dev/null 2>&1; then
  echo "The systemd cgroup hierarchy is already mounted at /sys/fs/cgroup/systemd."
else
  if [ -d /host/sys/fs/cgroup/systemd ]; then
    echo "The mount point for the systemd cgroup hierarchy already exists at /sys/fs/cgroup/systemd."
  else
    echo "Creating the mount point for the systemd cgroup hierarchy at /sys/fs/cgroup/systemd."
    mkdir -p /host/sys/fs/cgroup/systemd
  fi
  echo "Mounting the systemd cgroup hierarchy."
  nsenter --mount=/host/proc/1/ns/mnt -- mount -t cgroup cgroup -o none,name=systemd /sys/fs/cgroup/systemd
fi
echo "Your Docker host is now configured for running systemd containers!"
If you want to run "systemctl" in Docker to start/stop services, you can do that without systemd; the docker-systemctl-replacement is made for exactly that.
If you need to use systemd, you can follow the repo; it's for Rocky Linux.
However, you can use this repo to have systemd enabled for Ubuntu.
When you use docker run you need to set the cgroup volume like below (enabling rw) and connect to the container with docker exec:
--volume=/sys/fs/cgroup:/sys/fs/cgroup:rw
You can encounter other compatibility problems with systemd in Docker (for timedatectl, loginctl, etc.) or with the service command:
$> dnf install initscripts systemd
$> systemctl start systemd-logind
You can afterwards migrate it to docker-compose.
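Putting this answer together, a docker run invocation along these lines should work before you migrate to compose; the image and container names below are placeholders, and the tmpfs mounts mirror the compose file from the question:
# run a systemd-enabled image with the rw cgroup mount from this answer
docker run -d --name systemd-test --privileged \
  --tmpfs /run --tmpfs /run/lock \
  --volume=/sys/fs/cgroup:/sys/fs/cgroup:rw \
  my-systemd-image
# then connect to it and drive systemd from inside
docker exec -it systemd-test systemctl status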
File name: dockerHandler.sh
#!/bin/bash
set -e
to=$1
shift
cont=$(docker run -d "$@")
code=$(timeout "$to" docker wait "$cont" || true)
docker kill $cont &> /dev/null
echo -n 'status: '
if [ -z "$code" ]; then
  echo timeout
else
  echo exited: $code
fi
echo output:
# pipe to sed simply for pretty indentation
docker logs $cont | sed 's/^/\t/'
docker rm $cont &> /dev/null
But whenever I check the docker container status after running the docker image, I get a list of exited docker containers.
command: docker ps -as
Hence, to delete those exited containers I am manually running the command below:
docker rm $(docker ps -a -f status=exited -q)
You should add the flag --rm to your docker command:
From the Docker help:
➜ ~ docker run --help | grep rm
--rm Automatically remove the container when it exits
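Applied to dockerHandler.sh above, the change is a single line. Note that once the container removes itself on exit, the later docker kill/docker rm/docker logs calls no longer apply, which is why the next answer drops them:
cont=$(docker run --rm -d "$@")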
I removed these lines:
docker kill $cont &> /dev/null
docker rm $cont
docker logs $cont | sed 's/^/\t/'
and used gtimeout instead of timeout on Mac; it works fine.
To install gtimeout on Mac, install CoreUtils:
brew install coreutils
Then, in the code=$(timeout ...) line of dockerHandler.sh, change timeout to gtimeout.
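So the wait line becomes:
code=$(gtimeout "$to" docker wait "$cont" || true)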
I found a video by Packt Publishing about setting up the Docker remote API.
In the video we are told to change the /etc/init/docker.conf file by adding "-H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock" to DOCKER_OPTS=. Then we have to restart Docker for the changes to take effect.
However, after I do all that, I still can't curl localhost at that port. Doing so returns:
vagrant@vagrant-ubuntu-trusty-64:~$ curl localhost:4243/_ping
curl: (7) Failed to connect to localhost port 4243: Connection refused
I'm relatively new to Docker; if somebody could help me out here I'd be very grateful.
Edit:
docker.conf
description "Docker daemon"
start on (filesystem and net-device-up IFACE!=lo)
stop on runlevel [!2345]
limit nofile 524288 1048576
limit nproc 524288 1048576
respawn
kill timeout 20
pre-start script
  # see also https://github.com/tianon/cgroupfs-mount/blob/master/cgroupfs-mount
  if grep -v '^#' /etc/fstab | grep -q cgroup \
    || [ ! -e /proc/cgroups ] \
    || [ ! -d /sys/fs/cgroup ]; then
    exit 0
  fi
  if ! mountpoint -q /sys/fs/cgroup; then
    mount -t tmpfs -o uid=0,gid=0,mode=0755 cgroup /sys/fs/cgroup
  fi
  (
    cd /sys/fs/cgroup
    for sys in $(awk '!/^#/ { if ($4 == 1) print $1 }' /proc/cgroups); do
      mkdir -p $sys
      if ! mountpoint -q $sys; then
        if ! mount -n -t cgroup -o $sys cgroup $sys; then
          rmdir $sys || true
        fi
      fi
    done
  )
end script
script
  # modify these in /etc/default/$UPSTART_JOB (/etc/default/docker)
  DOCKER=/usr/bin/$UPSTART_JOB
  DOCKER_OPTS="-H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock"
  if [ -f /etc/default/$UPSTART_JOB ]; then
    . /etc/default/$UPSTART_JOB
  fi
  exec "$DOCKER" daemon $DOCKER_OPTS
end script
# Don't emit "started" event until docker.sock is ready.
# See https://github.com/docker/docker/issues/6647
post-start script
  DOCKER_OPTS=
  if [ -f /etc/default/$UPSTART_JOB ]; then
"/etc/init/docker.conf" 60L, 1582C
EDIT2: Output of ps aux | grep docker
vagrant@vagrant-ubuntu-trusty-64:~$ ps aux | grep docker
root       858  0.2  4.2 401836 21504 ?  Ssl  06:12  0:00 /usr/bin/docker daemon --insecure-registry 11.22.33.44:5000
vagrant   1694  0.0  0.1  10460   936 pts/0  S+  06:15  0:00 grep --color=auto docker
The problem
From the output of ps aux | grep docker, you can see that the options the daemon was started with do not match the ones in the docker.conf file. Another file is therefore being used to start the docker daemon service.
Solution
To solve this, track down the file that contains the option --insecure-registry 11.22.33.44:5000; it may be /etc/default/docker, /etc/init/docker.conf, /etc/systemd/system/docker.service, or somewhere else, and modify it accordingly with the needed options.
Then restart the daemon and you're good to go!
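A quick way to track it down is to grep the usual suspects; the paths below are the common ones and may differ on your system:
grep -r -- '--insecure-registry' /etc/default/docker /etc/init/docker.conf /etc/systemd/system 2>/dev/null
# after editing, restart the daemon (Upstart on Ubuntu 14.04) and re-test
sudo service docker restart
curl localhost:4243/_ping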
I saw some blog posts where people talk about JMeter and Docker. I understand that Docker is helpful for setting up a container with all the dependencies. But they all run/create the containers on the same host, so all the containers share the host's resources. It is like running multiple instances of JMeter on the same host, which will not help generate more load.
When a host has 12GB of RAM, I think one instance of JMeter with a 10GB heap can generate more load than 10 containers each running one JMeter instance.
What is the point of running docker here?
I made an automatic solution that can be easily integrated with Jenkins.
The Dockerfile should extend java:8 and add the JMeter build. I will call this Docker image jmeter-base:
FROM java:8
RUN mkdir /jmeter \
&& cd /jmeter/ \
&& wget https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-3.3.tgz \
&& tar -xvzf apache-jmeter-3.3.tgz \
&& rm apache-jmeter-3.3.tgz
ENV JMETER_HOME /jmeter/apache-jmeter-3.3/
# Add Jmeter to the Path
ENV PATH $JMETER_HOME/bin:$PATH
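The helper script at the end of this answer expects this Dockerfile to live in a jmeter-base/ directory, so the image builds with:
docker build -t jmeter-base jmeter-base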
If you want to use a master-slave solution, this is the jmeter master Dockerfile:
FROM jmeter-base
WORKDIR $JMETER_HOME
# Ports to be exposed from the container for JMeter Master
RUN mkdir scripts
EXPOSE 60000
And this is the jmeter slave Dockerfile:
FROM jmeter-base
# Ports to be exposed from the container for JMeter Slaves/Server
EXPOSE 1099 50000
# Application to run on starting the container
ENTRYPOINT $JMETER_HOME/bin/jmeter-server \
-Dserver.rmi.localport=50000 \
-Dserver_port=1099
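The script below also assumes a docker-compose.yml defining a master service (with a fixed container name, since the script calls docker exec -it master) and a scalable slave service; the answer never shows it. A minimal sketch under those assumptions, with directory names guessed from the Dockerfiles above:
cat > docker-compose.yml <<'EOF'
version: '2'
services:
  master:
    build: ./jmeter-master
    container_name: master
    tty: true   # keep the master container alive for the docker exec calls
  slave:
    build: ./jmeter-slave
EOF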
Now, with both images, you need a script that discovers all the slave IPs and runs the tests. This script does the whole job:
#!/bin/bash
COUNT=${1-1}
docker build -t jmeter-base jmeter-base
docker-compose build && docker-compose up -d && docker-compose scale master=1 slave=$COUNT
SLAVE_IP=$(docker inspect -f '{{.Name}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $(docker ps -aq) | grep slave | awk -F' ' '{print $2}' | tr '\n' ',' | sed 's/.$//')
WDIR=`docker exec -it master /bin/pwd | tr -d '\r'`
mkdir -p results
for filename in scripts/*.jmx; do
  NAME=$(basename $filename)
  NAME="${NAME%.*}"
  eval "docker cp $filename master:$WDIR/scripts/"
  eval "docker exec -it master /bin/bash -c 'mkdir $NAME && cd $NAME && ../bin/jmeter -n -t ../$filename -R$SLAVE_IP'"
  eval "docker cp master:$WDIR/$NAME results/"
done
docker-compose stop && docker-compose rm -f
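Usage sketch, assuming the script is saved as run-tests.sh next to a scripts/ directory of .jmx test plans:
./run-tests.sh 5   # builds the images, starts 1 master and 5 slaves, runs each plan, and collects results/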
I came to understand from this post by a friend of mine that we should not run multiple Docker containers on the same host to generate more load.
http://www.testautomationguru.com/jmeter-distributed-load-testing-using-docker/
Instead, the point of using Docker here is to quickly set up the JMeter environment.