I was trying to use host networking mode, which is supposed to share the host's network namespace with the container, according to https://docs.docker.com/network/host/.
However, when I use nc to listen on a port inside the container, I'm unable to see that port listening on the host.
Dockerfile:
FROM kalilinux/kali-rolling
LABEL version="1.0" \
      author="[Redacted Author]" \
      description="Kali Docker Image"
RUN echo "deb http://http.kali.org/kali kali-rolling main contrib non-free" > /etc/apt/sources.list && \
    echo "deb-src http://http.kali.org/kali kali-rolling main contrib non-free" >> /etc/apt/sources.list && \
    echo "kali-docker" > /etc/hostname && \
    set -x && \
    apt-get -yqq update && \
    apt-get -yqq dist-upgrade && \
    apt-get clean && \
    apt-get install -yqq vim telnet nmap metasploit-framework sqlmap wpscan netcat
WORKDIR /root
docker-compose.yml:
version: "3.4"
services:
  kali:
    build:
      network: host
      context: .
      dockerfile: Dockerfile
    volumes:
      - ./kali-root:/root
    tty: true
    privileged: true
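Note that network: host under build: only applies while the image is being built; it does not put the running container in the host's network namespace. For reference, a sketch of a service that shares the host network at runtime would use network_mode at the service level (published ports and user-defined networks don't apply in this mode):
version: "3.4"
services:
  kali:
    build:
      context: .
      dockerfile: Dockerfile
    network_mode: host
    volumes:
      - ./kali-root:/root
    tty: true
    privileged: true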
I built and ran the image in the following manner:
docker-compose build
docker-compose up -d
When I run nc -lvnp 80 from within the container, I'm unable to see anything on the host. Example:
root@kali:~/kali# docker-compose exec kali bash
root@29613f1a15fe:~# nc -lvnp 80
Listening on 0.0.0.0 80
and on the host:
root@kali:~# netstat -antp | grep -i listen
root@kali:~#
What am I doing wrong or missing here?
I have a rootless Docker host, Jenkins running on Docker, and a FastAPI app inside a container as well.
Jenkins dockerfile:
FROM jenkins/jenkins:lts-jdk11
USER root
RUN apt-get update && \
    apt-get -y install apt-transport-https \
        ca-certificates \
        curl \
        gnupg2 \
        software-properties-common && \
    curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
    add-apt-repository \
        "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
        $(lsb_release -cs) \
        stable" && \
    apt-get update && \
    apt-get -y install docker-ce docker-ce-cli containerd.io docker-compose-plugin
This is the docker run command:
docker run -d --name jenkins-docker --restart=on-failure -v jenkins_home:/var/jenkins_home -v /run/user/1000/docker.sock:/var/run/docker.sock -p 8080:8080 -p 5000:5000 jenkins-docker-image
Where -v /run/user/1000/docker.sock:/var/run/docker.sock is used so jenkins-docker can use the host's docker engine.
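As a quick sanity check that the socket mount works, the Docker CLI inside the container should reach the host's rootless engine (the container name matches the run command above):
docker exec jenkins-docker docker version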
Then, for the tests I have a docker compose file:
services:
  app:
    volumes:
      - /home/jap/.local/share/docker/volumes/jenkins_home/_data/workspace/vlep-pipeline_main/test-result:/usr/src
    depends_on:
      - testdb
    ...
  testdb:
    image: postgres:14-alpine
    ...
volumes:
  test-result:
Here I am using the volume created on the host when I ran jenkins-docker-image. After running the Jenkins 'test' stage I can see that a report.xml file was created inside both the host and jenkins-docker volumes.
Inside jenkins-docker
root@89b37f219be1:/var/jenkins_home/workspace/vlep-pipeline_main/test-result# ls
report.xml
Inside host
jap@jap:~/.local/share/docker/volumes/jenkins_home/_data/workspace/vlep-pipeline_main/test-result $ ls
report.xml
I then have the following steps in my Jenkinsfile:
steps {
    sh 'docker compose -p testing -f docker/testing.yml up -d'
    junit "/var/jenkins_home/workspace/vlep-pipeline_main/test-result/report.xml"
}
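For comparison, here is a sketch of that stage that waits for the test run to finish before collecting results, and hands junit a workspace-relative path (it assumes the test container exits when the suite is done, which is what --abort-on-container-exit keys off):
steps {
    sh 'docker compose -p testing -f docker/testing.yml up --abort-on-container-exit'
    junit "test-result/report.xml"
}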
I also tried using the host path for the junit step, but either way I get this in the Jenkins logs:
Recording test results
No test report files were found. Configuration error?
What am I doing wrong?
I'm trying to deploy a stack to a Docker swarm to run an Icecc build farm, but I'm running into trouble: the other nodes in the swarm can't seem to start the Icecc daemon service, I assume because of something swarm does with the network.
compose file:
version: "3.3"
services:
  icecc-scheduler:
    image: git.example.com/devops/docker-services/icecc-scheduler
    build:
      context: ./
      dockerfile: services/icecc-scheduler.dockerfile
    restart: unless-stopped
    ports:
      - "8765:8765"
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]
  icecc-daemon:
    image: git.example.com/devops/docker-services/icecc-daemon
    build:
      context: ./
      dockerfile: services/icecc-daemon.dockerfile
    restart: unless-stopped
    ports:
      - "8766:8766"
      - "10245:10245"
    depends_on:
      - "icecc-scheduler"
    deploy:
      mode: global
I currently have 2 nodes in the swarm; both of them have ports 2377/tcp, 7946/tcp, 7946/udp, and 4789/udp open on their firewalls.
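For reference, a sketch of opening the standard swarm ports on each node, assuming ufw is the firewall in use:
ufw allow 2377/tcp
ufw allow 7946/tcp
ufw allow 7946/udp
ufw allow 4789/udp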
docker node ls:
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
i6edk9ny6z38krv6m5738uzwu st12873 Ready Active 20.10.12
phnvvy2139wft9innou0uermq * st12874 Ready Active Leader 20.10.12
After starting the stack, everything works as expected on the manager node, but the worker node's daemon never starts; I even opened Icemon to make sure the containers were really working.
docker stack services build-farm:
ID NAME MODE REPLICAS IMAGE PORTS
svz61qchpqeb build-farm_icecc-daemon global 1/2 git.example.com/devops/docker-services/icecc-daemon:latest *:8766->8766/tcp, *:10245->10245/tcp
vy2gjhze70ji build-farm_icecc-scheduler replicated 1/1 git.example.com/devops/docker-services/icecc-scheduler:latest *:8765->8765/tcp
On the worker node I listed the logs from the container, and it says it's not permitted to send a broadcast to find the scheduler.
logs from the stuck container:
[7] 2022-07-15 15:20:35: open_send_broadcast sendto(Error: Operation not permitted)
[7] 2022-07-15 15:20:35: scheduler not yet found/selected.
[7] 2022-07-15 15:20:38: ignoring localhost lo for broadcast
The dockerfiles for the daemon and scheduler:
# Don't prompt for manual setup
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update \
    && apt-get install -y \
        icecc \
        build-essential \
        libncurses-dev \
        libssl-dev \
        libelf-dev \
        libudev-dev \
        libpci-dev \
        libiberty-dev \
    && apt-get autoclean \
    && rm -rf \
        /var/lib/apt/lists/* \
        /var/tmp/* \
        /tmp/*
EXPOSE 10245
EXPOSE 8766
COPY configs/icecc.conf /etc/icecc/icecc.conf
ENTRYPOINT [ "iceccd", "-vvv" ]
# Don't prompt for manual setup
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update \
    && apt-get install -y \
        icecc \
    && apt-get autoclean \
    && rm -rf \
        /var/lib/apt/lists/* \
        /var/tmp/* \
        /tmp/*
EXPOSE 8765
COPY configs/icecc.conf /etc/icecc/icecc.conf
ENTRYPOINT [ "icecc-scheduler", "--port", "8765", "-vvv" ]
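Since overlay networks generally don't forward broadcast traffic between nodes, the daemon on the worker may never discover the scheduler by broadcast. One thing to try is pointing the daemon at the scheduler explicitly instead of relying on discovery; a sketch of the daemon's ENTRYPOINT, assuming iceccd's -s/--scheduler-host option and using the stack's service name as the host:
ENTRYPOINT [ "iceccd", "-vvv", "-s", "icecc-scheduler" ]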
I have a pretty simple docker-compose setup working fine on Ubuntu 20 Desktop, but it does not work the same in Ubuntu 20 under WSL2 on Windows 10:
version: "3.8"
services:
  webserver_awesome:
    container_name: myawesomesite.xyz
    hostname: myawesomesite.xyz
    build: ./webserver
    volumes:
      - './app/:/var/www/html'
    depends_on:
      - db_awesome
    networks:
      - internal_myawesomesite
  db_awesome:
    image: mysql:5.7
    ports:
      - '3310:3306'
    environment:
      MYSQL_ROOT_PASSWORD: 'secret'
      MYSQL_DATABASE: 'myawesomesite'
      MYSQL_USER: 'myawesomesite'
      MYSQL_PASSWORD: 'secret'
      MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
    networks:
      - internal_myawesomesite
    volumes:
      - './mysql:/var/lib/mysql'
  redis_awesome:
    image: 'redis:alpine'
    ports:
      - '6381:6379'
    volumes:
      - './redis/:/data'
    networks:
      - internal_myawesomesite
networks:
  internal_myawesomesite:
    driver: bridge
My ./webserver Dockerfile is an Ubuntu image with nginx, PHP 7.4, and Xdebug, and looks like this:
FROM ubuntu:20.04
LABEL maintainer="Cristian E."
WORKDIR /var/www/html
ENV TZ=UTC
RUN apt-get update \
    && apt-get install -y iputils-ping \
    && apt-get install -y nginx \
    && apt-get install -y gnupg gosu curl ca-certificates zip unzip git supervisor sqlite3 libcap2-bin libpng-dev python2 \
    && mkdir -p ~/.gnupg \
    && chmod 600 ~/.gnupg \
    && echo "disable-ipv6" >> ~/.gnupg/dirmngr.conf \
    && apt-key adv --homedir ~/.gnupg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys E5267A6C \
    && apt-key adv --homedir ~/.gnupg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys C300EE8C \
    && echo "deb http://ppa.launchpad.net/ondrej/php/ubuntu focal main" > /etc/apt/sources.list.d/ppa_ondrej_php.list \
    && apt-get update \
    && apt-get install -y php7.4-cli php7.4-dev \
        php7.4-pgsql php7.4-sqlite3 php7.4-gd \
        php7.4-curl php7.4-memcached \
        php7.4-imap php7.4-mysql php7.4-mbstring \
        php7.4-xml php7.4-zip php7.4-bcmath php7.4-soap \
        php7.4-intl php7.4-readline \
        php7.4-msgpack php7.4-igbinary php7.4-ldap \
        php7.4-redis \
        php7.4-fpm \
        nano \
    && pecl install xdebug-3.0.0 \
    && php -r "readfile('http://getcomposer.org/installer');" | php -- --install-dir=/usr/bin/ --filename=composer \
    && curl -sL https://deb.nodesource.com/setup_15.x | bash - \
    && apt-get install -y nodejs \
    && apt-get -y autoremove \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# DELETE DEFAULT NGINX SITE & REPLACE WITH OUR ONE
RUN rm -rf /etc/nginx/sites-available/default
RUN npm install -g laravel-echo-server
# Turn off daemon mode, so we can control nginx via supervisor
# supervisord can only handle processes in foreground. The default for nginx is running in background as daemon. To ensure that your nginx is running with supervisord you have to set 'daemon off' in your nginx.conf
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
RUN mkdir /etc/nginx/ssl
COPY ./ssl /etc/nginx/ssl
COPY ./php7.4/nginx/default.conf /etc/nginx/sites-available/default
COPY ./run.sh ./
COPY ./php7.4/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY ./php7.4/php.ini /etc/php/7.4/fpm/conf.d/custom-php.ini
RUN sed -i 's/user = www-data/user = 1000/g' /etc/php/7.4/fpm/pool.d/www.conf
RUN sed -i 's/group = www-data/group = 1000/g' /etc/php/7.4/fpm/pool.d/www.conf
#RUN chmod -R 775 /var/www/html/storage
# Make permissions play nice
RUN usermod -u 1000 www-data
RUN chown -R 1000:1000 /var/www
What works in Ubuntu 20 Desktop:
I run docker-compose up, the webserver_awesome container goes up and it gets an IP address automatically (as it should);
if I inspect that container and put that container's ip address inside /etc/hosts like so:
xxx.xxx.xx.xx myawesomesite.xyz
then I can access myawesomesite.xyz in the browser and it works fine. I can reach it via port 80 or 443 or any other port configured in my nginx site configs (see Dockerfile).
If you look at the docker-compose file, you will see that I am not publishing any ports for the webserver_awesome container, and yet it is accessible from the host OS via the container's IP.
The reason I like this is that I can have many of these docker-compose instances, one for each PHP project I'm working on, and then map the IPs of those webserver containers inside /etc/hosts to top-level domains like myawesomesite.xyz, anothersite.xyz, and yetanother.xyz, and access all the sites at the same time on port 80 or 443 without conflicts.
Side note:
What is usually shown as general practice with local Docker dev environments is publishing port 8080 to the host and accessing the app via localhost:8080. That is not very good if you want to work on multiple projects at the same time and use port 443 for each one; also, many third-party APIs don't accept localhost as a domain, or any port other than 443.
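One alternative that keeps port 80/443 per project, and may also behave better under WSL2, is binding each project's published ports to a distinct loopback address. A sketch (the 127.0.0.x address is arbitrary; the same address then goes in /etc/hosts):
services:
  webserver_awesome:
    ports:
      - '127.0.0.2:80:80'
      - '127.0.0.2:443:443'
and in /etc/hosts:
127.0.0.2 myawesomesite.xyz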
What doesn't work in Ubuntu 20 WSL2:
If I run docker-compose up just like on Ubuntu 20 Desktop, I can't ping the container's IP from inside Ubuntu 20 WSL (even though the docker-compose up command was run from inside Ubuntu 20 WSL too).
Also, if I put the ip in the /etc/hosts file of Ubuntu 20 WSL, I can't access the site. It just hangs forever.
So my question is: why does networking work one way on native Ubuntu 20 Desktop and differently on Ubuntu 20 over WSL2, even though in both cases the tests were done from the command line inside Ubuntu?
I have 4 containers configured as follows (docker-compose.yml):
version: '3'
networks:
  my-ntwk:
    ipam:
      config:
        - subnet: 172.20.0.0/24
services:
  f-app:
    image: f-app
    tty: true
    container_name: f-app
    hostname: f-app.info.my
    ports:
      - "22:22"
    networks:
      my-ntwk:
        ipv4_address: 172.20.0.5
    extra_hosts:
      - "f-db.info.my:172.20.0.6"
      - "p-app.info.my:172.20.0.7"
      - "p-db.info.my:172.20.0.8"
    depends_on:
      - f-db
      - p-app
      - p-db
  f-db:
    image: f-db
    tty: true
    container_name: f-db
    hostname: f-db.info.my
    networks:
      my-ntwk:
        ipv4_address: 172.20.0.6
  p-app:
    image: p-app
    tty: true
    container_name: p-app
    hostname: p-app.info.my
    networks:
      my-ntwk:
        ipv4_address: 172.20.0.7
  p-db:
    image: p-db
    tty: true
    container_name: prod-db
    hostname: p-db.info.my
    networks:
      my-ntwk:
        ipv4_address: 172.20.0.8
Each image is built from the same Dockerfile:
FROM openjdk:8
RUN apt-get update && \
    apt-get install -y openssh-server
EXPOSE 22
RUN useradd -s /bin/bash -p $(openssl passwd -1 myuser) -d /home/nf2/ -m myuser
ENTRYPOINT service ssh start && bash
Now I want to be able to connect from f-app to any other machine without typing a password, by running: ssh myuser@f-db.info.my.
I know that I need to exchange SSH keys between the servers (that's not a problem). My problem is how to do it with Docker containers, and when (at build time or at runtime).
To ssh without a password you need to create a passwordless user and configure SSH keys in the container: the private key must be available in the source container, and the public key must be added to authorized_keys in the destination container.
Here is a working Dockerfile:
FROM openjdk:7
RUN apt-get update && \
    apt-get install -y openssh-server vim
EXPOSE 22
RUN useradd -rm -d /home/nf2/ -s /bin/bash -g root -G sudo -u 1001 ubuntu
USER ubuntu
WORKDIR /home/ubuntu
RUN mkdir -p /home/nf2/.ssh/ && \
    chmod 0700 /home/nf2/.ssh && \
    touch /home/nf2/.ssh/authorized_keys && \
    chmod 600 /home/nf2/.ssh/authorized_keys
COPY ssh-keys/ /keys/
RUN cat /keys/ssh_test.pub >> /home/nf2/.ssh/authorized_keys
USER root
ENTRYPOINT service ssh start && bash
The docker-compose file remains the same; here is a test script you can try:
#!/bin/bash
set -e
echo "start docker-compose"
docker-compose up -d
echo "list of containers"
docker-compose ps
echo "starting ssh test from f-db to f-app"
docker exec -it f-db sh -c "ssh -i /keys/ssh_test ubuntu@f-app"
For further detail, you can try the working example repo docker-container-ssh:
git clone git#github.com:Adiii717/docker-container-ssh.git
cd docker-container-ssh;
./test.sh
You can replace the keys, as these were used for testing purposes only.
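If you do regenerate them, something like this should produce a compatible pair (the filename matches the COPY and ssh -i paths used above):
ssh-keygen -t rsa -f ssh-keys/ssh_test -N ''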
If you are using Docker Compose, an easy option is to forward the SSH agent, like this:
something:
  container_name: something
  volumes:
    - $SSH_AUTH_SOCK:/ssh-agent # forward the local machine's SSH agent socket into the container
  environment:
    SSH_AUTH_SOCK: /ssh-agent
For SSH forwarding on macOS hosts, instead of mounting the path in $SSH_AUTH_SOCK you have to mount /run/host-services/ssh-auth.sock.
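A sketch of the macOS variant (same idea; only the host-side path changes):
something:
  container_name: something
  volumes:
    - /run/host-services/ssh-auth.sock:/ssh-agent
  environment:
    SSH_AUTH_SOCK: /ssh-agent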
It's a harder problem if you need to use SSH at build time, for example if you're using git clone, or in my case pip and npm, to download from a private repository.
The solution I found is to add the keys using the --build-arg flag, and then use the experimental --squash option (added in 1.13) to merge the layers so that the keys are no longer available after removal. Here's my solution:
Build command
$ docker build -t example --build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)" --build-arg ssh_pub_key="$(cat ~/.ssh/id_rsa.pub)" --squash .
Dockerfile
FROM openjdk:8
ARG ssh_prv_key
ARG ssh_pub_key
RUN apt-get update && \
    apt-get install -y \
        git \
        openssh-server \
        libmysqlclient-dev
# Authorize SSH host
RUN mkdir -p /root/.ssh && \
    chmod 0700 /root/.ssh && \
    ssh-keyscan github.com > /root/.ssh/known_hosts
# Add the keys and set permissions
RUN echo "$ssh_prv_key" > /root/.ssh/id_rsa && \
    echo "$ssh_pub_key" > /root/.ssh/id_rsa.pub && \
    chmod 600 /root/.ssh/id_rsa && \
    chmod 600 /root/.ssh/id_rsa.pub
RUN apt-get update && \
    apt-get install -y openssh-server && \
    apt-get install -y openssh-client
EXPOSE 22
RUN useradd -s /bin/bash -p $(openssl passwd -1 myuser) -d /home/nf2/ -m myuser
ENTRYPOINT service ssh start && bash
If you're using Docker 1.13+ with experimental features enabled, you can append --squash to the build command, which merges the layers, removing the SSH keys and hiding them from docker history.
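As a side note, on newer Docker versions BuildKit can forward an SSH agent at build time, so keys never land in an image layer at all. A sketch (the repository URL is a placeholder; build with DOCKER_BUILDKIT=1 docker build --ssh default -t example .):
# syntax=docker/dockerfile:1
FROM openjdk:8
RUN apt-get update && apt-get install -y git openssh-client
RUN mkdir -p /root/.ssh && ssh-keyscan github.com > /root/.ssh/known_hosts
# The key stays in the SSH agent; it is mounted only for this one RUN step
RUN --mount=type=ssh git clone git@github.com:example/private-repo.git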
I am running docker (via docker-compose) and can't run varnishadm from within the container. The error produced is:
Cannot open /var/lib/varnish/4f0dab1efca3/_.vsm: No such file or directory
Could not open shared memory
I have tried searching on the 'shared memory' issue and _.vsm with no luck. It seems that the _.vsm is not created at all and /var/lib/varnish/ inside the container is empty.
I have tried a variety of -T settings without any luck.
Why run varnishadm?
The reason I need to run varnishadm is to reload Varnish while preserving the cache. My last-resort option is to set up Varnish as a service. We are on an old version of Varnish for the time being.
How am I starting varnishd in the container?
CMD varnishd -F -f /etc/varnish/varnish.vcl \
    -s malloc,1G \
    -a :80
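One thing worth checking: varnishd names its working directory (where _.vsm lives) after the hostname by default, so pinning the instance name and admin address makes it easier for varnishadm to find the same shared memory. A sketch, assuming the -n and -T flags available in Varnish 3.x:
CMD varnishd -F -f /etc/varnish/varnish.vcl \
    -s malloc,1G \
    -a :80 \
    -T 127.0.0.1:6082 \
    -n /var/lib/varnish/varnish
and then inside the container:
varnishadm -n /var/lib/varnish/varnish ping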
Full Dockerfile
FROM ubuntu:12.04
RUN apt-get update \
    && apt-get upgrade -y \
    && apt-get install wget dtrx varnish -y \
    && apt-get install pkg-config autoconf autoconf-archive automake libtool python-docutils libpcre3 libpcre3-dev xsltproc make -y \
    && rm -rf /var/lib/apt/lists/*
RUN export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig/
RUN wget https://github.com/varnishcache/varnish-cache/archive/varnish-3.0.2.tar.gz --no-check-certificate \
    && dtrx -n varnish-3.0.2.tar.gz
WORKDIR /varnish-3.0.2/varnish-cache-varnish-3.0.2/
RUN cd /varnish-3.0.2/varnish-cache-varnish-3.0.2/ && ./autogen.sh && \
    cd /varnish-3.0.2/varnish-cache-varnish-3.0.2/ && ./configure && make install
RUN cd / && wget --no-check-certificate https://github.com/Dridi/libvmod-querystring/archive/v0.3.tar.gz && dtrx -n ./v0.3.tar.gz
WORKDIR /v0.3/libvmod-querystring-0.3
RUN ./autogen.sh && ./configure VARNISHSRC=/varnish-3.0.2/varnish-cache-varnish-3.0.2/ && make install
RUN cp /usr/local/lib/varnish/vmods/* /usr/lib/varnish/vmods/
WORKDIR /etc/varnish/
CMD varnishd -F -f /etc/varnish/varnish.vcl \
    -s malloc,1G \
    -a :80
EXPOSE 80
Full docker-compose
version: "3"
services:
  varnish:
    build: ./
    ports:
      - "8000:80"
    volumes:
      - ./default.vcl:/etc/varnish/varnish.vcl
      - ./devicedetect.vcl:/etc/varnish/devicedetect.vcl
    restart: unless-stopped