Docker: Swarm worker nodes not finding locally built image

Maybe I missed something, but I made a local docker image. I have a 3 node swarm up and running. Two workers and one manager. I use labels as a constraint. When I launch a service to one of the workers via the constraint it works perfectly if that image is public.
That is, if I do:
docker service create --name redis --network my-network --constraint node.labels.myconstraint==true redis:3.0.7-alpine
Then the redis service is sent to one of the worker nodes and is fully functional. Likewise, if I run my locally built image WITHOUT the constraint, since my manager is also a worker, it gets scheduled to the manager and runs perfectly well. However, when I add the constraint, it fails on the worker node. From docker service ps 2l30ib72y65h I see:
... Shutdown Rejected 14 seconds ago "No such image: my-customized-image"
Is there a way to make the workers have access to the local images on the manager node of the swarm? Does it use a specific port that might not be open? If not, what am I supposed to do - run a local repository?

The manager node doesn't share out its local images to the other nodes. You need to spin up a registry server (or use hub.docker.com). The effort needed for that isn't very significant:
# first create a user, updating $user for your environment:
if [ ! -d "auth" ]; then
mkdir -p auth
fi
touch auth/htpasswd
chmod 666 auth/htpasswd
docker run --rm -it \
-v `pwd`/auth:/auth \
--entrypoint htpasswd registry:2 -B /auth/htpasswd $user
chmod 444 auth/htpasswd
# then spin up the registry service listening on port 5000
docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/auth/htpasswd:/auth/htpasswd:ro \
-v `pwd`/registry:/var/lib/registry \
-e "REGISTRY_AUTH=htpasswd" \
-e "REGISTRY_AUTH_HTPASSWD_REALM=Local Registry" \
-e "REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd" \
-e "REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/var/lib/registry" \
registry:2
# then push your image
docker login localhost:5000
docker tag my-customized-image localhost:5000/my-customized-image
docker push localhost:5000/my-customized-image
# then spin up the service with the new image name
# replace registryhost with ip/hostname of your registry Docker host
docker service create --name custom --network my-network \
--constraint node.labels.myconstraint==true --with-registry-auth \
registryhost:5000/my-customized-image
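If a worker still reports "No such image", a quick sanity check (registryhost again being your registry's hostname or IP) is to log in and pull the image manually on that worker node:
docker login registryhost:5000
docker pull registryhost:5000/my-customized-image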

For me, this step-by-step guide worked. Note, however, that it sets up an insecure registry:
# Start your registry
$ docker run -d -p 5000:5000 --name registry registry:2
# Tag the image so that it points to your registry
$ docker tag my_existing_image localhost:5000/myfirstimage
# Push it to local registry/repo
$ docker push localhost:5000/myfirstimage
# For verification you can use this command:
$ curl -X GET http://localhost:5000/v2/_catalog
# It will print out all the images in the registry.
# On the private registry machine, add an extra parameter to the dockerd startup
# command (in the Docker systemd service unit) to allow the insecure registry:
ExecStart=/usr/bin/dockerd --insecure-registry IP_OF_CURRENT_MACHINE:5000
# Flush changes and restart Docker:
$ systemctl daemon-reload
$ systemctl restart docker.service
# On the client machines, tell Docker that this private registry is insecure by creating or modifying the file '/etc/docker/daemon.json':
{ "insecure-registries":["hostname:5000"] }
# Restart docker:
$ systemctl restart docker.service
# In swarm mode you need to point to that registry by its hostname rather than localhost, for example: hostname:5000/myfirstimage
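Putting that together for swarm, a minimal sketch (hostname:5000 being the registry configured above; the service name is illustrative):
docker tag my_existing_image hostname:5000/myfirstimage
docker push hostname:5000/myfirstimage
docker service create --name myfirstservice hostname:5000/myfirstimage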

Images have to be downloaded to the local cache on each node. The reason is that if you store all of your images on one node only and that node goes down, swarm would have no way to spawn new tasks (containers) on the other nodes.
I personally just pull a copy of all the images on each node before starting the services. That can be done in a bash script or a Makefile (e.g. the pull target below).
pull:
	@for node in $$NODE_LIST; do \
	  OPTS=$$(docker-machine config $$node); \
	  set -x; \
	  docker $$OPTS pull postgres:9.5.2; \
	  docker $$OPTS pull elasticsearch:2.3.3; \
	  docker $$OPTS pull schickling/beanstalkd; \
	  docker $$OPTS pull gliderlabs/logspout; \
	  # etc: pull any other images your services need \
	  set +x; \
	done
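A quick usage sketch, assuming the nodes were created with docker-machine and NODE_LIST holds their names (the names here are illustrative):
NODE_LIST="node-1 node-2 node-3" make pull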

Related

Can't save file on remote Jupyter server running in docker container

I'm trying to work in Jupyter Lab run via Docker on a remote machine, but can't save any of the files I open.
I'm working with a Jupyter Docker Stack. I've installed docker on my remote machine and successfully pulled the image.
I set up port forwarding in my ~/.ssh/config file:
Host mytunnel
HostName <remote ip>
User root
ForwardAgent yes
LocalForward 8888 localhost:8888
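With that config, the tunnel would typically be opened before browsing with something like:
ssh -N mytunnel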
When I fire up the container, I use the following script:
docker run \
-p 8888:8888 \
-e JUPYTER_ENABLE_LAB=yes \
-v "${PWD}":/home/jovyan/work jupyter/tensorflow-notebook
The container is running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c8fc3c720af1 jupyter/tensorflow-notebook "tini -g -- start-no…" 8 minutes ago Up 8 minutes 0.0.0.0:8888->8888/tcp, :::8888->8888/tcp adoring_khorana
I get the regular Jupyter url back:
http://127.0.0.1:8888/lab?token=<token>
But when I access the server in my browser, the Save option is disabled.
I've tried some of the solutions proposed elsewhere on SO, but no luck.
Is this something about connecting over SSH? The Jupyter server thinks it's not a secure connection?
It is possible that the problem is related to the SSH configuration, but I think it is more likely related to a permissions issue with your volume mount.
Please try reviewing your docker container logs, looking for permission-related errors. You can do that with the following:
docker container logs <container id>
See the output provided by your docker run command too.
In addition, try opening a shell in the container:
docker exec -it <container id> /bin/bash
and see if you are able to create a file in the default work directory:
touch /home/jovyan/work/test_file
Finally, the Jupyter docker stacks repository has a troubleshooting page almost entirely devoted to permissions issues.
Consider especially the solutions provided under Additional tips and troubleshooting commands for permission-related errors and, as suggested, try launching the container as your OS user:
docker run \
-p 8888:8888 \
-e JUPYTER_ENABLE_LAB=yes \
--user "$(id -u)" --group-add users \
-v "${PWD}":/home/jovyan/work jupyter/tensorflow-notebook
After that, as also suggested in the documentation mentioned above, check that the volume is properly mounted using the following command:
docker inspect <container_id>
In the output, note the value of the RW field under Mounts, which indicates whether the volume is writable (true) or not (false).
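To narrow the rather long inspect output down to just the mount information, a small sketch using inspect's Go-template formatting (the container id is a placeholder):
docker inspect --format '{{json .Mounts}}' <container_id>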

Why do I keep seeing the nginx index.html on localhost when I run my Docker image?

I installed and ran nginx on my Linux machine to understand the configuration, etc. After a while I decided to remove it safely, by following this thread, in order to use it in Docker instead.
Following this documentation, I ran this command:
sudo docker run --name ngix -d -p 8080:80 pillalexakis/myrestapi:01
and I saw nginx's homepage at localhost.
Then I deleted all nginx images & stopped all containers, and I also ran this command:
sudo docker system prune -a
But now I started my own service with this command:
sudo docker run -p 192.168.2.9:7777:8085 phillalexakis/myfirstapi:01
and I keep seeing the nginx index.html at localhost.
How can I totally remove it?
Note: I'm new to Docker and might have missed a lot of things. Let me know what extra docker commands I should run to provide better information.
Assuming your host has been prepared as below:
your files (index.html, js, etc) under folder - /myhost/nginx/html
your nginx configuration - /myhost/nginx/nginx.conf
Solution
Map your files (as a volume) on the fly from outside the Docker image via the docker CLI.
This is the command
docker run -it --rm -d -p 8080:80 --name web \
-v /myhost/nginx/html:/usr/share/nginx/html \
-v /myhost/nginx/nginx.conf:/etc/nginx/nginx.conf \
nginx
Or copy your files into the Docker image by building your own image via a Dockerfile.
This is your Dockerfile under /myhost/nginx
FROM nginx:latest
COPY ./html/index.html /usr/share/nginx/html/index.html
This is the command to build your docker image
cd /myhost/nginx
docker build -t pillalexakis/nginx .
This is the command to run your docker image
docker run -it --rm -d -p 8080:80 --name web \
pillalexakis/nginx
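Either way, a quick check that your own content (and not the default nginx page) is being served, assuming the port mapping above:
curl http://localhost:8080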

How to use docker to run sqli-labs(a web application) on Windows?

I'm trying to use Docker to run sqli-labs. I use the command:
docker pull acgpiano/sqli-labs
to pull the image, and afterwards I use the command:
docker run -dt --name sqli -p 80:80 --rm acgpiano/sqli-labs
I visit http://localhost, but my browser shows me a 404.
Why can't I get the right page?
Because the page is not accessible on your machine's localhost but on your container's localhost.
You can access it if you go to localhost inside your container, or via your machine's IP.
EDIT:
How to access localhost inside the container:
# I ran the same commands as you
docker pull acgpiano/sqli-labs
docker run -dt --name sqli -p 80:80 --rm acgpiano/sqli-labs
# First get the container id with
docker ps -a
# I got this output:
# CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
# d91384617370 acgpiano/sqli-labs "/run.sh" 3 minutes ago Up 3 minutes 0.0.0.0:80->80/tcp, 3306/tcp sqli
# Then execute bash command as root for this container id
docker exec -it -u root d91384617370 bash
# Inside the container update apt-get and get curl (or wget)
apt-get update -y
apt-get install curl -y
# Go to localhost - the page will be printed in your terminal
curl localhost
How to access it on Windows 10 (tested with Docker version 18.03.1-ce, build 9ee9f40), using PowerShell and Docker for Windows:
docker pull acgpiano/sqli-labs
docker run -dt --name sqli -p 80:80 --rm acgpiano/sqli-labs
# In powershell get the hostname - copy it to your browser - http://your_hostname or just your_hostname
hostname
# Or run ipconfig to find your IP - copy it to your browser http://your_ip or your_ip
ipconfig | findstr [0-9].\.
I would also recommend this example from the docs; it's a great way to get started.

Run bitcoind with bitcoind.conf in docker

I know docker, but less about bitcoind.
Now I want to use this docker image to start my own test environment:
The description tells me:
docker volume create --name=bitcoind-data
docker run -v bitcoind-data:/bitcoin --name=bitcoind-node -d \
-p 8333:8333 \
-p 127.0.0.1:8332:8332 \
kylemanna/bitcoind
Now I want to know how to add my bitcoind.conf.
It isn't explained anywhere. Can I add it at container startup or via docker exec?
The repository contains a documentation file dedicated to your issue: https://github.com/kylemanna/docker-bitcoind/blob/master/docs/config.md
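In short, the container keeps its state in the /bitcoin volume, so one approach (a sketch only; the exact in-container path of the config file is an assumption here, so confirm it against the linked config.md) is to bind-mount your own config when starting the node:
# Sketch: mount a local bitcoin.conf into the container's data directory
# (the path inside the container is an assumption; verify it in config.md)
docker run -v bitcoind-data:/bitcoin \
  -v "$PWD/bitcoin.conf:/bitcoin/.bitcoin/bitcoin.conf:ro" \
  --name=bitcoind-node -d \
  -p 8333:8333 \
  -p 127.0.0.1:8332:8332 \
  kylemanna/bitcoind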

Cannot run container after committing changes

Just basic and simple steps illustrating what I have tried:
docker pull mysql/mysql-server
sudo docker run -i -t mysql/mysql-server:latest /bin/bash
yum install vi
vi /etc/my.cnf -> bind-address=0.0.0.0
exit
docker ps
docker commit new_image_name
docker run --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=secret -d new_image_name
docker ps -a STATUS - Exited (1)
Please let me know what I did wrong.
Instead of trying to modify an existing image, try using (for testing) MYSQL_ROOT_HOST=%.
That would allow root login from any IP. (As seen in docker-library/mysql issue 241)
sudo docker run --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=123456 -e MYSQL_ROOT_HOST=% -d mysql/mysql-server:latest
The README mentions:
By default, MySQL creates the 'root'@'localhost' account.
This account can only be connected to from inside the container, requiring the use of the docker exec command as noted under Connect to MySQL from the MySQL Command Line Client.
To allow connections from other hosts, set this environment variable.
As an example, the value "172.17.0.1", which is the default Docker gateway IP, will allow connections from the Docker host machine.
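Once the container is running, a quick way to verify that root can now log in from outside the container (assuming a mysql client on the host and the port mapping above):
mysql -h 127.0.0.1 -P 3306 -u root -p
# enter the MYSQL_ROOT_PASSWORD value when prompted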
