Docker swarm did not update service

I use docker stack deploy to deploy my Python service. First, I edit the code, then:
```
docker build . -f Dockerfile -t my_service:$TAG
docker tag my_service:$TAG register.xxx.com:5000/my_service:$TAG
```
When I use docker run -p 9000:9000 register.xxx.com:5000/my_service:$TAG, it works. But when I use docker stack deploy -c docker-compose.yml my_service_stack, the service is still running the old code.
The relevant part of docker-compose.yml:
```
web:
  image: register.xxx.com:5000/my_service:v0.0.12
  depends_on:
    - redis
    - db
    - rabbit
  links:
    - redis
    - db
    - rabbit
  volumes:
    - web_service_data:/home
  networks:
    - webnet
```
(v0.0.12 is the value of $TAG.)
Dockerfile:
```
FROM python:3.6.4
RUN useradd -ms /bin/bash gmt
RUN mkdir -p /home/logs
WORKDIR /home/gmt/src
COPY /src/requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple/
COPY /src .
RUN cat /home/gmt/src/setting/env.yaml
ENV PYTHONPATH=/home/gmt/src
CMD ["gunicorn", "-c", "/home/gmt/src/gunicornconf.py", "run:app"]
```
So, why?

I don't see where you actually push your image from your build server to your registry. I'll assume you're doing that after the build and before the deploy. Beyond that:
- You should not be using a volume for code. That volume will overwrite /home in the container with the contents of the volume, which are likely stale. Using or storing code in volumes is an anti-pattern.
- You don't need links:; they are legacy.
- depends_on: is ignored in swarm mode.
- You should not store logs in the container; send them to stdout and stderr instead.
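Putting the whole build/push/deploy cycle together, it would look something like this (a sketch using the registry and stack name from the question; --with-registry-auth only matters if the registry requires credentials):

```
docker build . -f Dockerfile -t my_service:$TAG
docker tag my_service:$TAG register.xxx.com:5000/my_service:$TAG
docker push register.xxx.com:5000/my_service:$TAG
docker stack deploy --with-registry-auth -c docker-compose.yml my_service_stack
```

Without the push, the swarm nodes resolve the tag against whatever image the registry already holds, which is exactly the "old code" symptom described.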

Related

Running docker-compose container from docker hub

I have created 2 containers locally and pushed them to Docker Hub to use on a VM; one is an Angular app and the other is a Django REST API with a db:
Angular App
Dockerfile
```
FROM node:latest as node
WORKDIR /app
COPY . .
RUN npm install npm@8.11.0 --legacy-peer-deps
RUN npm run build --prod
FROM nginx:alpine
COPY --from=node /app/dist/bom-e-barato /usr/share/nginx/html
```
I created the image by doing docker build -t andreclerigo/bom_e_barato:latest ., pushed it with docker push andreclerigo/bom_e_barato:latest, and then I can run it on my VM by doing docker run -d -p 80:80 andreclerigo/bom_e_barato:latest.
Django REST API
Dockerfile
```
FROM python:3.8.10
ENV PYTHONUNBUFFERED 1
RUN mkdir /rest-api-scraper
WORKDIR /rest-api-scraper
ADD . /rest-api-scraper/
RUN pip install -r requirements.txt
```
docker-compose.yml
```
version: '3'
services:
  web:
    build: .
    command: bash -c "python backend/manage.py makemigrations && python backend/manage.py migrate && python backend/manage.py runserver 0.0.0.0:8000"
    container_name: rest-api-scraper
    volumes:
      - .:/rest-api-scraper
    ports:
      - "8000:8000"
    image: andreclerigo/rest-api-scraper:latest
```
I created the image by doing docker-compose build, then pushed it to Docker Hub with docker-compose push; to run locally I can do docker-compose up.
Question
What steps do I need to take to pull this image and run it (docker-compose up) on my VM?
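A sketch of one common approach, assuming both images are public on Docker Hub: put a compose file on the VM that references only the pushed images (no build: and no source-code bind mount, since the VM has no source tree), then pull and start it. The service names and the Angular port mapping here are my own choices:

```
version: '3'
services:
  angular:
    image: andreclerigo/bom_e_barato:latest
    ports:
      - "80:80"
  web:
    image: andreclerigo/rest-api-scraper:latest
    command: bash -c "python backend/manage.py migrate && python backend/manage.py runserver 0.0.0.0:8000"
    ports:
      - "8000:8000"
```

On the VM, docker-compose pull followed by docker-compose up -d then fetches the images from Docker Hub and starts both services.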

How do I make my VS Code dev/remote container port accessible to localhost?

I have a GraphQL application that runs inside a container. If I run docker compose build followed by docker compose up, I can connect to it via localhost:9999/graphql. In the docker-compose file the port mapping is 9999:80, and when I run the docker container ls command I can see the ports are forwarded as expected.
I'd like to run this in a VS Code remote container. Selecting Open folder in remote container gives me the option of selecting either the dockerfile or the docker-compose file to build the container. I've tried both options and neither allows me to access the GraphQL playground from localhost. Running from docker-compose I can see that the ports appear to be forwarded in the same manner as if I ran docker compose up, but I can't access the site.
Where am I going wrong?
Update: If I run docker compose up on the container that is built by VS Code, I can connect to localhost and the GraphQL playground.
```
FROM docker.removed.local/node
MAINTAINER removed
WORKDIR /opt/app
COPY package.json /opt/app/package.json
COPY package-lock.json /opt/app/package-lock.json
COPY .npmrc /opt/app/.npmrc
RUN echo "nameserver 192.168.11.1" > /etc/resolv.conf && npm ci
RUN mkdir -p /opt/app/logs
# Setup a path for using local npm packages
RUN mkdir -p /opt/node_modules
ENV PATH /opt/node_modules/.bin:$PATH
COPY ./ /opt/app
EXPOSE 80
ENV NODE_PATH /opt:/opt/app:$NODE_PATH
ARG NODE_ENV
VOLUME ["/opt/app"]
CMD ["forever", "-o", "/opt/app/logs/logs.log", "-e", "/opt/app/logs/error.log", "-a", "server.js"]
```
```
version: '3.5'
services:
  server:
    build: .
    container_name: removed-data-graph
    command: nodemon --ignore 'public/*' --legacy-watch src/server.js
    image: docker.removed.local/removed-data-graph:local
    ports:
      - "9999:80"
    volumes:
      - .:/opt/app
      - /opt/app/node_modules/
      #- ${LOCAL_PACKAGE_DIR}:/opt/node_modules
    depends_on:
      - redis
    networks:
      - company-network
    environment:
      - NODE_ENV=dev
  redis:
    container_name: redis
    image: redis
    networks:
      - company-network
    ports:
      - "6379:6379"
networks:
  company-network:
    name: company-network
```
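One likely cause (an assumption, since the .devcontainer configuration isn't shown): when VS Code builds the dev container from the Dockerfile alone, the ports: mapping in docker-compose.yml never applies, so nothing is published on 9999. A devcontainer.json that reuses the compose file and forwards the port explicitly would look roughly like this (the file layout and "name" are hypothetical):

```
// .devcontainer/devcontainer.json
{
  "name": "data-graph",
  "dockerComposeFile": "../docker-compose.yml",
  "service": "server",
  "workspaceFolder": "/opt/app",
  "forwardPorts": [9999]
}
```

With "forwardPorts", VS Code tunnels the port to localhost even when the container itself publishes nothing.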

Container keeps exiting right after running it

docker-compose.yaml:
```
web:
  build: .
  command: ./main
  ports:
    - "8888:3412"
  volumes:
    - .:/code
  links:
    - redis
redis:
  image: redis
```
Dockerfile:
```
FROM golang:1.6
ADD main.go .
EXPOSE 3412
ENTRYPOINT /go
RUN go build main.go
```
So after running docker run -d imagename, there is no running container, and docker logs containername doesn't show anything.
```
ENTRYPOINT /go
```
is equivalent to running /bin/sh -c /go. go is actually a directory in your container, so it will fail, because the shell cannot execute a directory. Remove the background flag -d and use docker run imagename and you will see this error.
What you probably want is ENTRYPOINT /usr/local/go/bin/go, to use go as an executable from the container. Or even better:
```
ENTRYPOINT ["/usr/local/go/bin/go"]
```
so you would be able to pass arguments to go.
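Since the compose file runs ./main, the intent is presumably to compile main.go at build time and run the resulting binary. A sketch of a Dockerfile that does that (keeping the question's golang:1.6 base image; note the build step must come before the binary can serve as the entrypoint):

```
FROM golang:1.6
WORKDIR /app
ADD main.go .
RUN go build -o main main.go
EXPOSE 3412
ENTRYPOINT ["./main"]
```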

Docker - run image from docker-compose

I have set up a docker-compose.yml file that runs a web service along with postgres.
It works nicely when I run it with docker-compose up.
docker-compose.yml:
```
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
```
Dockerfile:
```
FROM python:3
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
CMD ["python", "manage.py", "runserver"]
```
Is there any way to construct an image out of the services?
I tried it with docker-compose build, but running the created image simply freezes the terminal.
Thanks!
docker-compose is a container orchestration tool, albeit a simple one, not a bundler of multiple images and preferences into one. In fact, such a thing does not even exist.
What happens when you run docker-compose up is that it effectively runs docker-compose build for those images that need to be built (web in your example), then effectively replaces the build: . with image: web and executes the configuration as defined by the compose file.
So if you were to run docker-compose build manually and wanted to run the same configuration you have in the compose file by hand, you would need to do something along these lines (in order):
- run docker-compose build or docker build -t web . to build the web image
- run docker run --name db postgres
- run docker run --name web -v "$(pwd)":/code -p 8000:8000 web python manage.py runserver 0.0.0.0:8000
(Unlike compose, docker run requires an absolute host path for a bind mount, hence "$(pwd)".)
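One caveat worth adding: with plain docker run the two containers don't share the default network that compose would create, so web cannot reach db by hostname. A sketch that adds a user-defined bridge network (the network name mynet is my own choice):

```
docker network create mynet
docker build -t web .
docker run -d --name db --network mynet postgres
docker run -d --name web --network mynet -v "$(pwd)":/code -p 8000:8000 \
  web python manage.py runserver 0.0.0.0:8000
```

On a user-defined network, Docker's embedded DNS resolves container names, so db is reachable from web just as it is under compose.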

docker copy file from one container to another?

Here is what I want to do:
```
docker-compose build
docker-compose $COMPOSE_ARGS run --rm task1
docker-compose $COMPOSE_ARGS run --rm task2
docker-compose $COMPOSE_ARGS run --rm combine-both-task2
docker-compose $COMPOSE_ARGS run --rm selenium-test
```
And a docker-compose.yml that looks like this:
```
task1:
  build: ./task1
  volumes_from:
    - task1_output
  command: ./task1.sh
task1_output:
  image: alpine:3.3
  volumes:
    - /root/app/dist
  command: /bin/sh
# essentially I want to copy task1 output into task2 because they each use different images and use different tech stacks...
task2:
  build: ../task2
  volumes_from:
    - task2_output
    - task1_output:ro
  command: /bin/bash -cx "mkdir -p task1 && cp -R /root/app/dist/* ."
```
So now all the required files are in the task2 container... how would I start up a web server and expose a port with the content in task2?
I am stuck here... how do I access the stuff from task2_output in my combine-tasks/Dockerfile:
```
combine-both-task2:
  build: ../combine-tasks
  volumes_from:
    - task2_output
```
In recent versions of docker, named volumes replace data containers as the easy way to share data between containers.
```
docker volume create --name myshare
docker run -v myshare:/shared task1
docker run -v myshare:/shared -p 8080:8080 task2
...
```
Those commands will set up one local volume, and the -v myshare:/shared argument will make that share available as the folder /shared inside each container.
To express that in a compose file:
```
version: '2'
services:
  task1:
    build: ./task1
    volumes:
      - 'myshare:/shared'
  task2:
    build: ./task2
    ports:
      - '8080:8080'
    volumes:
      - 'myshare:/shared'
volumes:
  myshare:
    driver: local
```
To test this out, I made a small project:
- docker-compose.yml (above)
- task1/Dockerfile
- task1/app.py
- task2/Dockerfile
I used node's http-server as task2/Dockerfile:
```
FROM node
RUN npm install -g http-server
WORKDIR /shared
CMD http-server
```
and task1/Dockerfile used python:alpine, to show two different stacks writing and reading.
```
FROM python:alpine
WORKDIR /app
COPY . .
CMD python app.py
```
here's task1/app.py
```
import time

count = 0
while True:
    fname = '/shared/{}.txt'.format(count)
    with open(fname, 'w') as f:
        f.write('content {}'.format(count))
    count = count + 1
    time.sleep(10)
```
Take those four files, and run them via docker-compose up in the directory with docker-compose.yml - then visit $DOCKER_HOST:8080 to see a steadily updated list of files.
Also, I'm using docker version 1.12.0 and compose version 1.8.0 but this should work for a few versions back.
And be sure to check out the docker docs for details I've probably missed here:
https://docs.docker.com/engine/tutorials/dockervolumes/
For me the best way to copy a file from or to a container is using docker cp. For example, if you want to copy schema.xml from the apacheNutch container to the solr container:
```
docker cp apacheNutch:/root/nutch/conf/schema.xml /tmp/schema.xml
docker cp /tmp/schema.xml solr:/opt/solr-8.1.1/server/solr/configsets/nutch/conf
```
The first command copies the file out of the apacheNutch container to /tmp on the host; the second copies it from the host into the solr container's config directory.
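The intermediate file can also be avoided: docker cp streams a tar archive on stdout/stdin when - is used as the source or destination, so the two steps collapse into a pipe (same container names and paths as above):

```
docker cp apacheNutch:/root/nutch/conf/schema.xml - \
  | docker cp - solr:/opt/solr-8.1.1/server/solr/configsets/nutch/conf
```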
