Running a docker-compose container from Docker Hub

I have created two containers locally and pushed them to Docker Hub to use on a VM: one is an Angular app and the other is a Django REST API with a database:
Angular App
Dockerfile
FROM node:latest as node
WORKDIR /app
COPY . .
RUN npm install npm@8.11.0 --legacy-peer-deps
RUN npm run build --prod
FROM nginx:alpine
COPY --from=node /app/dist/bom-e-barato /usr/share/nginx/html
I created the image with docker build -t andreclerigo/bom_e_barato:latest ., pushed it with docker push andreclerigo/bom_e_barato:latest, and then I can run it on my VM with docker run -d -p 80:80 andreclerigo/bom_e_barato:latest.
Django REST API
Dockerfile
FROM python:3.8.10
ENV PYTHONUNBUFFERED 1
RUN mkdir /rest-api-scraper
WORKDIR /rest-api-scraper
ADD . /rest-api-scraper/
RUN pip install -r requirements.txt
docker-compose.yml
version: '3'
services:
  web:
    build: .
    command: bash -c "python backend/manage.py makemigrations && python backend/manage.py migrate && python backend/manage.py runserver 0.0.0.0:8000"
    container_name: rest-api-scraper
    volumes:
      - .:/rest-api-scraper
    ports:
      - "8000:8000"
    image: andreclerigo/rest-api-scraper:latest
I created the image with docker-compose build, then pushed it to Docker Hub with docker-compose push. To run it locally I can do docker-compose up.
Question
What steps do I need to take to pull this image and run it (docker-compose up) on my VM?
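One way to do it, sketched here under two assumptions: the image is public on Docker Hub (otherwise run docker login on the VM first), and the VM has docker-compose installed. Put a compose file on the VM that references the pushed image instead of a build context, and drop the bind mount, since the VM has no source checkout to mount:

version: '3'
services:
  web:
    image: andreclerigo/rest-api-scraper:latest
    container_name: rest-api-scraper
    command: bash -c "python backend/manage.py makemigrations && python backend/manage.py migrate && python backend/manage.py runserver 0.0.0.0:8000"
    ports:
      - "8000:8000"

Then, on the VM, in the directory holding that file:

docker-compose pull
docker-compose up -d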

Related

Docker for Windows - Volumes not working correctly

I am starting to work with Docker for Windows, but I can't make volumes work with docker-compose.
First, I've created a simple Dockerfile:
FROM node:latest
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN npm install nodemon -g
Then, a docker-compose.yml:
version: '3'
services:
  nodeServer:
    build: .
    volumes:
      - './:/usr/src/app'
    command: bash -c "npm run start"
When a volume is declared in the docker-compose.yml, it doesn't work.
But, when I try to bind a volume through the command line like this:
docker build .
docker run -it -v ${PWD}:/usr/src/app d0d9397e9194 bash
It works. I can't understand the difference between these two approaches.
I've checked my configuration more than once.
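One frequent cause, offered here as an assumption since the question doesn't show the Docker for Windows settings: the drive holding the project must be shared with Docker (Settings > Shared Drives), or the bind mount silently resolves to an empty directory. It can also help to let compose convert Windows-style paths. A hedged sketch, in PowerShell:

# Assumption: the project lives on C: and C: is shared with Docker
# (Docker for Windows: Settings > Shared Drives).
# Ask docker-compose to convert Windows paths for the Linux daemon:
$env:COMPOSE_CONVERT_WINDOWS_PATHS = "1"
docker-compose down
docker-compose up --build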

Docker Image Contains files that Docker Container doesn't

I have a Dockerfile that contains steps that create a directory and run an Angular build script that outputs to that directory. This all seems to run correctly. However, when the container runs, the built files and directory are not there.
If I run a shell in the image:
docker run -it pnb_web sh
# cd /code/static
# ls
assets favicon.ico index.html main.js main.js.map polyfills.js polyfills.js.map runtime.js runtime.js.map styles.js styles.js.map vendor.js vendor.js.map
If I exec a shell in the container:
docker exec -it ea23c7d30333 sh
# cd /code/static
sh: 1: cd: can't cd to /code/static
# cd /code
# ls
Dockerfile api docker-compose.yml frontend manage.py mysite.log pnb profiles requirements.txt settings.ini web_variables.env
david@lightning:~/Projects/pnb$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ea23c7d30333 pnb_web "python3 manage.py r…" 13 seconds ago Up 13 seconds 0.0.0.0:8000->8000/tcp pnb_web_1_267d3a69ec52
This is my dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash -
RUN apt install nodejs
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
RUN mkdir /code/static
WORKDIR /code/frontend
RUN npm install -g @angular/cli
RUN npm install
RUN ng build --outputPath=/code/static
and associated docker-compose:
version: '3'
services:
  db:
    image: postgres
  web:
    build:
      context: .
      dockerfile: Dockerfile
    working_dir: /code
    env_file:
      - web_variables.env
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
In the second example, the static directory has never been created or built into. I thought that a container is an instance of an image. How can the container be missing files from the image?
You're confusing build time and run time, along with how volumes come into play.
Remember that a host mount takes priority over the filesystem provided by the image, so even though your built image contains the assets, they are hidden by .services.web.volumes: mounting the host directory over /code shadows the build result.
If you avoid the volume mount, you'll notice that everything works as expected.
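A minimal sketch of that fix against the compose file from the question: comment out (or delete) the bind mount so the container sees the image's /code, including the built /code/static.

version: '3'
services:
  db:
    image: postgres
  web:
    build:
      context: .
      dockerfile: Dockerfile
    working_dir: /code
    env_file:
      - web_variables.env
    command: python3 manage.py runserver 0.0.0.0:8000
    # volumes:
    #   - .:/code    # mounting the host over /code shadows the image's /code/static
    ports:
      - "8000:8000"
    depends_on:
      - db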

Run Gatsby with docker compose

I am trying to run Gatsby with Docker Compose.
From what I understand, the Gatsby site is running in my Docker container.
I map port 8000 of the container to port 8000 on my localhost.
But when I look at localhost:8000 I am not getting my Gatsby site.
I use the following Dockerfile to build the image with docker build -t nxtra/gatsby .:
FROM node:8.12.0-alpine
WORKDIR /project
COPY ./package.json /project/package.json
COPY ./.entrypoint/entrypoint.sh /entrypoint.sh
RUN apk update \
&& apk add bash \
&& chmod +x /entrypoint.sh \
&& npm set progress=false \
&& npm install -g yarn gatsby-cli
EXPOSE 8000
ENTRYPOINT [ "/entrypoint.sh" ]
entrypoint.sh contains:
#!/bin/bash
yarn install
gatsby develop
docker-compose.yml, run with docker-compose up:
version: '3.7'
services:
  gatsby:
    image: nxtra/gatsby
    ports:
      - "8000:8000"
    volumes:
      - ./:/project
    tty: true
docker ps shows that port 8000 is forwarded 0.0.0.0:8000->8000/tcp.
Inspecting my container with docker inspect --format='{{.Config.ExposedPorts}}' id confirms the exposure of the port -> map[8000/tcp:{}]
docker top on the container shows the following processes running in it:
18465 root 0:00 {entrypoint.sh} /bin/bash /entrypoint.sh
18586 root 0:11 node /usr/local/bin/gatsby develop
18605 root 0:00 /usr/local/bin/node /project/node_modules/jest-worker/build/child.js
18637 root 0:00 /bin/bash
Dockerfile and docker-compose.yml are situated in the root of my Gatsby project.
My project runs correctly when I start it without Docker using gatsby develop.
What am I doing wrong? How do I make the Gatsby site running in my container visible on localhost:8000?
My issue was that Gatsby was only listening for requests from within the container, as this answer suggests. Make sure you've configured Gatsby to listen on host 0.0.0.0. Take this (somewhat hacky) setup as an example:
Dockerfile
FROM node:alpine
RUN npm install --global gatsby-cli
docker-compose.yml
version: "3.7"
services:
gatsby:
build:
context: .
dockerfile: Dockerfile
entrypoint: gatsby
volumes:
- .:/app
develop:
build:
context: .
dockerfile: Dockerfile
command: gatsby develop -H 0.0.0.0
ports:
- "8000:8000"
volumes:
- .:/app
You can run Gatsby commands from a container:
docker-compose run gatsby info
Or run the development server:
docker-compose up develop

Docker - run image from docker-compose

I have set up a docker-compose.yml file that runs a web service along with postgres.
It works nicely when I run it with docker-compose up.
docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
Dockerfile:
FROM python:3
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
CMD ["python", "manage.py", "runserver"]
Is there any way to construct an image out of the services?
I tried it with docker-compose build, but running the created image simply freezes the terminal.
Thanks!
docker-compose is a container orchestration tool, albeit a simple one, not a bundler of multiple images and settings into a single image. In fact, such a thing does not even exist.
What happens when you run docker-compose up is that it effectively runs docker-compose build for the images that need to be built (web in your example), replaces the build: . with image: web, and executes the configuration as defined by the compose file.
So if you were to run docker-compose build manually and wanted to reproduce the configuration in the compose file by hand, you would need to do something along the lines of the following (in order):
run docker-compose build or docker build -t web . to build the web image
run docker run --name db postgres
run docker run --name web -v "$(pwd)":/code -p 8000:8000 web python manage.py runserver 0.0.0.0:8000 (note that docker run requires an absolute host path for the bind mount, hence $(pwd) rather than .)
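One caveat, added here as an assumption rather than as part of the original answer: run on their own like this, the two containers do not share the default network that docker-compose would have created, so web cannot resolve the hostname db. A sketch that recreates it, using the hypothetical network name myapp_default:

docker network create myapp_default
docker run -d --name db --network myapp_default postgres
docker run --name web --network myapp_default \
  -v "$(pwd)":/code -p 8000:8000 \
  web python manage.py runserver 0.0.0.0:8000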

Docker swarm did not update service

I use docker stack deploy to deploy my Python service.
First, I edit the code, then:
docker build . -f Dockerfile -t my_service:$TAG
docker tag my_service:$TAG register.xxx.com:5000/my_service:$TAG
When I use docker run -p 9000:9000 register.xxx.com:5000/my_service:$TAG, it works.
But when I use docker stack deploy -c docker-compose.yml my_service_stack, the service is still running the old code.
The relevant part of docker-compose.yml:
web:
  image: register.xxx.com:5000/my_service:v0.0.12
  depends_on:
    - redis
    - db
    - rabbit
  links:
    - redis
    - db
    - rabbit
  volumes:
    - web_service_data:/home
  networks:
    - webnet
v0.0.12 == $TAG
Dockerfile:
```
FROM python:3.6.4
RUN useradd -ms /bin/bash gmt
RUN mkdir -p /home/logs
WORKDIR /home/gmt/src
COPY /src/requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple/
COPY /src .
RUN cat /home/gmt/src/setting/env.yaml
ENV PYTHONPATH=/home/gmt/src
CMD ["gunicorn", "-c", "/home/gmt/src/gunicornconf.py", "run:app"]
```
So, why?
I don't see that you actually pushed your image from your build server to your registry. I'll assume you're doing that after build and before deploy.
You should not be using a volume for code. That volume will overwrite your /home in the container with the contents of the volume, which are likely stale. Using/storing code in volumes is an anti-pattern.
You don't need links:, they are legacy.
depends_on: is ignored in swarm mode.
You should not store logs in the container, you should have them sent to stdout and stderr.
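Putting those points together, a sketch of the trimmed service definition, assuming the image is pushed before deploying (docker push register.xxx.com:5000/my_service:$TAG, then docker stack deploy -c docker-compose.yml my_service_stack):

web:
  image: register.xxx.com:5000/my_service:v0.0.12
  networks:
    - webnet
  # no links: (legacy), no depends_on: (ignored by swarm), and no code
  # volume -- the code ships inside the image; logs go to stdout/stderr
  # and can be read with `docker service logs`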
