Adding varnish to existing node Dockerfile - docker

I have the following Dockerfile
FROM node:10-alpine
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY package*.json ./
USER node
RUN npm install
COPY --chown=node:node . .
EXPOSE 8080
CMD [ "node", "app.js" ]
Now I would like to add Varnish as a cache, and I am considering this repo: docker-varnish. How can I organise both together?
UPDATE 1
When I run docker compose build, it shows the following output, but I don't see anything related to Varnish:
[+] Building 4.6s (11/11) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 362B 0.0s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 174B 0.0s
=> [internal] load metadata for docker.io/library/node:10-alpine 4.0s
=> [internal] load build context 0.1s
=> => transferring context: 21.58kB 0.0s
=> [1/6] FROM docker.io/library/node:10-alpine@sha256:dc98dac24efd4254f75976c40bce46944697a110d06ce7fa47e7268470cf2e28 0.0s
=> CACHED [2/6] RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app 0.0s
=> CACHED [3/6] WORKDIR /home/node/app 0.0s
=> CACHED [4/6] COPY package*.json ./ 0.0s
=> CACHED [5/6] RUN npm install 0.0s
=> [6/6] COPY --chown=node:node . . 0.1s
=> exporting to image 0.1s
=> => exporting layers 0.0s
=> => writing image sha256:7eec4ec76dbff93f8b0ebc6e03051331709d5f55a641be379a3e00697eabde70 0.0s
=> => naming to docker.io/library/test_project_node 0.0s
Am I doing things right?

You could use Docker Compose to orchestrate multiple containers, as described here: https://www.varnish-software.com/developers/tutorials/running-varnish-docker/#6-docker-compose.
This uses the official Varnish images, rather than the one you suggested.
Here's an example of such a docker-compose.yml file that is used by the docker compose command:
version: "3"
services:
varnish:
image: varnish:stable
container_name: varnish
volumes:
- "./default.vcl:/etc/varnish/default.vcl"
ports:
- "80:80"
tmpfs:
- /var/lib/varnish:exec
environment:
- VARNISH_SIZE=2G
depends_on:
- "node"
node:
build: ./
container_name: node
ports:
- "8080:8080"
This docker-compose.yml file assumes that it is located in the same folder as the Dockerfile for your Node container, and that default.vcl is located in that folder as well.
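If you don't have a default.vcl yet, here is a minimal sketch to get started, assuming Varnish should simply forward requests to the node service on port 8080 (on the Compose network, the service name doubles as the hostname; any caching policy beyond Varnish's defaults is up to you):
vcl 4.1;

# Forward all traffic to the "node" service defined in docker-compose.yml.
# Compose's internal DNS resolves the service name to the container's IP.
backend default {
    .host = "node";
    .port = "8080";
}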
Run docker compose up to bootstrap the stack. Regarding UPDATE 1: docker compose build only builds services that have a build: section. The varnish service uses the prebuilt varnish:stable image, which is pulled rather than built, so it is expected that nothing Varnish-related appears in your build output.

Related

Dockerfile is not able to be created. Stops at the requirements.txt step

I have these files:
version: "3"
services:
iris-classifier-uplink:
# if something fails or the server is restarted, the container will restart
restart: always
container_name: text_classification-uplink
image: text_classification-uplink
build:
# build the iris clasifier image from the Dockerfile in the current directory
context: .
in docker-compose.yml
I also have
FROM python:3.8
ADD requirements.txt /
RUN pip install -r /requirements.txt
ADD text_classification.py /
ENV PYTHONUNBUFFERED=1
CMD [ "python", "./text_classification.py" ]
in Dockerfile.txt
and finally
tensorflow-datasets==4.6.0
vega-datasets==0.9.0
scikit-learn==1.0.2
autograd==1.5
numpy==1.21.6
in requirements.txt
My Python program, "text_classification.py", is in the same folder as the three files above.
When I attempt to build the image, I get this output:
C:\Vineet\Research>docker build . -f Dockerfile.txt
[+] Building 6.7s (7/8)
=> [internal] load build definition from Dockerfile.txt 0.0s
=> => transferring dockerfile: 36B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/python:3.8 0.8s
=> CACHED [1/4] FROM docker.io/library/python:3.8@sha256:089d758211770a2dd03ecc4 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 7.61kB 0.0s
=> [2/4] ADD requirements.txt / 0.0s
=> ERROR [3/4] RUN pip install -r /requirements.txt 5.7s
------
> [3/4] RUN pip install -r /requirements.txt:
#6 2.749 Collecting en-core-web-sm@ https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.4.1/en_core_web_sm-3.4.1-py3-none-any.whl
#6 3.292 Downloading https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.4.1/en_core_web_sm-3.4.1-py3-none-any.whl (12.8 MB)
#6 5.139 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 12.8/12.8 MB 7.1 MB/s eta 0:00:00
#6 5.145 ERROR: jaxlib-0.3.22+cuda11.cudnn805-cp37-cp37m-manylinux2014_x86_64.whl is not a supported wheel on this platform.
#6 5.584 WARNING: You are using pip version 22.0.4; however, version 22.3.1 is available.
#6 5.584 You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command.
From what I see, the error is in the requirements.txt file, but I am not sure what is causing the problem.

Dockerfile cannot run a container using "docker-compose up --build" command

When I run the "docker-compose up --build" command, a "file not found" error is output and the container does not run.
The Dockerfile, docker-compose.yaml, directory layout, and result are below.
Docker version :
\server>docker --version
Docker version 20.10.14, build a224086
Dockerfile :
FROM openjdk:14-jdk-alpine3.10
RUN mkdir -p /app/workspace/config && \
    mkdir -p /app/workspace/lib && \
    mkdir -p /app/workspace/bin
WORKDIR /app/workspace
VOLUME /app/workspace
COPY ./bin ./bin
COPY ./config ./config
COPY ./lib ./lib
RUN chmod 774 /app/workspace/bin/*.sh
EXPOSE 6969
WORKDIR /app/workspace/bin
ENTRYPOINT ./startServer.sh
docker-compose.yaml:
version: '3'
services:
  server:
    container_name: cn-server
    build:
      context: ./server/
      dockerfile: Dockerfile
    ports:
      - "6969:6969"
    volumes:
      - ${SERVER_HOST_DIR}:/app/workspace
    networks:
      - backend
networks:
  backend:
    driver: bridge
directories: (screenshot of the project layout, not reproduced here)
"docker-compose up --build" command execution result :
Building server
[+] Building 3.7s (13/13) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 425B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/openjdk:14-jdk-alpine3.10 2.0s
=> [internal] load build context 0.0s
=> => transferring context: 239B 0.0s
=> CACHED [1/8] FROM docker.io/library/openjdk:14-jdk-alpine3.10@sha256:b8082268ef46d44ec70fd5a64c71d445492941813ba9d68049be6e63a0da542f 0.0s
=> [2/8] RUN mkdir -p /app/workspace/config && mkdir -p /app/workspace/lib && mkdir -p /app/workspace/bin 0.4s
=> [3/8] WORKDIR /app/workspace 0.1s
=> [4/8] COPY ./bin ./bin 0.1s
=> [5/8] COPY ./config ./config 0.1s
=> [6/8] COPY ./lib ./lib 0.1s
=> [7/8] RUN chmod 774 /app/workspace/bin/*.sh 0.5s
=> [8/8] WORKDIR /app/workspace/bin 0.1s
=> exporting to image 0.2s
=> => exporting layers 0.2s
=> => writing image sha256:984554c9d7d9b3312fbe2dc76b4c7381e93cebca3a808ca16bd9e3777d42f919 0.0s
=> => naming to docker.io/library/docker_cn-server 0.0s
Creating cn-server ... done
Attaching to cn-server
cn-server | /bin/sh: ./startServer.sh: not found
cn-server exited with code 127
Also, the bin, config, and lib directories are not created in the host volume directory, and no files appear there.
Please tell me what I did wrong.
Thank you.
There are two obvious problems here, both related to Docker volumes.
In your Dockerfile, you switch to WORKDIR /app/workspace and do some work there, but then in the Compose setup, you bind-mount a host directory over all of /app/workspace. This causes all of the work in the Dockerfile to be lost, and replaces the code in the image with unpredictable content from the host. In the docker-compose.yml file you should delete the volumes: block. You should be able to reduce what you've shown to as little as
version: '3.8'
services:
  server:
    build: ./server
    ports:
      - '6969:6969'
The second problem is in the Dockerfile itself. You declare VOLUME /app/workspace fairly early on. This is unnecessary, though, and its most visible effect is to cause later RUN commands in that directory to have no effect. So in particular your RUN chmod ... command isn't happening. Deleting the VOLUME line can help with that. (You also don't need to RUN mkdir directories you're about to COPY into the image.)
FROM openjdk:14-jdk-alpine3.10
WORKDIR /app/workspace
COPY ./bin ./bin
COPY ./config ./config
COPY ./lib ./lib
RUN chmod 0755 bin/*.sh
EXPOSE 6969
WORKDIR /app/workspace/bin
CMD ["./startServer.sh"]
There are other potential problems depending on the content of the startServer.sh file. I'd expect this to be a shell script and its first line to be a "shebang" line, #!/bin/sh. If it explicitly names GNU Bash or another shell, that won't be present in an Alpine-based image. If you're working on a Windows-based system and the file has DOS line endings, that will also cause an error.
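A quick way to check both, assuming a Unix-like shell is available on the host (e.g. Git Bash or WSL on Windows) and the script lives at server/bin/startServer.sh (a path inferred from the Compose context, which may differ in your layout):
head -n 1 server/bin/startServer.sh    # should print a shebang like: #!/bin/sh
file server/bin/startServer.sh         # "CRLF line terminators" indicates DOS line endings
sed -i 's/\r$//' server/bin/startServer.sh    # strip DOS line endings (GNU sed)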

How do you get a directory listing to output during build in Docker 20.10.3? (March 2022) [duplicate]

This question already has answers here:
Why is docker build not showing any output from commands?
(6 answers)
Closed 11 months ago.
The answers here don't seem to work. The answer here also doesn't work. I suspect something has changed about Docker's build engine since then.
My Dockerfile:
FROM node:16.14.2-alpine
WORKDIR /usr/src/app
COPY package.json yarn.lock ./
RUN yarn
COPY dist .
EXPOSE $SEEDSERV_PORT
RUN pwd
RUN echo "output"
RUN ls -alh
RUN contents="$(ls -1 /usr/src/app)" && echo $contents
# CMD ["node","server.js"]
ENTRYPOINT ["tail", "-f", "/dev/null"]
Which gives this output from build:
✗ docker build --progress auto --build-arg SEEDSERV_PORT=9999 -f build/api/Dockerfile .
[+] Building 2.1s (14/14) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 37B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/node:16.14.2-alpine 1.9s
=> [internal] load build context 0.0s
=> => transferring context: 122B 0.0s
=> [1/9] FROM docker.io/library/node:16.14.2-alpine@sha256:da7ef512955c906b6fa84a02295a56d0172b2eb57e09286ec7abc02cfbb4c726 0.0s
=> CACHED [2/9] WORKDIR /usr/src/app 0.0s
=> CACHED [3/9] COPY package.json yarn.lock ./ 0.0s
=> CACHED [4/9] RUN yarn 0.0s
=> CACHED [5/9] COPY dist . 0.0s
=> CACHED [6/9] RUN pwd 0.0s
=> CACHED [7/9] RUN echo "output" 0.0s
=> CACHED [8/9] RUN ls -alh 0.0s
=> CACHED [9/9] RUN contents="$(ls -1 /usr/src/app)" && echo $contents 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:d1dd7ac452ecacc803eed2bb1deff654c3296a5576b6f418dbd07c5f2e644f1a 0.0s
Adding --progress plain gives slightly different output but not what I'm looking for, e.g.:
#11 [7/9] RUN echo "output"
#11 sha256:634e07d201926b0f70289515fcf4a7303cac3658aeddebfa9552fc3054ed4ace
#11 CACHED
How can I get a directory listing during build in 20.10.3? I can exec into the running container but that's a lot more work.
If your build is cached, there's no output from the run to show. You need to include --no-cache to run the command again for any output to display, and also include --progress plain to output to the console.
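For example, combining both flags with the build command from the question:
docker build --no-cache --progress plain --build-arg SEEDSERV_PORT=9999 -f build/api/Dockerfile .
With the cache bypassed, the RUN pwd, RUN echo, and RUN ls -alh steps execute again, and their output appears inline under --progress plain.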

Docker build for vue project module not found error

I am trying to build a vue app docker image but I keep getting this error
Module not found: Error: Can't resolve './src/main.js' in '/app'
I am using a multi-stage Dockerfile
FROM node:15.4 as build
WORKDIR /app
COPY ./frontend/package.json .
RUN npm install
COPY . .
RUN npm run build
FROM nginx:1.19
COPY ./frontend/nginx/nginx.conf /etc/nginx/nginx.conf
COPY --from=build /app/dist /usr/share/nginx/html
Here's the script section of the package.json file
"scripts": {
"serve": "vue-cli-service serve",
"build": "vue-cli-service build",
"lint": "vue-cli-service lint"
},
This is a detailed log of the build process
[+] Building 26.9s (14/16)
=> [internal] load build definition from frontend.dockerfile 0.0s
=> => transferring dockerfile: 293B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 34B 0.0s
=> [internal] load metadata for docker.io/library/nginx:1.19 2.5s
=> [internal] load metadata for docker.io/library/node:15.4 2.4s
=> [auth] library/node:pull token for registry-1.docker.io 0.0s
=> [auth] library/nginx:pull token for registry-1.docker.io 0.0s
=> [build 1/6] FROM docker.io/library/node:15.4@sha256:a76eb778d162f8fd96138d9ca7dbd14b8916c201775a97d2f2aa22e9f13eb105 0.0s
=> [stage-1 1/3] FROM docker.io/library/nginx:1.19@sha256:df13abe416e37eb3db4722840dd479b00ba193ac6606e7902331dcea50f4f1f2 0.0s
=> [internal] load build context 8.5s
=> => transferring context: 166.03MB 8.4s
=> CACHED [build 2/6] WORKDIR /app 0.0s
=> CACHED [build 3/6] COPY ./frontend/package.json . 0.0s
=> CACHED [build 4/6] RUN npm install 0.0s
=> [build 5/6] COPY . . 7.4s
=> ERROR [build 6/6] RUN npm run build 8.3s
------
> [build 6/6] RUN npm run build:
#14 2.913
#14 2.913 > frontend#0.1.0 build
#14 2.913 > vue-cli-service build
#14 2.913
#14 4.668 All browser targets in the browserslist configuration have supported ES module.
When I move the frontend folder out of the project so it stands alone, I am able to build the image and run a container based on it successfully.
What could possibly be wrong here?
I'll try to give you some help. First of all, I assume your nginx.conf file is located in the upper nginx folder; I see two in your repo. If this is the case, you are copying one layer too deep. I suggest changing your file as follows:
FROM node:alpine as build
WORKDIR /app
COPY ./package*.json ./
RUN npm install
COPY "frontend/target/dist" ./
FROM nginx:alpine as production
COPY ./frontend/nginx/nginx.conf /etc/nginx/conf.d/nginx.conf
RUN rm -rf /usr/share/nginx/html/*
COPY --from=build /app /usr/share/nginx/html
What I changed was the folder layer you are copying from, and the asterisk (*) in package*.json. If you prefix the paths with frontend in the build stage, you also end up searching for it in the copied part, which does not exist in the nginx image. Also, as you can see, you only need /app instead of /app/dist. Additionally, your /etc/nginx config path was referenced incorrectly. Give it a shot now; it should work with the given information.
Add only these lines to your .dockerignore file:
node_modules
dist

docker-compose cannot attach container to network if full project name given

I am running Docker Desktop (version 4.0.1) on a Mac, and I am trying my hand at a Docker Compose file for a very simple REST API.
Below is my Dockerfile:
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.8
COPY . /app
WORKDIR /app
RUN pip install -r ./requirements/requirements.txt
CMD uvicorn --host 0.0.0.0 application:app --reload
And below is my docker-compose.yml file:
version: '3'
services:
  db:
    image: mongo
    ports:
      - "27017:27017"
    volumes:
      - ./server_setup.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
    container_name: server
  web_app:
    build:
      dockerfile: Dockerfile
      context: .
    image: mongo-based-fastapi-app
    depends_on:
      - db
    ports:
      - "8000:8000"
volumes:
  mongo_data:
When I am inside the project directory and run this command: docker-compose --project-directory . -f ./docker-compose.yml -p test up --build -d web_app, everything works fine.
However, when I replace the command with the full path to the project, like so: docker-compose --project-directory /Users/subhayanbhattacharya/Studies/HobbyProjects/mongo-book-store-fastapi -f /Users/subhayanbhattacharya/Studies/HobbyProjects/mongo-book-store-fastapi/docker-compose.yml -p /Users/subhayanbhattacharya/Studies/HobbyProjects/mongo-book-store-fastapi up --build -d web_app, I get the below error message:
[+] Building 2.9s (10/10) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 221B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 79B 0.0s
=> [internal] load metadata for docker.io/tiangolo/uvicorn-gunicorn-fastapi:python3.8 2.4s
=> [auth] tiangolo/uvicorn-gunicorn-fastapi:pull token for registry-1.docker.io 0.0s
=> [internal] load build context 0.2s
=> => transferring context: 265.17kB 0.2s
=> [1/4] FROM docker.io/tiangolo/uvicorn-gunicorn-fastapi:python3.8@sha256:ff437138e8a38edb8f6070c8eefdd570faf1de4dbd0df644c7632160b2b62eb2 0.0s
=> CACHED [2/4] COPY . /app 0.0s
=> CACHED [3/4] WORKDIR /app 0.0s
=> CACHED [4/4] RUN pip install -r ./requirements/requirements.txt 0.0s
=> exporting to image 0.1s
=> => exporting layers 0.0s
=> => writing image sha256:5dd081b255c320871c1211985e25f3da664a1be9e5ce4340375213f3a2840ea0 0.0s
=> => naming to docker.io/library/mongo-based-fastapi-app 0.0s
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
[+] Running 1/2
⠿ Network /users/subhayanbhattacharya/studies/hobbyprojects/mongo-book-store-fastapi_default Created 0.1s
⠙ Container server Creating 0.1s
Error response from daemon: container 3157afc2465fe3e8baf6db0aec875155ff62e3714296064794a003076e24aa93 is not connected to the network users/subhayanbhattacharya/studies/hobbyprojects/mongo-book-store-fastapi_default
Can someone please tell me where I am going wrong?
Many thanks in advance.
