I have got a docker-compose.yml file:
version: '2'
services:
  web:
    build: .
    command: npm run dev
    volumes:
      - .:/usr/app
      - /usr/app/node_modules
    ports:
      - "8080:8080"
    expose:
      - "8080"
And a Dockerfile:
FROM node:7.7.2-alpine
WORKDIR /usr/app
COPY package.json .
RUN npm install --quiet
COPY . .
Now I want to add Cypress (https://www.cypress.io/) to run tests, by installing it with:
npm install --save-dev cypress
But it doesn't seem to work, because I can't see the cypress folder.
After installing Cypress, I run
node_modules/.bin/cypress open
but the Cypress window doesn't open.
So now I don't know how to add Cypress to my Docker setup so that I can run tests against the app from my host.
If you're using docker-compose, the cleaner solution is to just use a separate, dedicated Cypress Docker container, so your docker-compose.yml becomes:
version: '2'
services:
  web:
    build: .
    entrypoint: npm run dev
    volumes:
      - .:/usr/app
      - /usr/app/node_modules
    ports:
      - "8080:8080"
  cypress:
    image: "cypress/included:3.2.0"
    depends_on:
      - web
    environment:
      - CYPRESS_baseUrl=http://web:8080
    working_dir: /e2e
    volumes:
      - ./:/e2e
The e2e directory should contain your cypress.json file and your integration/spec.js file. Your package.json file doesn't have to include Cypress at all because it's baked into the Cypress Docker image (cypress/included).
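For reference, a minimal cypress.json in that e2e directory can be nearly empty, because the compose file above already supplies the base URL through the CYPRESS_baseUrl environment variable (which overrides any baseUrl set in cypress.json); the one option below is purely illustrative:
{
  "video": false
}
In these 3.x versions Cypress looks for specs under cypress/integration/ by default, so spec.js would sit at /e2e/cypress/integration/spec.js inside the container.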
For more details, I wrote a comprehensive tutorial on using Docker Compose with Cypress:
"End-to-End Testing Web Apps: The Painless Way"
I'm running into a similar issue with a similar setup.
The way I temporarily fixed it was by manually going into the folder containing my node_modules folder and running node_modules/.bin/cypress install; from there you should be able to open it with node_modules/.bin/cypress open or $(npm bin)/cypress open.
I tried setting up a separate Cypress container in my docker-compose like so:
cypress:
  build:
    context: .
    dockerfile: docker/cypress
  depends_on:
    - node
  volumes:
    - .:/code
with the Dockerfile based on Cypress's prebuilt Docker image.
I was able to get docker-compose exec cypress node_modules/.bin/cypress verify to work, but when I try to open Cypress it just hangs.
Hope this helps OP, but hopefully someone can provide a more concrete answer that helps us run Cypress fully through Docker.
You can also use an already existing Cypress image from Docker Hub and build it in Docker Compose. Personally, I avoid adding it directly to the compose file; I create a separate Dockerfile for Cypress.
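As a sketch, wiring such a separate Dockerfile into Compose might look like this (the docker/cypress.Dockerfile path and the web service name are made up for illustration; the Dockerfile itself can be as small as a FROM cypress/included:<tag> line plus your test files):
cypress:
  build:
    context: .
    dockerfile: docker/cypress.Dockerfile
  depends_on:
    - web
  volumes:
    - ./e2e:/e2e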
Related
Situation:
Currently, I have a setup with two containers:
testdata container
e2e container
The e2e container is based on the Cypress image and runs its test suite against an external URL (on the internet), for example https://foobar.com
On that URL an app runs which makes AJAX requests to foobar.com/api
I would like to point every request to /api made by the app hosted on foobar.com to the testdata container instead.
So GET https://foobar.com/api/test needs to go to testdata:8080/api/test
Biggest question: is this even possible, given that the app itself is not hosted locally in Docker? And if so: how? Any ideas?
A curl to the testdata container works:
curl "testdata:8080/api/test"
Setup
docker-compose file:
testdata:
  image: wiremock/wiremock
  restart: always
  command: [--enable-stub-cors]
  container_name: wiremock-server
  volumes:
    - ./testData/mappings:/home/wiremock/mappings
    - ./testData/__files:/home/wiremock/__files
  ports:
    - 8080:8080
e2e:
  image: cypress
  build: ./e2e
  container_name: cypress
  depends_on:
    - testdata
  command: npx cypress run --no-exit
  volumes:
    - ./e2e/cypress:/app/cypress
    - ./e2e/cypress.config.js:/app/cypress.config.js
e2e Dockerfile:
FROM cypress/base:14
#FROM cypress/browsers:node18.6.0-chrome105-ff104
WORKDIR /app
COPY package.json .
COPY package-lock.json .
ENV CI=1
RUN npm ci
RUN npx cypress verify
testData Dockerfile:
FROM wiremock/wiremock
I've got a simple Node / React project. I'm trying to use Docker to create two containers, one for the server, and one for the client, each with their own Dockerfile in the appropriate directory.
docker-compose.yml
version: '3.9'
services:
  client:
    image: node:14.15-buster
    build:
      context: ./src
      dockerfile: Dockerfile.client
    ports:
      - '3000:3000'
      - '45799:45799'
    volumes:
      - .:/app
    tty: true
  server:
    image: node:14.15-buster
    build:
      context: ./server
      dockerfile: Dockerfile.server
    ports:
      - '3001:3001'
    volumes:
      - .:/app
    depends_on:
      - redis
    links:
      - redis
    tty: true
  redis:
    container_name: redis
    image: redis
    ports:
      - '6379'
src/Dockerfile.client
FROM node:14.15-buster
# also the directory you land in on ssh
WORKDIR /app
CMD cd /app && \
yarn && \
yarn start:client
server/Dockerfile.server
FROM node:14.15-buster
# also the directory you land in on ssh
WORKDIR /app
CMD cd /app && \
yarn && \
yarn start:server
After building and starting the containers, both containers run the same command, seemingly at random. Either both run yarn start:server or yarn start:client. The logs clearly detail duplicate startup commands and ports being used. Requests to either port 3000 (client) or 3001 (server) confirm that the same one is being used in both containers. If I change the command in both Dockerfiles to echo the respective filename (Dockerfile.server! or Dockerfile.client!), startup reveals only one Dockerfile being used for both containers. I am also running the latest version of Docker on Mac.
What is causing docker-compose to use the same Dockerfile for both containers?
After a lengthy and painful bout of troubleshooting, I narrowed the issue down to duplicate image references. image: node:14.15-buster for each service in docker-compose.yml and FROM node:14.15-buster in each Dockerfile.
Why this would cause this behavior is unclear, but after removing the image references in docker-compose.yml and rebuilding / restarting, everything works as expected.
When you run docker-compose build with both image and build properties set on a service, it will build an image according to the build property and then tag the image according to the image property.
In your case, you have two services building different images and tagging them with the same tag node:14.15-buster. One will overwrite the other.
This probably has the additional unintended consequence of causing your next image to be built on top of the previously built image instead of the true node:14.15-buster.
Then when you start the service, both containers will use the image tagged node:14.15-buster.
From the docs:
If you specify image as well as build, then Compose names the built image with the webapp and optional tag specified in image
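Concretely, a fix along those lines is either to drop image: from both services or to give each built image a distinct tag (the myapp-* names below are made up for illustration):
services:
  client:
    image: myapp-client:dev   # unique tag per service, or omit image: entirely
    build:
      context: ./src
      dockerfile: Dockerfile.client
  server:
    image: myapp-server:dev
    build:
      context: ./server
      dockerfile: Dockerfile.server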
From my understanding, a Dockerfile is like the config/recipe for creating an image, while docker-compose is used to easily create multiple containers that may be related to each other, and to avoid repeatedly creating the containers with individual docker commands.
There are two files.
Dockerfile
FROM node:lts-alpine
WORKDIR /server
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3030
CMD ["npm", "run", "dev"]
docker-compose.yml
version: '2.1'
services:
  test-db:
    image: mysql:5.7
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=true
      - MYSQL_USER=admin
      - MYSQL_PASSWORD=12345
      - MYSQL_DATABASE=test-db
    volumes:
      - ./db-data:/var/lib/mysql
    ports:
      - 3306:3306
  test-web:
    environment:
      - NODE_ENV=local
      #- DEBUG=*
      - PORT=3030
    image: node:lts-alpine
    build: ./
    command: >
      npm run dev
    volumes:
      - ./:/server
    ports:
      - "3030:3030"
    depends_on:
      - test-db
Question 1
When I run docker-compose up --build
a. The image will be built based on the Dockerfile
b. What happens then?
Question 2
test-db:
  image: mysql:5.7
test-web:
  environment:
    - NODE_ENV=local
    #- DEBUG=*
    - PORT=3030
  image: node:lts-alpine
I am downloading the image from Docker Hub with the above code, but why and when do I need the image created by --build?
Question 3
volumes:
  - ./db-data:/var/lib/mysql
Does this line mean that the data is stored inside the container at /var/lib/mysql, while a copy of that data is kept at ./db-data in my working directory?
Update
Question 4
build: ./
What is this line for?
It is recommended to go through the Getting Started guide; it would answer most of your questions.
Let me try to highlight some of those answers for you.
The difference between Dockerfile and Compose file
Docker can build images automatically by reading the instructions from a Dockerfile
Compose is a tool for defining and running multi-container Docker applications
The main difference is that a Dockerfile is used to build an image, while Compose is used to build and run an application.
You build an image with a Dockerfile, then run it with Compose.
After you run docker-compose up --build, the image is built and cached on your system; Compose then starts the containers defined by docker-compose.yml.
If you specify image, the image is downloaded; if you specify build: ./, the image is built locally.
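To make that concrete with the two services from this question:
services:
  test-db:
    image: mysql:5.7   # no build: here, so the image is pulled from Docker Hub
  test-web:
    build: ./          # built locally from ./Dockerfile
    # when image: and build: are both present, the locally built
    # image is named with the tag given under image: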
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. Images are read-only, and any edits made inside a container are destroyed when the container is deleted, so you have to use volumes if you want to persist data.
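For example, to persist the MySQL data from this question in a named volume instead of a bind mount (a sketch; the volume name is arbitrary):
services:
  test-db:
    image: mysql:5.7
    volumes:
      - db-data:/var/lib/mysql   # named volume managed by Docker, survives container removal
volumes:
  db-data: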
Remember, Doc is always your friend.
I need to configure docker-compose.yml in a way that will invalidate the local image's docker cache, based on a certain file's checksum.
If it's not possible, I'd like to be able to somehow version the docker-compose.yml or Dockerfile, so that it would rebuild the Docker image of a specific service. I want to avoid pushing images to Docker Hub, unless it's absolutely the only solution.
At all costs, I want to avoid bash scripts and, in general, writing imperative logic. I'm also not interested in CLI solutions, like passing additional flags to the docker-compose up command.
Context:
We use docker-compose during the development of our application.
Our app also has a Dockerfile for building it locally. We don't push Docker images to Docker Hub; we just have the Dockerfile locally, and in docker-compose.yml we declare the source code and package.json (the file Node.js applications use to declare dependencies) as volumes. Now sometimes we modify package.json, and docker-compose up throws an error, because the image is already built locally and the previous build doesn't contain the new dependencies. I'd like to be able to tell docker-compose.yml to automatically build a new image if there have been any changes to the package.json file, since we pull dependencies during the build stage.
docker-compose.yml
version: "3.8"
services:
  web:
    build:
      context: .
    ports:
      - "8000:8000"
    command: npx nodemon -L app.js
    volumes:
      - ./app:/usr/src/app
      - /usr/src/app/node_modules
    env_file:
      - .env
    depends_on:
      - mongo
  mongo:
    image: mongo:latest
    container_name: mongo_db
    volumes:
      - ./config/init.sh:/docker-entrypoint-initdb.d/init.sh
      - ./config/mongod.conf:/etc/mongod.conf
      - ./logs:/var/log/mongodb/
      - ./db:/data/db
    env_file:
      - .env
    ports:
      - "27017:27017"
    restart: on-failure:5
    command: ["mongod", "-f", "/etc/mongod.conf"]
volumes:
  db-data:
  mongo-config:
Dockerfile:
FROM node:14.15.1
RUN mkdir -p /usr/src/app
# Create app directory
WORKDIR /usr/src/app
# Copy app dependency manifest
COPY package.json /usr/src/app
# Install dependencies
RUN npm install
EXPOSE 8000
# app.js lives in the WORKDIR, /usr/src/app
CMD ["node", "app.js"]
I currently have three docker containers running:
Docker container for the front-end web app (exposed on port 8080)
Docker container for the back-end server (exposed on port 5000)
Docker container for my MongoDB database.
All three containers are working perfectly and when I visit http://localhost:8080, I can interact with my web application with no issues.
I'm trying to set up a fourth Cypress container that will run my end-to-end tests for my app. Unfortunately, this Cypress container throws the error below when it attempts to run my Cypress tests:
cypress | Cypress could not verify that this server is running:
cypress |
cypress | > http://localhost:8080
cypress |
cypress | We are verifying this server because it has been configured as your `baseUrl`.
cypress |
cypress | Cypress automatically waits until your server is accessible before running tests.
cypress |
cypress | We will try connecting to it 3 more times...
cypress | We will try connecting to it 2 more times...
cypress | We will try connecting to it 1 more time...
cypress |
cypress | Cypress failed to verify that your server is running.
cypress |
cypress | Please start this server and then run Cypress again.
First potential issue (which I've fixed)
The first potential issue is described by this SO post, which is that when Cypress starts, my application is not ready to start responding to requests. However, in my Cypress Dockerfile, I'm currently sleeping for 10 seconds before I run my cypress command as shown below. These 10 seconds are more than adequate since I'm able to access my web app from the web browser before the npm run cypress-run-chrome command executes. I understand that the Cypress documentation has some fancier solutions for waiting on http://localhost:8080 but for now, I know for sure that my app is ready for Cypress to start executing tests.
ENTRYPOINT sleep 10; npm run cypress-run-chrome
Second potential issue (which I've fixed)
The second potential issue is described by this SO post, which is that the Docker container's /etc/hosts file does not contain the following line. I've also rectified that issue and it doesn't seem to be the problem.
127.0.0.1 localhost
Does anyone know why my Cypress Docker container can't seem to connect to my web app that I can reach from my web browser on http://localhost:8080?
Below is my Dockerfile for my Cypress container
As mentioned by the Cypress documentation about Docker, the cypress/included image already has an existing entrypoint. Since I want to sleep for 10 seconds before running my own Cypress command specified in my package.json file, I've overridden ENTRYPOINT in my Dockerfile as shown below.
FROM cypress/included:3.4.1
COPY hosts /etc/
WORKDIR /e2e
COPY package*.json ./
RUN npm install --production
COPY . .
ENTRYPOINT sleep 10; npm run cypress-run-chrome
Below is the command within my package.json file that corresponds to npm run cypress-run-chrome.
"cypress-run-chrome": "NODE_ENV=test $(npm bin)/cypress run --config video=false --browser chrome",
Below is my docker-compose.yml file that coordinates all 4 containers.
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: ./docker/web/Dockerfile
    container_name: web
    restart: unless-stopped
    ports:
      - "8080:8080"
    volumes:
      - .:/home/node/app
      - node_modules:/home/node/app/node_modules
    depends_on:
      - server
    environment:
      - NODE_ENV=testing
    networks:
      - app-network
  db:
    build:
      context: .
      dockerfile: ./docker/db/Dockerfile
    container_name: db
    restart: unless-stopped
    volumes:
      - dbdata:/data/db
    ports:
      - "27017:27017"
    networks:
      - app-network
  server:
    build:
      context: .
      dockerfile: ./docker/server/Dockerfile
    container_name: server
    restart: unless-stopped
    ports:
      - "5000:5000"
    volumes:
      - .:/home/node/app
      - node_modules:/home/node/app/node_modules
    networks:
      - app-network
    depends_on:
      - db
    command: ./wait-for.sh db:27017 -- nodemon -L server.js
  cypress:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: cypress
    restart: unless-stopped
    volumes:
      - .:/e2e
    depends_on:
      - web
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
volumes:
  dbdata:
  node_modules:
Below is what my hosts file looks like which is copied into the Cypress Docker container.
127.0.0.1 localhost
Below is what my cypress.json file looks like.
{
  "baseUrl": "http://localhost:8080",
  "integrationFolder": "cypress/integration",
  "fileServerFolder": "dist",
  "viewportWidth": 1200,
  "viewportHeight": 1000,
  "chromeWebSecurity": false,
  "projectId": "3orb3g"
}
localhost in Docker is always "this container". Use the names of the service blocks in the docker-compose.yml as hostnames, i.e., http://web:8080
(Note that I copied David Maze's answer from the comments)
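Applied to the compose file above, the cypress service can pass that hostname the same way the first answer in this thread does, overriding the baseUrl from cypress.json (a sketch):
cypress:
  build:
    context: .
    dockerfile: Dockerfile
  depends_on:
    - web
  environment:
    # overrides "baseUrl": "http://localhost:8080" from cypress.json
    - CYPRESS_baseUrl=http://web:8080
  networks:
    - app-network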