I've noticed this is a fairly common issue when working with containerized Cypress. I found one related topic here, but resetting settings isn't really a solution; it may help in some cases, but not in mine.
I'm using docker-compose to manage the build of my containers:
...
other services
...
nginx:
  build:
    context: ./services/nginx
    dockerfile: Dockerfile
  restart: always
  ports:
    - 80:80
  depends_on:
    - users
    - client
cypress:
  build:
    context: ./services/cypress
    dockerfile: Dockerfile
  depends_on:
    - nginx
here's my cypress.json:
{
  "baseUrl": "http://172.17.0.1",
  "video": false
}
I know it's recommended to refer to the service directly, e.g. "http://nginx", but that never worked for me, while referring to it by IP did work when I used non-containerized Cypress. Now I'm running Cypress in a container to keep it consistent with all the other services, but Cypress is giving me a hard time. I'm not including volumes because so far I haven't seen a reason for them; I don't need to persist any data at this point.
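For reference, a minimal sketch of the service-name variant mentioned above (assuming Cypress and nginx share the default Compose network):
{
  "baseUrl": "http://nginx",
  "video": false
}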
Dockerfile:
FROM cypress/base:10.18.0
RUN mkdir /app
WORKDIR /app
ADD cypress_dev /app
RUN npm i --save-dev cypress@4.9.0
RUN $(npm bin)/cypress verify
RUN ["npm", "run", "cypress:e2e"]
package.json:
{
  "scripts": {
    "cypress:e2e": "cypress run"
  }
}
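A hedged variant of the Dockerfile above, deferring the test run from image build time to container start time (while an image is being built, the other Compose services are not running, so a build-time cypress run cannot reach nginx):
FROM cypress/base:10.18.0
WORKDIR /app
COPY cypress_dev /app
RUN npm i --save-dev cypress@4.9.0
RUN $(npm bin)/cypress verify
# run the tests when the container starts (docker-compose up), not while the image builds
CMD ["npm", "run", "cypress:e2e"]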
I'd very much appreciate any guidance. Please do ask for more info if I haven't provided enough.
Related
I am currently writing a web app (Java backend, React front end) and have been deploying it via a docker compose file. I've made changes, and when I run them directly, via yarn build for my front end and starting my back end server with Maven, the changes appear. However, when running with Docker, the changes aren't there.
I've been using the docker compose up and docker compose down commands, and I even run docker system prune -a after stopping my containers via docker compose down, but my new changes still aren't showing. I'd appreciate any guidance on what I'm doing wrong.
I also have Docker Desktop and have manually deleted all of the volumes, containers and images so that they have to be regenerated. Running the build commands with the cache disabled didn't help either.
I also deleted the .m2 folder so that it gets regenerated (my understanding is that this is the cache store for the backend). My changes are mainly on the front end, but since my front end container depends on the back end, I thought regenerating the back-end container might have a knock-on effect that would help.
I would greatly appreciate any help; please do let me know if there's anything else that would help with context. The changes involve removing a search bar and some text, both of which are commented out in the code but still appear, while a button I added doesn't show up.
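For reference, the rebuild cycle described above corresponds roughly to these commands (a sketch; the exact flags used may have differed):
docker compose down
docker system prune -a            # removes stopped containers, networks and unused images
docker compose build --no-cache   # rebuild the images without the layer cache
docker compose up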
My docker compose file is below as follows:
services:
  mysqldb:
    # image: mysql:5.7
    build: ./Database
    restart: unless-stopped
    env_file: ./.env
    environment:
      - MYSQL_ROOT_PASSWORD=$MYSQLDB_ROOT_PASSWORD
      - MYSQL_DATABASE=$MYSQLDB_DATABASE
    ports:
      - $MYSQLDB_LOCAL_PORT:$MYSQLDB_DOCKER_PORT
    volumes:
      - db:/var/lib/mysql
    networks:
      - backend
  app_backend:
    depends_on:
      - mysqldb
    build: ./
    restart: on-failure
    env_file: ./.env
    ports:
      - $SPRING_LOCAL_PORT:$SPRING_DOCKER_PORT
    environment:
      SPRING_APPLICATION_JSON: '{
        "spring.datasource.url" : "jdbc:mysql://mysqldb:$MYSQLDB_DOCKER_PORT/$MYSQLDB_DATABASE?useSSL=false&allowPublicKeyRetrieval=true&serverTimezone=UTC",
        "spring.datasource.username" : "$MYSQLDB_USER",
        "spring.datasource.password" : "$MYSQLDB_ROOT_PASSWORD",
        "spring.jpa.properties.hibernate.dialect" : "org.hibernate.dialect.MySQL5InnoDBDialect",
        "spring.jpa.hibernate.ddl-auto" : "update"
      }'
    volumes:
      - .m2:/root/.m2
    stdin_open: true
    tty: true
    networks:
      - backend
      - frontend
  app_frontend:
    depends_on:
      - app_backend
    build:
      ../MyProjectFrontEnd
    restart: on-failure
    ports:
      - 80:80
    networks:
      - frontend
volumes:
  db:
networks:
  backend:
  frontend:
Since the issue is on the front end, I've also attached the dockerfile for the front end below:
FROM node:16.13.0-alpine AS react-build
WORKDIR /MyProjectFrontEnd
RUN yarn cache clean
RUN yarn install
COPY . ./
RUN yarn
RUN yarn build
# Stage 2 - the production environment
FROM nginx:alpine
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY /build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
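One detail worth flagging in this Dockerfile (an observation, not a confirmed fix): the second stage copies /build from the build context rather than from the react-build stage, so the freshly built bundle may never reach the nginx image. A hedged sketch of the multi-stage copy:
FROM node:16.13.0-alpine AS react-build
WORKDIR /MyProjectFrontEnd
# copy the sources first so that yarn install sees package.json
COPY . ./
RUN yarn install
RUN yarn build

# Stage 2 - the production environment
FROM nginx:alpine
COPY nginx.conf /etc/nginx/conf.d/default.conf
# copy the bundle produced in the first stage, not a path from the build context
COPY --from=react-build /MyProjectFrontEnd/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]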
Update: the browser cache was holding on to some of the old assets (rookie error); however, even after clearing it, not all of the changes are being loaded.
If your source code is in the same folder as your Dockerfile (as is usual), you can be confident that your latest source code will be built and deployed. This is one of the cornerstones of Docker; if it were failing, it would be a very serious bug.
Errors like this are generally not related to the Docker core. Usually it is something at the application level and/or in its development workflow:
a library mistake
a developer mistake
a functional test mistake
a load balancer mistake
Advice
docker-compose and Windows are for the development stage. For deployment to real environments for real users, you should use Linux and a tool such as Kubernetes.
I am very new to Docker (apologies in advance for errors in my terminology / gaps in my knowledge) and have three services, where one depends on another being up before it can be built.
The repo is set up with both of them as submodules, composed by the docker-compose file below.
version: "3"
services:
  db:
    image: postgres:12.3
    restart: always
    volumes:
      - db_data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: ${DB_NAME}
      POSTGRES_INITDB_ARGS: "--data-checksums"
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
  actions:
    build:
      context: ./action-handlers
      dockerfile: Dockerfile.stg
    depends_on:
      - hasura
    environment:
      HASURA_GRAPHQL_ENDPOINT: http://hasura:8080/v1/graphql
      HASURA_GRAPHQL_ADMIN_SECRET: my-super-secret-password
      ENVIRONMENT: ${ENVIRONMENT}
      NODE_ENV: ${NODE_ENV}
      PORT: 5000
  hasura:
    ports:
      - 8080:8080
      - 9691:9691
    build:
      context: ./hasura
      dockerfile: .docker/Dockerfile.stg
    depends_on:
      - db
    environment:
      ACTION_BASE_URL: http://actions:5000
      HASURA_GRAPHQL_ACTIONS_HANDLER_WEBHOOK_BASEURL: http://actions:5000
      HASURA_GRAPHQL_ADMIN_SECRET: my-super-secret-password
      HASURA_GRAPHQL_CONTAINER_HOST_PORT: 8080
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true"
      HASURA_GRAPHQL_UNAUTHORIZED_ROLE: "public"
      DB_NAME: $DB_NAME
      HASURA_GRAPHQL_DATABASE_URL: "postgres://postgres:postgres@db:5432/$DB_NAME"
volumes:
  db_data:
The actions are an extension of Hasura that require Hasura to be up and running before they can be properly set up. Here is what the Dockerfile looks like:
FROM node
WORKDIR /app
COPY package*.json .
RUN npm install
COPY . .
# shell form so that the && chain is interpreted by a shell
CMD npm run graphql && npm run start
The graphql script downloads the GraphQL schema from Hasura using graphql-codegen.
Is it possible to orchestrate Docker to wait for the Hasura instance to be ready before building the actions? Or do I need a bash script, and if so, what would it look like and how would it be run? What I am looking for is a solution where npm run graphql is rerun continuously until it is able to download the GraphQL schema from Hasura, and then npm run start runs.
I am a little out of my depth, so any insights or tips are appreciated. I have tried storing the GraphQL schema locally (so I don't need to wait for Hasura to be ready to get it), but this doesn't work in practice because I need Hasura and the actions to be in sync (hence getting the schema from Hasura at build time). I have also reached out to the team behind graphql-codegen, and they mention there are no CLI flags or config options that allow their code to keep retrying the schema download until it is ready.
It's hacky, but you could revert your compose file's version to 2.1 and then use depends_on with a condition (dropped in the v3 file format) together with a healthcheck that can run anything from an SQL command to a curl command to prove the readiness of the dependent container.
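A minimal sketch of what that could look like (assuming Hasura's /healthz endpoint and that curl is available in the Hasura image; adjust to your own images):
version: "2.1"
services:
  hasura:
    build:
      context: ./hasura
      dockerfile: .docker/Dockerfile.stg
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/healthz"]
      interval: 10s
      timeout: 5s
      retries: 10
  actions:
    build:
      context: ./action-handlers
      dockerfile: Dockerfile.stg
    depends_on:
      hasura:
        condition: service_healthy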
I was able to solve this by using the accepted solution for this question: Keep retrying yarn script until it passes.
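The gist of that approach, as a sketch (script names follow the compose file above; the retry interval is arbitrary):
# Dockerfile CMD in shell form: keep retrying the schema download, then start the handlers
CMD until npm run graphql; do \
      echo "hasura not ready yet, retrying in 5s..."; \
      sleep 5; \
    done && npm run start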
I've got a simple Node / React project. I'm trying to use Docker to create two containers, one for the server, and one for the client, each with their own Dockerfile in the appropriate directory.
docker-compose.yml
version: '3.9'
services:
  client:
    image: node:14.15-buster
    build:
      context: ./src
      dockerfile: Dockerfile.client
    ports:
      - '3000:3000'
      - '45799:45799'
    volumes:
      - .:/app
    tty: true
  server:
    image: node:14.15-buster
    build:
      context: ./server
      dockerfile: Dockerfile.server
    ports:
      - '3001:3001'
    volumes:
      - .:/app
    depends_on:
      - redis
    links:
      - redis
    tty: true
  redis:
    container_name: redis
    image: redis
    ports:
      - '6379'
src/Dockerfile.client
FROM node:14.15-buster
# also the directory you land in on ssh
WORKDIR /app
CMD cd /app && \
yarn && \
yarn start:client
server/Dockerfile.server
FROM node:14.15-buster
# also the directory you land in on ssh
WORKDIR /app
CMD cd /app && \
yarn && \
yarn start:server
After building and starting the containers, both containers run the same command, seemingly at random. Either both run yarn start:server or yarn start:client. The logs clearly detail duplicate startup commands and ports being used. Requests to either port 3000 (client) or 3001 (server) confirm that the same one is being used in both containers. If I change the command in both Dockerfiles to echo the respective filename (Dockerfile.server! or Dockerfile.client!), startup reveals only one Dockerfile being used for both containers. I am also running the latest version of Docker on Mac.
What is causing docker-compose to use the same Dockerfile for both containers?
After a lengthy and painful bout of troubleshooting, I narrowed the issue down to duplicate image references. image: node:14.15-buster for each service in docker-compose.yml and FROM node:14.15-buster in each Dockerfile.
Why this would cause this behavior is unclear, but after removing the image references in docker-compose.yml and rebuilding / restarting, everything works as expected.
When you run docker-compose build with both image and build properties set on a service, it will build an image according to the build property and then tag the image according to the image property.
In your case, you have two services building different images and tagging them with the same tag node:14.15-buster. One will overwrite the other.
This probably has the additional unintended consequence of causing your next image to be built on top of the previously built image instead of the true node:14.15-buster.
Then when you start the service, both containers will use the image tagged node:14.15-buster.
From the docs:
If you specify image as well as build, then Compose names the built image with the webapp and optional tag specified in image
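A sketch of one way to avoid the clash (the tag names here are illustrative, not from the original project): either drop the image: lines entirely, or give each service its own tag:
services:
  client:
    image: myproject-client:dev
    build:
      context: ./src
      dockerfile: Dockerfile.client
  server:
    image: myproject-server:dev
    build:
      context: ./server
      dockerfile: Dockerfile.server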
I have a little Vue.js app running in Docker.
When I run the app via yarn serve it runs fine, and it also does in Docker.
My problem is that hot reloading does not work.
My Dockerfile:
FROM node:12.2.0-alpine
WORKDIR /app
COPY package.json /app/package.json
RUN npm install
RUN npm install @vue/cli -g
CMD ["npm", "run", "serve"]
My docker-compose.yml:
version: '3.7'
services:
  client:
    container_name: client
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - '.:/app'
      - '/app/node_modules'
    ports:
      - '8082:8080'
Can anyone see the mistake I made?
I found a solution:
I added the following to my compose file:
environment:
  - CHOKIDAR_USEPOLLING=true
What has worked for me in the past is to use this in the docker-compose.yml file:
frontend:
  build:
    context: .
    dockerfile: vuejs.Dockerfile
  # command to start the development server
  command: npm run serve
  # ------------------ #
  volumes:
    - ./frontend:/app
    - /app/node_modules # <---- this enables a much faster start/reload
  ports:
    - "8080:8080"
  environment:
    - CHOKIDAR_USEPOLLING=true # <---- this enables the hot reloading
Also expose port 8080:
FROM node:12.2.0-alpine
# add this line in the Dockerfile
EXPOSE 8080
WORKDIR /app
COPY package.json /app/package.json
RUN npm install
RUN npm install @vue/cli -g
CMD ["npm", "run", "serve"]
And the docker-compose file:
version: '3.7'
services:
  client:
    container_name: client
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - '.:/app'
      - '/app/node_modules'
    ports:
      - '8080:8080'
The dev server will then be reachable at localhost:8080.
One of the answers above suggests setting an environment variable for the chokidar polling. According to this issue you can set the polling options to true in vue.config.js.
module.exports = {
  configureWebpack: {
    devServer: {
      port: 3000,
      // https://github.com/vuejs-templates/webpack/issues/378
      watchOptions: {
        poll: true,
      },
    },
  }
};
Additionally, make sure that the volume you are mounting is correct as per your working dir, etc. to ensure that the files are watched correctly.
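For instance, a hedged pairing of the mounted volume with the container's working directory (paths are illustrative):
services:
  client:
    build: .
    volumes:
      - .:/app            # must match the WORKDIR used in the Dockerfile
      - /app/node_modules # keeps the container's own node_modules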
For me the problem was working on Windows + Docker Desktop. After switching to WSL 2 + Docker Desktop, hot reloading worked again without any additional work or variables.
I have got a docker-compose.yml file:
version: '2'
services:
  web:
    build: .
    command: npm run dev
    volumes:
      - .:/usr/app
      - /usr/app/node_modules
    ports:
      - "8080:8080"
    expose:
      - "8080"
And a Dockerfile
FROM node:7.7.2-alpine
WORKDIR /usr/app
COPY package.json .
RUN npm install --quiet
COPY . .
Now I want to add Cypress (https://www.cypress.io/) to run tests, by running:
npm install --save-dev cypress
But it doesn't seem to work; I can't see the cypress folder.
After installing Cypress, I run
node_modules/.bin/cypress open
but Cypress does not open.
So now I don't know how to add Cypress to my Docker setup so that I can run tests with Cypress on my host.
If you're using docker-compose, the cleaner solution is to just use a separate, dedicated Cypress Docker container, so your docker-compose.yml becomes:
version: '2'
services:
  web:
    build: .
    entrypoint: npm run dev
    volumes:
      - .:/usr/app
      - /usr/app/node_modules
    ports:
      - "8080:8080"
  cypress:
    image: "cypress/included:3.2.0"
    depends_on:
      - web
    environment:
      - CYPRESS_baseUrl=http://web:8080
    working_dir: /e2e
    volumes:
      - ./:/e2e
The e2e directory should contain your cypress.json file and your integration/spec.js file. Your package.json file doesn't have to include Cypress at all because it's baked into the Cypress Docker image (cypress/included).
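A minimal sketch of that e2e layout (contents are illustrative; the CYPRESS_baseUrl variable from the compose file overrides any baseUrl set in cypress.json):
e2e/
├── cypress.json              # can even be just {}
└── cypress/
    └── integration/
        └── spec.js
// cypress/integration/spec.js - a trivial smoke test
describe('smoke test', () => {
  it('loads the home page', () => {
    cy.visit('/'); // resolves against CYPRESS_baseUrl (http://web:8080)
    cy.get('body').should('be.visible');
  });
});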
For more details, I wrote a comprehensive tutorial on using Docker Compose with Cypress:
"End-to-End Testing Web Apps: The Painless Way"
I'm running into a similar issue with a similar setup.
The way I temporarily fixed it was by manually going into the folder containing my node_modules folder and running node_modules/.bin/install; from there you should be able to open it with node_modules/.bin/open or $(npm bin)/cypress open.
I tried setting up a separate Cypress container in my docker-compose like this:
cypress:
  build:
    context: .
    dockerfile: docker/cypress
  depends_on:
    - node
  volumes:
    - .:/code
with the Dockerfile based on Cypress's prebuilt Docker image.
I was able to get docker-compose exec cypress node_modules/.bin/cypress verify to work, but when I try to open Cypress it just hangs.
Hope this helps OP, but hopefully someone can provide a more concrete answer that will help us run Cypress fully through Docker.
You can also use an existing Cypress image from Docker Hub and build it in docker compose. Personally I avoid adding it directly in compose; I create a separate Dockerfile for Cypress.
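A minimal sketch of that separate-Dockerfile approach (the image tag and paths are illustrative):
# services/cypress/Dockerfile (path is illustrative)
FROM cypress/included:3.2.0
WORKDIR /e2e
# copy the tests and the Cypress config into the image
COPY cypress.json .
COPY cypress ./cypress
# cypress/included already runs "cypress run" as its entrypoint, so no CMD is needed;
# the baseUrl can still be overridden with the CYPRESS_baseUrl environment variable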