Situation:
Currently, I have a setup with two containers:
testdata container
e2e container
The e2e container is based on the Cypress image and runs its test suite against an external URL (on the internet), for example https://foobar.com.
On that URL an app runs which makes some AJAX requests to foobar.com/api.
I would like to point every request to /api made by the app hosted on foobar.com to the testdata container.
So GET https://foobar.com/api/test needs to go to testdata:8080/api/test.
Biggest question: is this even possible, given that the app itself is not hosted locally in Docker? And if so: how? Any ideas?
A curl from the testData container works:
curl "testdata:8080/api/test"
Setup
docker-compose.yml
testdata:
  image: wiremock/wiremock
  restart: always
  command: [--enable-stub-cors]
  container_name: wiremock-server
  volumes:
    - ./testData/mappings:/home/wiremock/mappings
    - ./testData/__files:/home/wiremock/__files
  ports:
    - 8080:8080
e2e:
  image: cypress
  build: ./e2e
  container_name: cypress
  depends_on:
    - testdata
  command: npx cypress run --no-exit
  volumes:
    - ./e2e/cypress:/app/cypress
    - ./e2e/cypress.config.js:/app/cypress.config.js
e2e Dockerfile
FROM cypress/base:14
#FROM cypress/browsers:node18.6.0-chrome105-ff104
WORKDIR /app
COPY package.json .
COPY package-lock.json .
ENV CI=1
RUN npm ci
RUN npx cypress verify
testData Dockerfile
FROM wiremock/wiremock
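For reference, the stub behind that curl could be defined with a WireMock mapping like the following under ./testData/mappings (the file contents here are only an illustrative assumption, not the actual mapping used in this setup):
{
  "request": {
    "method": "GET",
    "url": "/api/test"
  },
  "response": {
    "status": 200,
    "headers": { "Content-Type": "application/json" },
    "jsonBody": { "message": "stubbed by wiremock" }
  }
}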
Related
I have a very basic Node/Express app with a Dockerfile and a docker-compose file. When I run the Docker container using
docker run -p 3000:3000 service:0.0.1 npm run dev
I can go to localhost:3000 and see my service. However, when I do:
docker-compose run server npm run dev
I can't see anything on localhost:3000. Below are my files:
Dockerfile
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 3000
docker-compose.yml
version: "3.7"
services:
  server:
    build: .
    ports:
      - "3000:3000"
    image: service:0.0.1
    environment:
      - LOGLEVEL=debug
    depends_on:
      - db
  db:
    container_name: "website_service__db"
    image: postgres
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=website_service
Also, everything is working fine from the terminal/Docker side: no errors, and the services are running fine. I just can't access the Node endpoints.
tl;dr
docker-compose run --service-ports server npm run dev
// the part that changed is the new '--service-ports' argument
The issue was a missing docker-compose run argument: --service-ports.
From these docs:
The second difference is that the docker-compose run command does not create any of the ports specified in the service configuration. This prevents port collisions with already-open ports. If you do want the service’s ports to be created and mapped to the host, specify the --service-ports flag:
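A quick sketch of the difference (the service name and command are taken from the compose file and question above):
# without the flag: the container runs, but the 3000:3000 mapping from
# docker-compose.yml is not published, so localhost:3000 stays unreachable
docker-compose run server npm run dev

# with the flag: the service's ports are created and mapped to the host
docker-compose run --service-ports server npm run dev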
I currently have three docker containers running:
Docker container for the front-end web app (exposed on port 8080)
Docker container for the back-end server (exposed on port 5000)
Docker container for my MongoDB database.
All three containers are working perfectly and when I visit http://localhost:8080, I can interact with my web application with no issues.
I'm trying to set up a fourth Cypress container that will run my end-to-end tests for my app. Unfortunately, this Cypress container throws the error below when it attempts to run my Cypress tests:
cypress | Cypress could not verify that this server is running:
cypress |
cypress | > http://localhost:8080
cypress |
cypress | We are verifying this server because it has been configured as your `baseUrl`.
cypress |
cypress | Cypress automatically waits until your server is accessible before running tests.
cypress |
cypress | We will try connecting to it 3 more times...
cypress | We will try connecting to it 2 more times...
cypress | We will try connecting to it 1 more time...
cypress |
cypress | Cypress failed to verify that your server is running.
cypress |
cypress | Please start this server and then run Cypress again.
First potential issue (which I've fixed)
The first potential issue is described by this SO post, which is that when Cypress starts, my application is not ready to start responding to requests. However, in my Cypress Dockerfile, I'm currently sleeping for 10 seconds before I run my cypress command as shown below. These 10 seconds are more than adequate since I'm able to access my web app from the web browser before the npm run cypress-run-chrome command executes. I understand that the Cypress documentation has some fancier solutions for waiting on http://localhost:8080 but for now, I know for sure that my app is ready for Cypress to start executing tests.
ENTRYPOINT sleep 10; npm run cypress-run-chrome
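(For reference, one of those fancier options is to wait for the URL explicitly instead of sleeping a fixed time; a rough sketch using the wait-on npm package, which is an assumption and would need to be installed in the image. Note the URL must be one the container can actually reach, which is exactly what the answer further down addresses.)
# wait until the app answers, then run the tests (instead of a fixed 10s sleep)
ENTRYPOINT npx wait-on http://localhost:8080 && npm run cypress-run-chrome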
Second potential issue (which I've fixed)
The second potential issue is described by this SO post, which is that the Docker container's /etc/hosts file does not contain the following line. I've also rectified that issue and it doesn't seem to be the problem.
127.0.0.1 localhost
Does anyone know why my Cypress Docker container can't seem to connect to my web app that I can reach from my web browser on http://localhost:8080?
Below is my Dockerfile for my Cypress container
As mentioned by the Cypress documentation about Docker, the cypress/included image already has an existing entrypoint. Since I want to sleep for 10 seconds before running my own Cypress command specified in my package.json file, I've overridden ENTRYPOINT in my Dockerfile as shown below.
FROM cypress/included:3.4.1
COPY hosts /etc/
WORKDIR /e2e
COPY package*.json ./
RUN npm install --production
COPY . .
ENTRYPOINT sleep 10; npm run cypress-run-chrome
Below is the command within my package.json file that corresponds to npm run cypress-run-chrome.
"cypress-run-chrome": "NODE_ENV=test $(npm bin)/cypress run --config video=false --browser chrome",
Below is my docker-compose.yml file that coordinates all 4 containers.
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: ./docker/web/Dockerfile
    container_name: web
    restart: unless-stopped
    ports:
      - "8080:8080"
    volumes:
      - .:/home/node/app
      - node_modules:/home/node/app/node_modules
    depends_on:
      - server
    environment:
      - NODE_ENV=testing
    networks:
      - app-network
  db:
    build:
      context: .
      dockerfile: ./docker/db/Dockerfile
    container_name: db
    restart: unless-stopped
    volumes:
      - dbdata:/data/db
    ports:
      - "27017:27017"
    networks:
      - app-network
  server:
    build:
      context: .
      dockerfile: ./docker/server/Dockerfile
    container_name: server
    restart: unless-stopped
    ports:
      - "5000:5000"
    volumes:
      - .:/home/node/app
      - node_modules:/home/node/app/node_modules
    networks:
      - app-network
    depends_on:
      - db
    command: ./wait-for.sh db:27017 -- nodemon -L server.js
  cypress:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: cypress
    restart: unless-stopped
    volumes:
      - .:/e2e
    depends_on:
      - web
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
volumes:
  dbdata:
  node_modules:
Below is what my hosts file looks like which is copied into the Cypress Docker container.
127.0.0.1 localhost
Below is what my cypress.json file looks like.
{
  "baseUrl": "http://localhost:8080",
  "integrationFolder": "cypress/integration",
  "fileServerFolder": "dist",
  "viewportWidth": 1200,
  "viewportHeight": 1000,
  "chromeWebSecurity": false,
  "projectId": "3orb3g"
}
localhost in Docker is always "this container". Use the names of the service blocks in the docker-compose.yml as hostnames, i.e., http://web:8080
(Note that I copied David Maze's answer from the comments)
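Applied to the setup above, that means the Cypress container should target http://web:8080 instead of localhost; a minimal sketch of the cypress.json change (only the baseUrl line changes, the rest of the file stays as shown above):
{
  "baseUrl": "http://web:8080"
}
Alternatively, the same override can be injected from docker-compose.yml by setting the environment variable CYPRESS_baseUrl=http://web:8080 on the cypress service.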
I have an application that relies on redis being up for its integration tests.
I run my integration tests inside a Docker container like so:
Dockerfile.test
FROM clementoh/openjdk:jdk8-gradle-5.2.1
WORKDIR /app
COPY . .
RUN ./gradlew test
I am trying to use Docker Compose to run my tests like so:
docker-compose.yml
version: '3'
services:
  redis:
    image: "redis:5.0.4"
  web:
    build:
      context: .
      dockerfile: Dockerfile.test
    environment:
      - SPRING_REDIS_HOST=redis
      - SPRING_REDIS_PORT=6379
    depends_on:
      - redis
The issue I have is that Docker Compose builds the web service first, before starting redis and subsequently the web service. At that point redis is not yet up, so the tests run from Dockerfile.test fail.
Is it possible to run the building of the web service after redis is started?
You can use ENTRYPOINT or CMD to execute ./gradlew test instead of a RUN instruction, because RUN executes at image build time, before any other service is started.
After that you can bring the services up:
$ docker-compose up
This way the redis service will always be brought up before the ENTRYPOINT or CMD of the web service executes.
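A minimal sketch of that change to Dockerfile.test (the assemble step is an assumption; any Gradle task that compiles without running the tests would do):
FROM clementoh/openjdk:jdk8-gradle-5.2.1
WORKDIR /app
COPY . .
# compile at image build time, but defer the tests to container start,
# when the redis service is reachable on the Compose network
RUN ./gradlew assemble
CMD ["./gradlew", "test"]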
I'm interested in accessing the app running in one container from WebdriverIO for testing. When I run locally, I do the following, which works fine:
yarn start // starts the app on http://localhost:3000
yarn test  // runs the wdio tests, which access http://localhost:3000
webdriverio test example
it("check if submit button works", function(done) {
browser.url('http://0.0.0.0:3000');
var title = browser.getTitle();
browser.click('#submitButton');
console.log('App Title is: ' + title);
browser.pause(3000);
});
Dockerfile
FROM node:8.10.0
ADD . /app
WORKDIR /app
RUN yarn
CMD ["yarn", "start"]
docker-compose.yml
app:
  build: .
  command: "yarn start"
  ports:
    - 3000:3000
  expose:
    - "3000"
selenium:
  image: 'selenium/standalone-chrome:3.11.0-californium'
  expose:
    - "4444"
  links:
    - app
  log_driver: "none"
test:
  build: .
  command: "yarn test --host selenium"
  links:
    - selenium
I would like to run the app in one container and then also run the tests, which access that app, with:
docker-compose up --build
As far as I know, this is possible. We used to have our test environment running in Docker containers, and the site was accessible from our local machine. Likewise, you may have to establish a network connection between the containers.
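To make that concrete: on the Compose network the containers reach each other by service name, so the test container would point WebdriverIO at the app and selenium services instead of localhost. A rough sketch of the relevant config lines (option names vary slightly between WebdriverIO versions, and the values are assumptions based on the compose file above):
// wdio.conf.js (excerpt)
exports.config = {
  host: 'selenium',              // the Selenium service, also passed above via --host
  port: 4444,                    // default Selenium standalone port
  baseUrl: 'http://app:3000',    // the app service, instead of http://0.0.0.0:3000
  // ...rest of the existing configuration
};
The spec can then call browser.url('/') and let it resolve against baseUrl rather than hard-coding http://0.0.0.0:3000.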
I have got a docker-compose.yml file:
version: '2'
services:
  web:
    build: .
    command: npm run dev
    volumes:
      - .:/usr/app
      - /usr/app/node_modules
    ports:
      - "8080:8080"
    expose:
      - "8080"
And a Dockerfile
FROM node:7.7.2-alpine
WORKDIR /usr/app
COPY package.json .
RUN npm install --quiet
COPY . .
Now I want to add Cypress (https://www.cypress.io/) to run tests by running:
npm install --save-dev cypress
But it doesn't seem to work, because I can't see the cypress folder.
After installing cypress, I run
node_modules/.bin/cypress open
but I can't see Cypress open.
So now I don't know how to add Cypress to my Docker setup so that I can run tests on my host with Cypress.
If you're using docker-compose, the cleaner solution is to just use a separate, dedicated Cypress Docker container, so your docker-compose.yml becomes:
version: '2'
services:
  web:
    build: .
    entrypoint: npm run dev
    volumes:
      - .:/usr/app
      - /usr/app/node_modules
    ports:
      - "8080:8080"
  cypress:
    image: "cypress/included:3.2.0"
    depends_on:
      - web
    environment:
      - CYPRESS_baseUrl=http://web:8080
    working_dir: /e2e
    volumes:
      - ./:/e2e
The e2e directory should contain your cypress.json file and your integration/spec.js file. Your package.json file doesn't have to include Cypress at all because it's baked into the Cypress Docker image (cypress/included).
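For orientation, a minimal spec matching that layout could look like the following (what the page contains is an assumption about your app; cypress.json itself can stay minimal, since the baseUrl is injected through CYPRESS_baseUrl):
// integration/spec.js — under the mounted /e2e directory
describe('web app', () => {
  it('loads the home page', () => {
    cy.visit('/');                       // resolved against CYPRESS_baseUrl=http://web:8080
    cy.get('body').should('be.visible'); // a trivial sanity assertion
  });
});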
For more details, I wrote a comprehensive tutorial on using Docker Compose with Cypress:
"End-to-End Testing Web Apps: The Painless Way"
Running into a similar issue with a similar set up
The way I temporarily fixed it was by manually going into the folder containing my node_modules folder and running node_modules/.bin/cypress install; from there you should be able to open it with node_modules/.bin/cypress open or $(npm bin)/cypress open.
Tried setting up a separate Cypress container in my docker-compose.yml like so:
cypress:
  build:
    context: .
    dockerfile: docker/cypress
  depends_on:
    - node
  volumes:
    - .:/code
with the Dockerfile being based on Cypress's prebuilt Docker image.
Was able to get docker-compose exec cypress node_modules/.bin/cypress verify to work, but when I try to open Cypress it just hangs.
Hope this helps OP, but I also hope someone can provide a more concrete answer that will help us run Cypress fully through Docker.
You can also use an already existing Cypress image from Docker Hub and build it in Docker Compose. Personally, I avoid adding it directly in the compose file; I create a separate Dockerfile for Cypress.
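A rough sketch of what such a separate Cypress Dockerfile could look like (the image tag and file layout are assumptions):
# Dockerfile for the Cypress container; cypress/included already ships the Cypress
# binary and uses `cypress run` as its entrypoint
FROM cypress/included:3.2.0
WORKDIR /e2e
COPY cypress.json .
COPY cypress ./cypress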