TLDR: issues running parallel Cypress tests in Docker containers on the same machine in Jenkins.
I'm trying to run two Docker instances of Cypress on a single AWS machine, to run different suites in parallel at the same time. I've run into what seems to be a port collision, even though I've configured and exposed two unique, different ports in docker-compose.yml and in the cypress.json files. The first container works, but the second one crashes with the error below:
✖ Verifying Cypress can run /home/my-user/.cache/Cypress/4.1.0/Cypress
→ Cypress Version: 4.1.0
Xvfb exited with a non zero exit code.
There was a problem spawning Xvfb.
This is likely a problem with your system, permissions, or installation of Xvfb.
----------
Error: _XSERVTransSocketUNIXCreateListener: ...SocketCreateListener() failed
_XSERVTransMakeAllCOTSServerListeners: server already running
(EE)
Fatal server error:
(EE) Cannot establish any listening sockets - Make sure an X server isn't already running(EE)
----------
Platform: linux (Ubuntu Linux - 18.04)
Cypress Version: 4.1.0
Important note: I want to implement the parallelization on my own and not use Cypress's --parallel feature; I need to implement it in-house, on a single machine, in an encapsulated environment.
Any suggestions?
If I understood correctly, all you need to do is start Cypress (in the containers) with xvfb-run -a. E.g. xvfb-run -a npx cypress run --browser chrome
The -a flag makes xvfb-run pick the next available display number, so you can run multiple Cypress containers in parallel. Check http://elementalselenium.com/tips/38-headless
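A minimal sketch of that setup with two containers on one machine (the image name and spec paths are hypothetical, just to illustrate the idea):

# each xvfb-run -a picks its own free X display, so the Xvfb instances don't collide
docker run --name cypress-suite-1 my-cypress-image xvfb-run -a npx cypress run --spec "cypress/integration/suite1/**" &
docker run --name cypress-suite-2 my-cypress-image xvfb-run -a npx cypress run --spec "cypress/integration/suite2/**" &
wait  # block until both suites finish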
Related
I am trying to run a web application implemented with PHP, JS, and HTML together with Selenium to test its functionality. I am using a Python script together with pytest to run the Selenium tests against the web application. The web application is already Dockerized with a windows/servercore:ltsc2019 base image and the XAMPP application. If the Dockerfile is built for the first time, it succeeds. But if some changes are made and the container has to be rebuilt, I get the following error:
Status: failed to copy files: failed to copy directory: chtimes \\?\Volume{06ac6f98-32bd-4e73-b16b-f982f49c7bd6}\xampp\htdocs\img: Access denied
on this Dockerfile step:
Step 2/7 : COPY . C:\\xampp\\htdocs
I am using the already Dockerized web application (including docker-compose) and an existing GitLab pipeline. After adding the Selenium and Python services to docker-compose, I found out that they are only available as Linux containers, so I switched the Docker engine to experimental mode to make use of the mixed mode. That was working just fine on my PC at work (I tested it there). But then I used the same docker-compose file on my home PC and I got the error from above. The same error also occurred on the gitlab-runner that runs the pipeline, after modifying the pipeline slightly (adding steps 2-4):
1. Run the web application container:
docker-compose -f ./docker/stage/docker-compose-staging.yml --compatibility up --build -d
2. Switch the daemon to Linux containers with a PowerShell script:
./docker-e2e-tests/SwitchDaemon.ps1
3. Run the test containers (Selenium, Python):
docker-compose -f ./docker-e2e-tests/docker-compose.yml --compatibility up --build -d
4. Switch the daemon back to Windows containers with the PowerShell script:
./docker-e2e-tests/SwitchDaemon.ps1
Same result here: the first time it was fine, but the second time I tried to COPY the changes into the container (step 1), I got the same error as above. And now the worst problem is that the pipeline is not running anymore at all. I even get the same error now if I remove steps 2-4.
I'm trying to run TestCafe in our pipeline (Semaphore) using a Docker image based on the official one, where the only additions are copying our tests into it and installing some additional npm packages they use. Those tests run against a test environment that, for security reasons, can only be accessed either via VPN or via a proxy. I'm using the --proxy flag, but the test run fails with the message:
ERROR Unable to establish one or more of the specified browser connections
1 of 1 browser connections have not been established:
- chromium:headless
Hints:
- Use the "browserInitTimeout" option to allow more time for the browser to start. The timeout is set to 2 minutes for local browsers and 6 minutes for remote browsers.
- The error can also be caused by network issues or remote device failure. Make sure that the connection is stable and the remote device can be reached.
I'm trying to find out what the problem is, but since TestCafe doesn't have a verbose mode, and the --dev flag doesn't seem to log anything anywhere, I don't have any clue why it's not connecting. My test command is:
docker run --rm -v ~/results:/app/results --env HTTP_PROXY=$HTTP_PROXY $ECR_REPO:$TESTCAFE_IMAGE chromium:headless --proxy $HTTP_PROXY
If I try to run the tests without the proxy flag, they reach the test environment, but they can't actually run, as the page shown is not our app but a maintenance page served by default for connections from outside the VPN or that don't come through the proxy.
If I go inside the testcafe container and run:
curl https://mytestserver.dev --proxy $HTTP_PROXY
it connects without any problem.
I've also tried to use firefox:headless instead of Chromium, but I've found out that it actually ignores the --proxy option altogether (I reckon it's a bug).
We have a cypress container in that same pipeline going through that same proxy and it connects and runs the tests flawlessly.
Any insight about what the problem could be would be much appreciated.
I am currently despairing at the attempt to set up a Docker build step in Atlassian Bamboo.
For starters, I just want to create a build configuration that runs the hello-world image as a proof of concept. So far, I have failed.
I have tried following the steps on https://confluence.atlassian.com/bamboo0609/using-bamboo/jobs-and-tasks/configuring-tasks/configuring-the-docker-task-in-bamboo , but to no avail.
My setup is this:
We have Bamboo installed on an Ubuntu server. I also installed Docker on that server, added the bamboo user to the docker group, and restarted the server to make sure the permission change took effect. At this point, docker run hello-world works when I run it directly on the server. I can also confirm that this is the server that Bamboo runs on, since Bamboo went offline whenever I restarted the server that I installed Docker on.
Then, I added the Docker capability to the server (the agent is the default agent, so it inherits this capability from the server). As the Docker path, I have tried various things, none of which worked (i.e., the following errors remained the same for each of these):
/snap/docker (the first folder that I found on a manual search)
/usr/bin/docker (the recommended path, though on inspecting the Ubuntu server I quickly found out that no docker binary exists under /usr/bin on it)
/var/snap/docker/common/var-lib-docker (the path that Docker returns as its Root Directory when I run docker info on the Ubuntu server)
/var/snap/docker (for good measure)
Now, for the runner, I have tried two different approaches.
First, I tried using a Docker runner with the following settings:
Command: Run a Docker container
Docker image: hello-world
This returns the following error message:
┊
Error occurred while running Task 'Hello World Docker Test(5)' of type com.atlassian.bamboo.plugins.bamboo-docker-plugin:task.docker.cli.com.atlassian.bamboo.task.TaskException: Failed to execute task
┊
Caused by: com.atlassian.bamboo.docker.DockerException: Error running Docker run command
┊
Caused by: com.atlassian.utils.process.ProcessException: Error executing /snap/docker run --volume /var/atlassian/application-data/bamboo/xml-data/build-dir/CAM-DOC-JOB1:/data --workdir /data --rm hello-world
┊
The second was just to run a shell runner for the command docker run hello-world, which returned the following error:
docker: not found
At this point, I feel like I'm out of ideas. Everything points towards Bamboo for some reason not finding Docker on the server, even though I can clearly confirm that it is there. I have tried various approaches to telling Bamboo where to find Docker, but none of them have worked.
It's obvious that I'm doing something wrong, but I can't figure out what. Or maybe the problem lies in an entirely different direction altogether? Anyway, I would be grateful for any insight shared on this matter.
Okay, I found out what caused this strange behaviour.
The problem was that I installed Docker using sudo snap install docker, and apparently installing docker via snap causes problems with Bamboo.
So I got it to work using these simple steps:
[Server] Uninstalled Snap Docker using sudo snap remove docker
[Server] Reinstalled Docker using sudo apt install docker.io
[Bamboo] Changed the path to Docker in the Server Capabilities to /usr/bin/docker
After that, the hello-world image build succeeded and printed the expected output to the log.
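To double-check the same thing on a server before touching Bamboo, a quick sanity check could look like this (assuming the apt package rather than snap):

# confirm the binary is where Bamboo's server capability points
which docker            # should print /usr/bin/docker
docker run hello-world  # should print the hello-world greeting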
I have a Laravel application with some integration tests. The project has been Dockerized using Docker Compose and consists of 5 containers: php-fpm, mysql, redis, nginx, and the workspace, which has php-cli and Composer installed in it (just like Laradock). I want to run the tests while the test stage is running in my CI process. I have to mention that my CI server is GitLab CI.
Basically, I run the tests on my local system by running the following commands in my terminal:
$ docker-compose up -d
Creating network "docker_backend" with driver "bridge"
Creating network "docker_frontend" with driver "bridge"
Creating redis ... done
Creating workspace ... done
Creating mysql ... done
Creating php-fpm ... done
Creating nginx ... done
$ docker-compose exec workspace bash
// now I am logged in to the workspace container
$ cd /var/www/app
$ phpunit
PHPUnit 6.5.13 by Sebastian Bergmann and contributors.
........ 8 / 8 (100%)
Time: 38.1 seconds, Memory: 28.00MB
OK (8 tests, 56 assertions)
Here is my question: how can I run these tests in the test stage while there is no running container? What are the best practices in this case?
I also followed this documentation from GitLab, but it seems that it is not OK to use Docker-in-Docker or Docker socket binding.
First, it is absolutely OK to run Docker-in-Docker with GitLab CI. This is a great way to go if you don't want or don't need to dive into Kubernetes. Sharing the Docker socket of course somewhat lowers the isolation level, but as long as you mostly run your jobs in containers on your own VPS, I personally don't find this issue critical.
I've answered a similar question some time ago in this post.
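As a minimal sketch of what the test-stage script could run once a Docker-in-Docker service is available to the job (the service wiring itself follows the GitLab documentation linked above; the paths match the transcript):

# bring up the Compose stack inside the job, run the suite, tear everything down
docker-compose up -d
docker-compose exec -T workspace bash -c "cd /var/www/app && phpunit"  # -T: no TTY in CI
docker-compose down -v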
From my computer I can reach xxxx.com:8089 with a web browser; it is running in a container too, but on a different remote machine. Everything is fine. My cypress.json has:
"baseUrl": "http://xxxx.com:8089"
I am trying to run a Docker container to run the tests with Cypress:
docker run --rm --name burak --ipc="host" -w /hede -v /Users/kurhanb/Desktop/nameoftheProject:/hede cypress /bin/bash -c cypress run --browser chrome && chmod -R 777 . --reporter mochawesome --reporter-options --reportDir=Users/kurhanb/Desktop/CypressTest overwrite=false
It gives me:
Cypress could not verify that the server set as your 'baseUrl' is running:
http://xxxx.com
Your tests likely make requests to this 'baseUrl' and these tests will fail if you don't boot your server.
So basically, I can reach it from my computer, but the container cannot?
Cypress is giving you that error because at the time that Cypress starts, your application is not yet ready to start responding to requests.
Let's say your application takes 10 seconds to set up and start responding to requests. If you start your application and Cypress at the exact same time, Cypress will not be able to reach your application for the first ten seconds. Rather than wait indefinitely for the app to come online, Cypress will exit and tell you that your app is not yet online.
The way to fix this problem is to make sure that your web server is serving before Cypress ever turns on. The Cypress docs have some examples of how you can make this happen: https://docs.cypress.io/guides/guides/continuous-integration.html#Boot-your-server
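A minimal shell sketch of that idea, polling the baseUrl until the server answers before launching Cypress (the URL and retry budget are placeholders):

# give the app up to 60 seconds to come online, then run the tests
for i in $(seq 1 30); do
  curl --silent --fail http://xxxx.com:8089 > /dev/null && break
  sleep 2
done
npx cypress run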
I got the "Cypress failed to verify that your server is running" problem despite being able to access the site through a browser. The problem was that my
/etc/hosts
file did not include a line:
127.0.0.1 localhost
In your CI/CD pipeline, make sure your application is up and running before Cypress starts.
Using a Vue.js app as an example, use the following commands in your CI/CD pipeline scripts:
npm install -g wait-on
npm run serve & wait-on http://localhost:8080
cypress run
If you are trying to run your tests on a preview/staging environment and you can reach it manually, the problem can be your proxy settings. Try adding the domain / IP address to NO_PROXY:
set NO_PROXY="your_company_domain.com"
If you navigate to the Proxy Settings in the test runner, you should see the Proxy Server and a Proxy Bypass List, with the NO_PROXY values.
I hope it helps somebody. Happy coding.
I faced the same issue when I was working on a React application with GitLab CI and a Docker image. The React application has 3000 as its default port, but somehow the Docker image was assigning port 5000 as the default. So I used cross-env to change the default port of the React app to 5000:
Steps
1. Download the cross-env package.
2. In the package.json file, under scripts, change 'start': 'react-scripts start' to 'start': 'cross-env PORT=5000 react-scripts start' (see the sketch after this list).
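A minimal command-line sketch of those two steps (assuming npm 7.24+ for npm pkg set; on an older npm, edit package.json by hand):

npm install --save-dev cross-env
# rewrite the start script so react-scripts serves on port 5000
npm pkg set scripts.start="cross-env PORT=5000 react-scripts start"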
Also, if you're working with Docker, you need to make sure that both your front end and back end are running: Cypress doesn't automatically start your front-end server.
The steps should be:
npm start <front-end>
npm start <back-end>
npx cypress run or npx cypress open
I faced the same issue and tried the suggestions above. I think this is a Docker issue; restarting Docker fixed it for me.