testcafe failing to connect to server when using --proxy option - docker

I'm trying to run TestCafe in our pipeline (Semaphore) using a Docker image based on the official one; the only additions are copying our tests into it and installing some additional npm packages they use. The tests run against a test environment that, for security reasons, can only be accessed via VPN or a proxy. I'm using the --proxy flag, but the test run fails with the message:
ERROR Unable to establish one or more of the specified browser connections
1 of 1 browser connections have not been established:
- chromium:headless
Hints:
- Use the "browserInitTimeout" option to allow more time for the browser to start. The timeout is set to 2 minutes for local browsers and 6 minutes for remote browsers.
- The error can also be caused by network issues or remote device failure. Make sure that the connection is stable and the remote device can be reached.
I'm trying to find out what the problem is, but TestCafe doesn't have a verbose mode, and the --dev flag doesn't seem to log anything anywhere, so I don't have any clue why it's not connecting. My test command is:
docker run --rm -v ~/results:/app/results --env HTTP_PROXY=$HTTP_PROXY $ECR_REPO:$TESTCAFE_IMAGE chromium:headless --proxy $HTTP_PROXY
If I run the tests without the proxy flag, they reach the test environment but can't actually run, because the page served is not our app but a maintenance page shown by default to connections from outside the VPN or not coming through the proxy.
If I go inside the testcafe container and run:
curl https://mytestserver.dev --proxy $HTTP_PROXY
it connects without any problem.
I've also tried firefox:headless instead of Chromium, but I found out that it ignores the --proxy option altogether (I reckon that's a bug).
We have a Cypress container in that same pipeline going through that same proxy, and it connects and runs its tests flawlessly.
Any insight about what the problem could be would be much appreciated.

Related

Running cypress tests in parallel on a single machine gives error

TL;DR: issues running parallel Cypress tests in Docker containers on the same machine in Jenkins.
I'm trying to run, on a single AWS machine, 2 Docker instances of Cypress that execute different suites in parallel at the same time. I've run into what seems to be a port collision, even though I've configured and exposed 2 unique, different ports in the docker-compose.yml and cypress.json files. The first container works, but the second one crashes with the error below:
✖ Verifying Cypress can run /home/my-user/.cache/Cypress/4.1.0/Cypress
→ Cypress Version: 4.1.0
Xvfb exited with a non zero exit code.
There was a problem spawning Xvfb.
This is likely a problem with your system, permissions, or installation of Xvfb.
----------
Error: _XSERVTransSocketUNIXCreateListener: ...SocketCreateListener() failed
_XSERVTransMakeAllCOTSServerListeners: server already running
(EE)
Fatal server error:
(EE) Cannot establish any listening sockets - Make sure an X server isn't already running(EE)
----------
Platform: linux (Ubuntu Linux - 18.04)
Cypress Version: 4.1.0
Important note: I want to implement the parallelization myself rather than use Cypress's --parallel feature; I need to implement it in-house, on a single machine, in an encapsulated environment.
Any suggestions?
If I understood correctly, all you need to do is start Cypress (in the containers) with xvfb-run -a, e.g. xvfb-run -a npx cypress run --browser chrome
The -a flag makes Xvfb pick the next available display number, which means you can run multiple Cypress containers in parallel. Check http://elementalselenium.com/tips/38-headless
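For example, a minimal sketch of two parallel runs on one machine (the image name and spec paths are hypothetical):
# Each container gets its own X display via xvfb-run -a, avoiding the
# "server already running" collision:
docker run --rm --name suite-a my-cypress-image xvfb-run -a npx cypress run --spec "cypress/integration/suite-a/**" &
docker run --rm --name suite-b my-cypress-image xvfb-run -a npx cypress run --spec "cypress/integration/suite-b/**" &
wait  # block until both suites finish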

Why are tests failing headlessly but not when the browser is opened?

I've got tests that fail only when running headlessly:
docker run -v $PWD:/e2e -w /e2e --network tdd_default cypress/included:4.11.0 --config baseUrl=http://nginx
I get: [screenshot of the failing test omitted]
And I've got a few more like this one. The thing they have in common is the xhr line, the line where the form sends details to the backend. It fails.
Tests pass when I run them using Electron:
./node_modules/.bin/cypress run --config-file cypress_dev.json
The screenshot indicates that it is a networking issue, but I use the same Docker network in both scenarios.
How can I troubleshoot this?
Update: debug mode revealed (output omitted) that it's very likely a routing issue with Nginx.
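One quick check, as a sketch (curlimages/curl is just a convenient image that runs curl), is to confirm the backend responds from inside the same Docker network the tests use:
docker run --rm --network tdd_default curlimages/curl -v http://nginx
If that fails too, the problem is in the network or the Nginx config rather than in Cypress.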

Cypress could not verify that the server set as your 'baseUrl' is running:

I can reach xxxx.com:8089 from my computer with a web browser; it is running in a container too, but on a different remote machine, and everything is fine there. My cypress.json has:
"baseUrl": "http://xxxx.com:8089"
I am trying to run a Docker container to run the tests with Cypress:
docker run --rm --name burak --ipc="host" -w /hede -v /Users/kurhanb/Desktop/nameoftheProject:/hede cypress /bin/bash -c "cypress run --browser chrome --reporter mochawesome --reporter-options reportDir=Users/kurhanb/Desktop/CypressTest,overwrite=false && chmod -R 777 ."
It gives me :
Cypress could not verify that the server set as your 'baseUrl' is running:
http://xxxx.com
Your tests likely make requests to this 'baseUrl' and these tests will fail if you don't boot your server.
So basically, I can reach it from my computer, but the container can not?
Cypress is giving you that error because at the time that Cypress starts, your application is not yet ready to start responding to requests.
Let's say your application takes 10 seconds to set up and start responding to requests. If you start your application and Cypress at the exact same time, Cypress will not be able to reach your application for the first ten seconds. Rather than wait indefinitely for the app to come online, Cypress exits and tells you that your app is not yet online.
The way to fix this problem is to make sure that your web server is serving before Cypress ever turns on. The Cypress docs have some examples of how you can make this happen: https://docs.cypress.io/guides/guides/continuous-integration.html#Boot-your-server
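For example, a simple polling loop (a sketch; the URL is a placeholder for your app):
# Wait until the app answers, then start Cypress:
until curl -sf http://localhost:8080 > /dev/null; do sleep 1; done
cypress run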
I got the "Cypress failed to verify that your server is running" problem despite being able to access the site through a browser. The problem was that my
/etc/hosts
file did not include the line:
127.0.0.1 localhost
In your CI/CD pipeline, make sure your application is up and running before Cypress starts.
Using a Vue.js app as an example, use the following commands in your CI/CD pipeline scripts:
npm install -g wait-on
npm run serve & wait-on http://localhost:8080
cypress run
If you are trying to run your tests against a preview/staging environment and you can reach it manually, the problem can be your proxy settings. Try adding the domain / IP address to NO_PROXY:
set NO_PROXY="your_company_domain.com"
If you navigate to the Proxy Settings in the test runner, you should see the Proxy Server and a Proxy Bypass List, with the NO_PROXY values.
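On Linux/macOS the equivalent (with a placeholder domain) would be:
export NO_PROXY="your_company_domain.com"
npx cypress open  # the Proxy Bypass List should now show the domain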
I hope it helps somebody. Happy coding.
I faced the same issue when I was working on a React application with GitLab CI and a Docker image. React applications use 3000 as the default port, but somehow the Docker image was expecting port 5000, so I used cross-env to change the React app's default port to 5000:
Steps
download the cross-env package
in package.json, under "scripts", change "start": "react-scripts start" to "start": "cross-env PORT=5000 react-scripts start" (see the sketch below)
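As a sketch, the whole change amounts to:
# install cross-env as a dev dependency
npm install --save-dev cross-env
# then in package.json, under "scripts":
#   "start": "cross-env PORT=5000 react-scripts start"
npm start  # the dev server now listens on port 5000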
Also, if working with Docker, you need to make sure that both your front-end and back-end are running. Cypress doesn't automatically start your front-end server.
Steps should be
npm start <front-end>
npm start <back-end>
npx cypress run or npx cypress open
I faced the same issue. I tried the suggestions above. I think this is a docker issue. Restarting docker fixed it for me.

Docker connection refused when started with -ti bash

I am new to Docker, and I tried to run the linuxconfig/lemp-php7 image. Everything worked fine, and I could access the nginx web server installed in the container. To run this image I used this command:
sudo docker run linuxconfig/lemp-php7
When I tried to run the image with the following command, to get access to the container through bash, I couldn't connect to nginx and got a connection refused error. Command: sudo docker run -ti linuxconfig/lemp-php7 bash
I tried this several times so I'm pretty sure it's not any kind of coincidence.
Why does this happen? Is this a problem specific to this particular image, or is it a general problem? And how can I get a shell in the container and access the web server at the same time?
I'd really like to understand this behavior to improve my general understanding of docker.
docker run runs the specified command instead of what the container would normally run. In your case, that appears to be supervisord, which presumably in turn runs the web server. So you're preventing any of that from happening.
My preferred method (except in cases where I'm trying to debug cases where the container won't even start properly) is to do the following after running the container normally:
docker exec -i -t $CONTAINER_ID /bin/bash
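For example (the container name is hypothetical):
# Start the container normally so supervisord brings up the web server,
# then attach a shell to that same running container:
docker run -d --name lemp linuxconfig/lemp-php7
docker exec -it lemp /bin/bash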

Unable to connect to docker hub from China

I'm getting the same thing every time I try to run busybox, either with Docker on Fedora 20 or with boot2docker in VirtualBox:
[me@localhost ~]$ docker run -it busybox
Unable to find image 'busybox:latest' locally
Pulling repository busybox
FATA[0105] Get https://index.docker.io/v1/repositories/library/busybox/images: read tcp 162.242.195.84:443: i/o timeout
I can open https://index.docker.io/v1/repositories/library/busybox/images in a browser, sometimes even without a VPN tunnel, so I tried setting the proxy in the network settings to the one provided by Astrill when using VPN sharing, but it always times out.
I'm currently in China, where there basically is no Internet due to the firewall. npm, git and wget use the Astrill proxy in the terminal (when it's set in the network settings of Fedora 20), but somehow I either can't get the Docker daemon to use it, or something else is wrong.
It seems the answer was not so complicated, according to the documentation (I had read it before, but thought setting the proxy in the network settings UI would take care of it).
So I added the following to /etc/systemd/system/docker.service.d/http-proxy.conf (after creating the docker.service.d directory and the conf file):
[Service]
Environment="HTTP_PROXY=http://localhost:3213/"
Environment="HTTPS_PROXY=http://localhost:3213/"
In the Astrill app (I'm sure other providers' applications offer something similar) there is an option for VPN sharing which creates a proxy; it can be found under Settings => VPN Sharing.
For git, npm and wget, setting the proxy in the UI (gnome-control-center => Network => Network Proxy) is enough, but when using sudo it's better to do sudo su, set the environment variables, and then run the command that needs the proxy, for example:
sudo su
export http_proxy=http://localhost:3213/
export ftp_proxy=http://localhost:3213/
export all_proxy=socks://localhost:3213/
export https_proxy=http://localhost:3213/
export no_proxy=localhost,127.0.0.0/8,::1
export NO_PROXY="/var/run/docker.sock"
npm install -g ...
I'd like to update the solution for people who still encounter this issue today.
I don't know the details, but when using the WireGuard protocol on Astrill, docker build and docker run will use the VPN. If for some reason it doesn't work, try restarting the Docker service (sudo service docker restart) while the VPN is active.
Hope it helps; I just wasted an hour trying to figure out why it stopped working.