I've got tests that fail only when running headlessly:
docker run -v $PWD:/e2e -w /e2e --network tdd_default cypress/included:4.11.0 --config baseUrl=http://nginx
I get:
And I've got a few more like this one. The one thing they have in common is the xhr line, which is where the form sends details to the backend. That request fails.
Tests pass when I run tests using Electron:
./node_modules/.bin/cypress run --config-file cypress_dev.json
The screenshot indicates that it is a networking issue, but I use the same Docker network in both scenarios.
How can I troubleshoot this?
Update:
Debug mode revealed:
It's very likely a routing issue with Nginx.
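One way to narrow this down is to run a throwaway container on the same Docker network and request the same URLs the tests use; if curl fails the same way, the problem is the network or the Nginx routing rather than Cypress itself. A rough sketch, using the network and host names from the commands above (the `/api` path is a placeholder for whatever the failing xhr actually posts to):

```shell
# Check DNS resolution and routing from inside the same Docker network
docker run --rm --network tdd_default curlimages/curl -v http://nginx/

# Hit the same path the failing xhr uses (placeholder path)
docker run --rm --network tdd_default curlimages/curl -v http://nginx/api
```

A 404 or 502 on the second request, while the first succeeds, would point at the Nginx location/proxy_pass configuration.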
Related
I'm trying to run testcafe in our pipeline (Semaphore) using a Docker image based on the official one; the only additions are our tests, copied inside, and a few extra npm packages they use. Those tests run against a test environment that, for security reasons, can only be accessed via VPN or a proxy. I'm using the --proxy flag, but the test run fails with the message:
ERROR Unable to establish one or more of the specified browser connections
1 of 1 browser connections have not been established:
- chromium:headless
Hints:
- Use the "browserInitTimeout" option to allow more time for the browser to start. The timeout is set to 2 minutes for local browsers and 6 minutes for remote browsers.
- The error can also be caused by network issues or remote device failure. Make sure that the connection is stable and the remote device can be reached.
I'm trying to find out what the problem is, but as testcafe doesn't have a verbose mode, and the --dev flag doesn't seem to log anything anywhere, I don't have any clue why it's not connecting. My test command is:
docker run --rm -v ~/results:/app/results --env HTTP_PROXY=$HTTP_PROXY $ECR_REPO:$TESTCAFE_IMAGE chromium:headless --proxy $HTTP_PROXY
If I try to run the tests without the proxy flag, they reach the test environment, but they can't pass because the page shown is not our app but a maintenance page served by default to connections that come from outside the VPN or not through the proxy.
If I go inside the testcafe container and run:
curl https://mytestserver.dev --proxy $HTTP_PROXY
it connects without any problem.
I've also tried to use firefox:headless instead of Chromium, but I found out that it actually ignores the --proxy option altogether (I reckon it's a bug).
We have a cypress container in that same pipeline going through that same proxy and it connects and runs the tests flawlessly.
Any insight about what the problem could be would be much appreciated.
I have been trying to figure out an issue which only occurs in my TestCafe Docker environment, not in my local environment. For that I would like to debug TestCafe in Docker, in either the built-in Chromium or Firefox. I followed the discussion here, but it didn't work for me.
This is the command I use to run my docker container:
docker run --net=host -e NODE_OPTIONS="--inspect-brk=0.0.0.0:9229" -v `pwd`:/tests -v `pwd`/reporter:/reporters -w /reporters -e userEmail=admin#test.com -e userPass=password -e urlPort=9000 --env-file=.env testcafe 'firefox' '/tests/uitests/**/concurrentTests/logintest.js' --disable-page-caching -s takeOnFails=true --reporter 'html:result.html',spec,'xunit:res.xml'
Running the above, with or without -p 9229:9229, this is what I see:
Debugger listening on ws://0.0.0.0:9229/66cce714-31f4-45be-aed2-c50411d18319
For help, see: https://nodejs.org/en/docs/inspector
Then, when I open the link ws://0.0.0.0:9229/66cce714-31f4-45be-aed2-c50411d18319 in Chrome/Firefox, nothing happens. Also, chrome://inspect/#devices is empty.
My expectation:
I would like to see the webpage in the browser so that I know what's happening behind the scenes. Also, I would like to see cookies and the API calls being made.
Please suggest how to deal with this situation.
It seems node inspection doesn't work well with the host network for some reason. Try removing the --net=host option and adding -p 127.0.0.1:9229:9229 instead. A contained node process should then appear in DevTools (at chrome://inspect) under the 'Remote Target #LOCALHOST' section.
Also, you need to remove the -e NODE_OPTIONS="--inspect-brk=0.0.0.0:9229" option and add the --inspect-brk=0.0.0.0:9229 flag after testcafe/testcafe to avoid the 'Starting inspector on 0.0.0.0:9229 failed: address already in use' error.
When you see the Debugger listening on ws://0.0.0.0:9229/66cce714-31f4-45be-aed2-c50411d18319 message (or similar), navigate to http://localhost:9229/json in your browser and find the devtoolsFrontendUrl:
Copy and paste it to your browser to start your debugging session:
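Pieced together, the adjusted command from the question might look like this; the image name, volume, and test path are taken from the question, and the rest is a sketch rather than a verified invocation:

```shell
# Publish the inspector port instead of using --net=host, and pass
# --inspect-brk directly after the image name instead of via NODE_OPTIONS
docker run -p 127.0.0.1:9229:9229 -v `pwd`:/tests --env-file=.env \
  testcafe --inspect-brk=0.0.0.0:9229 \
  'firefox' '/tests/uitests/**/concurrentTests/logintest.js'

# When "Debugger listening on ws://0.0.0.0:9229/..." appears, list the
# inspector sessions and copy the devtoolsFrontendUrl value into Chrome
curl http://localhost:9229/json
```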
I'm a bit new to Docker, but I recently built a container running an old version of Docker and an even older version of JSPWiki (2.2.33 - yes, THAT old). This is in order to decommission the old VM that this is running on.
When I run the following command, my container launches interactively and then I can manually launch my tomcat application and navigate to the Wiki:
docker run -it -v wikifiles:/apps/wikifiles -p 127.0.0.1:80:8080 wikitest:1.1 /bin/bash
When I just try to launch the container with the startup script, it fails, even though it's the exact same script:
docker run -it -v wikifiles:/apps/wikifiles -p 127.0.0.1:80:8080 wikitest:1.1 /usr/local/apache-tomcat-5.5.17/bin/startup.sh
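One likely explanation, offered as an assumption: Tomcat's startup.sh launches Catalina in the background and then exits, and Docker stops a container as soon as its main process ends. In the interactive case bash is the main process and stays alive, which is why that works. Running Catalina in the foreground instead would keep the container up:

```shell
# catalina.sh run keeps Tomcat as the container's foreground process,
# unlike startup.sh, which backgrounds Tomcat and exits immediately
docker run -it -v wikifiles:/apps/wikifiles -p 127.0.0.1:80:8080 \
  wikitest:1.1 /usr/local/apache-tomcat-5.5.17/bin/catalina.sh run
```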
If I run it as docker --log-level "debug" run ... to see what's going on, I get:
Using CATALINA_BASE: /usr/local/apache-tomcat-5.5.17
Using CATALINA_HOME: /usr/local/apache-tomcat-5.5.17
Using CATALINA_TMPDIR: /usr/local/apache-tomcat-5.5.17/temp
Using JRE_HOME: /usr/lib/jvm/java-8-openjdk-amd64/jre/
DEBU[0001] Error resize: Error response from daemon: bad file descriptor: unknown
DEBU[0001] [hijack] End of stdout
DEBU[0001] Error resize: Error response from daemon: Container cbe278063c2389f2e3ad86ccb8944df5a600bb079d74e27e5a9cd1bb1e36ac2d is not running
I'm not even sure what to look at from here. Any help would be appreciated.
Thanks!
From my computer I can reach xxxx.com:8089 in a web browser; it is running in a container too, but on a different remote machine. Everything is fine. My cypress.json contains:
"baseUrl": "http://xxxx.com:8089"
I am trying to run a Docker container to run the tests with Cypress:
docker run --rm --name burak --ipc="host" -w /hede -v /Users/kurhanb/Desktop/nameoftheProject:/hede cypress /bin/bash -c cypress run --browser chrome && chmod -R 777 . --reporter mochawesome --reporter-options --reportDir=Users/kurhanb/Desktop/CypressTest overwrite=false
It gives me :
Cypress could not verify that the server set as your 'baseUrl' is running:
http://xxxx.com
Your tests likely make requests to this 'baseUrl' and these tests will fail if you don't boot your server.
So basically, I can reach it from my computer, but the container cannot?
Cypress is giving you that error because at the time that Cypress starts, your application is not yet ready to start responding to requests.
Let's say your application takes 10 seconds to set up and start responding to requests. If you start your application and Cypress at the exact same time, Cypress will not be able to reach your application for the first ten seconds. Rather than wait indefinitely for the app to come online, Cypress exits and tells you that your app is not yet online.
The way to fix this problem is to make sure that your web server is serving before Cypress ever turns on. The Cypress docs have some examples of how you can make this happen: https://docs.cypress.io/guides/guides/continuous-integration.html#Boot-your-server
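If you'd rather not pull in an extra dependency, a small poll-until-ready helper does the same job as the wait-on package mentioned below; this is a minimal sketch assuming curl is on the PATH:

```shell
# Poll a URL until it responds, or give up after N attempts
wait_for_url() {
  url="$1"
  tries="${2:-30}"
  i=0
  until curl -sf "$url" >/dev/null 2>&1; do
    i=$((i + 1))
    if [ "$i" -ge "$tries" ]; then
      echo "gave up waiting for $url" >&2
      return 1
    fi
    sleep 1
  done
}

# e.g. in CI: wait_for_url http://localhost:8080 && cypress run
```

The function returns non-zero if the server never comes up, so the `&&` chain prevents Cypress from starting against a dead server.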
I got the "Cypress failed to verify that your server is running" problem despite being able to access the site through a browser. The problem was that my /etc/hosts file did not include the line:
127.0.0.1 localhost
In your CI/CD pipeline, make sure your application is up and running before Cypress starts.
Using a Vue.js app as an example, use the following commands in your CI/CD pipeline scripts:
npm install -g wait-on
npm run serve & wait-on http://localhost:8080
cypress run
If you are trying to run your tests against a preview/staging environment and you can reach it manually, the problem can be your proxy settings. Try adding the domain / IP address to NO_PROXY:
set NO_PROXY="your_company_domain.com"
If you navigate to the Proxy Settings in the test runner, you should see the Proxy Server and a Proxy Bypass List, with the NO_PROXY values.
I hope it helps somebody. Happy coding.
I faced the same issue when I was working on a React application with GitLab CI and a Docker image. React applications default to port 3000, but somehow the Docker image was assigning port 5000. So I used cross-env to change the React app's default port to 5000:
Steps
Install the cross-env package
In package.json, under "scripts", change "start": "react-scripts start" to "start": "cross-env PORT=5000 react-scripts start"
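For reference, with npm 7+ the two steps above can also be done from the command line; the port value is the one from this answer, and `npm pkg set` is npm's built-in way to edit package.json fields:

```shell
# Add cross-env as a dev dependency
npm install --save-dev cross-env

# Rewrite the start script so the dev server binds to port 5000
npm pkg set scripts.start="cross-env PORT=5000 react-scripts start"
```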
Also, if working with Docker, you need to make sure that both your front end and back end are running. Cypress doesn't automatically start your front-end server.
Steps should be:
npm start <front-end>
npm start <back-end>
npx cypress run or npx cypress open
I faced the same issue. I tried the suggestions above. I think this is a docker issue. Restarting docker fixed it for me.
I am new to docker and I tried to run the linuxconfig/lemp-php7 image. Everything worked fine and I could access the nginx web server installed on the container. To run this image I used this command:
sudo docker run linuxconfig/lemp-php7
When I tried to run the image with the following command, to gain shell access to the container, I couldn't connect to nginx and got a connection refused error. Command: sudo docker run -ti linuxconfig/lemp-php7 bash
I tried this several times so I'm pretty sure it's not any kind of coincidence.
Why does this happen? Is this specific to this particular image, or is it a general problem? And how can I get a shell in the container and access the web server at the same time?
I'd really like to understand this behavior to improve my general understanding of docker.
docker run runs the specified command instead of what that container would normally run. In your case, it appears to be supervisord, which presumably in turn runs the web server. So you're preventing any of that from happening.
My preferred method (except when I'm trying to debug a container that won't even start properly) is to run the container normally and then do the following:
docker exec -i -t $CONTAINER_ID /bin/bash
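In other words: start the container with its default command so supervisord (and with it nginx) comes up, then attach a shell alongside it. A sketch, with `lemp` as a hypothetical container name:

```shell
# Start normally, detached; the image's default command runs supervisord
sudo docker run -d --name lemp linuxconfig/lemp-php7

# Open a shell in the running container without replacing its main process
sudo docker exec -it lemp /bin/bash
```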