I have been trying to figure out an issue which only occurs in my TestCafe Docker environment but not in my local environment. To investigate, I would like to debug TestCafe in Docker using the built-in Chromium or Firefox browser. I followed the discussion here, but it didn't work for me.
This is the command I use to run my docker container:
docker run --net=host -e NODE_OPTIONS="--inspect-brk=0.0.0.0:9229" -v `pwd`:/tests -v `pwd`/reporter:/reporters -w /reporters -e userEmail=admin@test.com -e userPass=password -e urlPort=9000 --env-file=.env testcafe 'firefox' '/tests/uitests/**/concurrentTests/logintest.js' --disable-page-caching -s takeOnFails=true --reporter 'html:result.html',spec,'xunit:res.xml'
Running the above, with or without -p 9229:9229, this is what I see:
Debugger listening on ws://0.0.0.0:9229/66cce714-31f4-45be-aed2-c50411d18319
For help, see: https://nodejs.org/en/docs/inspector
But when I open the link ws://0.0.0.0:9229/66cce714-31f4-45be-aed2-c50411d18319 in Chrome or Firefox, nothing happens. Also, chrome://inspect/#devices is empty.
My expectation:
I would like to see the webpage in the browser so that I know what's happening behind the scenes. I would also like to see cookies and the API calls being made.
Please suggest how to deal with this situation.
It seems node inspection doesn't work well with the host network for some reason. Try removing the --net=host option and adding -p 127.0.0.1:9229:9229 instead. The containerized node process should then appear in DevTools (at chrome://inspect) under the 'Remote Target #LOCALHOST' section.
Also, you need to remove the -e NODE_OPTIONS="--inspect-brk=0.0.0.0:9229" option and add the --inspect-brk=0.0.0.0:9229 flag after the image name (testcafe/testcafe) to avoid the 'Starting inspector on 0.0.0.0:9229 failed: address already in use' error.
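Putting both changes together, a sketch of the adjusted command, keeping the image name, volumes, env vars, and test path from the question, might look like:

docker run -p 127.0.0.1:9229:9229 -v `pwd`:/tests -v `pwd`/reporter:/reporters -w /reporters -e userEmail=admin@test.com -e userPass=password -e urlPort=9000 --env-file=.env testcafe --inspect-brk=0.0.0.0:9229 'firefox' '/tests/uitests/**/concurrentTests/logintest.js' --disable-page-caching -s takeOnFails=true --reporter 'html:result.html',spec,'xunit:res.xml'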
When you see the Debugger listening on ws://0.0.0.0:9229/66cce714-31f4-45be-aed2-c50411d18319 message (or similar), navigate to http://localhost:9229/json in your browser and find the devtoolsFrontendUrl:
Copy and paste it to your browser to start your debugging session:
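If you prefer the command line, you can fetch the same information with curl:

curl http://localhost:9229/json

and copy the value of the devtoolsFrontendUrl field from the JSON output into your browser.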
For years I have been using a containerized version of a web application on my development laptop. Usually I do something like
docker run -it -d --rm -h app.localhost my-app
and, having added app.localhost to my hosts file, going to http://app.localhost just works. Yesterday an update came for Docker and I'm no longer able to do that. Running the image with the same command-line options and trying to connect to the application, I get a browser error page, and checking the logs in the container shows that no request reached the web server at all. Running curl http://app.localhost in a terminal works fine, and I've been able to fix the problem by changing my command-line options to
docker run -it -d --rm -p 80:80 -h app.localhost my-app
i.e. explicitly exposing port 80.
Can anyone explain what went wrong? And why would curl and my web browser behave differently?
Edit: to clarify: I'm referring to an update of the docker packages for my OS (Ubuntu 18 if that matters).
I started the Deep Water Docker container (CPU mode) on my Mac as described in the docs (https://github.com/h2oai/deepwater/blob/master/README.md):
docker run -it --rm -p 54321:54321 -p 8080:8080 -v $PWD:/host opsh2oai/h2o-deepwater-cpu
It starts correctly and without errors, but I cannot access the H2O UI at http://172.17.0.2:54321 ...
There is also a hint in the logs:
If you have trouble connecting, try SSH tunneling from your local machine:
1. Open a terminal and run 'ssh -L 55555:localhost:54321 root@172.17.0.2'
2. Point your browser to http://localhost:55555
But this is also not working...
I use Docker CE Version 17.06.0-ce-mac19.
Any ideas what to do?
Here are the complete logs of starting H2O:
Once you have started the Docker container, you have to start H2O manually. You do that with:
java -jar /opt/h2o.jar &
For more info on this, please see https://github.com/h2oai/deepwater#pre-release-docker-image
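A minimal end-to-end sketch, assuming the port mappings from the command above, would be: start the container, launch H2O inside it, and then open the published port on the host (rather than the container IP, which is generally not reachable from macOS):

docker run -it --rm -p 54321:54321 -p 8080:8080 -v $PWD:/host opsh2oai/h2o-deepwater-cpu
java -jar /opt/h2o.jar &

Then point your browser at http://localhost:54321 on the Mac.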
On a side note: please post the logs, as I can't tell what went wrong from this. It's possible that your Nvidia driver is too old.
I am new to Docker and I tried to run the linuxconfig/lemp-php7 image. Everything worked fine and I could access the nginx web server installed in the container. To run this image I used this command:
sudo docker run linuxconfig/lemp-php7
When I tried to run the image with the following command, to gain access to the container through bash, I couldn't connect to nginx and got a connection refused error:
sudo docker run -ti linuxconfig/lemp-php7 bash
I tried this several times so I'm pretty sure it's not any kind of coincidence.
Why does this happen? Is this a problem specific to this particular image, or is it a general problem? And how can I get a shell in the container and access the web server at the same time?
I'd really like to understand this behavior to improve my general understanding of docker.
docker run <image> <command> runs the specified command instead of what that container would normally run. In your case that appears to be supervisord, which presumably in turn runs the web server. So by running bash you're preventing any of that from happening.
My preferred method (except when I'm trying to debug a container that won't even start properly) is to run the container normally and then do:
docker exec -i -t $CONTAINER_ID /bin/bash
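For example, a sketch of the two steps (the container name lemp is just an illustration; -d keeps the container's normal startup running in the background):

sudo docker run -d --name lemp linuxconfig/lemp-php7
sudo docker exec -i -t lemp /bin/bash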
I am new to Selenium on Docker. I want to create a Chrome/Firefox node with capabilities (Selenium Grid). How do I add capabilities when I start a Selenium node Docker container?
I found this command so far...
docker run -d --link selenium-hub:hub selenium/node-firefox:2.53.0
but I don't know how to add capabilities to it. I already tried this command, but it is not working:
docker run -d --link selenium-hub:hub selenium/node-firefox:2.53.0 -browser browserName=firefox,version=3.6,maxInstances=5,platform=LINUX
Solved... adding SE_OPTS will help you set capabilities:
docker run -d -e SE_OPTS="-browser browserName=chromeku,version=56.0,maxInstances=3,platform=WINDOWS" --link selenium-hub:hub selenium/node-chrome:2.53.0
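A Firefox equivalent, combining the node image from the question with the same SE_OPTS approach (the capability values are just the ones from the question), would presumably be:

docker run -d -e SE_OPTS="-browser browserName=firefox,version=3.6,maxInstances=5,platform=LINUX" --link selenium-hub:hub selenium/node-firefox:2.53.0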
There are multiple ways of doing this, and SE_OPTS is one of them; however, for me it complicated what I was trying to accomplish. Using SE_OPTS forced me to set capabilities I didn't want to change, otherwise they would be reset to blank/null.
I wanted to do:
SE_OPTS=-browser applicationName=Testing123
but I was forced to do:
SE_OPTS=-browser applicationName=Testing123,browserName=firefox,maxInstances=1,version=59.0.1
Another way to set capabilities is to supply your own config.json
-nodeConfig /path/config.json
You can find a default config.json, or you can start the node container and copy the current one from it:
docker cp <containerId>:/opt/selenium/config.json /host/path/target
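One way to supply your own file, sketched under the assumption that the entry point skips generation when a config.json already exists (see the note about generate_config below), is to copy the generated config out, edit it, and mount it back over the same path:

docker cp <containerId>:/opt/selenium/config.json ./config.json
docker run -d --link selenium-hub:hub -v $(pwd)/config.json:/opt/selenium/config.json selenium/node-firefox:2.53.0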
You can also take a look at entry_point.sh, either on github or on the running container:
/opt/bin/entry_point.sh
You can run bash on the node container via:
sudo docker exec -i -t <container> bash
This will let you see how SE_OPTS is used and how config.json is generated. Note that config.json is generated only if you don't supply one:
/opt/bin/generate_config
By examining generate_config you can see quite a few ENV vars such as:
FIREFOX_VERSION, NODE_MAX_INSTANCES, NODE_APPLICATION_NAME etc.
This leads to the third way to set capabilities, which is to set the environment variables used by generate_config, in my case NODE_APPLICATION_NAME:
docker run -d -e "NODE_APPLICATION_NAME=Testing123" --link selenium-hub:hub selenium/node-firefox:2.53.0
Finally, when using SE_OPTS, be careful not to accidentally change values, specifically the browser version. You can see by looking at entry_point.sh that the browser version is calculated:
FIREFOX_VERSION=$( firefox -version | cut -d " " -f 3 )
If you change it to something else you will not get the results you are looking for.
I'm running Docker Toolbox on my Windows 7 computer, and I'm trying to access Docker from outside (the Windows desktop) so I can get a GUI app working (let's say we test with Firefox).
As you all know, Docker does not come with an X server, so the solution I found is to install xcygwin to handle X11 and then run the container via SSH so it can be displayed on my Windows machine.
The problem is I can't set the display right. I do:
export DISPLAY=:0.0
I also tried a few other options, like exporting the host's IP, exporting localhost, etc. Then I SSH into my Docker machine by running:
docker-machine ssh default -X
(the -X is supposed to activate X11 forwarding, if I'm not wrong)
Now I'm inside my Docker machine, so I simply try to run my magic Firefox container with something like this:
docker run --rm -e DISPLAY=$DISPLAY devurandom/firefox
I get the expected Error: cannot open display.
Right! I didn't set the display in my Docker machine! So I did:
DISPLAY=0.0
I get Cannot open display: 0.0!
I also tried this one, though I couldn't figure out where the path is coming from, but anyway:
docker run -ti -v /tmp/serverX:/tmp/ServerX -e DISPLAY=$DISPLAY ...
No luck. Does anyone know how to fix this?