I have seen many similar issues to this but none seem to resolve or describe my exact issue.
I have configured an Azure DevOps pipeline to use a container like below:
container:
  image: ptrthomas/karate-chrome
  options: --cap-add=SYS_ADMIN
I have uploaded the contents of the example from the jobserver demo to a repository, and then I run the following:
steps:
  - script: mvn clean test -DargLine='-Dkarate.env=docker' -Dtest=WebRunner
It is my understanding (and I can see from the logs) that the files are loaded into the container and the script command is executed inside it. So that script step is the equivalent of docker exec -it -w /src karate mvn clean test -DargLine='-Dkarate.env=docker' -Dtest=WebRunner, just without having to exec into the container.
When I run the example locally it executes the tests with no issues, but in Azure DevOps it fails at the point where the tests actually start running, throwing this error:
14:16:37.388 [main] ERROR com.intuit.karate - karate.org.apache.http.conn.HttpHostConnectException: Connect to localhost:9222 [localhost/127.0.0.1] failed: Connection refused (Connection refused), http call failed after 2 milliseconds for url: http://localhost:9222/json
14:16:39.388 [main] DEBUG com.intuit.karate.shell.Command - attempt #4 waiting for http to be ready at: http://localhost:9222/json
14:16:39.391 [main] DEBUG com.intuit.karate - request: 5 > GET http://localhost:9222/json
5 > Host: localhost:9222
5 > Connection: Keep-Alive
5 > User-Agent: Apache-HttpClient/4.5.13 (Java/1.8.0_275)
5 > Accept-Encoding: gzip,deflate
Looking at other issues, there have been suggestions to specify the driver in the feature files with this line:
* configure driver = { type: 'chrome', executable: 'chrome' }
but a) that hasn't worked for me, and b) shouldn't the karate-chrome Docker image make this configuration unnecessary, since it should be no different from the container I run locally?
Any help appreciated!
Thanks
The only thing I can think of is that the Azure config does not call the ENTRYPOINT of the image.
Maybe you should try to create a container from scratch (that does extensive logging) and see what happens. Use the Karate one as a reference.
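If that is what's happening, one workaround to try is to launch the image's startup script by hand before running the tests. This is an untested sketch: the /entrypoint.sh path and the sleep duration are assumptions, so check the karate-chrome Dockerfile for the actual ENTRYPOINT.

steps:
  - script: |
      # start whatever the ENTRYPOINT would normally start (supervisord, Chrome, ...)
      nohup /entrypoint.sh > /tmp/entrypoint.log 2>&1 &
      # crude wait for Chrome's debug port 9222 to come up
      sleep 15
      mvn clean test -DargLine='-Dkarate.env=docker' -Dtest=WebRunner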
Related
I have a code repo in which I want to support both ES 5 and ES 7. The code detects which ES version is running via environment variables and uses the corresponding connectors. For the integration tests that run in GitLab CI, I want to run both ES5 and ES7 to test both sets of connectors. I have defined my ES images like so:
- name: docker.elastic.co/elasticsearch/elasticsearch:5.6.0
  alias: elastic
  command:
    - /bin/env
    - 'discovery.type=single-node'
    - 'xpack.security.enabled=false'
    - /bin/bash
    - bin/es-docker
- name: docker.elastic.co/elasticsearch/elasticsearch:7.10.2
  alias: elastic7
  command:
    - port 9202
    - /bin/env
    - 'discovery.type=single-node'
    - 'xpack.security.enabled=false'
    - 'http.port=9202'
    - /bin/bash
    - bin/es-docker
But my ES7 is not able to run on the given port, and I get errors like this in my coverage stage:
*** WARNING: Service runner-ssasmqme-project-5918-concurrent-0-934a00b2fdd833db-docker.elastic.co__elasticsearch__elasticsearch-4 probably didn't start properly.
Health check error:
service "runner-ssasmqme-project-5918-concurrent-0-934a00b2fdd833db-docker.elastic.co__elasticsearch__elasticsearch-4-wait-for-service" timeout
Health check container logs:
Service container logs:
2023-02-07T07:59:39.562478117Z /usr/local/bin/docker-entrypoint.sh: line 37: exec: port 9202: not found
How do I run multiple ES versions on different ports in my GitLab CI?
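A hint from the entrypoint error above: each item under command: is handed to the image's entrypoint as a separate argument, so port 9202 was treated as a program to exec. One possible direction, as an untested sketch (running bin/elasticsearch directly as the service command and the -E settings flags are assumptions to verify; and since each service already gets its own alias, a non-default port may not be needed at all, reaching them as http://elastic:9200 and http://elastic7:9200):

- name: docker.elastic.co/elasticsearch/elasticsearch:7.10.2
  alias: elastic7
  # run elasticsearch directly and pass settings as -E options;
  # drop -Ehttp.port=9202 if addressing it as http://elastic7:9200 is acceptable
  command: ["bin/elasticsearch", "-Ediscovery.type=single-node", "-Expack.security.enabled=false", "-Ehttp.port=9202"]

Keeping the default port may also play nicer with the service health check, which, as far as I know, probes the ports the image exposes.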
Question was:
I would like to run Karate UI tests using the driverTarget option to test my Java Play app, which is running locally during the same job with sbt run.
I have a simple assertion to check for a property, but whenever the test runs I keep getting "description":"TypeError: Cannot read property 'getAttribute' of null". This is my karate-config.js:
if (env === 'ci') {
  karate.log('using environment:', env);
  karate.configure('driverTarget', {
    docker: 'justinribeiro/chrome-headless',
    showDriverLog: true
  });
}
This is my test scenario:
Scenario: test 1: some test
  Given driver 'http://localhost:9000'
  And waitUntil("document.readyState == 'complete'")
  And match attribute('some selector', 'some attribute') == 'something'
My guess is that because justinribeiro/chrome-headless is running in its own container, localhost:9000 is different in the container compared to what's running outside of it.
Is there any workaround for this? Thanks!
A Docker container cannot talk to a localhost port on the host, exactly as the question guessed: "My guess is that because justinribeiro/chrome-headless is running in its own container, localhost:9000 is different in the container compared to what's running outside of it."
To get around this and let the Docker container reach the app running on the host's localhost port, use the hostname host.docker.internal.
Change to make:
From: Given driver 'http://localhost:9000'
To: Given driver 'http://host.docker.internal:9000'
Additionally, I was able to use the ptrthomas/karate-chrome image in CI (GitLab) by adding the following to my gitlab-ci.yml file:
stages:
  - uiTest

featureOne:
  stage: uiTest
  image: docker:latest
  cache:
    paths:
      - .m2/repository/
  services:
    - docker:dind
  script:
    - docker run --name karate --rm --cap-add=SYS_ADMIN -v "$PWD":/karate -v "$HOME"/.m2:/root/.m2 ptrthomas/karate-chrome &
    - sleep 45
    - docker exec -w /karate karate mvn test -DargLine='-Dkarate.env=docker' -Dtest=testParallel
  allow_failure: true
  artifacts:
    paths:
      - reports
      - ${CLOUD_APP_NAME}.log
My karate-config.js file looks like this:
if (karate.env == 'docker') {
  karate.configure('driver', {
    type: 'chrome',
    showDriverLog: true,
    start: false,
    beforeStart: 'supervisorctl start ffmpeg',
    afterStop: 'supervisorctl stop ffmpeg',
    videoFile: '/tmp/karate.mp4'
  });
}
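One caveat about host.docker.internal that may matter here (an assumption to verify for your environment): on Linux hosts the name is not defined inside containers automatically. With Docker 20.10+ it can be mapped explicitly when the container is started, for example as an untested variation of the docker run line above:

script:
  - >
    docker run --name karate --rm --cap-add=SYS_ADMIN
    --add-host=host.docker.internal:host-gateway
    -v "$PWD":/karate -v "$HOME"/.m2:/root/.m2
    ptrthomas/karate-chrome &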
I'm using GitLab CI/CD to build some projects using a single runner (for now) on Docker (the runner itself is a Docker container, so I guess this is Docker-in-Docker).
My problem is that I can't use my own nexus/npm repository while building...
npm install --registry=http://153.89.23.53:8082/repository/npm-all
npm ERR! code EHOSTUNREACH
npm ERR! errno EHOSTUNREACH
npm ERR! request to http://153.89.23.53:8082/repository/npm-all/typescript/-/typescript-3.6.5.tgz failed, reason: connect EHOSTUNREACH 153.89.23.53:8082
The same runner on another server works perfectly, but it doesn't work when running on the same server that hosts the Nexus (everything is container-based).
The GitLab runner is using the host network.
If I connect to the runner container and try to reach 153.89.23.53:8082 (Nexus), it works:
root@62591008a000:/# wget http://153.89.23.53:8082
--2020-07-13 09:56:16-- http://153.89.23.53:8082/
Connecting to 153.89.23.53:8082... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7952 (7.8K) [text/html]
Saving to: 'index.html'
index.html 100%[===========================================================================================>] 7.77K --.-KB/s in 0s
2020-07-13 09:56:16 (742 MB/s) - 'index.html' saved [7952/7952]
So I guess the problem occurs in the "second docker container", the one used inside the runner... but I have no idea what I should change.
Note: I could probably make the GitLab runner join the Nexus network and use internal IPs, but this would break the scripts if the runner is started on other servers...
OK, I found the solution.
There is a network_mode setting that can be set in the runner configuration. The default value is bridge, not host.
config.toml:
[runners.docker]
...
volumes = ["/cache"]
network_mode = "host"
There are a lot of posts around this subject, but none of them seems to help.
I have an application running on a WildFly server inside a Docker container, and for some reason I cannot connect my remote debugger to it.
It is a WildFly 11 server that has been started with this command:
/opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0 -c standalone.xml --debug 9999;
And in my standalone.xml I have this:
<socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>
The console output seems promising:
Listening for transport dt_socket at address: 9999
I can even access the admin console with the credentials admin:admin on localhost:9990/console
However, IntelliJ refuses to connect. I've created a remote JBoss Server configuration that, in the Server tab, points to localhost with management port 9990, and in the Startup/Connection tab I've entered 9999 as the remote socket port.
The Docker image exposes ports 9999 and 9990, and the docker-compose file binds those ports as-is.
Even with all of this IntelliJ throws this message when trying to connect:
Error running 'remote':
Unable to open debugger port (localhost:9999): java.io.IOException "handshake failed - connection prematurally closed"
followed by
Error running 'remote':
Unable to connect to the localhost:9990, reason:
com.intellij.javaee.process.common.WrappedException: java.io.IOException: java.net.ConnectException: WFLYPRT0053: Could not connect to remote+http://localhost:9990. The connection failed
I'm completely lost as to what the issue might be...
An interesting addition: after IntelliJ fails, if I invalidate caches and restart, WildFly reprints the message saying that it is listening on port 9999.
In case someone else comes to this thread with the same issue in the future, I found this solution here:
https://github.com/jboss-dockerfiles/wildfly/issues/91#issuecomment-450192272
Basically, apart from the --debug parameter, you also need to pass *:8787.
Dockerfile:
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0", "--debug", "*:8787"]
docker-compose:
ports:
- "8080:8080"
- "8787:8787"
- "9990:9990"
command: /opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0 --debug *:8787
I have not tested the docker-compose solution, as my solution was in the Dockerfile.
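For reference, combining the two fragments above into a single docker-compose service might look like this (untested sketch; the service and image names are placeholders):

services:
  wildfly:
    image: my-wildfly-app   # placeholder for your application image
    command: /opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0 --debug *:8787
    ports:
      - "8080:8080"
      - "8787:8787"
      - "9990:9990"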
Not sure if this can be seen as an answer, since it goes around the problem.
But the way I solved this was by adding a "pure" Remote configuration in IntelliJ instead of a JBoss remote one. This means that it won't automagically deploy, but I'm fine with that.
In CircleCI I run an app that I would like to run the tests against:
test:
  pre:
    # run app
    - ./gradlew bootRun -Dgrails.env=dev:
        background: true
    - sleep 40
  override:
    - ./gradlew test
Locally the app is accessible at http://localhost:8080, and I can see the app start up on CircleCI.
I thought I would override the localhost host entry:
machine:
  # Override /etc/hosts
  hosts:
    localhost: 127.0.0.1
My tests work correctly locally. On CircleCI they always fail to connect when calling new HttpPost("http://localhost:8080/api"), with this error:
org.apache.http.conn.HttpHostConnectException at SendMessageSpec.groovy:44
Caused by: java.net.ConnectException at SendMessageSpec.groovy:44
I had to increase the sleep time to something unreasonably big:
- sleep 480
I think I'll have a look at how to block tests until the app is started.
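One way to avoid guessing at the sleep (an untested sketch in the same CircleCI 1.0 syntax as above; the curl loop and its roughly four-minute timeout are assumptions) is to poll the port until the app answers before the tests run:

test:
  pre:
    # run app in the background
    - ./gradlew bootRun -Dgrails.env=dev:
        background: true
    # wait until something answers on 8080 instead of using a fixed sleep
    - for i in $(seq 1 120); do curl -s -o /dev/null http://localhost:8080 && break; sleep 2; done
  override:
    - ./gradlew test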