I am trying to run a GitLab pipeline that runs my test cases using a Maven image. My test cases use Testcontainers, but when I try to run Testcontainers inside the Maven image it doesn't work. I tried a couple of solutions from different online sources, but nothing worked.
.gitlab-ci.yml
services:
  - docker:dind

variables:
  # Instruct Testcontainers to use the daemon of DinD.
  DOCKER_HOST: "tcp://docker:2375"
  # Instruct Docker not to start over TLS.
  DOCKER_TLS_CERTDIR: ""
  # Improve performance with overlayfs.
  DOCKER_DRIVER: overlay2

test:
  image: maven:3.8.2-jdk-11
  stage: test
  script:
    - chmod -R 777 /var/
    - mvn -e test -f project_folder/pom.xml
The error I am getting is:
10:58:18.653 [main] DEBUG
org.testcontainers.dockerclient.DockerClientProviderStrategy -
EnvironmentAndSystemPropertyClientProviderStrategy: failed with
exception TimeoutException (Timeout waiting for result with
exception). Root cause ConnectException (Connection refused
(Connection refused))
10:58:18.654 [main] DEBUG
org.testcontainers.dockerclient.DockerClientProviderStrategy -
UnixSocketClientProviderStrategy: failed with exception
InvalidConfigurationException (Could not find unix domain socket).
Root cause NoSuchFileException (/var/run/docker.sock)
10:58:18.655 [main] ERROR
org.testcontainers.dockerclient.DockerClientProviderStrategy - Could
not find a valid Docker environment. Please check configuration.
Attempted configurations were:
10:58:18.655 [main] ERROR
org.testcontainers.dockerclient.DockerClientProviderStrategy -
EnvironmentAndSystemPropertyClientProviderStrategy: failed with
exception TimeoutException (Timeout waiting for result with
exception). Root cause ConnectException (Connection refused
(Connection refused))
10:58:18.655 [main] ERROR
org.testcontainers.dockerclient.DockerClientProviderStrategy -
UnixSocketClientProviderStrategy: failed with exception
InvalidConfigurationException (Could not find unix domain socket).
Root cause NoSuchFileException (/var/run/docker.sock)
10:58:18.655 [main] ERROR
org.testcontainers.dockerclient.DockerClientProviderStrategy - As no
valid configuration was found, execution cannot continue
10:58:18.658 [main] ERROR ***********.***Service - Error in starting
Container : java.lang.IllegalStateException: Could not find a valid
Docker environment.
It seems like your service is not exposed correctly. I believe this is a TLS issue with your Docker socket. Have you tried starting your DinD service with TLS explicitly disabled?
variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""

services:
  - name: docker:20-dind
    alias: docker
    command: ["--tls=false"] # this right here
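Putting both pieces together, a full job might look like the sketch below. This is only a minimal sketch assembled from the snippets above: the image tag, project path, and variables come from the question, and the chmod workaround is left out on the assumption that it is not needed once Testcontainers can reach the daemon over TCP.

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2
  # Disable TLS so the daemon serves plain HTTP on port 2375.
  DOCKER_TLS_CERTDIR: ""

services:
  - name: docker:20-dind
    alias: docker
    command: ["--tls=false"]

test:
  image: maven:3.8.2-jdk-11
  stage: test
  script:
    - mvn -e test -f project_folder/pom.xml

With --tls=false the daemon listens for plain HTTP on 2375, which matches DOCKER_HOST, so Testcontainers' EnvironmentAndSystemPropertyClientProviderStrategy can connect instead of falling back to the missing /var/run/docker.sock.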
I have seen many similar issues to this but none seem to resolve or describe my exact issue.
I have configured an azure devops pipeline to use a container like below:
container:
image: ptrthomas/karate-chrome
options: --cap-add=SYS_ADMIN
I have uploaded the contents of the example from the jobserver demo to a repository and then run the following:
steps:
  - script: mvn clean test -DargLine='-Dkarate.env=docker' -Dtest=WebRunner
It is my understanding (and I can see from the logs) that the files are loaded into the container and the script command is being executed inside the container. So that script command is the equivalent of docker exec -it -w /src karate mvn clean test -DargLine='-Dkarate.env=docker' -Dtest=WebRunner just without having to exec into the container.
When I run the example locally it executes the tests with no issues but in azure dev ops it fails at the point the tests actually start running, throwing this error:
14:16:37.388 [main] ERROR com.intuit.karate - karate.org.apache.http.conn.HttpHostConnectException: Connect to localhost:9222 [localhost/127.0.0.1] failed: Connection refused (Connection refused), http call failed after 2 milliseconds for url: http://localhost:9222/json
14:16:39.388 [main] DEBUG com.intuit.karate.shell.Command - attempt #4 waiting for http to be ready at: http://localhost:9222/json
14:16:39.391 [main] DEBUG com.intuit.karate - request: 5 > GET http://localhost:9222/json
5 > Host: localhost:9222
5 > Connection: Keep-Alive
5 > User-Agent: Apache-HttpClient/4.5.13 (Java/1.8.0_275)
5 > Accept-Encoding: gzip,deflate
Looking at other issues there have been suggestions to specify the driver in the feature files with this line:
* configure driver = { type: 'chrome', executable: 'chrome' }
but a) that hasn't worked for me and b) shouldn't the karate-chrome docker image render this configuration unnecessary as it should be no different than the container I run locally?
Any help appreciated!
Thanks
The only thing I can think of is that the Azure config does not call the ENTRYPOINT of the image.
Maybe you should try to create a container from scratch (that does extensive logging) and see what happens. Use the Karate one as a reference.
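If the ENTRYPOINT really is being skipped, one workaround to try is starting the image's startup script yourself before the tests run. The sketch below is only an illustration: the /entrypoint.sh path and the sleep are assumptions about what the karate-chrome image provides, not something confirmed here.

container:
  image: ptrthomas/karate-chrome
  options: --cap-add=SYS_ADMIN

steps:
  - script: |
      # Assumption: the image normally starts Chrome (debug port 9222) from an
      # entrypoint script. Launch it in the background, give it time to come
      # up, then run the tests in the same step.
      nohup /entrypoint.sh > entrypoint.log 2>&1 &
      sleep 15
      mvn clean test -DargLine='-Dkarate.env=docker' -Dtest=WebRunner

If Chrome comes up this way, the Connection refused on http://localhost:9222/json should disappear; if not, the entrypoint.log output should at least show what is failing to start.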
When I deploy, my pipeline always fails with this error, but when I retry the job the deploy passes.
Could this be an error in the instances when they try to download the image?
Command exited with status 1.
(no stdout)
=== stderr ===
Cannot connect to the Docker daemon at tcp://localhost:2375. Is the docker daemon running?
I think it's the shared runners' fault; try checking this issue on GitLab.
Thanks for asking the question.
Please give your gitlab-runner user permission to use Docker:
sudo usermod -aG docker gitlab-runner
I'm facing an issue with my Molecule test. For context, I began studying this tool two days ago.
On an Ubuntu VM running with Vagrant, I have created a role, initialized Molecule's folder, and created a Testinfra test file (with the Docker provider).
The error occurs while my role's tasks are running: at the step that checks whether the service is running, it fails.
fatal: [instance]: FAILED! => {"changed": false, "msg": "Could not find the requested service httpd: "}
The role is designed to simply install two packages, including httpd, on a CentOS image.
When I log directly into the Molecule instance (so through Docker) and simply type systemctl, the error message is:
Failed to get D-Bus connection: Operation not permitted
As advised by geerlingguy, I have specified a volume mapped to the cgroup folder:
platforms:
  - name: instance
    #image: docker.io/pycontribs/centos:7
    image: geerlingguy/docker-${MOLECULE_DISTRO:-centos7}-ansible:latest
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
The error is not related to Testinfra, only to the built Docker image.
Could someone help me understand why I get this error message?
Is it because I'm on a VirtualBox VM run by Vagrant?
Thanks all for reading :-)
I have added this to my molecule.yml config, according to the Molecule documentation ( https://molecule.readthedocs.io/en/latest/examples.html#docker ):
platforms:
  - name: instance
    #image: docker.io/pycontribs/centos:7
    image: geerlingguy/docker-centos7-ansible:latest
    capabilities:
      - SYS_ADMIN
    command: /sbin/init
systemctl is working fine now.
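For completeness, combining the cgroup volume from the earlier attempt with the capability and init command above gives a platforms entry roughly like the sketch below. It is only a sketch assembled from the two snippets and the Molecule docs, not a verified configuration.

platforms:
  - name: instance
    image: geerlingguy/docker-centos7-ansible:latest
    command: /sbin/init                  # run systemd as PID 1 so systemctl works
    capabilities:
      - SYS_ADMIN                        # systemd needs this to manage its cgroups
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro # expose the host cgroup hierarchy read-only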
Local env:
MacOS 10.14.6
Docker Desktop 2.0.1.2
Docker Engine 19.03.2
Compose Engine 1.24.1
Testcontainers 1.12.1
I'm using Elastic search in an app, and I want to be able to use TestContainers in my integration tests. Sample code in a Play Framework app that uses ElasticSearch testcontainer:
private static final ElasticsearchContainer ES = new ElasticsearchContainer();

@BeforeAll
public static void setup() {
    ES.start();
}
This works when testing locally, but I want to be able to run this inside a Docker container to run on my CI server. I'm getting this exception when running the tests inside the Docker container:
[warn] o.t.u.RegistryAuthLocator - Failure when attempting to lookup auth config (dockerImageName: alpine:3.5, configFile: /root/.docker/config.json. Falling back to docker-java default behaviour. Exception message: /root/.docker/config.json (No such file or directory)
[warn] o.t.u.RegistryAuthLocator - Failure when attempting to lookup auth config (dockerImageName: quay.io/testcontainers/ryuk:0.2.3, configFile: /root/.docker/config.json. Falling back to docker-java default behaviour. Exception message: /root/.docker/config.json (No such file or directory)
ℹ︎ Checking the system...
✔ Docker version should be at least 1.6.0
✔ Docker environment should have more than 2GB free disk space
[warn] o.t.u.RegistryAuthLocator - Failure when attempting to lookup auth config (dockerImageName: docker.elastic.co/elasticsearch/elasticsearch:7.1.1, configFile: /root/.docker/config.json. Falling back to docker-java default behaviour. Exception message: /root/.docker/config.json (No such file or directory)
[error] d.e.c.1.1] - Could not start container
org.testcontainers.containers.ContainerLaunchException: Timed out waiting for URL to be accessible (http://172.17.0.1:32911/ should return HTTP [200])
at org.testcontainers.containers.wait.strategy.HttpWaitStrategy.waitUntilReady(HttpWaitStrategy.java:197)
at org.testcontainers.containers.wait.strategy.AbstractWaitStrategy.waitUntilReady(AbstractWaitStrategy.java:35)
at org.testcontainers.containers.GenericContainer.waitUntilContainerStarted(GenericContainer.java:675)
at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:332)
at org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:285)
at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:81)
at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:283)
at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:272)
at controllers.HomeControllerTest.setup(HomeControllerTest.java:56)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
I've read the instructions here: https://www.testcontainers.org/supported_docker_environment/continuous_integration/dind_patterns/
So my docker-compose.yml looks like the following (note: I've been testing with another ES container, as seen commented out below, but I have not been using it with this test; $INSTANCE is a random 16-character string for a particular test run):
version: '3'
services:
#  elasticsearch:
#    container_name: elasticsearch_${INSTANCE}
#    image: docker.elastic.co/elasticsearch/elasticsearch:6.7.2
#    ports:
#      - 9200:9200
#      - 9300:9300
#    command: elasticsearch -E transport.host=0.0.0.0
#    logging:
#      driver: 'none'
#    environment:
#      ES_JAVA_OPTS: "-Xms750m -Xmx750m"
  mainapp:
    container_name: mainapp_${INSTANCE}
    image: test_image:${INSTANCE}
    stop_signal: SIGKILL
    stdin_open: true
    tty: true
    working_dir: $PWD
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - $PWD:$PWD
    environment:
      ES_JAVA_OPTS: "-Xms1G -Xmx1G"
    command: /bin/bash /projectfolder/build/tests/wrapper.sh
I've also tried running my tests with this command but received the same error:
docker run -it --rm -v $PWD:$PWD -w $PWD -v /var/run/docker.sock:/var/run/docker.sock test_image:68F75D8FD4C7003772C7E52B87B774F5 /bin/bash /testproject/build/tests/wrapper.sh
I tried creating a postgres container the same way inside my testing container and had no issues. I've also tried making a GenericContainer with the Elasticsearch image with no luck.
I don't think this is a connection issue, because if I run curl 172.17.0.1:{port printed to test console} from inside my test container, I do get a valid Elasticsearch response with status code 200, so it almost seems like it's timing out trying to connect even though the connection is there.
Thanks.
I'm trying to launch filebeat using docker-compose (I intend to add other services later on) but every time I execute the docker-compose.yml file, the filebeat service always ends up with the following error:
filebeat_1 | 2019-08-01T14:01:02.750Z ERROR instance/beat.go:877 Exiting: 1 error: setting 'filebeat.prospectors' has been removed
filebeat_1 | Exiting: 1 error: setting 'filebeat.prospectors' has been removed
I discovered the error by accessing the docker-compose logs.
My docker-compose file is as simple as it can be at the moment. It simply calls a filebeat Dockerfile and launches the service immediately after.
Next to my Dockerfile for filebeat I have a simple config file (filebeat.yml), which is copied to the container, replacing the default filebeat.yml.
If I execute the Dockerfile using the docker command, the filebeat instance works just fine: it uses my config file and identifies the "output.json" file as well.
I'm currently using version 7.2 of Filebeat and I know that the "filebeat.prospectors" setting isn't used anymore. I also know for sure that this specific configuration isn't coming from my filebeat.yml file (you'll find it below).
It seems that, when using docker-compose, the container is accessing another configuration file instead of the one copied into the container by the Dockerfile, but so far I haven't been able to figure out how, why, or how to fix it...
Here's my docker-compose.yml file:
version: "3.7"
services:
filebeat:
build: "./filebeat"
command: filebeat -e -strict.perms=false
The filebeat.yml file:
filebeat.inputs:
  - paths:
      - '/usr/share/filebeat/*.json'
    fields_under_root: true
    fields:
      tags: ['json']

output:
  logstash:
    hosts: ['localhost:5044']
The Dockerfile:
FROM docker.elastic.co/beats/filebeat:7.2.0
COPY filebeat.yml /usr/share/filebeat/filebeat.yml
COPY output.json /usr/share/filebeat/output.json
USER root
RUN chown root:filebeat /usr/share/filebeat/filebeat.yml
RUN mkdir /usr/share/filebeat/dockerlogs
USER filebeat
The output I'm expecting should be similar to the following, which comes from the successful runs I get when executing it as a single container.
The ERROR is expected because I don't have logstash configured at the moment.
INFO crawler/crawler.go:72 Loading Inputs: 1
INFO log/input.go:148 Configured paths: [/usr/share/filebeat/*.json]
INFO input/input.go:114 Starting input of type: log; ID: 2772412032856660548
INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 1
INFO log/harvester.go:253 Harvester started for file: /usr/share/filebeat/output.json
INFO pipeline/output.go:95 Connecting to backoff(async(tcp://localhost:5044))
ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://localhost:5044)): dial tcp [::1]:5044: connect: cannot assign requested address
INFO pipeline/output.go:93 Attempting to reconnect to backoff(async(tcp://localhost:5044)) with 1 reconnect attempt(s)
ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://localhost:5044)): dial tcp [::1]:5044: connect: cannot assign requested address
INFO pipeline/output.go:93 Attempting to reconnect to backoff(async(tcp://localhost:5044)) with 2 reconnect attempt(s)
I managed to figure out what the problem was.
I needed to map the location of the config file and logs directory in the docker-compose file, using the volumes tag:
version: "3.7"
services:
filebeat:
build: "./filebeat"
command: filebeat -e -strict.perms=false
volumes:
- ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml
- ./filebeat/logs:/usr/share/filebeat/dockerlogs
Finally, I just had to execute the docker-compose command and everything started working properly:
docker-compose -f docker-compose.yml up -d