When I deploy, my pipeline always fails with this error, but when I retry the job the deploy passes.
Could this be an error on the instances when they try to download the image?
Command exited with status 1.
(no stdout)
=== stderr ===
Cannot connect to the Docker daemon at tcp://localhost:2375. Is the docker daemon running?
I think it's the shared runners' fault; try searching for this issue on GitLab.
Thanks for asking the question.
Grant the docker group permission to your gitlab-runner user:
sudo usermod -aG docker gitlab-runner
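If you're on shared runners (where you can't change the runner host), a common alternative is to run Docker-in-Docker as a service and point DOCKER_HOST at it. Below is only a minimal sketch of what that part of .gitlab-ci.yml might look like; the job name, stage, and image are placeholders for whatever your deploy job actually uses:

services:
  - docker:dind

variables:
  DOCKER_HOST: "tcp://docker:2375"
  DOCKER_TLS_CERTDIR: ""

deploy:
  stage: deploy
  image: docker:latest              # placeholder; any image with the docker CLI works
  script:
    - docker info                   # fails fast if the daemon is still unreachable
    - docker pull alpine:latest     # placeholder for the real deploy steps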
I have seen many similar issues to this but none seem to resolve or describe my exact issue.
I have configured an azure devops pipeline to use a container like below:
container:
  image: ptrthomas/karate-chrome
  options: --cap-add=SYS_ADMIN
I have uploaded the contents of the example from the jobserver demo to a repository and then run the following:
steps:
- script: mvn clean test -DargLine='-Dkarate.env=docker' -Dtest=WebRunner
It is my understanding (and I can see from the logs) that the files are loaded into the container and the script command is being executed inside the container. So that script command is the equivalent of docker exec -it -w /src karate mvn clean test -DargLine='-Dkarate.env=docker' -Dtest=WebRunner just without having to exec into the container.
When I run the example locally it executes the tests with no issues, but in Azure DevOps it fails at the point the tests actually start running, throwing this error:
14:16:37.388 [main] ERROR com.intuit.karate - karate.org.apache.http.conn.HttpHostConnectException: Connect to
localhost:9222 [localhost/127.0.0.1] failed: Connection refused
(Connection refused), http call failed after 2 milliseconds for url:
http://localhost:9222/json 14:16:39.388 [main] DEBUG
com.intuit.karate.shell.Command - attempt #4 waiting for http to be
ready at: http://localhost:9222/json 14:16:39.391 [main] DEBUG
com.intuit.karate - request: 5 > GET http://localhost:9222/json 5 >
Host: localhost:9222 5 > Connection: Keep-Alive 5 > User-Agent:
Apache-HttpClient/4.5.13 (Java/1.8.0_275) 5 > Accept-Encoding:
gzip,deflate
Looking at other issues there have been suggestions to specify the driver in the feature files with this line:
* configure driver = { type: 'chrome', executable: 'chrome' }
but a) that hasn't worked for me and b) shouldn't the karate-chrome docker image render this configuration unnecessary as it should be no different than the container I run locally?
Any help appreciated!
Thanks
The only thing I can think of is that the Azure config does not call the ENTRYPOINT of the image.
Maybe you should try to create a container from scratch (that does extensive logging) and see what happens. Use the Karate one as a reference.
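If it turns out the entrypoint really isn't being run, a rough sketch of a workaround would be to start it yourself inside the job before the tests run. The /entrypoint.sh path is an assumption on my side; verify the real script in the ptrthomas/karate-chrome Dockerfile:

steps:
  - script: |
      # Assumption: the image's entrypoint script is /entrypoint.sh;
      # check the ptrthomas/karate-chrome Dockerfile for the real path.
      nohup sh /entrypoint.sh > entrypoint.log 2>&1 &
      # give Chrome a moment to open the 9222 debug port that Karate polls
      sleep 10
      mvn clean test -DargLine='-Dkarate.env=docker' -Dtest=WebRunner
    displayName: Start the image entrypoint, then run the Karate tests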
I am trying to run a GitLab pipeline that executes my test cases using a Maven image. My test cases use Testcontainers, but when I try to run Testcontainers inside the Maven image it does not work. I tried a couple of solutions I found online, but nothing worked.
.gitlab-ci.yml
services:
  - docker:dind

variables:
  # Instruct Testcontainers to use the daemon of DinD.
  DOCKER_HOST: "tcp://docker:2375"
  # Instruct Docker not to start over TLS.
  DOCKER_TLS_CERTDIR: ""
  # Improve performance with overlayfs.
  DOCKER_DRIVER: overlay2

test:
  image: maven:3.8.2-jdk-11
  stage: test
  script:
    - chmod -R 777 /var/
    - mvn -e test -f project_folder/pom.xml
The error I am getting is:
10:58:18.653 [main] DEBUG
org.testcontainers.dockerclient.DockerClientProviderStrategy -
EnvironmentAndSystemPropertyClientProviderStrategy: failed with
exception TimeoutException (Timeout waiting for result with
exception). Root cause ConnectException (Connection refused
(Connection refused))
10:58:18.654 [main] DEBUG
org.testcontainers.dockerclient.DockerClientProviderStrategy -
UnixSocketClientProviderStrategy: failed with exception
InvalidConfigurationException (Could not find unix domain socket).
Root cause NoSuchFileException (/var/run/docker.sock)
10:58:18.655 [main] ERROR
org.testcontainers.dockerclient.DockerClientProviderStrategy - Could
not find a valid Docker environment. Please check configuration.
Attempted configurations were:
10:58:18.655 [main] ERROR
org.testcontainers.dockerclient.DockerClientProviderStrategy -
EnvironmentAndSystemPropertyClientProviderStrategy: failed with
exception TimeoutException (Timeout waiting for result with
exception). Root cause ConnectException (Connection refused
(Connection refused))
10:58:18.655 [main] ERROR
org.testcontainers.dockerclient.DockerClientProviderStrategy -
UnixSocketClientProviderStrategy: failed with exception
InvalidConfigurationException (Could not find unix domain socket).
Root cause NoSuchFileException (/var/run/docker.sock)
10:58:18.655 [main] ERROR
org.testcontainers.dockerclient.DockerClientProviderStrategy - As no
valid configuration was found, execution cannot continue
10:58:18.658 [main] ERROR ***********.***Service - Error in starting
Container : java.lang.IllegalStateException: Could not find a valid
Docker environment.
It seems like your service is not exposed correctly. I believe this is a TLS issue with your Docker socket. Have you tried starting your DinD service with TLS explicitly disabled?
variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""

services:
  - name: docker:20-dind
    alias: docker
    command: ["--tls=false"]  # this right here
I am getting this error after running skaffold dev.
Step 1/6 : FROM node:current-alpine3.11
exiting dev mode because first build failed: unable to stream build output: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:35889->192.168.49.1:53: i/o timeout. Please fix the Dockerfile and try again..
Here is my skaffold.yml:
apiVersion: skaffold/v2beta11
kind: Config
metadata:
  name: *****
build:
  artifacts:
    - image: 127.0.0.1:32000/auth
      context: auth
      docker:
        dockerfile: Dockerfile
deploy:
  kubectl:
    manifests:
      - infra/k8s/auth-depl.yaml
local:
  push: false
artifacts:
  - image: 127.0.0.1:32000/auth
    context: auth
    docker:
      dockerfile: Dockerfile
    sync:
      manual:
        - src: "src/**/*.ts"
          dest: .
I have tried all the solutions I saw online, including adding 8.8.8.8 as the DNS server, but the error still persists. I am running Ubuntu Linux and using Minikube locally. Please assist.
In this case:
minikube delete && minikube start
solved the problem, but you can start by restarting the Docker daemon. Since this is a Minikube cluster and Skaffold uses Minikube's Docker daemon for its builds, as suggested by Brian de Alwis in his comment, you may start with:
minikube stop && minikube start
or
minikube ssh
su
systemctl restart docker
I searched for similar errors and in many cases, e.g. here or in this thread, setting your DNS to something reliable like 8.8.8.8 may also help:
sudo echo "nameserver 8.8.8.8" >> /etc/resolv.conf
In case you use Minikube, you should first:
minikube ssh
su ### to become root
and then run:
echo "nameserver 8.8.8.8" >> /etc/resolv.conf
The following error message:
Please fix the Dockerfile and try again
may be somewhat misleading in cases like this, as the Dockerfile is probably totally fine, but as we can read in the other part:
lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:35889->192.168.49.1:53: i/o timeout.
it's definitely related to a failing DNS lookup. This is well described here as a known issue.
Get i/o timeout
Get https://index.docker.io/v1/repositories//images: dial tcp: lookup on :53: read udp :53: i/o timeout
Description
The DNS resolver configured on the host cannot resolve the registry’s
hostname.
GitHub link
N/A
Workaround
Retry the operation, or if the error persists, use another DNS
resolver. You can do this by updating your /etc/resolv.conf file
with these or other DNS servers:
nameserver 8.8.8.8
nameserver 8.8.4.4
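Another thing worth trying, since the pull is performed by the Docker daemon inside the Minikube VM: set the DNS servers on the daemon itself rather than in /etc/resolv.conf, which can be overwritten on restart. This is only a sketch and assumes the daemon picks up Docker's default /etc/docker/daemon.json config path:

minikube ssh
su                                   # become root inside the Minikube VM
cat > /etc/docker/daemon.json <<'EOF'
{
  "dns": ["8.8.8.8", "8.8.4.4"]
}
EOF
systemctl restart docker             # restart the daemon so it picks up the new DNS servers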
I am facing an issue with my Molecule test. For information, I began studying this tool two days ago.
On an Ubuntu VM running under Vagrant, I have created a role, initialized Molecule's folder, and created a Testinfra test file (with the Docker provider).
The error occurs while my role's tasks are running: at the step that checks whether the service is running, it fails.
fatal: [instance]: FAILED! => {"changed": false, "msg": "Could not find the requested service httpd: "}
The role is designed to simply install two packages, including httpd, on a CentOS image.
When I log directly into the Molecule instance (so through Docker) and simply type systemctl, the error message is:
Failed to get D-Bus connection: Operation not permitted
As advised by geerlingguy, I have specified a volume mapped to the cgroup folder:
platforms:
  - name: instance
    #image: docker.io/pycontribs/centos:7
    image: geerlingguy/docker-${MOLECULE_DISTRO:-centos7}-ansible:latest
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
The error is not related to Testinfra, but only to the built Docker image.
Could someone help me understand why I get this error message?
Is it because I'm on a VirtualBox VM run by Vagrant?
Thanks all for reading :-)
I have added this to my molecule.yml config, per the Molecule documentation (https://molecule.readthedocs.io/en/latest/examples.html#docker):
platforms:
  - name: instance
    #image: docker.io/pycontribs/centos:7
    image: geerlingguy/docker-centos7-ansible:latest
    capabilities:
      - SYS_ADMIN
    command: /sbin/init
systemctl is working fine now.
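For reference, a fuller molecule.yml sketch with the systemd-related settings that usually matter for CentOS 7 images could look like the following; whether you also need the cgroup volume depends on the image and your Docker version, so treat it as a starting point rather than a definitive config:

driver:
  name: docker
platforms:
  - name: instance
    image: geerlingguy/docker-centos7-ansible:latest
    command: /sbin/init            # boot systemd so services can be managed
    capabilities:
      - SYS_ADMIN
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    pre_build_image: true          # the geerlingguy images already have Ansible baked in
provisioner:
  name: ansible
verifier:
  name: testinfra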
I want to run this Docker Hub image locally: https://hub.docker.com/r/jhipster/jhipster-sample-app (which normally runs with npm start and gradlew) on Windows 10 Home using Docker Toolbox (which works fine).
I followed the instructions at: https://www.jhipster.tech/docker-compose/
and tried to run: $ docker-compose -f jhipster-sample-app/prod.yml up, but it gives me this error (although the image is there):
usuario#DESKTOP-GTCQCAR MINGW64 /c/Program Files/Docker Toolbox
$ docker-compose -f jhipster-sample-app/prod.yml up
ERROR: .FileNotFoundError: [Errno 2] No such file or directory: '.\\jhipster-sample-app/prod.yml'
NOTE: I also tried changing the tag, but with the same result. Why is it not finding the image, which is definitely there?
I also tried the Quick launch, running a simple JHipster application directly with Docker in the development profile: $ docker container run -d -p 8080:8080 -e SPRING_PROFILES_ACTIVE=dev jhipster/jhipster-sample-app
But I could not access the application at http://localhost:8080 (though the container is created and running).
I even tried to run it with $ docker run jhipster/jhipster-sample-app, getting this error:
2019-01-31 09:33:05.215 INFO 1 --- [ main]
i.g.j.s.JhipsterSampleApplicationApp : Starting JhipsterSampleApplicationApp on 596e926cb096 with PID 1 (/app.war started by root in /)
2019-01-31 09:33:05.252 INFO 1 --- [ main] i.g.j.s.JhipsterSampleApplicationApp : The following profiles are active: prod
2019-01-31 09:33:37.773 ERROR 1 --- [ main] com.zaxxer.hikari.pool.HikariPool : Hikari - Exception during pool initialization.
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
But I can run other images like $ docker run hello-world
So I feel kind of lost here and I do not know what I'm doing wrong. Thanks all! I'm new to Docker.
To run https://hub.docker.com/r/jhipster/jhipster-sample-app, you need to start the other containers such as the database. These are not packaged in the app container.
git clone https://github.com/jhipster/jhipster-sample-app.git
cd jhipster-sample-app
docker-compose -f src/main/docker/app.yml up -d
This will load the config from app.yml and start both the app and database containers.
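Also, since you're on Windows 10 Home with Docker Toolbox, containers run inside a VirtualBox VM, so published ports are not reachable at localhost on the host. Check the machine's IP and use it instead (assuming the default machine name):

docker-machine ip default
# typically prints something like 192.168.99.100;
# then open http://192.168.99.100:8080 instead of http://localhost:8080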