I'm new to Concourse and trying to set it up in my environment. I'm running Ubuntu 18.04 on VirtualBox 6.1.4 r136177 on a Windows host. I managed to get the node running and the Concourse worker set up, and I was able to access my Concourse dashboard successfully. The problem occurred when I tried to run the simple hello world pipeline outlined on this page: https://concourse-ci.org/hello-world-example.html
The error says:
ERRO[0004] check failed: get remote image: Get https://index.docker.io/v2/: dial tcp: lookup index.docker.io on [::1]:53: read udp [::1]:55989->[::1]:53: read: connection refused
Googling for similar errors suggests that VirtualBox might not be able to connect to the Docker registry. So I proceeded to install Docker on the system and ran the following command:
sudo docker run hello-world
This time Docker successfully pulled the image, so I don't think it is an issue with my VirtualBox. Has anyone experienced the same issue and found a solution?
UPDATES
The following question inspired me to build my own registry:
How to use a local docker image as resource in concourse-docker
I have configured my local Docker registry and have verified that it works by pulling my image from it. So I configured a simple Concourse pipeline to use my registry by modifying the hello world example:
---
jobs:
- name: job
  public: true
  plan:
  - task: simple-task
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: 127.0.0.1:5000/busybox
          tag: latest
          insecure_registries: [ "127.0.0.1:5000" ]
      run:
        path: echo
        args: ["Hello, world!"]
But then I ran into the following error:
resource script '/opt/resource/check []' failed: exit status 1
stderr:
failed to ping registry: 2 error(s) occurred:
* ping https: Get https://127.0.0.1:5000/v2: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
* ping http: Get http://127.0.0.1:5000/v2: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
That 127.0.0.1 is likely referring to the IP of the check container itself, not the machine where the Concourse worker is running (unless you are using houdini as the container strategy). Try getting the actual IP of the machine running the Docker registry and use that instead.
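For example, assuming the registry host's LAN address is 192.168.1.10 (substitute your own), the image_resource source from the pipeline above might look something like this:

      image_resource:
        type: docker-image
        source:
          repository: 192.168.1.10:5000/busybox
          tag: latest
          insecure_registries: [ "192.168.1.10:5000" ]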
I faced the same problem. In my case, the Concourse worker was installed on a QEMU VM inside Proxmox.
When starting a job with the fly -t tutorials trigger-job --job hello-world/hello-world-job --watch command (given in the tutorial), the worker answered: ERRO[0030] checking origin busybox failed: initialize transport: Get "https://index.docker.io/v2/": dial tcp xx.xx.xx.xx:443: i/o timeout.
This means that the worker can't reach any DNS server.
There are two ways to solve this problem.
First option: run everything through docker-compose. The docker-compose.yml there has a setting for the worker, CONCOURSE_GARDEN_DNS_PROXY_ENABLE: "true", and everything works fine. However, I tried to specify the same setting when running the worker directly inside the VM (without Docker), and that did not fix the problem.
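For reference, a minimal excerpt of how that setting can appear on the worker service in docker-compose.yml (the service and image names here are placeholders; check the compose file you actually use):

  worker:
    image: concourse/concourse
    command: worker
    privileged: true
    environment:
      CONCOURSE_GARDEN_DNS_PROXY_ENABLE: "true"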
Second option (without Docker):
Use these settings for your worker:
CONCOURSE_RUNTIME=containerd
CONCOURSE_CONTAINERD_EXTERNAL_IP=192.168.1.106
CONCOURSE_CONTAINERD_DNS_SERVER=192.168.1.1
CONCOURSE_CONTAINERD_ALLOW_HOST_ACCESS=true
CONCOURSE_CONTAINERD_DNS_PROXY_ENABLE=true
After setting these parameters, my worker could see the DNS server and access the Docker registry.
Replace 192.168.1.106 with your machine's address on your local network, and 192.168.1.1 with your DNS server.
These parameters are documented here. You can also see these descriptions with the concourse worker --help command.
Containerd Container Networking:
--containerd-external-ip= IP address to use to reach container's mapped ports. Autodetected if not specified. [$CONCOURSE_CONTAINERD_EXTERNAL_IP]
--containerd-dns-server= DNS server IP address to use instead of automatically determined servers. Can be specified multiple times. [$CONCOURSE_CONTAINERD_DNS_SERVER]
--containerd-restricted-network= Network ranges to which traffic from containers will be restricted. Can be specified multiple times. [$CONCOURSE_CONTAINERD_RESTRICTED_NETWORK]
--containerd-network-pool= Network range to use for dynamically allocated container subnets. (default: 10.80.0.0/16) [$CONCOURSE_CONTAINERD_NETWORK_POOL]
--containerd-mtu= MTU size for container network interfaces. Defaults to the MTU of the interface used for outbound access by the host. [$CONCOURSE_CONTAINERD_MTU]
--containerd-allow-host-access Allow containers to reach the host's network. This is turned off by default. [$CONCOURSE_CONTAINERD_ALLOW_HOST_ACCESS]
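As a rough sketch only (the paths, key files, and TSA address below are placeholders; this assumes the worker binary is started directly on the VM), the variables can simply be exported before launching the worker:

export CONCOURSE_RUNTIME=containerd
export CONCOURSE_CONTAINERD_EXTERNAL_IP=192.168.1.106    # your machine's LAN address
export CONCOURSE_CONTAINERD_DNS_SERVER=192.168.1.1       # your DNS server
export CONCOURSE_CONTAINERD_ALLOW_HOST_ACCESS=true
export CONCOURSE_CONTAINERD_DNS_PROXY_ENABLE=true

concourse worker \
  --work-dir /opt/concourse/worker \
  --tsa-host 127.0.0.1:2222 \
  --tsa-public-key /opt/concourse/keys/tsa_host_key.pub \
  --tsa-worker-private-key /opt/concourse/keys/worker_key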
I had the same issue. I cloned this repo: https://github.com/concourse/concourse-docker, followed the directions in the README to generate the keys, and then used the docker-compose.yml file from the clone to spin up the containers.
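Roughly, the steps were something like the following (the key-generation script name is whatever the repo's README points at; it may change between versions):

git clone https://github.com/concourse/concourse-docker.git
cd concourse-docker
./keys/generate          # generate the web/worker key pairs, per the README
docker-compose up -d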
Related
I am currently switching from Docker to Podman. Usually that works just fine. However, on one of my many company laptops I ran into the following error:
PS C:\WINDOWS\system32> podman pull quay.io/podman/hello
Trying to pull quay.io/podman/hello:latest...
Error: initializing source docker://quay.io/podman/hello:latest: pinging container registry quay.io: Get "https://quay.io/v2/": dial tcp 54.163.152.191:443: i/o timeout
The above error I also get with other container registries. I tried:
podman machine set --rootful
removing Hyper-V and WSL
changing resolv.conf and adding a nameserver (also tried 8.8.8.8; see the sketch after this list)
looking into Symantec Endpoint Protection logs (the connection is not blocked)
switching between WSL 1 and 2
some suggestions from this thread (cf. No internet connection on WSL Ubuntu (Windows Subsystem for Linux))
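For clarity, the resolv.conf change was roughly the following, done inside the WSL distribution (the wsl.conf part is the usual companion setting so WSL does not regenerate resolv.conf on restart; the nameserver address is just an example):

# /etc/wsl.conf
[network]
generateResolvConf = false

# /etc/resolv.conf
nameserver 8.8.8.8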
I also do not get any internet access inside, e.g., an Ubuntu WSL VM. In PowerShell, running e.g. curl google.com works just fine.
For completeness' sake, with the resolv.conf changes from the third option I get:
podman pull quay.io/podman/hello
Trying to pull quay.io/podman/hello:latest...
Error: initializing source docker://quay.io/podman/hello:latest: pinging container registry quay.io: Get "https://quay.io/v2/": dial tcp: lookup quay.io: Temporary failure in name resolution
Update:
I reinstalled Docker and got a similar issue:
docker container run hello-world
Unable to find image 'hello-world:latest' locally
docker: initializing source docker://hello-world:latest: pinging container registry registry-1.docker.io: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io: Temporary failure in name resolution.
See 'docker run --help'.
I have a CloudWatch agent installed on an EC2 instance, and also a Docker image on the instance.
From the EC2 instance, I can successfully send logs to the endpoint (0.0.0.0:25888) and on to CloudWatch. But when I get into the container using docker exec -it <container id> bash and try to publish the same logs from inside the container, it fails with the following error:
2861 2022-07-21 00:11:18,686 ERROR (10.0.1.124,1385:MainThread) aws_embedded_metrics.sinks.tcp_client: Failed to connect to the socket. [Errno 111] Connection refused
2862 2022-07-21 00:11:18,686 INFO (10.0.1.124,1385:MainThread) aws_embedded_metrics.sinks.agent_sink: Parsed agent endpoint (tcp) 0.0.0.0:25888
Does anyone know the root cause here, or have any debugging clues? Thanks in advance!
I ran into this as well. My solution (workaround?) was to:
Make sure the CloudWatch agent is listening on udp://0.0.0.0:25888 and not 127.0.0.1 (the default). The CWAgent docs I have seen don't have any examples of how to achieve this.
Once inside the container, use the Docker host IP to send messages. For me this was export AWS_EMF_AGENT_ENDPOINT=udp://172.17.0.1:25888, as I was using aws-embedded-metrics-python. YMMV depending on the underlying library that you use.
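As an illustration only (assuming aws-embedded-metrics-python, as in my case; 172.17.0.1 is the default docker0 bridge address and may differ on your host), publishing a metric from inside the container can look like this:

import os

# Point the EMF library at the agent on the Docker host rather than localhost.
os.environ["AWS_EMF_AGENT_ENDPOINT"] = "udp://172.17.0.1:25888"

from aws_embedded_metrics import metric_scope

@metric_scope
def publish(metrics):
    # Namespace, dimension, and metric names here are made up for the example.
    metrics.set_namespace("MyApp")
    metrics.put_dimensions({"Service": "demo"})
    metrics.put_metric("ProcessedRecords", 1, "Count")

publish()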
I have a running k3d Kubernetes cluster:
$ kubectl cluster-info
Kubernetes master is running at https://0.0.0.0:6550
CoreDNS is running at https://0.0.0.0:6550/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:6550/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
I have a Python script that uses the Kubernetes client API and manages namespaces, deployments, pods, etc. This works just fine in my local environment because I have all the necessary Python modules installed and have direct access to my local k8s cluster. My goal is to containerize it so that this same script can be run successfully by my colleagues on their systems.
While running the same python script in a docker container, I receive connection errors:
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='172.17.0.1', port=6550): Max retries exceeded with url: /api/v1/namespaces (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f8b637c5d68>: Failed to establish a new connection: [Errno 113] No route to host',))
172.17.0.1 is my docker0 bridge address, so I assumed that it would resolve or forward traffic to my localhost. I have tried loading the k8s configuration from my local .kube/config, which references server: https://0.0.0.0:6550, and also creating a separate config file with server: https://172.17.0.1:6550; both give the same No route to host error (with the respective IP address in HTTPSConnectionPool(host=...)).
One idea I was pursuing was running a socat process outside the container and tunneling traffic from inside the container across a bridge socket mounted in from the outside, but it looks like the Docker image I need to use does not have socat installed. However, I get the feeling that the real solution should be much simpler than all of this.
Certainly there have been other instances of a docker container needing access to a k8s cluster served outside of the docker network. How is this connection typically established?
Use the docker network command to create a predefined network.
You can pass --network to k3d to attach the cluster to an existing Docker network, and pass the same flag to docker run to attach another container to that network.
https://k3d.io/internals/networking/
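A rough sketch of what that can look like (the network, cluster, and image names here are only examples; the exact syntax depends on your k3d version):

docker network create my-net
k3d cluster create my-cluster --network my-net
docker run --rm -it --network my-net my-python-client-image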
I have slightly modified this example: https://github.com/grpc/grpc-web/tree/master/net/grpc/gateway/examples/echo. I am running Envoy in a Docker container with exposed port 8080 (running this proxy server is required because the browser can't speak directly to a backend gRPC service). I am running all the services on my localhost (the host machine of the Envoy Docker container). However, I cannot seem to connect Envoy in the Docker container to the services running on the host machine.
I compiled grpc_cli in the container, and when I run grpc_cli ls 192.168.1.10:9000 (the host's LAN IP address and the port the service is running on), I get:
root@bdc9ac396a87:~/grpc# ./bins/opt/grpc_cli ls 192.168.1.10:9000
Received an error when querying services endpoint.
ServerReflectionInfo rpc failed. Error code: 14, message: failed to connect to all addresses, debug info: {"created":"@1569023274.866465052","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3876,"referenced_errors":[{"created":"@1569023274.866463178","description":"failed to connect to all addresses","file":"src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc","file_line":395,"grpc_status":14}]}
I get an almost identical error when I use the IP address of the docker0 interface, which should also provide a connection to the host machine.
root@bdc9ac396a87:~/grpc# ./bins/opt/grpc_cli ls 172.17.0.1:9000
Received an error when querying services endpoint.
ServerReflectionInfo rpc failed. Error code: 14, message: failed to connect to all addresses, debug info: {"created":"@1569022455.801913949","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3876,"referenced_errors":[{"created":"@1569022455.801910006","description":"failed to connect to all addresses","file":"src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc","file_line":395,"grpc_status":14}]}
However, when I run a simple HTTP server on the host with
python -m http.server
I can run the following commands from the container just fine:
wget 172.17.0.1:8000/test.txt // works
wget 192.168.1.10:8000/test.txt // works
A client on the host (not in the container) connects and works just fine with the service, so it's not a server problem.
Does Docker block certain types of traffic? I noticed that in the example the server was placed in another Docker container, and it worked (it also worked locally for me), but I'd prefer to have my services running on my host machine while I build and test them. Is there a setting somewhere to enable gRPC from the container to a service on the host machine?
Docker version 1.13.1, build 47e2230/1.13.1
Fedora 29
I am looking to build a sample project with docker:
docker build -t helloworld .
But then, I get the following:
>docker build -t helloworld .
Sending build context to Docker daemon 2.048kB
Step 1/2 : FROM java
Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 10.0.2.3:53: no such host
I am behind a corporate proxy. I guess I need to configure an HTTP/HTTPS proxy where Docker is running, and I am trying to set up the environment variables as documented here: docker proxy settings, and in many other online resources.
However, as I am using Windows 7, I used Docker Toolbox and successfully created a VirtualBox machine with this:
docker-machine create -d=virtualbox docker4java
But this creates a VM without systemctl, so I am not sure what different setup I need when using Oracle VM VirtualBox.
Please note: I also followed the advice of changing the nameserver in the /etc/resolv.conf file to 8.8.8.8, and this made no difference, only a different error:
Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
This worked for me: create a .docker directory in your home directory (the home directory of the user the Docker service is started with), and inside the .docker directory create a config.json file with the configuration below, then restart the Docker service.
{
  "proxies": {
    "default": {
      "httpProxy": "http://myproxy.server.com:8080/",
      "httpsProxy": "http://myproxy.server.com:8080/",
      "noProxy": "my.jenkins.com"
    }
  }
}
Note: my docker version is 18.06.1-ce and API version 1.38
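For completeness, a rough sequence of commands (the proxy URL in the JSON is of course a placeholder for your own; the restart command assumes a systemd-based host):

mkdir -p ~/.docker
vi ~/.docker/config.json        # paste the JSON above
sudo systemctl restart docker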