I have a Redis cluster running on my MacBook (localhost); I am easing my way into Docker as part of my developer environment.
I also have things like MongoDB running on my host and am able to connect to them successfully via host.docker.internal. At this time I'm not looking to containerize Redis or my other services.
I'm specifically getting errors showing that my server is somehow trying to connect using 127.0.0.1, even though my code is definitely using host.docker.internal.
This is happening in the current Node.js container, but also in another Ruby container, so something about Redis is unhappy:
oauth | 2022-07-22T21:57:58.186Z app:cache:client FATAL Redis Cluster Error Error: connect ECONNREFUSED 127.0.0.1:30003
oauth | 2022-07-22T21:57:58.188Z app:cache:client FATAL Redis Cluster Error Error: connect ECONNREFUSED 127.0.0.1:30001
oauth | 2022-07-22T21:57:58.189Z app:cache:client FATAL Redis Cluster Error Error: connect ECONNREFUSED 127.0.0.1:30002
Dockerfile
# syntax=docker/dockerfile:1
FROM node:16.16.0-buster as base
WORKDIR /app
COPY package.json package.json
COPY yarn.lock yarn.lock
FROM base as dev
RUN yarn install
COPY . .
CMD ["node", "src/index.js"]
docker-compose.dev.yml
version: '3.8'
services:
  oauth:
    build:
      context: .
    container_name: oauth
    ports:
      - 5050:5050
    environment:
      - RACK_ENV=docker
      - PORT=5050
    volumes:
      - ./:/app
    command: yarn run nodemon ./src/index.js
docker-compose -f docker-compose.dev.yml up --build
[+] Building 6.8s (17/17) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 245B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> resolve image config for docker.io/docker/dockerfile:1 0.8s
=> [auth] docker/dockerfile:pull token for registry-1.docker.io 0.0s
=> CACHED docker-image://docker.io/docker/dockerfile:1@sha256:443aab4ca21183e069e7d8b2dc68006594f40bddf1b15bbd83f5137bd93e80e2 0.0s
=> [internal] load .dockerignore 0.0s
=> [internal] load build definition from Dockerfile 0.0s
=> [internal] load metadata for docker.io/library/node:16.16.0-buster 0.5s
=> [auth] library/node:pull token for registry-1.docker.io 0.0s
=> [internal] load build context 0.9s
=> => transferring context: 3.52MB 0.9s
=> [base 1/4] FROM docker.io/library/node:16.16.0-buster@sha256:2e1b4542d4a06e0e0442dc38af1f4828760aecc9db2b95e7df87f573640d98cd 0.0s
=> CACHED [base 2/4] WORKDIR /app 0.0s
=> CACHED [base 3/4] COPY package.json package.json 0.0s
=> CACHED [base 4/4] COPY yarn.lock yarn.lock 0.0s
=> CACHED [dev 1/2] RUN yarn install 0.0s
=> [dev 2/2] COPY . . 2.2s
=> exporting to image 2.0s
=> => exporting layers 2.0s
=> => writing image sha256:a297a1937c12a7403b12dec58a71c28df60caa0ee387daec51af2ffb0dc5968e 0.0s
=> => naming to docker.io/library/oauth_oauth 0.0s
[+] Running 1/1
⠿ Container oauth Recreated 0.1s
Attaching to oauth
oauth | yarn run v1.22.19
oauth | $ NODE_ENV=docker yarn run nodemon ./src/index.js
oauth | $ /app/node_modules/.bin/nodemon ./src/index.js
oauth | [nodemon] 2.0.16
oauth | [nodemon] to restart at any time, enter `rs`
oauth | [nodemon] watching path(s): *.*
oauth | [nodemon] watching extensions: js,mjs,json
oauth | [nodemon] starting `node ./src/index.js`
oauth | 2022-07-22T21:57:58.161Z app:index INFO ⚡️ Successfully Started Express Server
oauth | 2022-07-22T21:57:58.162Z app:index INFO ⚡️ Environment: docker
oauth | 2022-07-22T21:57:58.163Z app:index INFO ⚡️ Node Version: v16.16.0
oauth | 2022-07-22T21:57:58.163Z app:index INFO ⚡️ Listening on: http://localhost:5050
oauth | 2022-07-22T21:57:58.163Z app:index INFO ⚡️ OS linux
oauth | 2022-07-22T21:57:58.186Z app:cache:client FATAL Redis Cluster Error Error: connect ECONNREFUSED 127.0.0.1:30003
oauth | 2022-07-22T21:57:58.188Z app:cache:client FATAL Redis Cluster Error Error: connect ECONNREFUSED 127.0.0.1:30001
oauth | 2022-07-22T21:57:58.189Z app:cache:client FATAL Redis Cluster Error Error: connect ECONNREFUSED 127.0.0.1:30002
More info:
The Redis cluster is run by doing the following (I've been using this setup for several years; it works):
https://developer.redis.com/explore/redisinsight/cluster/
$ cd ~/Documents/dev/redis-6.2.6; ./utils/create-cluster/create-cluster start;
Starting 30001
Starting 30002
Starting 30003
Starting 30004
Starting 30005
Starting 30006
$ redis-cli -c -p 30001
127.0.0.1:30001> ping
PONG
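An editorial aside, not part of the original post: a cluster-aware client only uses the configured host as a seed node; after that it reconnects to whatever ip:port pairs the cluster itself advertises. With the create-cluster defaults shown further down, those advertised addresses can be inspected with:
$ redis-cli -c -p 30001 cluster nodes
Each line of that output contains an ip:port entry, and those are the addresses a client inside the container will actually dial, regardless of the seed host in its own configuration.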
I connected to the container to prove that host.docker.internal resolves to the host:
$ docker exec -it <container id> /bin/bash
$ ping host.docker.internal
PING host.docker.internal (192.168.65.2) 56(84) bytes of data.
64 bytes from 192.168.65.2 (192.168.65.2): icmp_seq=1 ttl=37 time=0.182 ms
64 bytes from 192.168.65.2 (192.168.65.2): icmp_seq=2 ttl=37 time=0.275 ms
64 bytes from 192.168.65.2 (192.168.65.2): icmp_seq=3 ttl=37 time=0.230 ms
64 bytes from 192.168.65.2 (192.168.65.2): icmp_seq=4 ttl=37 time=0.516 ms
64 bytes from 192.168.65.2 (192.168.65.2): icmp_seq=5 ttl=37 time=0.540 ms
64 bytes from 192.168.65.2 (192.168.65.2): icmp_seq=6 ttl=37 time=0.560 ms
64 bytes from 192.168.65.2 (192.168.65.2): icmp_seq=7 ttl=37 time=0.566 ms
^C
--- host.docker.internal ping statistics ---
7 packets transmitted, 7 received, 0% packet loss, time 138ms
rtt min/avg/max/mdev = 0.182/0.409/0.566/0.161 ms
However, netstat doesn't return anything for 30001, only for 30003:
$ netstat -a | grep 30001
(nothing)
$ netstat -a | grep 30003
tcp4 0 0 localhost.30003 localhost.51793 ESTABLISHED
tcp4 0 0 localhost.51793 localhost.30003 ESTABLISHED
tcp6 0 0 *.30003 *.* LISTEN
tcp4 0 0 *.30003 *.* LISTEN
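A side note not in the original post: without the -n flag, netstat resolves port numbers to service names whenever /etc/services has an entry for them, which can make a numeric grep miss a port that is in fact listening. A numeric re-check would be:
$ netstat -an | grep 30001
$ lsof -nP -iTCP:30001 -sTCP:LISTEN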
The default config for Redis when it starts is here:
# Settings
BIN_PATH="/Users/aronlilland/Documents/dev/redis-6.2.6/src"
CLUSTER_HOST=127.0.0.1
PORT=30000
TIMEOUT=2000
NODES=6
REPLICAS=1
PROTECTED_MODE=yes
ADDITIONAL_OPTIONS=""
# You may want to put the above config parameters into config.sh in order to
# override the defaults without modifying this script.
if [ -a config.sh ]
then
    source "config.sh"
fi

# Computed vars
ENDPORT=$((PORT+NODES))

if [ "$1" == "start" ]
then
    while [ $((PORT < ENDPORT)) != "0" ]; do
        PORT=$((PORT+1))
        echo "Starting $PORT"
        $BIN_PATH/redis-server --port $PORT --protected-mode $PROTECTED_MODE --cluster-enabled yes --cluster-config-file nodes-${PORT}.conf --cluster-node-timeout $TIMEOUT --appendonly yes --appendfilename appendonly-${PORT}.aof --dbfilename dump-${PORT}.rdb --logfile ${PORT}.log --daemonize yes ${ADDITIONAL_OPTIONS}
    done
    exit 0
fi
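For illustration only (an editorial sketch, not part of the original post and not a verified fix): the comment in the script describes a config.sh mechanism for overriding these defaults, and ADDITIONAL_OPTIONS is passed straight through to redis-server, so a hypothetical config.sh next to create-cluster could look like:
# config.sh (hypothetical)
# 192.168.65.2 is what host.docker.internal resolved to in the ping output above;
# an announced address has to be reachable by every client, including ones on the host itself.
ADDITIONAL_OPTIONS="--cluster-announce-ip 192.168.65.2"
redis-server does accept cluster-announce-ip for NAT/Docker setups, but whether this helps here also depends on how the cluster was originally created and on the cached nodes-*.conf files, which are outside the excerpt shown.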
I have a problem creating a Docker image using docker build -t image_name .. When I execute it, I get these errors:
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 32B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> ERROR [internal] load metadata for docker.io/library/java:8 0.2s
------
> [internal] load metadata for docker.io/library/java:8:
------
failed to solve with frontend dockerfile.v0: failed to create LLB definition: failed to authorize: rpc error: code = Unknown desc = failed to fetch anonymous token: Get https://auth.docker.io/token?scope=repository%3Alibrary%2Fjava%3Apull&service=registry.docker.io: net/http: TLS handshake timeout
This recently happened for me when running a build script for earthly/earthly.
OS: Arch Linux 5.14.8-arch1-1
Docker from official repository: Docker version 20.10.8, build 3967b7d28e
Solution (likely Linux-only)
DNS was misconfigured for me. For some reason, docker pull golang:1.16-alpine3.14 worked fine on its own but failed when run from the build script. This answer on r/docker helped.
Adding a DNS nameserver to my /etc/resolv.conf solved this issue for me:
cat /etc/resolv.conf
# Cloudflare
nameserver 1.1.1.1
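Editorial note: the resolv.conf change above is what fixed it in this case. If DNS only misbehaves for containers rather than for the host itself, Docker's daemon configuration also accepts a dns setting, e.g.:
$ cat /etc/docker/daemon.json
{
  "dns": ["1.1.1.1"]
}
$ sudo systemctl restart docker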
Other Attempted Solutions
1. Disable Buildkit
Following this answer to "Docker build: failed to fetch oauth token for openjdk?", I tried disabling BuildKit, but it did not solve the issue, since I believe BuildKit was required for the script I was running:
export DOCKER_BUILDKIT=0
export COMPOSE_DOCKER_CLI_BUILD=0
2. Manually pull image (see the example after this list)
3. Authenticate with Docker
The error looked like something that might happen when I was unauthenticated with hub.docker.com. After logging in with docker login --username <username>, I still received the errors.
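For item 2, manually pulling the base image from the failing FROM line would look like this (the original post lists it as an attempted solution without further detail):
$ docker pull java:8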
If you use macOS, check whether there is any kind of System Extension intercepting the traffic, e.g. firewalls, VPN clients, or security tools.
Run systemextensionsctl list in your terminal app:
1 extension(s)
--- com.apple.system_extension.network_extension
enabled active teamID bundleID (version) name [state]
* * 78UFGP42EU ch.tripmode.TripMode.FilterExtension (3.1.0/1304) FilterExtension [activated enabled]
In my case the app TripMode was the reason for the problem. Uninstalling the tool and restarting the system fixed it.
I am on Ubuntu with this Docker version:
Docker version 18.06.3-ce, build d7080c1
I got this error:
OCI runtime create failed: container_linux.go:348: starting container
process caused "process_linux.go:297: copying bootstrap data to pipe
caused \"write init-p: broken pipe\"": unknown
when I ran:
docker build \
--build-arg bitbucket_pwd="$bitbucket_password" \
--build-arg commit_datavana="$commit_sha" \
--build-arg CACHE_BUST="$(date)" \
-t "$name_tag" .
Does anyone know what causes that error? Should I downgrade Docker?
Downgrade your version from 18.06.3 to 18.06.1 and follow the instructions from this link; it will be helpful:
https://medium.com/@dirk.avery/docker-error-response-from-daemon-1d46235ff61d
This error was resolved on my Ubuntu 14.04 LTS system by upgrading the kernel to a 4.x version:
$ apt-get install --install-recommends linux-generic-lts-xenial
I had the same error when I set a very low memory limit on Kubernetes: 200m instead of 200Mi for a pod :-) (illustrated after the events below)
Normal Scheduled <unknown> default-scheduler Successfully assigned ingress-nginx/ingress-nginx-controller-b979fbd5-bkl2t to worker04.cluster
Warning FailedCreatePodSandBox 12m (x4 over 12m) kubelet, worker04.cluster Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-b979fbd5-bkl2t": Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:338: getting the final child's pid from pipe caused: read init-p: connection reset by peer: unknown
Warning FailedCreatePodSandBox 12m (x9 over 12m) kubelet, worker04.cluster Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-b979fbd5-bkl2t": Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:334: copying bootstrap data to pipe caused: write init-p: broken pipe: unknown
Normal SandboxChanged 7m45s (x284 over 12m) kubelet, worker04.cluster Pod sandbox changed, it will be killed and re-created.
Warning FailedCreatePodSandBox 2m45s (x152 over 11m) kubelet, worker04.cluster (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-b979fbd5-bkl2t": Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:334: copying bootstrap data to pipe caused: write init-p: broken pipe: unknown
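For reference, this is the difference in a hypothetical pod resource spec; Kubernetes parses a memory quantity of 200m as 0.2 bytes (milli-bytes), while 200Mi is 200 mebibytes:
resources:
  limits:
    memory: 200m     # typo: 200 millibytes, effectively no memory at all
    # memory: 200Mi  # intended: 200 mebibytes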
The kernel version and the Docker version did not match. My original kernel and Docker versions were:
$ uname -a
Linux cn0314000510l 5.4.0-42-generic #46~18.04.1-Ubuntu SMP Fri Jul 10 07:21:24 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
$ sudo docker --version
Docker version 20.10.7, build 20.10.7-0ubuntu5~18.04.3
Then I rolled back the Docker version, which solved it:
$ docker --version
Docker version 18.09.9, build 039a7df9ba
I am having trouble starting Docker containers on a particular machine: docker run gives random results, and that is the case whether I install atom, Debian Stretch, or Ubuntu 18.04. On the Debian-based OSes, I am using a fresh install of Docker version 18.09.6, build 481bc77.
The most common issue is Error response from daemon: OCI runtime create failed.
Here is what I see when trying to run the hello-world example (it works roughly 1.5 times out of 7):
user@machine:~$ sudo docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
user@machine:~$ sudo docker run hello-world
docker: Error response from daemon: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v1.linux/moby/02c7ab23649c89b19720d57a549eb703aa442805aa3b468e7610c19e6d8fa2eb/log.json: no such file or directory): runc did not terminate sucessfully: unknown.
ERRO[0001] error waiting for container: context canceled
user@machine:~$ sudo docker run hello-world
docker: Error response from daemon: ttrpc: client shutting down: read unix @->@/containerd-shim/moby/4de0da9c33103f4622907a3ab25535075325366e9a4d0f1c4849ec20ca3cb91f/shim.sock: read: connection reset by peer: unknown.
ERRO[0001] error waiting for container: context canceled
user@machine:~$ sudo docker run hello-world
docker: Error response from daemon: ttrpc: client shutting down: read unix @->@/containerd-shim/moby/151f1ba68a9b28260a00e9cff433c5009382880fb75a28ee79fa549ffdfb21a9/shim.sock: read: connection reset by peer: unknown.
ERRO[0001] error waiting for container: context canceled
user@machine:~$ sudo docker run hello-world
docker: Error response from daemon: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v1.linux/moby/32de5ca60771884d4a236e3e9d2704a48f18f03e93fc6dd195f4e39fb7b56501/log.json: no such file or directory): runc did not terminate sucessfully: unknown.
ERRO[0001] error waiting for container: context canceled
user@machine:~$ sudo docker run hello-world
docker: Error response from daemon: ttrpc: client shutting down: read unix @->@/containerd-shim/moby/dcbb905d8783c65302c1a3afe8fb7913c58e7d5765b5a79072d55fb36f7bc1ea/shim.sock: read: connection reset by peer: unknown.
ERRO[0001] error waiting for container: context canceled
user@machine:~$ sudo docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
docker: Error response from daemon: OCI runtime state failed: runc did not terminate sucessfully: SIGILL: illegal instruction
PC=0x55611122e30c m=3 sigcode=2
goroutine 20 [running]:
runtime.aeshashbody()
/.GOROOT/src/runtime/asm_amd64.s:939 +0x1c fp=0xc42002d6b8 sp=0xc42002d6b0 pc=0x55611122e30c
runtime.mapaccess1_faststr(0x556111a6ad00, 0xc42007f590, 0x5561116ceb02, 0x2, 0x556100000001)
/.GOROOT/src/runtime/hashmap_fast.go:233 +0x1d1 fp=0xc42002d728 sp=0xc42002d6b8 pc=0x5561111e3031
text/template/parse.lexIdentifier(0xc4200bab60, 0x556111ae6e70)
/.GOROOT/src/text/template/parse/lex.go:441 +0x138 fp=0xc42002d7b8 sp=0xc42002d728 pc=0x556111415128
text/template/parse.(*lexer).run(0xc4200bab60)
/.GOROOT/src/text/template/parse/lex.go:228 +0x39 fp=0xc42002d7d8 sp=0xc42002d7b8 pc=0x556111413f99
runtime.goexit()
/.GOROOT/src/runtime/asm_amd64.s:2361 +0x1 fp=0xc42002d7e0 sp=0xc42002d7d8 pc=0x55611122f3b1
created by text/template/parse.lex
/.GOROOT/src/text/template/parse/lex.go:221 +0x161
goroutine 1 [chan receive, locked to thread]:
text/template/parse.(*lexer).nextItem(...)
/.GOROOT/src/text/template/parse/lex.go:195
text/template/parse.(*Tree).next(...)
/.GOROOT/src/text/template/parse/parse.go:64
text/template/parse.(*Tree).nextNonSpace(0xc42009a200, 0x0, 0x0, 0x0, 0x0, 0x0)
/.GOROOT/src/text/template/parse/parse.go:102 +0x159
text/template/parse.(*Tree).parse(0xc42009a200)
/.GOROOT/src/text/template/parse/parse.go:284 +0x2fa
text/template/parse.(*Tree).Parse(0xc42009a200, 0x5561116cead5, 0xf0, 0x0, 0x0, 0x0, 0x0, 0xc42007f800, 0xc42007c6c0, 0x2, ...)
/.GOROOT/src/text/template/parse/parse.go:233 +0x228
text/template/parse.Parse(0x5561116b62fb, 0x5, 0x5561116cead5, 0xf0, 0x0, 0x0, 0x0, 0x0, 0xc42007c6c0, 0x2, ...)
/.GOROOT/src/text/template/parse/parse.go:55 +0x10a
text/template.(*Template).Parse(0xc42008c240, 0x5561116cead5, 0xf0, 0x5561112abfaa, 0x5561116c0486, 0x1d)
/.GOROOT/src/text/template/template.go:198 +0x11a
rax 0x5561116ceb02
rbx 0x55611122e2d0
rcx 0x2
rdx 0xc42002d6c8
rdi 0xc6b7000000000000
rsi 0x1
rbp 0xc42002d718
rsp 0xc42002d6b0
r8 0xc42002d728
r9 0x0
r10 0x3
r11 0x286
r12 0xc42006e468
r13 0xff
r14 0xff
r15 0xf
rip 0x55611122e30c
rflags 0x10202
cs 0x33
fs 0x0
gs 0x0
: unknown.
ERRO[0002] error waiting for container: context canceled
Does anyone know what the error could be?
I had some weird networking errors when installing Docker, but launching the same apt install again worked:
user#machine:~$ sudo apt-get install docker-ce docker-ce-cli containerd.io
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
aufs-tools cgroupfs-mount libltdl7 pigz
The following NEW packages will be installed:
aufs-tools cgroupfs-mount containerd.io docker-ce docker-ce-cli libltdl7 pigz
0 upgraded, 7 newly installed, 0 to remove and 0 not upgraded.
Need to get 50.7 MB of archives.
After this operation, 243 MB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://archive.ubuntu.com/ubuntu bionic/universe amd64 pigz amd64 2.4-1 [57.4 kB]
Get:2 http://archive.ubuntu.com/ubuntu bionic/universe amd64 aufs-tools amd64 1:4.9+20170918-1ubuntu1 [104 kB]
E: Method https has died unexpectedly!
E: Sub-process https received signal 4.
If you are facing issues after the upgrade to containerd 1.4.0, downgrade to 1.3.4.
For example, if you are on Arch Linux, you can probably do:
cd /var/cache/pacman/pkg/
sudo pacman -U containerd-1.3.4-2-x86_64.pkg.tar.zst
Specifically, this is the error message you might be facing:
docker: Error response from daemon: ttrpc: closed: unknown.
If you need 1.4.0 for some reason, there is an open issue tracking this on GitHub; best to track its status from there: https://github.com/containerd/containerd/issues/4483
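A small addition, assuming Arch's pacman: to keep the downgraded containerd from being upgraded again before the fix lands, it can be held back in /etc/pacman.conf:
# /etc/pacman.conf
IgnorePkg = containerd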
I have a problem with Docker on my local machine and my internet provider. If I connect to another one, I don't get the error.
So, when I try to build the project:
maxim@maxim-TM1701:~/dev/projects/buzz/back_new$ sudo docker-compose up
[sudo] password for maxim:
Building web
Step 1/9 : FROM python:3.4
3.4: Pulling from library/python
55cbf04beb70: Pull complete
1607093a898c: Pull complete
9a8ea045c926: Pull complete
d4eee24d4dac: Pull complete
b59856e9f0ab: Downloading [==================================================>] 112.7MB/112.7MB
acbc9a5bd738: Download complete
2bedfced3d32: Download complete
35cfd5596113: Download complete
54603c381292: Download complete
ERROR: Service 'web' failed to build: read tcp 192.168.31.82:48810->104.18.125.25:443: read: connection reset by peer
maxim@maxim-TM1701:~/dev/projects/buzz/back_new$ sudo docker-compose up
Building web
Step 1/9 : FROM python:3.4
3.4: Pulling from library/python
55cbf04beb70: Retrying in 2 seconds
1607093a898c: Download complete
9a8ea045c926: Download complete
d4eee24d4dac: Downloading [===========> ] 7.998MB/34.33MB
b59856e9f0ab: Downloading [=> ] 4.267MB/189.1MB
acbc9a5bd738: Waiting
2bedfced3d32: Waiting
35cfd5596113: Waiting
54603c381292: Waiting
^CGracefully stopping... (press Ctrl+C again to force)
How can I fix it?