I have the following output when running kubectl get ingress:
NAME          CLASS   HOSTS         ADDRESS        PORTS   AGE
ingress-svc   nginx   localdev.me   192.168.49.2   80      32m
and under VS Code in Codespaces, I see about five forwarded ports that I can access from a browser, backed by processes that look like this:
/usr/bin/docker-proxy -proto tcp -host-ip 127.0.0.1 -host-port 49157 -container-ip 192.168.49.2 -container-port 22
I was wondering how I can forward the one on port 80 to some external port that I can reach from the browser. Thank you.
Related
I have a Celery instance running inside a pod in a local Kubernetes cluster, whereas the Redis server/broker it connects to is started on my localhost:6379 without Kubernetes. How can I get my Kubernetes pod to talk to the locally deployed Redis?
You can create a Headless Service and an Endpoints resource with the statically defined IP address of the node where the Redis server is running.
I've created an example to illustrate how it works.
First, I created a Headless Service and an Endpoints resource.
NOTE: the Endpoints resource holds the IP address of the node where the Redis server is running:
# example.yml
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: default
spec:
  clusterIP: None
  ports:
    - name: redis
      port: 6379
      protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: redis
  namespace: default
subsets:
  - addresses:
      - ip: 10.156.0.58 # your node's IP address
    ports:
      - port: 6379
        name: redis
        protocol: TCP
After creating the above resources, we are able to resolve the redis service name to the node's IP address:
# kubectl get svc,ep redis
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/redis   ClusterIP   None         <none>        6379/TCP   28m

NAME              ENDPOINTS          AGE
endpoints/redis   10.156.0.58:6379   28m
# kubectl run dnsutils --image=gcr.io/kubernetes-e2e-test-images/dnsutils:1.3 -it --rm
/ # nslookup redis
Server:    10.96.0.10
Address:   10.96.0.10#53

Name:      redis.default.svc.cluster.local
Address:   10.156.0.58
Additionally, if your Redis server is only listening on localhost, you need to modify the iptables rules. To forward traffic arriving on port 6379 (the default Redis port) to localhost, you can use:
NOTE: Instead of 10.156.0.58 use the IP address of the node where your redis server is running.
# iptables -t nat -A PREROUTING -p tcp -d 10.156.0.58 --dport 6379 -j DNAT --to-destination 127.0.0.1:6379
As you can see, it is easier if Redis listens on more than just localhost, since then we don't have to modify the iptables rules at all.
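For reference, making Redis listen on all interfaces is a small change in its config file. This is a minimal sketch; the path and directives below assume a stock redis.conf, so adjust for your install:

```conf
# /etc/redis/redis.conf -- listen on all interfaces instead of only 127.0.0.1
bind 0.0.0.0
# allow non-local connections; in production, keep protected-mode on
# and require a password (requirepass) instead of disabling it
protected-mode no
```

After changing the file, restart the Redis server so the new bind address takes effect.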
Finally, let's see if we can connect from Pod to the redis server on the host machine:
# kubectl exec -it redis-client -- bash
root@redis-client:/# redis-cli -h redis
redis:6379> SET key1 "value1"
OK
I have the following docker-compose file:
version: "3.9"
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    ports:
      - target: 53
        published: 53
        protocol: tcp
        mode: host
      - target: 53
        published: 53
        protocol: udp
        mode: host
      # - target: 80
      #   published: 80
      #   protocol: tcp
      #   mode: host
    environment:
      TZ: 'Europe/Warsaw'
      DNS1: 1.1.1.1
      DNS2: 8.8.8.8
      VIRTUAL_HOST: 'pihole.local'
    volumes:
      - ./etc/pihole/:/etc/pihole
      - ./etc-dnsmasq.d:/etc/dnsmasq.d
    dns:
      - 1.1.1.1
      - 8.8.8.8
    cap_add:
      - NET_ADMIN
    restart: unless-stopped
    networks:
      - public
networks:
  public:
Working solution with docker-compose
Running this with:
docker-compose --file docker-compose-pihole.yml up -d
exposes ports 53/tcp and 53/udp on the host IP address:
$ nmap 172.30.0.100 -Pn
Starting Nmap 7.80 ( https://nmap.org ) at 2022-01-02 10:42 CET
Nmap scan report for 172.30.0.100
Host is up (0.0038s latency).
Not shown: 998 filtered ports
PORT   STATE SERVICE
22/tcp open  ssh
53/tcp open  domain
and DNS resolution is working:
$ nslookup google.pl 172.30.0.100
Server: 172.30.0.100
Address: 172.30.0.100#53
Non-authoritative answer:
Name: google.pl
Address: 172.217.16.3
Name: google.pl
Address: 2a00:1450:401b:804::2003
and I'm able to telnet to port 53
$ telnet 172.30.0.100 53
Trying 172.30.0.100...
Connected to 172.30.0.100.
Escape character is '^]'.
NOT Working solution with docker stack deploy
Running the same docker-compose file with
docker stack deploy -c docker-compose-pihole.yml pihole
also exposes port 53 tcp/udp on the host IP address:
$ nmap 172.30.0.100 -Pn
Starting Nmap 7.80 ( https://nmap.org ) at 2022-01-02 10:46 CET
Nmap scan report for 172.30.0.100
Host is up (0.0022s latency).
Not shown: 998 filtered ports
PORT   STATE SERVICE
22/tcp open  ssh
53/tcp open  domain
however, name resolution is not working:
nslookup google.pl 172.30.0.100
;; connection timed out; no servers could be reached
and a telnet connection to port 53 is closed by the remote host:
$ telnet 172.30.0.100 53
Trying 172.30.0.100...
Connected to 172.30.0.100.
Escape character is '^]'.
Connection closed by foreign host.
Another strange thing concerns port 80: when it is exposed, in both cases I can access the web UI on port 80 by connecting to the host IP.
I have no idea what's going on or how to fix communication on port 53.
Fixed.
One environment variable was missing for pihole:
DNSMASQ_LISTENING: 'all'
Two days to figure this out!
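In map syntax, matching the rest of the compose file above, the fixed environment block would look like this (only this block shown; DNSMASQ_LISTENING is the variable named in the fix above):

```yaml
environment:
  TZ: 'Europe/Warsaw'
  DNS1: 1.1.1.1
  DNS2: 8.8.8.8
  VIRTUAL_HOST: 'pihole.local'
  # tell dnsmasq inside Pi-hole to listen on all interfaces; in swarm mode
  # DNS queries arrive via the overlay network rather than the expected interface
  DNSMASQ_LISTENING: 'all'
```

After redeploying the stack with this variable set, port 53 should answer queries.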
My problem is that I can use Docker through, for example, Portainer, but when I run docker directly on the machine, even with sudo, the client can't connect to the daemon and tells me about it:
All commands are run as root.
docker ps
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
docker service:
systemctl status docker.service
● docker.service - Docker Application Container Engine
     Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/docker.service.d
             └─options.conf
     Active: active (running) since Fri 2021-10-22 19:02:54 UTC; 4 days ago
TriggeredBy: ● docker.socket
       Docs: https://docs.docker.com
   Main PID: 182765 (dockerd)
      Tasks: 175
     Memory: 93.1M
     CGroup: /system.slice/docker.service
├─182765 /usr/bin/dockerd -H unix:// --containerd=/run/containerd/containerd.sock
├─182942 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 49153 -container-ip 172.17.0.2 -container-port 27017
├─182949 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 49153 -container-ip 172.17.0.2 -container-port 27017
├─182962 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 8081 -container-ip 172.18.0.2 -container-port 8080
├─182970 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 8081 -container-ip 172.18.0.2 -container-port 8080
├─182984 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 6881 -container-ip 172.18.0.2 -container-port 6881
├─182990 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 6881 -container-ip 172.18.0.2 -container-port 6881
├─183004 /usr/bin/docker-proxy -proto udp -host-ip 0.0.0.0 -host-port 6881 -container-ip 172.18.0.2 -container-port 6881
├─183010 /usr/bin/docker-proxy -proto udp -host-ip :: -host-port 6881 -container-ip 172.18.0.2 -container-port 6881
├─183034 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 49154 -container-ip 172.17.0.3 -container-port 3306
├─183041 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 49154 -container-ip 172.17.0.3 -container-port 3306
├─183148 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 49155 -container-ip 172.17.0.4 -container-port 6379
├─183154 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 49155 -container-ip 172.17.0.4 -container-port 6379
├─183332 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 9443 -container-ip 172.17.0.5 -container-port 9443
├─183339 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 9443 -container-ip 172.17.0.5 -container-port 9443
├─183353 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 9000 -container-ip 172.17.0.5 -container-port 9000
├─183360 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 9000 -container-ip 172.17.0.5 -container-port 9000
├─183372 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 8000 -container-ip 172.17.0.5 -container-port 8000
├─183378 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 8000 -container-ip 172.17.0.5 -container-port 8000
├─186468 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 8080 -container-ip 172.17.0.6 -container-port 80
└─186474 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 8080 -container-ip 172.17.0.6 -container-port 80
Oct 27 09:31:30 falcon dockerd[182765]: time="2021-10-27T09:31:30.218332206Z" level=debug msg="Calling GET /containers/69047b41bedea4794803730ff0fa13a65e546519f9>
Oct 27 09:31:30 falcon dockerd[182765]: time="2021-10-27T09:31:30.219026938Z" level=debug msg="Calling GET /images/json?all=0"
Oct 27 09:32:58 falcon dockerd[182765]: time="2021-10-27T09:32:58.654321683Z" level=debug msg="Calling HEAD /_ping"
Oct 27 09:32:58 falcon dockerd[182765]: time="2021-10-27T09:32:58.656492046Z" level=debug msg="Calling GET /v1.37/info"
Oct 27 09:32:58 falcon dockerd[182765]: time="2021-10-27T09:32:58.673377621Z" level=debug msg="Calling GET /v1.37/containers/json?all=1&limit=0"
Oct 27 09:32:58 falcon dockerd[182765]: time="2021-10-27T09:32:58.680766521Z" level=debug msg="Calling GET /v1.37/images/json"
Oct 27 09:32:58 falcon dockerd[182765]: time="2021-10-27T09:32:58.701618241Z" level=debug msg="Calling GET /v1.37/volumes"
I can use Portainer as usual.
docker version:
docker -v
Docker version 20.10.7, build 20.10.7-0ubuntu1~20.04.2
More information:
ls -l /var/run
lrwxrwxrwx 1 root root 4 Jul 31 2020 /var/run -> /run
docker.sock is created:
drwxr-xr-x 2 root root 40 Oct 25 20:31 docker.sock
cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
Wants=containerd.service
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3
# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Anyone got a clue why Docker is unable to connect to the daemon?
It looks like you have a container configured to bind mount /var/run/docker.sock, and the daemon restarted that container before creating the socket. Docker then created the mount source as an empty directory, which is why you see docker.sock listed as a directory above. There have been some tweaks to packaging in recent releases to reduce this chance. Otherwise, you may want to mount the entire directory instead of a single file.
To fix it, try stopping Docker, deleting the empty directory, and restarting Docker to see if the socket gets created first (it's a race condition).
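A sketch of that fix, assuming the empty directory sits at /var/run/docker.sock as the listing above suggests (run as root; this briefly takes the daemon and its containers down):

```shell
# stop the daemon and its socket unit so nothing recreates the path mid-way
systemctl stop docker docker.socket
# the bind mount left an empty directory where the socket should be; remove it
rmdir /var/run/docker.sock
# restart; systemd's docker.socket unit should now create the socket file first
systemctl start docker.socket docker
# verify the client can reach the daemon again
docker ps
```

If docker.sock keeps reappearing as a directory, check which container mounts it (e.g. Portainer) and make sure that container starts only after the daemon's socket exists.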
I run docker-compose down and I see:
$ docker-compose ps
Name Command State Ports
------------------------------
But when I do docker-compose up -d, I get
ERROR: for php Cannot start service php: driver failed programming external connectivity on endpoint project_php_1 (1a97183b3dad2157994251af0ead734e6750d95a3c71540d95f4c32c487d0830): Bind for 127.0.0.1:9000 failed: port is already allocated
Netstat:
tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN 0 24211 1643/docker-proxy
ps:
$ sudo ps auxwwwt | grep docker-proxy
root 18924 0.0 0.1 1152904 3132 ? Sl 11:56 0:00 /usr/bin/docker-proxy -proto tcp -host-ip 127.0.0.1 -host-port 9000 -container-ip 172.23.0.2 -container-port 9000
root 19233 0.0 0.1 1152904 3220 ? Sl 11:56 0:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 54321 -container-ip 172.18.0.2 -container-port 5432
root 19241 0.0 0.1 1079172 4032 ? Sl 11:56 0:00 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 54321 -container-ip 172.18.0.2 -container-port 5432
I stop and start the docker service, but the docker-proxy keeps coming up. What can I do to stop it?
You must have containers running that are outside this compose setup. Use docker ps to list all running containers and stop them. Afterwards, to be sure, run docker network prune to remove orphaned networks.
After that, your compose setup should start normally.
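As a rough sketch, that cleanup could be run as follows (note the second command stops every running container, so only use it if nothing else important is running on the host):

```shell
# find which container is still holding 127.0.0.1:9000
docker ps
# stop everything that's currently running
docker stop $(docker ps -q)
# remove networks no longer used by any container
docker network prune
# now the compose project should be able to bind its ports
docker-compose up -d
```

If you'd rather not stop everything, stop only the container whose port mapping docker ps shows as 127.0.0.1:9000.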
I'm using Docker and Docker Compose for my development environment.
When I switch between projects, it's usually quite painful because I get a PORT ALREADY IN USE error.
If I do docker-compose up (which starts my Rails server), is Ctrl+C the correct way to terminate this container?
Here's my docker-compose.yml file:
db:
  image: postgres
  ports:
    - "5432"
web:
  build: .
  command: bundle exec rails s -p 3000 -b '0.0.0.0'
  volumes:
    - .:/myapp
  ports:
    - "3000:3000"
  links:
    - db
Sometimes, I simply have to delete ./tmp/pids/server.pid, but sometimes I have to kill -9 some process.
Here's, for example, what ps -edf | grep docker outputs:
root 742 1 0 Jul18 ? 00:01:11 /usr/bin/docker -d -H fd://
root 22341 742 0 Jul21 ? 00:00:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 32781 -container-ip 172.17.0.48 -container-port 5432
root 22510 742 0 Jul21 ? 00:00:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 3030 -container-ip 172.17.0.49 -container-port 3030
root 28766 742 0 Jul21 ? 00:00:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 32783 -container-ip 172.17.0.57 -container-port 5432
root 28886 742 0 Jul21 ? 00:00:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 3000 -container-ip 172.17.0.58 -container-port 3000
Am I doing something wrong?
I would launch the containers as background processes with docker-compose up -d. Then later you can shut them down cleanly with docker-compose stop.
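For reference, a typical lifecycle with these commands might look like this (docker-compose v1 syntax, matching the question; run from the project directory):

```shell
# start all services in the background
docker-compose up -d
# inspect state and the ports published on the host
docker-compose ps
# stop the containers cleanly without removing them
docker-compose stop
# additionally remove containers and networks, freeing the host ports
docker-compose down
```

Using down rather than stop when switching projects is what actually releases the host ports, which avoids the PORT ALREADY IN USE error described above.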