Cannot see PiHole clients on Docker Swarm - docker

Dear all,
I am running PiHole on Docker Swarm but I only see 2 clients: 10.0.0.3 and localhost.
If I understood correctly from various discussions around the web, I should be able to see all the clients in PiHole if I expose the DNS ports in host mode (PiHole is forced to run on a single swarm node; see the sketch after the ports block), like this:
ports:
  - published: 53
    target: 53
    protocol: tcp
    mode: host
  - published: 53
    target: 53
    protocol: udp
    mode: host
  - published: 67
    target: 67
    protocol: udp
    mode: ingress
  - published: 8053
    target: 80
    protocol: tcp
    mode: ingress
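For completeness, this is roughly how the service is pinned to a single node so that host mode makes sense (just a sketch; the node hostname is a placeholder):
deploy:
  replicas: 1
  placement:
    constraints:
      - node.hostname == raspy3   # placeholder: the node that should run PiHole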
Unfortunately, if I expose the ports this way, the DNS service does not work anymore. I can see the ports exposed on the container:
pi@raspy3:~ $ docker port 3be0321961a6
53/tcp -> 0.0.0.0:53
53/udp -> 0.0.0.0:53
but I cannot see them with netstat:
pi@raspy3:~ $ netstat -atu | grep LISTEN
tcp 0 0 0.0.0.0:ssh 0.0.0.0:* LISTEN
tcp6 0 0 [::]:8053 [::]:* LISTEN
tcp6 0 0 [::]:domain [::]:* LISTEN
tcp6 0 0 [::]:ssh [::]:* LISTEN
tcp6 0 0 [::]:https [::]:* LISTEN
tcp6 0 0 [::]:8000 [::]:* LISTEN
tcp6 0 0 [::]:9000 [::]:* LISTEN
tcp6 0 0 [::]:2377 [::]:* LISTEN
tcp6 0 0 [::]:7946 [::]:* LISTEN
tcp6 0 0 [::]:http [::]:* LISTEN
and nslookup does not work:
pi@raspy4:~ $ nslookup google.com 192.168.32.2
;; connection timed out; no servers could be reached
Could you help me understand what I am missing, please?
Thanks :)

Solved by changing the Interface Listening Behaviour in the PiHole admin settings to "Listen on all interfaces, permit all origins".
Obviously, be sure to follow all the security recommendations from the PiHole team ;)
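If you prefer to set this in the stack file rather than through the admin page, the pihole/pihole image exposes (at least in the versions I have used; check the image documentation for yours) a DNSMASQ_LISTENING environment variable with the same effect:
environment:
  DNSMASQ_LISTENING: all   # equivalent to "Listen on all interfaces, permit all origins"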

Related

Docker/Docker-compose error starting userland proxy

I'm getting an error bringing up a project:
$ docker-compose -f docker-compose.yml up -d
Starting project-container-a ...
Starting project-container-a
Recreating project-container-b ...
Recreating project-container-b
Starting project-container-c ...
Starting project-container-c ... error
Starting project-container-a ... done
ERROR: for project-container-c Cannot start service project-container-c: driver failed programming external connectivity on endpoint project-container-c (123abc673b494c1505): Error starting userland proxy:
ERROR: Encountered errors while bringing up the project.
The docker-compose file defines project-container-c as:
services:
  bento-legacy-nginx:
    image: project-container-c
    container_name: project-container-c
    build:
      context: ./
      cache_from:
        - project-container-c
      dockerfile: ./build/nginx/Dockerfile
    ports:
      - 80:80
    restart: always
    volumes:
      - ./app:/var/www/app
Nothing is bound to 80:
$ sudo netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 7665/systemd-resolv
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 1089/cupsd
tcp6 0 0 :::9000 :::* LISTEN 1815/java
tcp6 0 0 :::3308 :::* LISTEN 32040/rootlesskit
tcp6 0 0 127.0.0.1:63342 :::* LISTEN 1815/java
tcp6 0 0 :::20080 :::* LISTEN 1815/java
tcp6 0 0 ::1:631 :::* LISTEN 1089/cupsd
tcp6 0 0 :::10137 :::* LISTEN 1815/java
tcp6 0 0 127.0.0.1:6942 :::* LISTEN 1815/java
udp 0 0 127.0.0.53:53 0.0.0.0:* 7665/systemd-resolv
udp 0 0 0.0.0.0:68 0.0.0.0:* 1753/dhclient
udp 0 0 0.0.0.0:631 0.0.0.0:* 1138/cups-browsed
udp 0 0 0.0.0.0:53353 0.0.0.0:* 1094/avahi-daemon:
udp 0 0 0.0.0.0:5353 0.0.0.0:* 1094/avahi-daemon:
udp6 0 0 :::60252 :::* 1094/avahi-daemon:
udp6 0 0 :::5353 :::* 1094/avahi-daemon:
I am (attempting to) run Docker in rootless mode:
$ ps -aux | grep -i docker
me 6378 0.0 0.0 14428 960 pts/2 S+ 00:11 0:00 grep --color=auto -i docker
me 32040 0.0 0.0 111788 7328 ? Ssl Mar10 0:00 rootlesskit --net=vpnkit --mtu=1500 --slirp4netns-sandbox=auto --slirp4netns-seccomp=auto --disable-host-loopback --port-driver=builtin --copy-up=/etc --copy-up=/run /home/me/bin/dockerd-rootless.sh --experimental --storage-driver=overlay2
me 32049 0.0 0.0 110124 7128 ? Sl Mar10 0:00 /proc/self/exe --net=vpnkit --mtu=1500 --slirp4netns-sandbox=auto --slirp4netns-seccomp=auto --disable-host-loopback --port-driver=builtin --copy-up=/etc --copy-up=/run /home/me/bin/dockerd-rootless.sh --experimental --storage-driver=overlay2
me 32084 0.6 0.3 903356 63564 ? Sl Mar10 0:06 dockerd --experimental --storage-driver=overlay2
me 32098 0.4 0.1 793340 28420 ? Ssl Mar10 0:04 containerd --config /run/user/1000/docker/containerd/containerd.toml --log-level info
Docker version 19.03.6, build 369ce74a3c, Ubuntu 18.04
What is this error starting userland proxy?
I cannot say for certain, but this appears to be the inability to bind to a privileged port in rootless mode. Having the app bind to 8080 instead of 80 allows the container to boot and run without error.
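A minimal sketch of that workaround, keeping the rest of the compose file unchanged, is to publish an unprivileged host port instead of 80:
    ports:
      - 8080:80   # host port 8080 (unprivileged) -> container port 80
Alternatively, the rootless-mode documentation describes letting unprivileged processes bind low ports on the host; something along these lines should work, but verify it against the docs for your Docker version before relying on it:
# allow unprivileged processes (including rootlesskit) to bind ports >= 80
sudo sysctl net.ipv4.ip_unprivileged_port_start=80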

How to make prometheus work with grafana in Docker?

The main goal is to link Prometheus as a backend in Grafana, but entering http://localhost:9090 as the URL in Grafana returns HTTP Error Bad Gateway.
I started a Prometheus Docker image but it's not listening on port 9090 on IPv4.
netstat -ntulp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 15895/nginx: master
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 3190/sshd
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 24970/postmaster
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 3148/master
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 15895/nginx: master
tcp6 0 0 :::9100 :::* LISTEN 16652/node_exporter
tcp6 0 0 :::80 :::* LISTEN 15895/nginx: master
tcp6 0 0 :::22 :::* LISTEN 3190/sshd
tcp6 0 0 :::3000 :::* LISTEN 28436/docker-proxy
tcp6 0 0 ::1:5432 :::* LISTEN 24970/postmaster
tcp6 0 0 ::1:25 :::* LISTEN 3148/master
tcp6 0 0 :::9090 :::* LISTEN 31648/docker-proxy
udp 0 0 0.0.0.0:68 0.0.0.0:* 2806/dhclient
udp 0 0 127.0.0.1:323 0.0.0.0:* 1639/chronyd
udp6 0 0 ::1:323 :::* 1639/chronyd
This is my docker command:
docker run -d -p 9090:9090 --name prometheus -v /etc/prometheus.yml:/etc/prometheus/prometheus.yml -v /mnt/vol-0001/prometheus_data/:/etc/prometheus/data prom/prometheus --log.level=debug
I used -p 9090:9090 and -p 0.0.0.0:9090 with the same results.
docker logs prometheus returns:
level=info ts=2018-12-19T21:07:59.332452641Z caller=main.go:243 msg="Starting Prometheus" version="(version=2.6.0, branch=HEAD, revision=dbd1d58c894775c0788470944b818cc724f550fb)"
level=info ts=2018-12-19T21:07:59.332554622Z caller=main.go:244 build_context="(go=go1.11.3, user=root@bf5760470f13, date=20181217-15:14:46)"
level=info ts=2018-12-19T21:07:59.332584047Z caller=main.go:245 host_details="(Linux 3.10.0-957.1.3.el7.x86_64 #1 SMP Thu Nov 29 14:49:43 UTC 2018 x86_64 9dd3a9318064 (none))"
level=info ts=2018-12-19T21:07:59.332610547Z caller=main.go:246 fd_limits="(soft=65536, hard=65536)"
level=info ts=2018-12-19T21:07:59.332631287Z caller=main.go:247 vm_limits="(soft=unlimited, hard=unlimited)"
level=info ts=2018-12-19T21:07:59.334232116Z caller=main.go:561 msg="Starting TSDB ..."
level=info ts=2018-12-19T21:07:59.334671887Z caller=repair.go:48 component=tsdb msg="found healthy block" mint=1545204931123 maxt=1545220800000 ulid=01CZ3PHTVQQTW7Q122X7Y15WV4
level=info ts=2018-12-19T21:07:59.334756938Z caller=repair.go:48 component=tsdb msg="found healthy block" mint=1545242400000 maxt=1545249600000 ulid=01CZ44997810VTYP3GV0KJXXN1
level=info ts=2018-12-19T21:07:59.334819198Z caller=repair.go:48 component=tsdb msg="found healthy block" mint=1545220800000 maxt=1545242400000 ulid=01CZ4499ASP4RG8BPR8PE5WAKY
level=info ts=2018-12-19T21:07:59.346244745Z caller=web.go:429 component=web msg="Start listening for connections" address=0.0.0.0:9090
level=info ts=2018-12-19T21:07:59.461554488Z caller=main.go:571 msg="TSDB started"
level=info ts=2018-12-19T21:07:59.461625871Z caller=main.go:631 msg="Loading configuration file" filename=prometheus.yml
level=debug ts=2018-12-19T21:07:59.462558422Z caller=manager.go:213 component="discovery manager scrape" msg="Starting provider" provider=string/0 subs=[prometheus]
level=info ts=2018-12-19T21:07:59.462601563Z caller=main.go:657 msg="Completed loading of configuration file" filename=prometheus.yml
level=info ts=2018-12-19T21:07:59.462615458Z caller=main.go:530 msg="Server is ready to receive web requests."
level=debug ts=2018-12-19T21:07:59.462669264Z caller=manager.go:231 component="discovery manager scrape" msg="discoverer channel closed" provider=string/0
I also tried disabling the firewall to make sure it wasn't the cause of this headache.
I'm no Docker/Kubernetes expert; your help is appreciated.
The localhost you're referring to in the Grafana data source input is the Grafana container itself, since Grafana internally resolves localhost to 127.0.0.1: you were probably expecting, because you're using the GUI, that the queries would be issued from the frontend via AJAX, but no, they are all made by the Grafana backend.
Instead, orchestrate the containers with Docker Compose, so that the services can reach each other by name over a shared network:
# docker-compose.yaml
version: "3"
services:
  grafana:
    image: grafana/grafana:5.4.1
    ports:
      - 3000:3000
  prometheus:
    image: prom/prometheus:v2.5.0
After docker-compose up -d, you can visit your Docker Machine IP (or localhost if running Docker for Mac) on port 3000, set the Prometheus data source URL to http://prometheus:9090, and it will work!
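If you want to skip the GUI step entirely, Grafana can also provision the data source from a file mounted into the container (a sketch assuming the standard provisioning directory /etc/grafana/provisioning/datasources/ — double-check against your Grafana version):
# prometheus-datasource.yaml, mounted into /etc/grafana/provisioning/datasources/
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true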

Cannot make request to gitlab running in the official Docker container

I am trying to run GitLab from a Docker (gitlab/gitlab-ce, latest) container using the instructions given here.
Docker version
Docker version 1.12.4, build 1564f02
I first ran
docker run --detach --hostname <myIP> --publish 8000:443 --publish 8001:80 --publish 8002:22 --name gitlab --restart always --volume /docker/app/gitlab/config:/etc/gitlab --volume /docker/app/gitlab/logs:/var/log/gitlab --volume /docker/app/gitlab/data:/var/opt/gitlab gitlab/gitlab-ce:latest
Then I edited the container's /etc/gitlab/gitlab.rb to set
external_url 'http://<myIP>:8001'
gitlab_rails['gitlab_shell_ssh_port'] = 8002
Then I restarted the container with
docker restart gitlab
Now.
When I try to connect to <myIP>:8001 I get a (110) Connection timed out.
When I try from the Docker container's host I get
xxx@xxx:~$ curl localhost:8001
curl: (56) Recv failure: Connection reset by peer
Logs (just the end)
==> /var/log/gitlab/gitlab-workhorse/current <==
2017-07-26_14:53:41.50465 localhost:8001 @ - - [2017-07-26 14:53:41.223110228 +0000 UTC] "GET /help HTTP/1.1" 200 33923 "" "curl/7.53.0" 0.281484
==> /var/log/gitlab/nginx/gitlab_access.log <==
127.0.0.1 - - [26/Jul/2017:14:53:41 +0000] "GET /help HTTP/1.1" 200 33967 "-" "curl/7.53.0"
==> /var/log/gitlab/gitlab-monitor/current <==
2017-07-26_14:53:47.27460 ::1 - - [26/Jul/2017:14:53:47 UTC] "GET /sidekiq HTTP/1.1" 200 3399
2017-07-26_14:53:47.27464 - -> /sidekiq
2017-07-26_14:53:49.22004 ::1 - - [26/Jul/2017:14:53:49 UTC] "GET /database HTTP/1.1" 200 42025
2017-07-26_14:53:49.22007 - -> /database
2017-07-26_14:53:51.48866 ::1 - - [26/Jul/2017:14:53:51 UTC] "GET /process HTTP/1.1" 200 7132
2017-07-26_14:53:51.48873 - -> /process
==> /var/log/gitlab/gitlab-rails/production.log <==
Started GET "/-/metrics" for 127.0.0.1 at 2017-07-26 14:53:55 +0000
Processing by MetricsController#index as HTML
Filter chain halted as :validate_prometheus_metrics rendered or redirected
Completed 404 Not Found in 1ms (Views: 0.7ms | ActiveRecord: 0.0ms)
Docker ps
xxx@xxx:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
67e013741b6d gitlab/gitlab-ce:latest "/assets/wrapper" 2 hours ago Up About an hour (healthy) 0.0.0.0:8002->22/tcp, 0.0.0.0:8001->80/tcp, 0.0.0.0:8000->443/tcp gitlab
Netstat
xxx@xxx:~$ netstat --listen
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 localhost:smtp *:* LISTEN
tcp 0 0 *:2020 *:* LISTEN
tcp 0 0 *:git *:* LISTEN
tcp 0 0 *:43918 *:* LISTEN
tcp 0 0 *:sunrpc *:* LISTEN
tcp6 0 0 localhost:smtp [::]:* LISTEN
tcp6 0 0 [::]:8000 [::]:* LISTEN
tcp6 0 0 [::]:8001 [::]:* LISTEN
tcp6 0 0 [::]:8002 [::]:* LISTEN
tcp6 0 0 [::]:2020 [::]:* LISTEN
tcp6 0 0 [::]:git [::]:* LISTEN
tcp6 0 0 [::]:sunrpc [::]:* LISTEN
tcp6 0 0 [::]:http [::]:* LISTEN
tcp6 0 0 [::]:43730 [::]:* LISTEN
udp 0 0 *:54041 *:*
udp 0 0 *:sunrpc *:*
udp 0 0 *:snmp *:*
udp 0 0 *:958 *:*
udp 0 0 localhost:969 *:*
udp 0 0 *:37620 *:*
udp6 0 0 [::]:54611 [::]:*
udp6 0 0 [::]:sunrpc [::]:*
udp6 0 0 localhost:snmp [::]:*
udp6 0 0 [::]:958 [::]:*
I cannot find what is wrong. Can anybody help?
Here is a docker-compose.yml which worked fine for me
version: '2'
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    ports:
      - "8002:22"
      - "8000:8000"
      - "8001:443"
    environment:
      - "GITLAB_OMNIBUS_CONFIG=external_url 'http://192.168.33.100:8000/'"
    volumes:
      - ./config:/etc/gitlab
      - ./logs:/var/log/gitlab
      - ./data:/var/opt/gitlab
The thing is that when you configure external_url as http://<MyIP>:8000, the port nginx listens on inside the container is also updated to 8000. In your case you set external_url to port 8001 but map host port 8001 to container port 80; you should map 8001 to 8001 instead.
Read the URL below for details:
https://docs.gitlab.com/omnibus/settings/nginx.html#setting-the-nginx-listen-port
If you need to override this port, you can do that in gitlab.rb:
nginx['listen_port'] = 8081
I prefer to launch GitLab from a docker-compose file instead of raw docker commands, as it is easier to configure, start, and restart GitLab.
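Applied to the original setup, a minimal sketch would be to keep external_url on port 8001 and align the port mapping with it (<myIP> is of course a placeholder):
    environment:
      - "GITLAB_OMNIBUS_CONFIG=external_url 'http://<myIP>:8001/'"
    ports:
      - "8001:8001"   # nginx inside the container now listens on 8001
      - "8000:443"
      - "8002:22"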

Force InfluxDB to listen on localhost only

I am trying to get InfluxDB v1.1.1 to listen on localhost only AND work with both IPv4/v6. Server is running on Debian 7.
With the default config, I can reach InfluxDB both internally and externally:
[http]
# The bind address used by the HTTP service.
bind-address = ":8086"
netstat -antp | grep "influx"
tcp6 0 0 :::8083 :::* LISTEN 813/influxd
tcp6 0 0 :::8086 :::* LISTEN 813/influxd
tcp6 0 0 :::8088 :::* LISTEN 813/influxd
curl -4 -sl -I localhost:8086/ping <- Works
curl -6 -sl -I localhost:8086/ping <- Works
Attempting to listen on localhost only using IPv6, I can't reach InfluxDB internally or externally over either IPv4 or IPv6:
[http]
# The bind address used by the HTTP service.
bind-address = "[::1]:8086"
netstat -antp | grep "influx"
tcp6 0 0 :::8083 :::* LISTEN 1831/influxd
tcp6 0 0 ::1:8086 :::* LISTEN 1831/influxd
tcp6 0 0 :::8088 :::* LISTEN 1831/influxd
curl -4 -sl -I localhost:8086/ping <- Does not work
curl -6 -sl -I localhost:8086/ping <- Does not work
Attempting to listen on localhost only using IPv4, I can reach InfluxDB internally over IPv4 only:
[http]
# The bind address used by the HTTP service.
bind-address = "127.0.0.1:8086"
netstat -antp | grep "influx"
tcp 0 0 127.0.0.1:8086 0.0.0.0:* LISTEN 3375/influxd
tcp6 0 0 :::8083 :::* LISTEN 3375/influxd
tcp6 0 0 :::8088 :::* LISTEN 3375/influxd
curl -4 -sl -I localhost:8086/ping <- Works
curl -6 -sl -I localhost:8086/ping <- Does not work
Not sure if I am missing something in the config or if this is just not possible.
This seems to have been solved in April 2018. TL;DR: use bind-address twice, once at the top level (for backup/diagnostics access to InfluxDB) and once in the [http] section for the HTTP API:
# RPC service used for backup/restore and diagnostics
bind-address = "127.0.0.1:8088"

[http]
  # HTTP API (the endpoint queried with curl above)
  bind-address = "127.0.0.1:8086"
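After restarting influxd you can rerun the same checks from the question to confirm which stacks answer (the HTTP API stays on 8086, while 8088 is only the backup/diagnostics RPC port):
netstat -antp | grep influxd
curl -4 -sl -I localhost:8086/ping
curl -6 -sl -I localhost:8086/ping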

Deploying an app with Sunspot-based search on webbyapp

I am trying to deploy a Rails app on webbyapp. I am using Sunspot for the search functionality. It works fine in development mode.
After deploying my app I get the "we are sorry, something went wrong" page. I checked the logs and got this:
(eval):2:in `post'
/usr/lib/ruby/gems/1.9.1/gems/sunspot-1.3.0/lib/sunspot/search/abstract_search.rb:38:in `execute'
/usr/lib/ruby/gems/1.9.1/gems/sunspot_rails-1.3.0/lib/sunspot/rails/searchable.rb:329:in `solr_execute_search'
/usr/lib/ruby/gems/1.9.1/gems/sunspot_rails-1.3.0/lib/sunspot/rails/searchable.rb:153:in `solr_search'
/var/rapp/StudyAbroader/app/controllers/home_controller.rb:24:in `search'
/usr/lib/ruby/gems/1.9.1/gems/actionpack-3.1.3/lib/action_controller/metal/implicit_render.rb:4:in `send_action'
/usr/lib/ruby/gems/1.9.1/gems/actionpack-3.1.3/lib/abstract_controller/base.rb:167:in `process_action'
/usr/lib/ruby/gems/1.9.1/gems/actionpack-3.1.3/lib/action_controller/metal/rendering.rb:10:in `process_action'):
app/controllers/home_controller.rb:24:in `search'
I don't know what to make of it.
I installed tomcat6 and openjdk-6 on my production machine, as suggested in a lot of tutorials.
Here is my sunspot.yml file:
production:
  solr:
    hostname: xxx.webbyapp.com
    port: 8080
    log_level: WARNING
Updated with netstat -ntpl output:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:53888 0.0.0.0:* LISTEN -
tcp6 0 0 127.0.0.1:8005 :::* LISTEN -
tcp6 0 0 :::8080 :::* LISTEN -
tcp6 0 0 :::8982 :::* LISTEN 19270/java
tcp6 0 0 :::22 :::* LISTEN -
tcp6 0 0 ::1:5432 :::* LISTEN -
I have been trying to debug it since this morning, but no luck. Can someone please have a look?
Replacing the schema.xml in production with the development schema.xml (which can be found at appname/solr/conf/schema.xml) did it for me; at least Sunspot started to show search results.
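A rough sketch of that fix, assuming the production Solr core keeps its configuration in a conf/ directory (the exact host and path are placeholders; they depend on your Tomcat/Solr layout) and that the standard sunspot_rails rake task is available:
# copy the development schema to the production Solr core (host and path are placeholders)
scp appname/solr/conf/schema.xml user@production:/path/to/solr/conf/schema.xml
# on the production server: restart Solr (here running under Tomcat) and rebuild the index
sudo service tomcat6 restart
RAILS_ENV=production bundle exec rake sunspot:reindex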
