How to preview web apps on GCloud - docker

I have Grafana hosted on Google Cloud Platform using Docker - https://github.com/kamon-io/docker-grafana-graphite. I confirmed Docker is running on GCE, and as GCE only allows ports 8080 and upwards, I changed the Grafana port to 8080. I tried previewing using the console, and it returned:
Error: Could not connect to Cloud Shell on port 8080.
Ensure your server is listening on port 8080 and try again.
This error is not specific to this app alone but occurs for all the apps I have hosted on GCE, so I am seeking a valid way to preview web apps on GCE.
This is the docker-compose.yml file:
version: '2'
services:
  grafana_graphite:
    build: .
    image: kamon/grafana_graphite
    container_name: kamon-grafana-dashboard
    ports:
      - '8080:8080'
      - '8181:8181'
      - '8125:8125/udp'
      - '8126:8126'
      - '2003:2003'
    volumes:
      - ./data/whisper:/opt/graphite/storage/whisper
      - ./data/grafana:/opt/grafana/data
      - ./log/graphite:/opt/graphite/storage/log
      - ./log/supervisor:/var/log/supervisor

The Grafana backend binds to port 3000 by default, so even if you open the firewall on port 8080 it may not work. You have to use one of the following alternatives:
Redirect port 8080 to the Grafana port using:
$ sudo iptables -t nat -A PREROUTING -p tcp --dport 8080 -j REDIRECT --to-port 3000
Put a webserver like Nginx or Apache in front of Grafana and have them proxy requests to Grafana.
Further information on Grafana configuration options can be found in the documentation.
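As a sketch of the second alternative, a minimal Nginx site config could listen on 8080 and proxy everything to the Grafana backend on 3000 (the file path and `server_name` here are placeholders; adjust for your distribution):

```nginx
# /etc/nginx/conf.d/grafana.conf (path may vary by distribution)
server {
    listen 8080;
    server_name _;

    location / {
        # Forward all requests to the Grafana backend on port 3000
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Reload Nginx after adding the file; unlike the iptables rule, this also gives you a place to add TLS or authentication later.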

Related

Docker: how to route path to port?

I am running 2 applications using separate Docker containers on the same server.
The first application uses the port 8000. Dockerfile:
EXPOSE 8000
docker-compose.yml:
command: uvicorn app.main:app --host 0.0.0.0 --port 8000
ports:
  - "8000:8000"
The second application uses the port 8001. Dockerfile:
EXPOSE 8001
docker-compose.yml:
command: uvicorn app.main:app --host 0.0.0.0 --port 8001
ports:
  - "8001:8001"
I have created DNS type A record:
api.mydomain.com to 123.123.123.123
api.mydomain.com:8000 and api.mydomain.com:8001 are both functioning. I want to route a URL path to a port. For example:
api.mydomain.com/appone/ to api.mydomain.com:8000
api.mydomain.com/apptwo/ to api.mydomain.com:8001
Is it possible to do this using Docker? If yes, how can I do it?
Is it possible to do this using Docker?
Directly no, indirectly yes. Docker only runs applications; it does not route HTTP paths by itself.
how can I do it?
You can run an HTTP proxy server that forwards request paths to specific ports. Nginx and Apache are the most popular HTTP servers; there is also the Fabio proxy.
You can run that proxy server inside Docker.
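For illustration, a minimal Nginx config for the path-to-port mapping described in the question might look like this (a sketch; the ports come from the question, and the trailing slash in `proxy_pass` strips the location prefix before forwarding):

```nginx
server {
    listen 80;
    server_name api.mydomain.com;

    # api.mydomain.com/appone/ -> first app on port 8000
    location /appone/ {
        proxy_pass http://127.0.0.1:8000/;
        proxy_set_header Host $host;
    }

    # api.mydomain.com/apptwo/ -> second app on port 8001
    location /apptwo/ {
        proxy_pass http://127.0.0.1:8001/;
        proxy_set_header Host $host;
    }
}
```

If the apps generate absolute links, they may also need to be told their path prefix (e.g. uvicorn's `--root-path /appone`), since the proxy strips it.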

Using custom local domain with Docker

I am running Docker using Docker Desktop on Windows.
I would like to set up a simple server.
I run it using:
$ docker run -di -p 1234:80 yahya/example-server
This works as expected and runs fine on localhost:1234.
However, I want to give it its own local domain name (e.g. api.example.test), which should only be accessible locally.
Normally for a VM setup I would edit the Windows hosts file, get the IP address of the VM (let's say it's 192.168.90.90) and add something like the following:
192.168.90.90 api.example.test
How would I do something similar in Docker?
I know you can enter an ip address for port forwarding, but if I enter any local IP I get the following error:
$ docker run -di -p 192.168.90.90:1234:80 yahya/example-server
docker: Error response from daemon: Ports are not available: exposing port TCP 192.168.90.90:80 -> 0.0.0.0:0: listen tcp 192.168.90.90:80: can't bind on the specified endpoint.
However, it does work for 10.0.0.7 for some reason (I found this IP automatically added in the hosts file after installing Docker Desktop).
$ docker run -di -p 10.0.0.7:1234:80 yahya/example-server
This essentially solves the issue, but would become an issue again if I have more than 1 project.
Is there a way I can use another local IP address (preferably without a nginx proxy)?
I think there is no simple way to do this without some kind of reverse proxy.
In my dev environment I use Traefik and dnscrypt-proxy to get automatic *.test domain names for multiple projects at the same time.
First, start the Traefik proxy on ports 80 and 443. Example docker-compose.yml:
---
networks:
  traefik:
    name: traefik
services:
  traefik:
    image: traefik:2.8.3
    container_name: traefik
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - traefik
    ports:
      - 80:80
      - 443:443
    environment:
      TRAEFIK_API: 'true'
      TRAEFIK_ENTRYPOINTS_http: 'true'
      TRAEFIK_ENTRYPOINTS_http_ADDRESS: :80
      TRAEFIK_ENTRYPOINTS_https: 'true'
      TRAEFIK_ENTRYPOINTS_https_ADDRESS: :443
      TRAEFIK_ENTRYPOINTS_https_HTTP_TLS: 'true'
      TRAEFIK_GLOBAL_CHECKNEWVERSION: 'false'
      TRAEFIK_GLOBAL_SENDANONYMOUSUSAGE: 'false'
      TRAEFIK_PROVIDERS_DOCKER: 'true'
      TRAEFIK_PROVIDERS_DOCKER_EXPOSEDBYDEFAULT: 'false'
Then, attach your service to the traefik network and set labels for routing (see Traefik & Docker). Example docker-compose.yml:
---
networks:
  traefik:
    external: true
services:
  example:
    image: yahya/example-server
    restart: always
    labels:
      traefik.enable: true
      traefik.docker.network: traefik
      traefik.http.routers.example.rule: Host(`example.test`)
      traefik.http.services.example.loadbalancer.server.port: 80
    networks:
      - traefik
Finally, add to your hosts file:
127.0.0.1 example.test
Instead of manually adding every future domain to the hosts file, you can set up a local DNS resolver. I prefer to use the cloaking feature of dnscrypt-proxy for this.
You can install it following the installation instructions, then uncomment the following line in dnscrypt-proxy.toml:
cloaking_rules = 'cloaking-rules.txt'
and add to cloaking-rules.txt:
*.test 127.0.0.1
Finally, configure your network connection to use 127.0.0.1 as its DNS resolver.
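Once the resolver is in place, you can verify the cloaking rule with a direct query against it (a sketch, assuming dnscrypt-proxy listens on the default 127.0.0.1:53 and using the example domain from above):

```shell
# Query the local resolver directly; any *.test name should resolve to 127.0.0.1
$ nslookup example.test 127.0.0.1
```

If the answer is 127.0.0.1, Traefik will receive the request and route it by the Host rule.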

Docker with rabbitmq networking

I have trouble understanding how docker port mapping works. I have a docker-compose file with a couple of containers, one of them is a rabbitmq service.
The docker-compose file is:
version: "3.9"
volumes:
  test:
    external: true
services:
  rabbitmq3:
    container_name: "rabbitmq"
    image: rabbitmq:3.8-management-alpine
    environment:
      - RABBITMQ_DEFAULT_USER=myuser
      - RABBITMQ_DEFAULT_PASS=mypassword
    ports:
      # AMQP protocol port
      - '5671:5672'
      # HTTP management UI
      - '15671:15672'
So the container runs using docker compose up, no problem. But when I access the RabbitMQ management plugin using container_ip:15671 or container_ip:15672, I don't get anything. But when I access it using 127.0.0.1:15672, I can access the management plugin.
It is probably a stupid question, but how can I access the container service using localhost?
The port semantics are <HOST_PORT>:<CONTAINER_PORT>. So -p 15671:15672 means that container port 15672 is mapped to port 15671 on your machine.
Based on your docker compose file, the ports 5671 and 15671 are exposed on your machine.
The management portal can be accessed at http://localhost:15671, and the RabbitMQ AMQP service at localhost:5671.
The IP 127.0.0.1 is localhost.
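As a quick check (a sketch; the credentials come from the compose file above), the management plugin's HTTP API should answer on the mapped host port:

```shell
# Hit the RabbitMQ management HTTP API through the mapped host port 15671
$ curl -u myuser:mypassword http://localhost:15671/api/overview
```

A JSON response confirms the host-side mapping works; the container IP only answers on the container ports (15672/5672), and on Docker Desktop the container IP is not reachable from the host at all.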

I tried to deploy gRPC (go) server in docker and expose port in local port but port bindings are not working

I tried to deploy a gRPC server and MongoDB in Docker, then bind the container ports to my local ports. The MongoDB port binding works fine, but the gRPC server port is not binding to my local port.
ports:
  - "50051:50051"
This is what I tried in docker-compose.yml.
docker-compose.yml
services:
  auth_server:
    container_name: auth_service
    build: .
    command: go run server.go
    volumes:
      - .:/go/src/auth_server
    working_dir: /go/src/auth_server
    ports:
      - "50051:50051"
    environment:
      PORT: 50051
In the gRPC client file I used the host and port 0.0.0.0:50051:
conn, err := grpc.Dial("0.0.0.0:50051", grpc.WithInsecure())
but it was not working. I can't find any bug, so I assume I am doing something incorrectly.
You should use 127.0.0.1:50051 when connecting from a client on the host machine, or auth_server:50051 if you are connecting from inside the docker-compose network.
If you're running this on Windows, I would check the "reserved port ranges" with:
netsh interface ipv4 show excludedportrange protocol=tcp
Also see this thread on github.
If it's Linux, check that nothing else on the host is already bound to that port.
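A sketch of the corrected client dial from a program running on the host machine (mirroring the snippet in the question; the error handling is illustrative):

```go
// From the host, dial the mapped localhost port instead of 0.0.0.0;
// from another compose service, the target would be "auth_server:50051".
conn, err := grpc.Dial("127.0.0.1:50051", grpc.WithInsecure())
if err != nil {
    log.Fatalf("did not connect: %v", err)
}
defer conn.Close()
```

0.0.0.0 is a listen-side wildcard, not a destination; clients should dial a concrete address.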

Get Host of linked container

I have the following setup:
services:
  web: &web
    ports:
      - "3000:3000"
    env_file:
      - .env.web
    links:
      - google_service
  google_service:
    command: bundle exec rails s -b 0.0.0.0 -p 3001
    ports:
      - "3001:3001"
    environment:
      RAILS_ENV: development
When I run docker-compose run --publish 3000:3000 web, I can access lvh.me:3001 in my browser.
But when I try to access this URL from inside the web container, I get Errno::ECONNREFUSED (Failed to open TCP connection to test.lvh.me:3001 (Connection refused - connect(2) for "127.0.0.1" port 3001)):
How can I access port 3001 from container google_service in the container web? Thanks
As suggested by Creek, the best way to call the google_service container from the web container is to address it as google_service:3001.
In networks created via docker compose, containers can reach each other by service name, whether or not they are linked; this service discovery works by default.
If you want to reach the service via the host instead, use the IP or DNS name of the host machine, or use host network mode (https://docs.docker.com/compose/compose-file/#network_mode) in docker compose.
With host network mode, localhost refers to the host machine, not the container itself.
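As a sketch, you can confirm service-name resolution from inside the web container (the endpoint comes from the compose file above; the path is illustrative):

```shell
# Open a one-off command in the web container and call google_service by name
$ docker-compose exec web curl http://google_service:3001/
```

This works because both services sit on the default compose network; 127.0.0.1 inside web is web itself, which is why the original connection was refused.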
