I'm having an issue with an Ubuntu 22.10 host on which I installed Docker and have several services running (docker-elk).
I'm also running a simple "Hello world" HTTP server on port 3000:
GET http://localhost:3000 -> "Hello world"
I can access my Docker services from another computer on the same network using:
http://192.168.3.10:5601 -> Loads Kibana UI running in Docker on Host
However, I cannot access my simple HTTP server from that same computer:
http://192.168.3.10:3000 -> Refused to connect
From reading around, it looks like Docker makes some changes to iptables, which probably has something to do with this:
I tried running that command, with no success. Now iptables --list gives me:
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (1 references)
target prot opt source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target prot opt source destination
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere
RETURN all -- anywhere anywhere
I have previously "allowed" those ports using ufw, but when that didn't work I also tried completely disabling the firewall using:
sudo ufw disable
sudo iptables -F
Even then I still can't reach my HTTP server from the other computer on the same network.
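Worth ruling out before blaming iptables (an assumption on my part, since the server code isn't shown): a server that binds to 127.0.0.1 refuses connections from other machines no matter what the firewall allows. A minimal stdlib sketch of the difference:

```python
import socket
import threading

def serve_once(bind_addr: str, port: int) -> int:
    """Start a one-shot TCP server and return the port it listens on."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((bind_addr, port))
    srv.listen(1)
    actual_port = srv.getsockname()[1]

    def run():
        conn, _ = srv.accept()
        conn.sendall(b"Hello world")
        conn.close()
        srv.close()

    threading.Thread(target=run, daemon=True).start()
    return actual_port

# Bound to 127.0.0.1: reachable via loopback only, so a request to
# http://192.168.3.10:3000 from another machine would be refused.
port = serve_once("127.0.0.1", 0)  # port 0 = pick any free port
with socket.create_connection(("127.0.0.1", port)) as c:
    assert c.recv(64) == b"Hello world"

# Bound to 0.0.0.0: reachable on every interface, including the LAN IP.
port = serve_once("0.0.0.0", 0)
with socket.create_connection(("127.0.0.1", port)) as c:
    assert c.recv(64) == b"Hello world"
```

If `ss -tlnp | grep 3000` on the host shows `127.0.0.1:3000` rather than `0.0.0.0:3000`, this is the culprit rather than the firewall.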
I am using a really simple docker-compose file from here:
https://github.com/brandonserna/flask-docker-compose
this is the docker compose file:
version: '3.5'
services:
  flask-app-service:
    build: ./app
    volumes:
      - ./app:/usr/src/app
      - .:/user/src
    ports:
      - 5555:9999
However, I can only reach the app from outside the network when I am using port 80.
ports:
  - 80:9999
When I use, for example, port 8000, I can't reach the container from outside the network.
From the local machine I can reach the app (tested with wget localhost:8000).
ports:
  - 8000:9999
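One thing worth trying (my assumption, not something from the original post): make the host interface explicit in the mapping, to rule out the port being published on an unexpected interface:

```yaml
ports:
  - "0.0.0.0:8000:9999"
```

If the container is still reachable locally but not externally with this, the block is more likely upstream of Docker (host firewall, or the network between you and the host).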
iptables -L gives me this:
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (2 references)
target prot opt source destination
ACCEPT tcp -- anywhere 172.18.0.2 tcp dpt:9999
ACCEPT tcp -- anywhere 172.17.0.2 tcp dpt:http
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
target prot opt source destination
DROP all -- anywhere anywhere
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
Not enough rep for a comment, so posting this as an answer:
From what you describe, it could be either a firewall rule on the host running the container, or one somewhere between the host and your machine.
To test which of the two it is, I'd use nmap with the --reason and --traceroute options. Since we have connectivity on another port, a complete block between your machine and the container is unlikely, so the traceroute probably won't give much info, but run it just in case.
Also, if you have root access to the host machine (or at least to the iptables service), try stopping it to check whether that's the root cause of the block.
Also check with docker ps that the container port is bound to a port on the machine; it should look something like this:
0.0.0.0:port->port/tcp
where port is the port number.
If it doesn't, it may be a problem with the docker-compose up command, so try running the service with a simple docker run command instead.
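As a supplement to the docker ps check above: reachability of a published port can also be probed from any machine with a few lines of stdlib Python (the host and port below are placeholders):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("192.168.3.10", 8000) from the other machine,
# and port_open("127.0.0.1", 8000) on the host itself.
```

If the port answers locally but not remotely, the publish itself is fine and the problem is a firewall or routing rule.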
I have a docker container which needs to add some iptables rules into the host. From searching, it seems like this is supposed to work either in privileged mode or by adding CAP_NET_ADMIN and CAP_NET_RAW and in host networking mode.
However, I tried both of these, and no matter what I do the Docker container seems to have its own set of iptables rules. Here's an example:
On the host machine, iptables -L:
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
# Warning: iptables-legacy tables present, use iptables-legacy to see them
(Note that I ran Docker with iptables set to false to try to debug this, so it's a minimal set of rules; that setting doesn't seem to make a difference.)
Next, in an Ubuntu container (docker run -it --privileged --net=host ubuntu:18.04 /bin/bash), the same command (iptables -L):
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
So it's a totally different filter table, as if the container has its own copy. The behavior is similar for other tables, and I've confirmed that adding rules in the container does not add them on the host, even though the container is privileged and in host networking mode. (The "iptables-legacy tables present" warning on the host may be relevant: if the host's rules live in the nf_tables backend while the container's iptables binary reads the legacy backend, the two will report different rule sets.)
The host is a raspberry pi running Raspbian buster. Is there something else I need to do to make this work?
I should have thought of this earlier, but I checked the Raspbian kernel version and it was 4.19-something, which is ancient at this point. So I re-installed with Ubuntu 20.04 server (which provides an arm64 distribution for the Raspberry Pi) and it seems to work as expected now. So it was likely something to do with the out-of-date kernel.
Bear with me, I'm new to Docker...
I'm trying to get a Docker environment going on a Red Hat Linux server (7.6) and am having trouble accessing containers from a computer other than the host.
I got Docker installed no problem. Then, the first container I installed was Portainer and the Portainer Agent:
docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer
docker run -d -p 9001:9001 --name portainer_agent --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker/volumes:/var/lib/docker/volumes portainer/agent
Seems peachy:
# docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
973a685cfbe1 portainer/portainer "/portainer" 19 hours ago Up 2 minutes 0.0.0.0:8000->8000/tcp, 0.0.0.0:9000->9000/tcp portainer
602537dc21ec portainer/agent "./agent" 45 hours ago Up 19 hours 0.0.0.0:9001->9001/tcp portainer_agent
And using # curl http://localhost:9000 connects just fine. However, the connection gets dropped when attempting to connect from another computer on the same network (in a different subnet, if that matters). I can connect to the server just fine (I'm managing it via SSH, and even tested netcat on port 9002 for good measure).
The iptables, if this helps:
# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (2 references)
target prot opt source destination
ACCEPT tcp -- anywhere 172.17.0.2 tcp dpt:etlservicemgr
ACCEPT tcp -- anywhere 172.17.0.3 tcp dpt:cslistener
ACCEPT tcp -- anywhere 172.17.0.3 tcp dpt:irdmi
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
target prot opt source destination
DROP all -- anywhere anywhere
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
I've searched around a bit but keep finding conflicting answers (some suggesting that it should just work, and others suggesting that there's a lot more I've got left to learn and configure). I'm afraid that I'm fumbling in the dark. I gather that I need a route configured to forward host traffic to the container? Or an iptables rule? What exactly am I missing?
...Nevermind.
On a lark, I tried connecting to the server from a device that's on-premises; rather than my computer which is connected via VPN. The on-prem device connected fine.
Using Docker v17.03.1-ce on a Linux Mint machine, I'm unable to reach the container's web server (container port 5000) with my browser (localhost port 9000) on the host.
Container launched with the command:
sudo docker run -d -p 9000:5000 --name myContainer imageName
I started by checking that the server (flask) on my container was properly launched. It's launched.
I wanted to check that the server was working properly, so in the container, using curl, I sent a GET request to localhost, port 5000. The server returned the web page.
So the server is working; the issue therefore lies somewhere in the communication between container and host.
I checked iptables, but am not sure what to make of it:
sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-ISOLATION all -- anywhere anywhere
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (2 references)
target prot opt source destination
ACCEPT tcp -- anywhere 172.17.0.2 tcp dpt:5000
Chain DOCKER-ISOLATION (1 references)
target prot opt source destination
DROP all -- anywhere anywhere
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
sudo iptables -t nat -L -n
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
DOCKER all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
DOCKER all -- 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 172.17.0.0/16 0.0.0.0/0
MASQUERADE all -- 172.18.0.0/16 0.0.0.0/0
MASQUERADE tcp -- 172.17.0.2 172.17.0.2 tcp dpt:5000
Chain DOCKER (2 references)
target prot opt source destination
RETURN all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:9000 to:172.17.0.2:5000
Expected result: using my browser, with URL "localhost:9000", I can receive the homepage sent from the container through port 5000.
edit: Adding docker logs and docker ps
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
59a20248c5b2 apptest "python3 src/jboos..." 12 hours ago Up 12 hours 0.0.0.0:9000->5000/tcp jboost
sudo docker logs jboost
* Serving Flask app "jboost_app" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: on
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 310-292-856
127.0.0.1 - - [03/Jul/2019 04:12:54] "GET / HTTP/1.1" 200 -
edit 2: adding results for curl localhost:9000 on the host machine
So when connecting with my web browser the connection doesn't work, but curl gives a more specific message:
curl localhost:9000
curl: (56) Recv failure: Connection reset by peer
I found the solution in this post: https://devops.stackexchange.com/questions/3380/dockerized-flask-connection-reset-by-peer
The Docker networking and port forwarding were working correctly. The problem was with my Flask server: by default, it is configured to only accept requests from localhost.
When launching your Flask server with the run command, you must specify host='0.0.0.0' so that it serves requests from any IP.
if __name__ == "__main__":
    app.run(host='0.0.0.0')
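The effect of the bind address can be reproduced without Flask, using only the standard library (the handler below is a made-up stand-in for the real app):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"home page"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

# "0.0.0.0" is what makes the server reachable from outside the
# container; with "127.0.0.1" only in-container clients could connect.
server = HTTPServer(("0.0.0.0", 0), Hello)  # port 0 = any free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

resp = urllib.request.urlopen(f"http://127.0.0.1:{port}/")
assert resp.read() == b"home page"
server.shutdown()
```

The same applies to any server framework: Docker's port mapping delivers the packet to the container, but the process still has to be listening on an interface that packet arrives on.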
I am trying to connect to a docker-compose deployed service stack on a DigitalOcean Docker droplet. It contains a MySQL container with a database and a go/alpine container with the API. I am using a custom bridge network which the 2 containers connect to. The issue also occurred when trying to deploy the stack locally on my mac and accessing the API container via localhost:port. I am not using docker-machine as I assume it only is needed for multi-host deployments. The stack is deployed successfully. The server container seems to be able to connect to the DB container. I am wondering if the issue might be within the host's firewall rules?
I did try to run the app locally with a MySQL server running on my machine, and it does work, so I don't think the cause is malfunctioning code. I couldn't get it to work with either a basic HTTP server or HTTPS with self-signed certificates (both work on my local machine).
docker-compose.yml
version: "3.7"
networks:
  net:
    attachable: true
services:
  db:
    build: ./db
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ENV=local
    networks:
      - net
  server:
    build: ./server
    ports:
      - "80:5000"
      - "443:5001"
    networks:
      - net
    tty: true
    links:
      - db:db
iptables -L with the stack deployed:
Chain INPUT (policy DROP)
target prot opt source destination
ufw-before-logging-input all -- anywhere anywhere
ufw-before-input all -- anywhere anywhere
ufw-after-input all -- anywhere anywhere
ufw-after-logging-input all -- anywhere anywhere
ufw-reject-input all -- anywhere anywhere
ufw-track-input all -- anywhere anywhere
Chain FORWARD (policy ACCEPT)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ufw-before-logging-forward all -- anywhere anywhere
ufw-before-forward all -- anywhere anywhere
ufw-after-forward all -- anywhere anywhere
ufw-after-logging-forward all -- anywhere anywhere
ufw-reject-forward all -- anywhere anywhere
ufw-track-forward all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
ufw-before-logging-output all -- anywhere anywhere
ufw-before-output all -- anywhere anywhere
ufw-after-output all -- anywhere anywhere
ufw-after-logging-output all -- anywhere anywhere
ufw-reject-output all -- anywhere anywhere
ufw-track-output all -- anywhere anywhere
Chain DOCKER (2 references)
target prot opt source destination
ACCEPT tcp -- anywhere 172.24.0.2 tcp dpt:mysql
ACCEPT tcp -- anywhere 172.24.0.3 tcp dpt:5001
ACCEPT tcp -- anywhere 172.24.0.3 tcp dpt:5000
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
target prot opt source destination
DROP all -- anywhere anywhere
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
Chain ufw-after-forward (1 references)
target prot opt source destination
Chain ufw-after-input (1 references)
target prot opt source destination
ufw-skip-to-policy-input udp -- anywhere anywhere udp dpt:netbios-ns
ufw-skip-to-policy-input udp -- anywhere anywhere udp dpt:netbios-dgm
ufw-skip-to-policy-input tcp -- anywhere anywhere tcp dpt:netbios-ssn
ufw-skip-to-policy-input tcp -- anywhere anywhere tcp dpt:microsoft-ds
ufw-skip-to-policy-input udp -- anywhere anywhere udp dpt:bootps
ufw-skip-to-policy-input udp -- anywhere anywhere udp dpt:bootpc
ufw-skip-to-policy-input all -- anywhere anywhere ADDRTYPE match dst-type BROADCAST
Chain ufw-after-logging-forward (1 references)
target prot opt source destination
Chain ufw-after-logging-input (1 references)
target prot opt source destination
LOG all -- anywhere anywhere limit: avg 3/min burst 10 LOG level warning prefix "[UFW BLOCK] "
Chain ufw-after-logging-output (1 references)
target prot opt source destination
Chain ufw-after-output (1 references)
target prot opt source destination
Chain ufw-before-forward (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT icmp -- anywhere anywhere icmp destination-unreachable
ACCEPT icmp -- anywhere anywhere icmp time-exceeded
ACCEPT icmp -- anywhere anywhere icmp parameter-problem
ACCEPT icmp -- anywhere anywhere icmp echo-request
ufw-user-forward all -- anywhere anywhere
Chain ufw-before-input (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ufw-logging-deny all -- anywhere anywhere ctstate INVALID
DROP all -- anywhere anywhere ctstate INVALID
ACCEPT icmp -- anywhere anywhere icmp destination-unreachable
ACCEPT icmp -- anywhere anywhere icmp time-exceeded
ACCEPT icmp -- anywhere anywhere icmp parameter-problem
ACCEPT icmp -- anywhere anywhere icmp echo-request
ACCEPT udp -- anywhere anywhere udp spt:bootps dpt:bootpc
ufw-not-local all -- anywhere anywhere
ACCEPT udp -- anywhere 224.0.0.251 udp dpt:mdns
ACCEPT udp -- anywhere 239.255.255.250 udp dpt:1900
ufw-user-input all -- anywhere anywhere
Chain ufw-before-logging-forward (1 references)
target prot opt source destination
Chain ufw-before-logging-input (1 references)
target prot opt source destination
Chain ufw-before-logging-output (1 references)
target prot opt source destination
Chain ufw-before-output (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ufw-user-output all -- anywhere anywhere
Chain ufw-logging-allow (0 references)
target prot opt source destination
LOG all -- anywhere anywhere limit: avg 3/min burst 10 LOG level warning prefix "[UFW ALLOW] "
Chain ufw-logging-deny (2 references)
target prot opt source destination
RETURN all -- anywhere anywhere ctstate INVALID limit: avg 3/min burst 10
LOG all -- anywhere anywhere limit: avg 3/min burst 10 LOG level warning prefix "[UFW BLOCK] "
Chain ufw-not-local (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere ADDRTYPE match dst-type LOCAL
RETURN all -- anywhere anywhere ADDRTYPE match dst-type MULTICAST
RETURN all -- anywhere anywhere ADDRTYPE match dst-type BROADCAST
ufw-logging-deny all -- anywhere anywhere limit: avg 3/min burst 10
DROP all -- anywhere anywhere
Chain ufw-reject-forward (1 references)
target prot opt source destination
Chain ufw-reject-input (1 references)
target prot opt source destination
Chain ufw-reject-output (1 references)
target prot opt source destination
Chain ufw-skip-to-policy-forward (0 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere
Chain ufw-skip-to-policy-input (7 references)
target prot opt source destination
DROP all -- anywhere anywhere
Chain ufw-skip-to-policy-output (0 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere
Chain ufw-track-forward (1 references)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere ctstate NEW
ACCEPT udp -- anywhere anywhere ctstate NEW
Chain ufw-track-input (1 references)
target prot opt source destination
Chain ufw-track-output (1 references)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere ctstate NEW
ACCEPT udp -- anywhere anywhere ctstate NEW
Chain ufw-user-forward (1 references)
target prot opt source destination
Chain ufw-user-input (1 references)
target prot opt source destination
tcp -- anywhere anywhere tcp dpt:ssh ctstate NEW recent: SET name: DEFAULT side: source mask: 255.255.255.255
ufw-user-limit tcp -- anywhere anywhere tcp dpt:ssh ctstate NEW recent: UPDATE seconds: 30 hit_count: 6 name: DEFAULT side: source mask: 255.255.255.255
ufw-user-limit-accept tcp -- anywhere anywhere tcp dpt:ssh
ACCEPT tcp -- anywhere anywhere tcp dpt:2375
ACCEPT tcp -- anywhere anywhere tcp dpt:2376
Chain ufw-user-limit (1 references)
target prot opt source destination
LOG all -- anywhere anywhere limit: avg 3/min burst 5 LOG level warning prefix "[UFW LIMIT BLOCK] "
REJECT all -- anywhere anywhere reject-with icmp-port-unreachable
Chain ufw-user-limit-accept (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere
Chain ufw-user-logging-forward (0 references)
target prot opt source destination
Chain ufw-user-logging-input (0 references)
target prot opt source destination
Chain ufw-user-logging-output (0 references)
target prot opt source destination
Chain ufw-user-output (1 references)
target prot opt source destination
UPDATE:
I have 3 JSON files with configs & credentials for different environments that are parsed into a config object with the following format (credentials substituted for obvious reasons):
{
  "server": {
    "certificate": "<HTTPS_CERT>.pem",
    "key": "<HTTPS_KEY>.pem",
    "ip": "127.0.0.1",
    "port": "5000",
    "protocol": "http://",
    "file_protocol": "bfile://"
  },
  "database": {
    "address": "db",
    "port": "3306",
    "name": "brieefly",
    "user": "<USERNAME>",
    "password": "<PASSWORD>"
  },
  "auth": {
    "public": "<JWT_AUTH_KEY>.rsa.pub",
    "private": "<JWT_AUTH_PRIV_KEY>.rsa"
  }
}
The config is then passed to a db object and a router object:
db:
// connect to db - this succeeds
func Connect(config *config.Config) (*DB, *err.Error) {
    connectionString := fmt.Sprintf("%s:%s@(%s:%s)/%s?parseTime=true",
        config.Database.User,
        config.Database.Password,
        config.Database.Address,
        config.Database.Port,
        config.Database.Name)
    log.Debug(connectionString)
    db, sqlErr := sql.Open("mysql", connectionString)
    if sqlErr != nil {
        return nil, err.New(sqlErr, err.ErrConnectionFailure, nil)
    }
    sqlErr = db.Ping()
    if sqlErr != nil {
        return nil, err.New(sqlErr, err.ErrConnectionFailure, nil)
    }
    return &DB{db}, nil
}
.
.
.
router:
.
.
.
// Run - starts the server
func (r *Router) Run() *err.Error {
    path := config.MyPath(r.config)
    var httpErr error
    if r.config.Environment == config.Local {
        httpErr = http.ListenAndServe(path, r.mux)
    } else {
        httpErr = http.ListenAndServeTLS(path, r.config.TLSCert(), r.config.TLSKey(), r.mux)
    }
    return err.New(httpErr, err.ErrInternal, nil)
}
.
.
.
main function:
// Since the db container needs time to start the MySQL server daemon, the app retries the connection infinitely until it succeeds; *router.Run()* is a blocking operation.
func main() {
    retry.PerformInfinite(retry.DefaultOptions(), func() *err.Error {
        log.Info("Configuring...")
        c, cErr := config.NewConfig(config.Local)
        if cErr != nil {
            log.Error(cErr)
            return cErr
        }
        log.Info("Configuration successful.")
        log.Info("Connecting to database...")
        db, dbErr := db.Connect(c)
        if dbErr != nil {
            log.Error(dbErr)
            return dbErr
        }
        log.Info("Connected.")
        router := net.NewRouter(db, c)
        log.Info("Server is running.")
        log.Info("Accepting standard input -> ")
        rtErr := router.Run()
        if rtErr != nil {
            log.Error(rtErr)
            return rtErr
        }
        return nil
    })
}
In Docker generally the localhost 127.0.0.1 address means "this container". If you start a server process and tell it to listen on 127.0.0.1, it will only accept connections originating from within the same container. You almost always want to set servers to listen on the magic 0.0.0.0 "all interfaces" address, at which point they will be able to accept connections from other containers and the host.
In your setup, this just involves changing the configuration value "server": {"ip": "0.0.0.0"}.
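Concretely, in the config shown above, the server section would become (other keys unchanged):

```json
"server": {
    "certificate": "<HTTPS_CERT>.pem",
    "key": "<HTTPS_KEY>.pem",
    "ip": "0.0.0.0",
    "port": "5000",
    "protocol": "http://",
    "file_protocol": "bfile://"
}
```

A general caveat (not from the post): binding to 0.0.0.0 also exposes the port on the droplet's public interfaces, so the ufw rules shown earlier still matter.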