How does one connect two services in the local docker-compose network? - docker

I have followed the instructions, I think, and have come up with the following configuration:
version: '3.9'
services:
  flask:
    image: ops:imgA
    ports:
      - 5000:5000
    volumes:
      - /opt/models:/opt/models
    entrypoint: demo flask
  streamlit:
    image: ops:imgB
    ports:
      - 8501:8501
    entrypoint: streamlit run --server.port 8501 demo -- stream --flask-hostname flask
The --flask-hostname flask argument sets the host name used in an HTTP connection, i.e.: http://flask:5000. I can set it to anything.
The basic problem here is that I can spin up one of these containers, install tmux, and run everything within that single container.
But when I split it across multiple containers and use docker-compose up (which seems better than tmux), the containers can't seem to connect to each other.
I have rattled around the documentation on docker's website, but I've moved on to the troubleshooting stage. This seems to be something that should "just work" (since there are few questions along these lines). I have total control of the box I am using, and can open or close whatever ports needed.
Mainly, I am trying to figure out how to allow, with 100% default settings nothing complicated, these two services (flask and streamlit) to speak to each other.
There must be 1 or 2 settings that I need to change, and that is it.
Any ideas?
Update
I can access all of the services externally, so I am going to open up external connections between the services (using the external IP) as a "just work" quick fix, but obviously getting the composition to work internally would be the best option.
I have also confirmed that the docker-compose and docker versions are up to date.
Update-2: changed the Flask bind address from 127.0.0.1 to 0.0.0.0
Flask output:
flask_1 | * Serving Flask app "flask" (lazy loading)
flask_1 | * Environment: production
flask_1 | WARNING: This is a development server. Do not use it in a production deployment.
flask_1 | Use a production WSGI server instead.
flask_1 | * Debug mode: on
flask_1 | INFO:werkzeug: * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
flask_1 | 2020-12-19 02:22:16.449 INFO werkzeug: * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
flask_1 | INFO:werkzeug: * Restarting with inotify reloader
flask_1 | 2020-12-19 02:22:16.465 INFO werkzeug: * Restarting with inotify reloader
flask_1 | WARNING:werkzeug: * Debugger is active!
flask_1 | 2020-12-19 02:22:22.003 WARNING werkzeug: * Debugger is active!
Streamlit:
streamlit_1 |
streamlit_1 | You can now view your Streamlit app in your browser.
streamlit_1 |
streamlit_1 | Network URL: http://172.18.0.3:8501
streamlit_1 | External URL: http://71.199.156.142:8501
streamlit_1 |
streamlit_1 | 2020-12-19 02:22:11.389 Generating new fontManager, this may take some time...
And the streamlit error message:
ConnectionError:
HTTPConnectionPool(host='flask', port=5000):
Max retries exceeded with url: /foo/bar
(Caused by NewConnectionError(
'<urllib3.connection.HTTPConnection object at 0x7fb860501d90>:
Failed to establish a new connection:
[Errno 111] Connection refused'
)
)
Update-3: Hitting refresh fixed it.

The server process must be listening on the special "all interfaces" address 0.0.0.0. Many development-type servers by default listen on "localhost only" 127.0.0.1, but in Docker each container has its own private notion of localhost. If you use tmux or docker exec to run multiple processes inside a container, they have the same localhost and can connect to each other, but if the client and server are running in different containers, the request doesn't arrive on the server's localhost interface, and if the server is listening on "localhost only" it won't receive it.
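The loopback-vs-all-interfaces distinction can be demonstrated with nothing but the standard library. This is a minimal sketch (plain sockets, not Flask) of why a server bound to 0.0.0.0 answers traffic arriving on any interface, which in Docker includes the bridge address that other containers connect through:

```python
import socket
import threading

def serve_once(bind_addr: str) -> int:
    """Bind to bind_addr, accept one connection, send b"ok". Returns the port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((bind_addr, 0))        # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def handle():
        conn, _ = srv.accept()
        conn.sendall(b"ok")
        conn.close()
        srv.close()

    threading.Thread(target=handle, daemon=True).start()
    return port

# Bound to 0.0.0.0, the server is reachable via every local address;
# bound to 127.0.0.1 it would only answer loopback connections.
port = serve_once("0.0.0.0")
client = socket.create_connection(("127.0.0.1", port))
print(client.recv(2))  # b'ok'
```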
With only the docker-compose.yml you include in the question, your setup is otherwise correct. Some other common problems:
You must connect to the port the server process is listening on inside the container. Any remapping in ports: is ignored for container-to-container traffic; you'd connect to the second (container-side) ports: number. Correspondingly, ports: isn't required at all for this. (expose: also isn't required and does nothing at all.)
The client may need to wait for the server to start up. If the client depends_on: [flask] the host name will usually resolve (unless the server dies immediately) but if it takes a while to start up you will still get "connection refused" errors. See Docker Compose wait for container X before starting Y.
Neither container may use network_mode: host. This disables Docker's networking features entirely.
If you manually declare networks:, both containers need to be on the same network. You do not need to explicitly create a network for inter-container communication to work: Compose provides a default network for you, which is used if nothing else is declared.
Use the Compose service names as host names. You don't need to explicitly specify container_name: or links:.
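For the startup-ordering point above, the usual fix is a retry loop on the client side. A sketch of one, assuming the client polls the server's Compose service name and container port (e.g. flask and 5000 in the question's setup):

```python
import socket
import time

def wait_for_service(host: str, port: int, timeout: float = 30.0) -> bool:
    """Poll until a TCP connection to host:port succeeds, or timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            socket.create_connection((host, port), timeout=1).close()
            return True
        except OSError:        # "connection refused", DNS failure, etc.
            time.sleep(0.5)
    return False

# In the question's setup the streamlit container would call, e.g.:
# wait_for_service("flask", 5000)
```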

Related

Flask debug console with docker not working

I have this flask app in a docker, with debug mode set to on:
app_1 | * Serving Flask app 'my_app' (lazy loading)
app_1 | * Environment: development
app_1 | * Debug mode: on
app_1 | * Running on all addresses.
app_1 | WARNING: This is a development server. Do not use it in a production deployment.
app_1 | * Running on http://172.22.0.2:5000/ (Press CTRL+C to quit)
app_1 | * Restarting with stat
app_1 | * Debugger is active!
app_1 | * Debugger PIN: 140-110-975
When I have a bug and I click the little console icon, I am prompted for the PIN, which I enter, and get: [console ready].
But then, when I type something in the console, I have:
Not Found
The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.
to the URL: http://localhost:5000/submit?&__debugger__=yes&cmd=dump()&frm=140300814179632&s=ljASKJ6S2EwWtVcN8EHR
I thought it could be that the correct port is not open, as it can be for websockets (hot reloading in webpack), but here, it seems like the port is the same (5000) as the web app.
So any idea what could go wrong?
Thanks
I'm using:
Python 3.9.5
Flask 2.0.1
EDIT
Here is docker-compose.yml:
version: "3.3"
services:
  app:
    build: .
    command: flask run --host=0.0.0.0 --debugger
    volumes:
      - .:/app
    working_dir: /app
    ports:
      - 5000:5000
    env_file:
      - .env
By using DEBUG=True you are telling Flask to reload the server each time main.py changes. In doing so, it re-runs main.py every time, killing the app and then restarting it on port 5000. That is expected behaviour.
Your problem has nothing to do with docker, and is more about setting up prometheus inside a Flask application.
There are some extensions that help with this, for instance:
https://github.com/sbarratt/flask-prometheus or
https://github.com/hemajv/flask-prometheus
Maybe this sheds some light on the solution as well, but please note that this is not what you are asking here.
I suggest first giving these extensions a shot, which means refactoring your code. From there, if there is a problem implementing the extensions, create another question providing an MCVE.
EDIT:
There is also a document about this problem at https://www.agiratech.com/debugging-python-flask-app-in-docker-container

Error: unable to perform an operation on node 'rabbit@localhost'

So I have an issue with docker-compose and rabbitmq.
I run docker-compose up. Everything spins up. Docker-compose:
services:
  rabbitmq3:
    image: "rabbitmq:3-management"
    hostname: "localhost"
    command: rabbitmq-server
    ports:
      - 5672:5672
      - 15672:15672
Then I do sudo rabbitmqctl status to check connection with node. I get this error:
Error: unable to perform an operation on node 'rabbit@localhost'. Please see diagnostics information and suggestions below.
Most common reasons for this are:
* Target node is unreachable (e.g. due to hostname resolution, TCP connection or firewall issues)
* CLI tool fails to authenticate with the server (e.g. due to CLI tool's Erlang cookie not matching that of the server)
* Target node is not running
In addition to the diagnostics info below:
* See the CLI, clustering and networking guides on https://rabbitmq.com/documentation.html to learn more
* Consult server logs on node rabbit@localhost
* If target node is configured to use long node names, don't forget to use --longnames with CLI tools
DIAGNOSTICS
===========
attempted to contact: [rabbit@localhost]
rabbit@localhost:
* connected to epmd (port 4369) on localhost
* epmd reports: node 'rabbit' not running at all
no other nodes on localhost
* suggestion: start the node
Current node details:
* node name: 'rabbitmqcli-25456-rabbit@localhost'
* effective user's home directory: /Users/olof.grund
* Erlang cookie hash: d1oONiVA/qogGxkf6vs9Rw==
When I do it in the container docker-compose exec -T rabbitmq3 rabbitmqctl status it works.
Do I need to expose something from docker somehow? Some rabbitmq client or node maybe?
I tried all the tips I found in other sources (adding the IP to /etc/hosts, restarting containers and services). It took me a day to finally get this to work, and it boils down to this:
<wait for 60secs since the rabbit container has been started>
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl force_boot
rabbitmqctl start_app
RabbitMQ uses Erlang's distribution protocol, which requires port 4369 to be open for EPMD (the Erlang Port Mapper Daemon). Expose it in the docker-compose file and stop the EPMD instance running on your host.
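For reference, the EPMD query that rabbitmqctl depends on is simple enough to reproduce by hand. This is a sketch of a NAMES request (2-byte big-endian length, then request byte 110, ASCII 'n'), which EPMD answers with the Erlang nodes registered on that host; the host/port defaults are illustrative:

```python
import socket
import struct

def epmd_names(host: str = "127.0.0.1", port: int = 4369) -> str:
    """Ask an EPMD instance which Erlang nodes it has registered.

    The reply is a 4-byte EPMD port number followed by one
    "name <node> at port <port>" line per registered node.
    """
    req = struct.pack(">H", 1) + b"n"   # length 1, NAMES_REQ (110)
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(req)
        data = b""
        while chunk := s.recv(4096):
            data += chunk
    return data[4:].decode()            # skip the 4-byte port header

# Against the container's exposed 4369 you would expect a line like:
# name rabbit at port 25672
```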

Running Percona server in Docker fails with socket error

I've been trying, and failing, to get Percona Server (version 8 on CentOS) running as a lone service inside a docker-compose.yml file. The error that keeps coming up is:
mysql | 2020-03-16T23:04:25.189164Z 0 [ERROR] [MY-010270] [Server] Can't start server : Bind on unix socket: File name too long
mysql | 2020-03-16T23:04:25.189373Z 0 [ERROR] [MY-010258] [Server] Do you already have another mysqld server running on socket: /var/lib/mysql/mysql.sock ?
mysql | 2020-03-16T23:04:25.190581Z 0 [ERROR] [MY-010119] [Server] Aborting
mysql | 2020-03-16T23:04:26.438533Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.18-9) Percona Server (GPL), Release 9, Revision 53e606f.
My docker-compose.yml file is as follows:
version: '3.7'
services:
  mysql:
    container_name: mysql
    image: percona:8-centos
    volumes:
      - ./docker/mysql/setup:/docker-entrypoint-initdb.d
      - ./docker/mysql/data:/var/lib/mysql
      - ./docker/mysql/conf:/etc/mysql/conf.d:ro
    environment:
      - MYSQL_ROOT_PASSWORD=mypassword
      - MYSQL_DATABASE=<redacted>
      - MYSQL_USER=<redacted>
      - MYSQL_PASSWORD=<redacted>
    stop_grace_period: 20s
    restart: always
A few things to note:
My my.cnf file, which lives on the host under docker/mysql/conf/, declares the location of the socket file as /var/run/mysql.sock instead of /var/lib/mysql/mysql.sock. Why would mysqld still be trying to use a different socket file path than the one I declared in my own config file? (And yes, my config file IS being picked up because when it used to have deprecated options declared inside it, mysqld complained and failed to start.)
In the beginning, I kept the socket file path setting alone and allowed it to use the default location; however, it resulted in the same exact error.
The documentation at the Percona Docker Hub page contains contradictions, one of the important ones being that they mention the config directory /etc/my.cnf.d inside the container, and then when they give an example they instead mention /etc/mysql/conf.d; the discrepancy makes me lose confidence in the entire rest of the documentation. Indeed, my lack of confidence now seems well-placed, since the official image fails to run properly out of the box.
So, does anyone know how to use the official Percona images? (Or am I going to be forced to roll my own service using my own Dockerfile?)
I was also getting the same error on macOS.
So, taking a hint from the error "File name too long", I moved my entire project into my home directory, so that my compose file was at ~/myproject/docker-compose.yml. (Maybe you can try moving it to the root dir, just to avoid any confusion about what ~/ expands to.)
That did the trick, and the mysql image was up again without any error.
PS: I am not saying that you need to place your project in your home dir, but you need to find the shortest folder path that works for your project.
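The "File name too long" error is specific to AF_UNIX sockets: the kernel's sun_path field caps a UNIX socket path at roughly 108 bytes on Linux (about 104 on macOS), so a deeply nested bind-mounted data directory can push the mysql.sock path over the limit. A quick sketch reproducing the limit with Python's socket module:

```python
import socket

# sun_path is a fixed-size buffer (~108 bytes on Linux, ~104 on macOS),
# so binding a UNIX socket to an overlong path fails before the server
# ever gets to listen on it -- the same class of failure mysqld reports.
long_path = "/tmp/" + "x" * 200 + ".sock"
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    s.bind(long_path)
except OSError as exc:
    print(exc)  # AF_UNIX path too long
finally:
    s.close()
```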

Spring Boot tries to connect to Mongo localhost

I have a Spring Boot 2.x project using Mongo. I am running this via Docker (using compose locally) and Kubernetes. I am trying to connect my service to a Mongo server. This is confusing to me, but for development I am using a local instance of Mongo, but deployed in GCP I have named mongo services.
here is my application.properties file:
#mongodb
spring.data.mongodb.uri= mongodb://mongo-serviceone:27017/serviceone
#logging
logging.level.org.springframework.data=trace
logging.level.=trace
And my Docker-compose:
version: '3'
# Define the services/containers to be run
services:
  service: # name of your service
    build: ./ # specify the directory of the Dockerfile
    ports:
      - "3009:3009" # specify ports forwarding
    links:
      - mongo-serviceone # link this service to the database service
    volumes:
      - .:/usr/src/app
    depends_on:
      - mongo-serviceone
  mongo-serviceone: # name of the service
    image: mongo
    volumes:
      - ./data:/data/db
    ports:
      - "27017:27017"
When I run docker-compose up, I get the following error:
mongo-serviceone_1 | 2018-08-22T13:50:33.454+0000 I NETWORK [initandlisten] waiting for connections on port 27017
service_1 | 2018-08-22 13:50:33.526 INFO 1 --- [localhost:27017] org.mongodb.driver.cluster : Exception in monitor thread while connecting to server localhost:27017
service_1 | com.mongodb.MongoSocketOpenException: Exception opening socket
service_1 | at com.mongodb.connection.SocketStream.open(SocketStream.java:62) ~[mongodb-driver-core-3.6.3.jar!/:na]
running docker ps shows me:
692ebb72cf30 serviceone_service "java -Djava.securit…" About an hour ago Up 9 minutes 0.0.0.0:3009->3009/tcp, 8080/tcp serviceone_service_1
6cd55ae7bb77 mongo "docker-entrypoint.s…" About an hour ago Up 9 minutes 0.0.0.0:27017->27017/tcp serviceone_mongo-serviceone_1
While I am trying to connect to a local mongo, I thought that by using the name "mongo-serviceone" the service would connect to the Mongo container rather than localhost.
Hard to tell what the exact issue is, but maybe this is just an issue because of the space " " after "spring.data.mongodb.uri=" and before "mongodb://mongo-serviceone:27017/serviceone"?
If not, maybe exec into the "service" container and check that the hostname resolves with ping mongo-serviceone (note that ping takes a host name only, not a port).
Let me know the output of this, so I can help you analyze and fix this issue.
Alternatively, you could switch from using docker compose to a Kubernetes native dev tool, as you are planning to run your application on Kubernetes anyways. Here is a list of possible tools:
Allow hot reloading:
DevSpace: https://github.com/covexo/devspace
ksync: https://github.com/vapor-ware/ksync
Pure CI/CD tools for dev:
Skaffold: https://github.com/GoogleContainerTools/skaffold
Draft: https://github.com/Azure/draft
For most of them, you will only need minikube or a dev namespace inside your existing cluster on GCP.
Looks like another application was already running on port 27017 on your localhost. Similar reported issue.
A quick way to check on Linux/Mac:
telnet 127.0.0.1 27017
Check the log files:
docker logs serviceone_service

Finding the next available port with Ansible

I'm using Ansible to deploy a Ruby on Rails application using Puma as a web server. As part of the deployment, the Puma configuration binds to the IP address of the server on port 8080:
bind "tcp://{{ ip_address }}:8080"
This is then used in the nginx vhost config to access the app:
upstream {{ app_name }} {
  server {{ ip_address }}:8080;
}
All of this is working fine. However, I now want to deploy multiple copies of the app (staging, production) onto the same server and obviously having several bindings on 8080 is causing issues so I need to use different ports.
The most simple solution would be to include the port in a group var and then just drop it in when the app is deployed. However, this would require background knowledge of the apps already running on the server and it kind of feels like the deployment should be able to "discover" the port to use.
Instead, I was considering doing some kind of iteration through ports, starting at 8080, and then checking each until one is not being used. netstat -anp | grep 8080 gives a return code 0 if the port is being used so perhaps I could use that command to test (though I'm not sure of how to do the looping bit).
Has anyone come up against this problem before? Is there a more graceful solution that I'm overlooking?
I'd define a list of allowed ports and compare it to the ports actually in use.
Something like this:
- hosts: myserver
  vars:
    allowed_ports:
      - 80
      - 8200
  tasks:
    - name: Gather occupied tcp v4 ports
      shell: netstat -nlt4 | grep -oP '(?<=0.0.0.0:)(\d+)'
      register: used_ports

    - name: Set bind_port as first available port
      set_fact:
        bind_port: "{{ allowed_ports | difference(used_ports.stdout_lines | map('int') | list) | first | default(0) }}"
      failed_when: bind_port | int == 0

    - name: Show bind port
      debug: var=bind_port
You may want to tune 0.0.0.0 in the regexp if you need to check ports on specific interface.
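As a cross-check of the playbook's allowed-minus-occupied logic, the same idea can be sketched in Python as a bind probe (the function name and port list here are illustrative, not part of the playbook):

```python
import socket

def first_available_port(allowed_ports, host="0.0.0.0"):
    """Return the first port in allowed_ports that is free to bind, else None.

    This is only a probe: the socket is closed again immediately, so a
    small race window remains between the check and the app's own bind.
    """
    for port in allowed_ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.bind((host, port))
            return port
        except OSError:   # already in use (or not permitted)
            continue
        finally:
            s.close()
    return None
```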
