How to expose mysql port? - docker

I'm trying to expose port 3310 to allow remote MySQL connections. So far, I have this configuration:
database:
  container_name: sfapi_db
  restart: always
  ports:
    - 3310:3306
  build:
    context: ./docker/database
  environment:
    - MYSQL_DATABASE=${DB_NAME}
    - MYSQL_USER=${DB_USER}
    - MYSQL_PASSWORD=${DB_PWD}
    - MYSQL_ROOT_PASSWORD=${DB_PWD_ROOT}
  volumes:
    - dbdata:/var/lib/mysql
    - type: bind
      source: ./docker/database/my.cnf
      target: /etc/mysql/my.cnf
and this is the Dockerfile:
FROM mariadb:latest
CMD ["mysqld"]
EXPOSE 3310
and here is the my.cnf:
[client-server]
# Port or socket location where to connect
port = 3310
socket = /run/mysqld/mysqld.sock
# Import all .cnf files from configuration directory
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mariadb.conf.d/
When I run docker-compose up --build -d and check the container list, the container appears to be up and running.
I also opened the port on the server using sudo ufw allow 3310. I cannot use port 3306 because it's already used by another container, which is why I'm using 3310.
However, whenever I check whether port 3310 is open, it is always reported as closed.
How can I fix this?

If the my.cnf resides in the container, then the port it uses should be 3306:
[client-server]
# Port or socket location where to connect
port = 3306
socket = /run/mysqld/mysqld.sock
In the context of the container, this is the port on which MySQL is exposed.
So what probably happens is that you publish container port 3306 (via 3310:3306), but inside the container MySQL has been told to listen on 3310, so nothing is listening behind the published port and it shows up as closed.
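In other words, a sketch of a working setup (assuming MariaDB stays on its default port 3306 inside the container and is only remapped on the host) would be:

docker-compose.yml (fragment):

database:
  ports:
    - "3310:3306"   # host port 3310 -> container port 3306

my.cnf:

[client-server]
port = 3306
socket = /run/mysqld/mysqld.sock

Dockerfile:

FROM mariadb:latest
EXPOSE 3306
CMD ["mysqld"]

Remote clients then connect to the server's IP on port 3310, which is the port you already opened with ufw.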

Related

How to stop Docker container from allocating host ports?

How do you launch Postgres from Docker, using docker-compose?
My docker-compose.yml looks like:
version: "3.6"
services:
db:
container_name: db
image: postgres:14-alpine
environment:
- POSTGRES_USER=test
- POSTGRES_PASSWORD=test
- POSTGRES_DB=test
ports:
- "5432:5432"
command: -c fsync=off -c synchronous_commit=off -c full_page_writes=off --max-connections=200 --shared-buffers=4GB --work-mem=20MB
tmpfs:
- /var/lib/postgresql
web:
container_name: web
build:
context: ..
dockerfile: test_tools/Dockerfile
shm_size: '2gb'
volumes:
- /dev/shm:/dev/shm
depends_on:
- db
This is a simple test environment to mimic a web server and a database server.
Yet when I build this, it fails with:
Creating db ... error
ERROR: for db Cannot start service db: driver failed programming external connectivity on endpoint db (bdaebf844ee8ddd593b6bc75733d8aa6196112b62f7909be060017a9a33b3c34): Error starting userland proxy: listen tcp4 0.0.0.0:5432: bind: address already in use
Why is my Postgres container trying to allocate a port on the host?
I do have Postgres running on port 5432 of the host, but why would this be interfering? These are just test containers that only need to talk to each other, and should not be accessible to the host, much less allocate host ports.
I've confirmed with docker ps -a that there are no other containers that might also be consuming port 5432.
ports:
  - 5432
will start your Postgres, but publish it on a random (free) host port.
Alternatively, map Postgres to a different port on the host; for example
ports:
  - "15432:5432"
makes your db reachable on port 15432 on your host (the format is always host:container).
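Since these test containers should not be reachable from the host at all, a third option (a sketch based on the compose file in the question) is to drop the ports: mapping entirely; web can still reach the database over the default compose network as db:5432:

services:
  db:
    image: postgres:14-alpine
    environment:
      - POSTGRES_USER=test
      - POSTGRES_PASSWORD=test
      - POSTGRES_DB=test
    # no "ports:" entry, so nothing is bound on the host
  web:
    build:
      context: ..
      dockerfile: test_tools/Dockerfile
    depends_on:
      - db
    # from inside this container, connect to host "db", port 5432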

Cannot connect to Mysql using Docker

I built a website using Strapi and Gatsby. Everything works well when I connect to a remote database, but now I'm trying to create the db inside a container and so far no luck.
Essentially, what I did was create the following docker-compose:
version: '3'
services:
  backend:
    container_name: myapp_backend
    build: ./backend/
    ports:
      - '3002:3002'
    volumes:
      - ./backend:/usr/src/myapp/backend
      - /usr/src/myapp/backend/node_modules
    environment:
      - APP_NAME=myapp_backend
      - DATABASE_CLIENT=mysql
      - DATABASE_HOST=db
      - DATABASE_PORT=3307
      - DATABASE_NAME=myapp_db
      - DATABASE_USERNAME=johnny
      - DATABASE_PASSWORD=stecchino
      - DATABASE_SSL=false
      - DATABASE_AUTHENTICATION_DATABASE=myapp_db
      - HOST=localhost
    depends_on:
      - db
    restart: always
  db:
    container_name: myapp_mysql
    image: mysql:5.7
    volumes:
      - ./db.sql:/docker-entrypoint-initdb.d/db.sql
    restart: always
    ports:
      - 3307:3307
    environment:
      MYSQL_ROOT_PASSWORD: 5!JF6!FgAkvt
      MYSQL_DATABASE: myapp_db
      MYSQL_USER: johnny
      MYSQL_PASSWORD: stecchino
    command: mysqld --character-set-server=utf8 --collation-server=utf8_general_ci --init-connect='SET NAMES UTF8;' --innodb-flush-log-at-trx-commit=0
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: 'myapp_phpmyadmin'
    links:
      - db
    environment:
      PMA_HOST: db
      PMA_PORT: 3307
    ports:
      - '8081:80'
    volumes:
      - /sessions
    depends_on:
      - db
  frontend:
    container_name: myapp_frontend
    build: ./frontend/
    ports:
      - '3001:3001'
    depends_on:
      - backend
    volumes:
      - ./frontend:/usr/src/myapp/frontend
The backend service contains the Strapi application, and the db service contains the MySQL instance, which runs on port 3307 because 3306 is already in use.
I have also installed phpMyAdmin and, last but not least, the Gatsby site. When I run docker-compose up --build and try to access phpMyAdmin at:
http://localhost:8081/index.php
with the following credentials:
user: johnny
pwd: stecchino
I get:
MySQL mysqli::real_connect():(HY000/2002): Connection refused
Now, what I did to fix the situation was pass port 3306 instead of 3307 to the backend and phpmyadmin services. And magically, everything works. But why? I have mapped both container and host to 3307...
There are 2 things happening here.
MySQL is running on port 3306.
This is because you never told the mysql container to run on port 3307; the default configuration listens on 3306.
phpMyAdmin can connect to MySQL on port 3306.
Of course it can. This is because when you define multiple services within the same docker-compose file, they start on the same network. This means that they can see and connect to each other's internal ports without the need for an external port binding like 3306:3306.
I would suggest keeping port bindings only for services that you want to access from outside the Docker environment (like the UI), and for internal components just expose the port like this:
expose:
  - 3306
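Applied to the compose file in the question, that would look roughly like this (a sketch; the other services then point at db:3306 and no MySQL port is published on the host):

db:
  image: mysql:5.7
  expose:
    - 3306                 # reachable as db:3306 only inside the compose network
backend:
  environment:
    - DATABASE_HOST=db
    - DATABASE_PORT=3306   # MySQL's actual internal port
phpmyadmin:
  environment:
    PMA_HOST: db
    PMA_PORT: 3306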
Both answers are useful; I am particularly fond of Manish's answer.
I wanted to add some additional wording:
There are internal Docker networks which nothing from the outside can gain access to. From inside any given service (or container), you can reach every other service (or container) via:
<service-name>:<port>/path/of/resources
<container-name>:<port>/path/of/resources
In order to access resources inside the docker network from outside of docker, whether that is from your host environment, or farther upstream on the internet, the docker daemon needs to bind to host ports, and then forward information received on those ports to a docker service (and ultimately a docker container).
In your docker-compose.yml, when you write 3307:3307 you are telling the Docker daemon to listen on host port 3307 and forward to your db service internally on its port 3307.
However, from what we can all see, mysql is still internally (that is, inside the container) listening for traffic on port 3306. Any containers or services on the same docker networks as your db service (mysql running container(s)) would be able to access mysql via something like:
<driver>:mysql://db:3306/<dbname>
If you wanted all host traffic and docker network traffic to access mysql on port 3307, you would also need to configure mysql to listen on port 3307 instead of 3306. That tidbit of information does not appear to be in your question at the time of writing.
I hope the additional information helps! It's a topic I chat often about when talking docker with folks.
Because 3306 is the port exposed by the official Dockerfile.
What you can do is map the port MySQL is running on to another port on your host: 3307:3306 for instance (the format is always host:container).
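If you also want MySQL reachable from the host (for a local client, say) while leaving it on 3306 inside the container, the relevant fragments would be something like this (a sketch):

db:
  ports:
    - "3307:3306"          # host 3307 -> container 3306
backend:
  environment:
    - DATABASE_PORT=3306   # other containers keep using the internal port
phpmyadmin:
  environment:
    PMA_PORT: 3306

Tools running on the host would then connect to 127.0.0.1:3307.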

How to expose redis-server port started using "webdis docker image" to host machine

I want to monitor Redis running in a webdis Docker container.
I use Telegraf, which collects Redis stats, but Telegraf is installed on the host machine and cannot connect to Redis, as Redis is running inside Docker on port 6379.
I tried to map container port 6379, on which Redis is running inside Docker, to the host's port 6379 so Telegraf can read the Redis metrics, but the connection breaks.
When I use telnet on the host, I get a "connection closed by foreign host" error:
telnet 127.0.0.1 6379
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Connection closed by foreign host.
Also, I am able to connect to the webdis port on the host machine, which is running on port 7379 inside the webdis container.
To start webdis I am using the following command: docker run -d -p 8080:7379 -p 6379:6379 webdis
To debug further, I checked that Redis inside the webdis container is listening on interface 127.0.0.1:6379.
It should be listening on 0.0.0.0:6379 in order for the port mapping to work properly.
How can I run Redis inside the webdis image on 0.0.0.0:6379?
Is there any other way I can monitor the Redis server running inside the webdis container?
I tried to start redis-server inside the webdis container bound to 0.0.0.0 using the redis.conf file, but it still binds to 127.0.0.1.
Which Docker image are you referring to? Is it this one? https://github.com/anapsix/docker-webdis/
If yes: looking at its Dockerfile, it does not include Redis itself, but its docker-compose.yaml includes a redis service. That service does not publish the ports you need to connect to Redis from outside the container.
You need to change redis service to the following:
...
redis:
  image: 'anapsix/redis:latest'
  ports:
    - '6379:6379'
I had this problem recently and solved it.
webdis.Dockerfile
FROM nicolas/webdis:0.1.19
EXPOSE 6379
EXPOSE 7379
RUN sed -i "s/127.0.0.1/0.0.0.0/g" /etc/redis.conf
docker-compose.yaml
version: "3.8"
services:
webdis:
build:
context: .
dockerfile: webdis.Dockerfile
image: webdis_with_redis_expose
container_name: webdis
restart: unless-stopped
ports:
- "6379:6379"
- "7379:7379"
Then execute docker-compose up.

Accessing gitlab docker container outputs connection refused

I have a docker container running this configuration for the gitlab-ce image:
version: "3"
services:
gitlab:
hostname: gitlab.mydomain.com
image: gitlab/gitlab-ce:latest
container_name: gitlab
restart: always
ports:
- 3000:80
volumes:
- /opt/gitlab/config:/etc/gitlab
- /opt/gitlab/logs:/var/log/gitlab
- /opt/gitlab/data:/var/opt/gitlab
networks:
default:
external:
name: custom_network
When running docker ps I see my container up and running, with container port 80 mapped to host port 3000 as intended.
However, when running wget -O- https://172.25.0.2:3000 I am getting this error message:
Connecting to 172.25.0.2:3000... failed: Connection refused.
When you map a port, you should use the host IP to access the service through the mapped port.
So if you need to access port 80, use the container IP.
If you need to access port 3000, use the host IP or localhost on the host itself (or any private interface the host has).
So the command wget -O- https://172.25.0.2:3000 means that you are talking to the container directly (not through the mapped port) and requesting a service listening on port 3000 inside the container, which does not exist, so the result is connection refused.
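Assuming the 3000:80 mapping from the compose file and that GitLab is serving plain HTTP on container port 80, either of the following should work instead (note http rather than https, unless TLS has been configured):

wget -O- http://localhost:3000    # through the host-mapped port
wget -O- http://172.25.0.2:80     # directly against the container IP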

My docker-compose is not working properly when I try to connect MySQL with Node.js [duplicate]

This question already has an answer here:
Docker mysql cant connect to container
(1 answer)
Closed 4 years ago.
Here is my db.js file:
let connection = mysql.createConnection({
  host: process.env.DATABASE_HOST || '127.0.0.1',
  user: 'root',
  database: 'bc2k19',
  password: 'joeydash',
  port: 33060
});
Here is my docker-compose.yml file:
version: '3.2'
services:
  app:
    build: ./app
    ports:
      - "3000:3000"
    depends_on:
      - db
    environment:
      - DATABASE_HOST=db
  db:
    build: ./db
    ports:
      - "3306:3306"
Here is my Dockerfile for MySQL:
FROM mysql:latest
ENV MYSQL_ROOT_PASSWORD joeydash
ENV MYSQL_DATABASE bc2k19
ENV MYSQL_USER joeydash
ENV MYSQL_PASSWORD joeydash
ADD setup.sql /docker-entrypoint-initdb.d
Here is my Dockerfile for my app:
# Use Node v4 as the base image.
FROM node:latest
# Add everything in the current directory to our image, in the 'app' folder.
ADD . /app
# Install dependencies
RUN cd /app; \
npm install --production
# Expose our server port.
EXPOSE 3000
# Run our app.
CMD ["node", "/app/bin/www"]
I don't know why, every time I run docker-compose up and try to connect, it shows:
Error: connect ECONNREFUSED 127.0.0.1:3306
app_1 | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1117:14)
It looks like the connection is not set up, but it is.
There are 3 things happening here:
Your port number in the Node.js app (db.js) has a typo:
change 33060 to 3306.
Your user is root, yet the user in the environment is joeydash:
change root to joeydash.
MySQL might block your connection, since it listens on localhost by default. In the container world, localhost almost always points to the container itself.
To fix the third point, check your MySQL config for the bind section (look for bind-address). Make sure it allows connections from the IP your other container is running on.
If MySQL binds to 127.0.0.1, then only software on the same computer will be able to connect (because 127.0.0.1 is always the local computer).
If MySQL binds to 192.168.0.2 (and the server computer's IP address is 192.168.0.2 and it's on a /24 subnet), then any computers on the same subnet (anything that starts with 192.168.0) will be able to connect.
If MySQL binds to 0.0.0.0, then any computer which is able to reach the server computer over the network will be able to connect.
Quoting from here https://stackoverflow.com/a/3552946/2724940
If we go through every scenario:
The first one fails since all containers have their own localhost.
The second option could work if the correct IP is set in your MySQL config.
The third option works, as MySQL is then configured to allow all remote connections.
my.cnf (located at /etc/mysql/my.cnf):
[mysqld]
bind-address = 0.0.0.0
You might need to create a network or link containers depending on your version of docker compose.
At the end of docker-compose.yml you could add:
networks:
  backend:
    driver: 'bridge'
And for each container add:
networks:
  - backend
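Put together, the relevant parts of docker-compose.yml could look like this (a sketch combining the fragments above; the app then connects to host db on port 3306):

version: '3.2'
services:
  app:
    build: ./app
    ports:
      - "3000:3000"
    depends_on:
      - db
    environment:
      - DATABASE_HOST=db
    networks:
      - backend
  db:
    build: ./db
    ports:
      - "3306:3306"
    networks:
      - backend
networks:
  backend:
    driver: 'bridge'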
