I created a Docker container using the standard image: postgres:13, but inside the container PostgreSQL doesn't start because there is no cluster. What could be the problem?
Thanks for any answers!
My docker-compose:
version: '3'
services:
  laravel.test:
    build:
      context: ./vendor/laravel/sail/runtimes/8.0
      dockerfile: Dockerfile
      args:
        WWWGROUP: '${WWWGROUP}'
    image: sail-8.0/app
    ports:
      - '${APP_PORT:-80}:80'
    environment:
      WWWUSER: '${WWWUSER}'
      LARAVEL_SAIL: 1
    volumes:
      - '.:/var/www/html'
    networks:
      - sail
    depends_on:
      - pgsql
  pgsql:
    image: 'postgres:13'
    ports:
      - '${FORWARD_DB_PORT:-5432}:5432'
    environment:
      PGPASSWORD: '${DB_PASSWORD:-secret}'
      POSTGRES_DB: '${DB_DATABASE}'
      POSTGRES_USER: '${DB_USERNAME}'
      POSTGRES_PASSWORD: '${DB_PASSWORD:-secret}'
    volumes:
      - 'sailpgsql:/var/lib/postgresql/data'
    networks:
      - sail
    healthcheck:
      test: ["CMD", "pg_isready", "-q", "-d", "${DB_DATABASE}", "-U", "${DB_USERNAME}"]
      retries: 3
      timeout: 5s
networks:
  sail:
    driver: bridge
volumes:
  sailpgsql:
    driver: local
and I get an error when trying to connect to the container:
SQLSTATE[08006] [7] could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
and inside the container, when I try to start or restart postgres, I get this message:
[warn] No PostgreSQL clusters exist; see "man pg_createcluster" ... (warning).
You should not connect through localhost but through the container name as the host name.
So change your .env to contain:
DB_CONNECTION=[what the name is in the config array]
DB_HOST=pgsql
DB_PORT=5432
DB_DATABASE=laravel
DB_USERNAME=[whatever you want]
DB_PASSWORD=[whatever you want]
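As a minimal sketch (assuming Laravel's default pgsql connection name and the service names from the compose file above), the same values could also be pinned directly on the laravel.test service; the key point is that DB_HOST must be the Compose service name, which is resolvable on the shared sail network:
laravel.test:
  environment:
    DB_CONNECTION: pgsql            # assumes the default "pgsql" connection in config/database.php
    DB_HOST: pgsql                  # Compose service name, not localhost or 127.0.0.1
    DB_PORT: '5432'
    DB_DATABASE: '${DB_DATABASE}'
    DB_USERNAME: '${DB_USERNAME}'
    DB_PASSWORD: '${DB_PASSWORD:-secret}'
Either way, the connection goes over the sail bridge network straight to the pgsql container, so nothing needs to be listening on 127.0.0.1 inside the app container.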
Related
I want to connect Redash to a MySQL server. I added MYSQL_TCP_PORT so the server uses a TCP connection rather than the default UNIX socket (to avoid the mysqld.sock error). If I go into the mysql container and run mysql -p, I can open a MySQL shell. But if I test the connection in Redash, it returns (2006, "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)").
Here is my docker-compose file:
# This configuration file is for the **development** setup.
# For a production example please refer to getredash/setup repository on GitHub.
version: "2.2"
x-redash-service: &redash-service
  build:
    context: .
    # args:
    #   skip_frontend_build: "true"  # set to empty string to build
  volumes:
    - .:/app
  env_file:
    - .env
x-redash-environment: &redash-environment
  REDASH_LOG_LEVEL: "INFO"
  REDASH_REDIS_URL: "redis://redis:6379/0"
  REDASH_DATABASE_URL: "postgresql://postgres@postgres/postgres"
  REDASH_RATELIMIT_ENABLED: "false"
  REDASH_MAIL_DEFAULT_SENDER: "redash@example.com"
  REDASH_MAIL_SERVER: "email"
  REDASH_ENFORCE_CSRF: "true"
  REDASH_GUNICORN_TIMEOUT: 60
  # Set secret keys in the .env file
services:
  server:
    <<: *redash-service
    command: dev_server
    depends_on:
      - postgres
      - redis
    ports:
      - "5000:5000"
      - "5678:5678"
    networks:
      - default_network
    environment:
      <<: *redash-environment
      PYTHONUNBUFFERED: 0
  scheduler:
    <<: *redash-service
    command: dev_scheduler
    depends_on:
      - server
    networks:
      - default_network
    environment:
      <<: *redash-environment
  worker:
    <<: *redash-service
    command: dev_worker
    depends_on:
      - server
    networks:
      - default_network
    environment:
      <<: *redash-environment
      PYTHONUNBUFFERED: 0
  redis:
    image: redis:3-alpine
    restart: unless-stopped
    networks:
      - default_network
  postgres:
    image: postgres:9.5-alpine
    # The following turns the DB into less durable, but gains significant performance improvements for the tests run
    # (x3 improvement on my personal machine). We should consider moving this into a dedicated Docker Compose
    # configuration for tests.
    ports:
      - "15432:5432"
    command: "postgres -c fsync=off -c full_page_writes=off -c synchronous_commit=OFF"
    restart: unless-stopped
    networks:
      - default_network
    environment:
      POSTGRES_HOST_AUTH_METHOD: "trust"
  email:
    image: djfarrelly/maildev
    ports:
      - "1080:80"
    restart: unless-stopped
    networks:
      - default_network
  mysql:
    image: mysql/mysql-server:latest
    ports:
      - "3306:3306"
    restart: unless-stopped
    container_name: mysql
    networks:
      - default_network
    environment:
      MYSQL_ROOT_PASSWORD: "${MYSQL_ROOT_PASSWORD}"
      MYSQL_TCP_PORT: 3306
networks:
  default_network:
    external: false
    name: default_network
    driver: bridge
As I see it, Redash is connecting via the UNIX socket, not a TCP connection (otherwise there would be no mysqld.sock error). I don't know what I should fix in docker-compose, or somewhere else, to make it connect properly. Any suggestions? If you need me to provide more info, please ask.
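The 2006/mysqld.sock error usually means the MySQL client library was handed localhost (or no host at all) and fell back to the UNIX socket, which only exists inside the mysql container. As a sketch of the values the Redash data source would need, assuming the service definition above (illustrative YAML, not literal Redash settings keys), the host has to be the Compose service name so the client opens a TCP connection:
# Hypothetical key names, for illustration only - these correspond to the fields in the Redash data source form
host: mysql                         # service/container name on default_network; "localhost" would use the socket
port: 3306                          # container port (MYSQL_TCP_PORT above)
user: root
password: "${MYSQL_ROOT_PASSWORD}"  # whatever value the .env provides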
I've been trying to see my web app started from my docker-compose file, but nothing is appearing. It works when I serve the app locally, but not through Docker. I'm using Rust and actix-web for my backend, bound to localhost on port 8080, and I'm exposing the ports in docker-compose, but it still isn't working.
my docker-compose file:
services:
  database:
    image: postgres
    restart: always
    expose:
      - 5432
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
      POSTGRES_DB: zion
      PGDATA: /var/lib/postgresql/data/
    healthcheck:
      test: ["CMD", "pg_isready", "-d", "zion", "-U", "postgres"]
      timeout: 25s
      interval: 10s
      retries: 5
    networks:
      - postgres-compose-network
  test:
    build:
      context: .
      dockerfile: Dockerfile
    entrypoint: ./test-entrypoint.sh
    depends_on:
      database:
        condition: service_healthy
    networks:
      - postgres-compose-network
  server:
    build:
      context: .
      dockerfile: Dockerfile
    entrypoint: ./run-entrypoint.sh
    restart: always
    expose:
      - 8080
    ports:
      - 8080:8080
    depends_on:
      database:
        condition: service_healthy
    networks:
      - postgres-compose-network
networks:
  postgres-compose-network:
    driver: bridge
Backend main.rs:
#[actix_web::main]
async fn main() -> std::io::Result<()> {
    env_logger::init_from_env(env_logger::Env::new().default_filter_or("info"));
    log::info!("starting HTTP server at http://localhost:8080");
    let secret_key = Key::generate();
    let pool = establish_connection();
    log::info!("database connection established");

    HttpServer::new(move || {
        App::new()
            .app_data(web::Data::new(pool.clone()))
            .wrap(middleware::Logger::default())
            .wrap(IdentityMiddleware::default())
            .configure(database::routes::user::configure)
            .service(
                Files::new("/", "../frontend/dist")
                    .prefer_utf8(true)
                    .index_file("index.html"),
            )
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
The answer, as provided by @David-Maze, was to bind my app's address to 0.0.0.0 instead of 127.0.0.1, i.e. .bind(("0.0.0.0", 8080)).
I am using the following docker configuration:
version: "3"
services:
database:
image: mysql
ports:
- "3306:3306"
environment:
MYSQL_ROOT_PASSWORD: 1009
restart: "always"
phpmyadmin:
image: phpmyadmin
ports:
- "8080:80"
environment:
PMA_HOST: database
restart: "always"
depends_on:
- database
It works, but if I change the MySQL host port to "3360:3306" (for example, because 3306 may be busy with the main MySQL service running on the computer), it stops working. If I do this, I can still connect to MySQL from my computer in CMD with mysql -u root -P 3360 -p, but phpMyAdmin gives the error: connection refused.
No matter what I do, I see this error. For example, I used the following docker configuration:
version: "3"
services:
database:
image: mysql
networks:
- my-network
ports:
- "3360:3306"
environment:
MYSQL_ROOT_PASSWORD: 1009
restart: "always"
phpmyadmin:
image: phpmyadmin
links:
- database
networks:
- my-network
ports:
- "8080:80"
environment:
PMA_HOST: database
PMA_PORT: 3360
restart: "always"
depends_on:
- database
networks:
my-network:
Even all of this didn't work. phpMyAdmin showed the same message every time it tried to connect: Connection refused. But the terminal connection still worked.
Why can't I use a different port for phpMyAdmin in Docker?
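A likely explanation, sketched under the assumption that both containers run on the same Compose network: the published port in "3360:3306" only applies to connections coming from the host machine. Container-to-container traffic goes straight to the container port, so phpMyAdmin should keep using database:3306 regardless of which host port is published:
phpmyadmin:
  image: phpmyadmin
  networks:
    - my-network
  ports:
    - "8080:80"
  environment:
    PMA_HOST: database
    PMA_PORT: 3306   # container port of mysql; the "3360" in "3360:3306" only applies from the host
  restart: "always"
  depends_on:
    - database
From the host, mysql -u root -P 3360 -p keeps working, because that path does go through the published port.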
At the moment, I'm trying to set up my docker-compose file to run multiple (two) Node.js APIs.
The first Node.js server connects fine to the database.
The second Node.js server keeps throwing this error
original: Error: connect ECONNREFUSED 172.29.0.3:3307
So my question is: how do I run multiple Node.js APIs in Docker?
This is my docker-compose.yaml:
version: '3.7'
services:
  ISAAC-floor-database:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: isaac
    ports:
      - "3306:3306"
    volumes:
      - ./sql-scripts:/docker-entrypoint-initdb.d
  ISAAC-sensor-database:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: isaac
    ports:
      - "3307:3307"
    volumes:
      - ./sql-scripts:/docker-entrypoint-initdb.d
  ISAAC-floor-back-end:
    image: jjuless/isaac-floor-back-end
    environment:
      DB_HOST: ISAAC-floor-database
      DB_USER: root
      DB_PASSWORD: isaac
      DB_DATABASE: isaac
      DB_PORT: 3306
      DB_DIALECT: mysql
    ports:
      - "3000:80"
    depends_on:
      - ISAAC-floor-database
  ISAAC-sensor-back-end:
    image: jjuless/isaac-sensor-back-end
    environment:
      DB_HOST: ISAAC-sensor-database
      DB_USER: root
      DB_PASSWORD: isaac
      DB_DATABASE: isaac
      DB_PORT: 3307
      DB_DIALECT: mysql
    ports:
      - "3001:80"
    depends_on:
      - ISAAC-sensor-database
MySQL always listens on port 3306 inside each container, so if you want to run multiple MySQL instances on the same host, you need to map different host ports to the same guest port.
Hence you need to change your docker-compose file as follows:
ISAAC-floor-database:
  image: mysql
  environment:
    MYSQL_ROOT_PASSWORD: isaac
  ports:
    - "3306:3306"  # <-- HOST port same as GUEST port
  volumes:
    - ./sql-scripts:/docker-entrypoint-initdb.d
ISAAC-sensor-database:
  image: mysql
  environment:
    MYSQL_ROOT_PASSWORD: isaac
  ports:
    - "3307:3306"  # <-- Notice here the GUEST port is the same as in the first container!
  volumes:
    - ./sql-scripts:/docker-entrypoint-initdb.d
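One caveat worth adding, as a sketch under the assumption that the back-ends talk to the databases over the Compose network: the "3307:3306" mapping only matters for clients on the host machine. Container-to-container, MySQL is still reached on 3306, so the sensor back-end's DB_PORT should stay 3306 as well:
ISAAC-sensor-back-end:
  image: jjuless/isaac-sensor-back-end
  environment:
    DB_HOST: ISAAC-sensor-database
    DB_USER: root
    DB_PASSWORD: isaac
    DB_DATABASE: isaac
    DB_PORT: 3306    # container port; the 3307 host mapping only applies from the host machine
    DB_DIALECT: mysql
  ports:
    - "3001:80"
  depends_on:
    - ISAAC-sensor-database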
I am not able to connect a Node.js app to a RabbitMQ server. Postgres connects correctly. I don't know why I get a connection refused.
version: "3"
networks:
app-tier:
driver: bridge
services:
db:
image: postgres
environment:
- POSTGRES_USER=dockerDBuser
- POSTGRES_PASSWORD=dockerDBpass
- POSTGRES_DB=performance
ports:
- "5433:5432"
volumes:
- ./pgdata:/var/lib/postgresql/data
networks:
- app-tier
rabbitmq:
image: rabbitmq:3.6.14-management
healthcheck:
test: ["CMD", "curl", "-f", "http://127.0.0.1:5672"]
interval: 30s
timeout: 10s
retries: 5
ports:
- "0.0.0.0:5672:5672"
- "0.0.0.0:15672:15672"
networks:
- app-tier
app:
build: .
depends_on:
- rabbitmq
- db
links:
- rabbitmq
- db
command: npm run startOrc
environment:
DATABASE_URL: postgres://dockerDBuser:dockerDBpass#db:5432/asdf
restart: on-failure
networks:
- app-tier
It seems it's trying to connect to RabbitMQ on the host instead of the RabbitMQ container.
Try changing the env variable CLOUDAMQP_URL to amqp://rabbitmq:5672.
You can call the service by its name, i.e. rabbitmq.
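For example, a minimal sketch of how that might look in the compose file above, assuming the Node.js app reads CLOUDAMQP_URL as mentioned:
app:
  environment:
    DATABASE_URL: postgres://dockerDBuser:dockerDBpass@db:5432/asdf
    CLOUDAMQP_URL: amqp://rabbitmq:5672   # "rabbitmq" is the Compose service name on the app-tier network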
This error also comes up if you haven't started Docker and run the RabbitMQ server. So if someone reading this post gets the same error, please check whether your RabbitMQ server is running.
You can use the command below to run the RabbitMQ server (5672 is the port of that server):
docker run -p 5672:5672 rabbitmq