Symfony 4 produces SQLSTATE[HY000] [2002] Connection refused using Docker containers - docker

I have 2 Docker containers using the following configuration:
version: '2'
volumes:
  db_data: {}
services:
  db:
    container_name: database-mysql-5.7
    build:
      context: ./docker/db/
    ports:
      - "3321:3306"
    volumes:
      - db_data:/var/lib/mysql
  web:
    container_name: apache-symfony-4
    build:
      context: ./docker/web/
    image: php
    links:
      - db
    depends_on:
      - db
    ports:
      - "8021:80"
    volumes:
      - ./app/:/var/www/symfony
I can connect to the db container from my local host (macOS), and I can connect to the db container (and execute SQL statements successfully) from the web container. Additionally, when I exec into the web container I can run doctrine:migrations:diff commands successfully.
However, when I try and run code like the following:
$User = $this->getDoctrine()
    ->getRepository(User::class)
    ->findAll();
I receive the following error:
An exception occurred in driver: SQLSTATE[HY000] [2002] Connection refused
Doctrine\DBAL\Exception\ConnectionException:
An exception occurred in driver: SQLSTATE[HY000] [2002] Connection refused
at vendor/doctrine/dbal/lib/Doctrine/DBAL/Driver/AbstractMySQLDriver.php:108
at Doctrine\DBAL\Driver\AbstractMySQLDriver->convertException('An exception occurred in driver: SQLSTATE[HY000] [2002] Connection refused', object(PDOException))
(vendor/doctrine/dbal/lib/Doctrine/DBAL/DBALException.php:176)
at Doctrine\DBAL\DBALException::wrapException(object(Driver), object(PDOException), 'An exception occurred in driver: SQLSTATE[HY000] [2002] Connection refused')
(vendor/doctrine/dbal/lib/Doctrine/DBAL/DBALException.php:161)
at Doctrine\DBAL\DBALException::driverException(object(Driver), object(PDOException))
(vendor/doctrine/dbal/lib/Doctrine/DBAL/Driver/PDOMySql/Driver.php:47)
Here are the contents of my .env file:
# This file is a "template" of which env vars need to be defined for your application
# Copy this file to .env file for development, create environment variables when deploying to production
# https://symfony.com/doc/current/best_practices/configuration.html#infrastructure-related-configuration
###> symfony/framework-bundle ###
APP_ENV=dev
APP_SECRET=7c2deaf7464ad40b484d457e02a56918
#TRUSTED_PROXIES=127.0.0.1,127.0.0.2
#TRUSTED_HOSTS=localhost,example.com
###< symfony/framework-bundle ###
###> symfony/swiftmailer-bundle ###
# For Gmail as a transport, use: "gmail://username:password@localhost"
# For a generic SMTP server, use: "smtp://localhost:25?encryption=&auth_mode="
# Delivery is disabled by default via "null://localhost"
MAILER_URL=null://localhost
###< symfony/swiftmailer-bundle ###
###> doctrine/doctrine-bundle ###
# Format described at http://docs.doctrine-project.org/projects/doctrine-dbal/en/latest/reference/configuration.html#connecting-using-a-url
# For an SQLite database, use: "sqlite:///%kernel.project_dir%/var/data.db"
# Configure your db driver and server_version in config/packages/doctrine.yaml
DATABASE_URL=mysql://hereswhatsontap:#####@db:####/hereswhatsontap
###< doctrine/doctrine-bundle ###
Additionally, if I try to use the DBAL directly, it returns the same error:
/**
 * @Route("/test")
 */
public function testAction(Connection $connection)
{
    $users = $connection->fetchAll('SELECT * FROM users');

    return $this->render("startup/startup.html.twig", [
        "NewUser" => $users
    ]);
}
Attempts to use the root username/password are also failing.
Ok, last additional data (I promise :)
Executing the following code works fine from within the /public folder of Symfony
<?php
$dsn = 'mysql:host=db;dbname=hereswhatsontap;';
$username = 'hereswhatsontap';
$password = '####';
$options = array(
    PDO::MYSQL_ATTR_INIT_COMMAND => 'SET NAMES utf8',
);

$dbh = new PDO($dsn, $username, $password, $options);
$sql = "SELECT * from user";
$statement = $dbh->prepare($sql);
$statement->execute();
$users = $statement->fetchAll(PDO::FETCH_ASSOC); // the original passed 2, which is PDO::FETCH_ASSOC
print_r($users);
?>

Thanks a lot for your comment, it saved me a few more hours of research, lol. I would just add, for Docker newbies like me, that if you want to get your container's IP you can run the following command (so that you won't have to hunt for it in the whole configuration dump you get from a plain inspect):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' YOURCONTAINERID
Thanks again!

This issue was caused by my complete lack of understanding of how the containers communicate with each other.
The web container and the db container are essentially on their own private network, which means the web container can reach db directly by name and should then use the internal port (3306) from my configuration, not the published one.
So removing the port (which I had set to the external port) from the DATABASE_URL value in my .env file was the answer.
DANG!
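For illustration only (this exact line is not from the original post, and the password stays redacted as #####): inside the compose network the web container reaches the db service by name, so the URL should either drop the port entirely or use the container-internal 3306, never the published 3321:
# hypothetical corrected value; ##### is still the redacted password
DATABASE_URL=mysql://hereswhatsontap:#####@db/hereswhatsontap
# equivalently, with MySQL's default internal port spelled out:
DATABASE_URL=mysql://hereswhatsontap:#####@db:3306/hereswhatsontap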

Related

How to solve: Grafana has failed to load its application files

I want to run Grafana on a specific domain, but I keep running into this problem.
The Grafana image version is 9.1.0.
docker-compose:
version: "3.8"
services:
grafana:
container_name: ${CONTAINER_GRAFANA}
image: grafana/grafana:9.1.0
restart: unless-stopped
env_file: .env
environment:
TZ: Asia/Tehran
GF_SECURITY_ADMIN_USER: ${GRAFANA_ADMIN_USER}
GF_SECURITY_ADMIN_PASSWORD: ${GRAFANA_ADMIN_PASSWORD}
volumes:
- grafana-data:/var/lib/grafana
- ./docker/grafana/grafana.ini:/etc/grafana/grafana.ini
#- /etc/grafana/provisioning
ports:
- "${GRAFANA_PORT}:3000"
grafana.ini:
[server]
# Protocol (http, https, h2, socket)
;protocol = http
# The ip address to bind to, empty will bind to all interface
;http_addr =
# The http port to use
;http_port = 3000
# The public facing domain name used to access grafana from a browser
domain = ${GRAFANA_DOMAIN}
# Redirect to correct domain if host header does not match domain
# Prevents DNS rebinding attacks
;enforce_domain = false
# The full public facing url you use in browser, used for redirects and emails
# If you use reverse proxy and sub path specify full url (with sub path)
root_url = %(protocol)s://%(domain)s/
If you're seeing this, Grafana has failed to load its application files:
1. This could be caused by your reverse proxy settings.
2. If you host Grafana under a subpath, make sure your grafana.ini root_url setting includes the subpath. If not using a reverse proxy, make sure to set serve_from_sub_path to true.
3. If you have a local dev build, make sure you build the frontend using: yarn start, yarn start:hot, or yarn build.
4. Sometimes restarting grafana-server can help.
5. Check if you are using a non-supported browser. For more information, refer to the list of supported browsers.
I would suspect that you are missing a GRAFANA_DOMAIN: ${GRAFANA_DOMAIN} entry in your docker-compose.yml. You need to pass this environment variable through so it is available inside the container.
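As a sketch, that would add one line to the service's environment block (assuming GRAFANA_DOMAIN is defined in the same .env file as the other variables):
environment:
  TZ: Asia/Tehran
  GF_SECURITY_ADMIN_USER: ${GRAFANA_ADMIN_USER}
  GF_SECURITY_ADMIN_PASSWORD: ${GRAFANA_ADMIN_PASSWORD}
  GRAFANA_DOMAIN: ${GRAFANA_DOMAIN}   # pass the host-side variable into the container
Alternatively, Grafana's own GF_<SECTION>_<KEY> environment overrides can set the same value without touching grafana.ini, e.g. GF_SERVER_DOMAIN: ${GRAFANA_DOMAIN}.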

Telegraf docker cannot connect to Mosquitto broker docker [duplicate]

I am trying to run a local Mosquitto broker, publisher and subscriber setup via docker and docker-compose, but the publisher cannot connect to the broker. However, connecting to the local broker via the CLI works fine.
I get the following error when running the setup below:
{ Error: connect ECONNREFUSED 127.0.0.1:1883
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1088:14)
errno: 'ECONNREFUSED',
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 1883 }
Local dockerized setup:
docker-compose.yml:
version: "3.5"
services:
publisher:
hostname: publisher
container_name: publisher
build:
context: ./
dockerfile: dev.Dockerfile
command: npm start
networks:
- default
depends_on:
- broker
broker:
image: eclipse-mosquitto
hostname: mosquitto-broker
container_name: mosquitto-broker
networks:
- default
ports:
- "1883:1883"
networks:
default:
dev.Dockerfile:
FROM node:11-alpine
RUN mkdir app
WORKDIR app
COPY package*.json ./
RUN npm ci
COPY ./src ./src
CMD npm start
src/index.js:
const mqtt = require("mqtt");

const client = mqtt.connect("mqtt://localhost:1883");

client.on("connect", () => {
  console.log("Start publishing...");
  client.publish("testTopic", "test");
});

client.on("error", (error) => {
  console.error(error);
});
However, if I connect to the mosquitto broker via mqtt-js cli, it works as expected. E.g.
mqtt sub -t 'testTopic' -h 'localhost' and mqtt pub -t 'testTopic' -h 'localhost' -m 'from MQTT.js'.
What am I missing?
Your publisher and broker are running in two different containers, which means they are effectively two different machines, each with its own IP.
You can't reach the broker service from your publisher container using localhost:1883, and vice versa from the broker to the publisher.
To reach the broker container you have to use its container IP, container name, or service name.
In your case, change mqtt.connect("mqtt://localhost:1883"); to mqtt.connect("mqtt://broker:1883"); and give it a try.
The publisher and broker run in different containers, meaning they have different IPs.
When the publisher tries to reach the broker at localhost:1883, an ECONNREFUSED is to be expected, since the broker is not in the same container.
You should replace 127.0.0.1 or localhost with the service name of the broker (broker in this case). The service name will be resolved to the correct IP of the broker container.
In your index.js you should change "localhost" to "broker". Inside a container, "localhost" resolves to that specific container, so you should always use the service name instead; Docker will take care of routing to that service. (The mqtt-js CLI works from your host because the compose file publishes port 1883 to the host.) Also, by default all services in the same compose file are attached to the same network, so there is no need to specify it.
So basically change this: const client = mqtt.connect("mqtt://localhost:1883");
To this: const client = mqtt.connect("mqtt://broker:1883");

error: container_linux.go:235: starting container process caused keycloak/keycloak-gatekeeper

On CentOS 7, I'm trying to start 2 containers with docker-compose and I get this error:
error: container_linux.go:235: starting container process caused keycloak/keycloak-gatekeeper
# ls
docker-compose.yml Dockerfile gatekeeper-be.conf gatekeeper-fe.conf nginx-conf.d README.MD
=================
# cat docker-compose.yml
version: '3.2'
networks:
  network-bo-network:
    driver: "bridge"
    ipam:
      config:
        - subnet: "173.200.1.0/24"
gatekeeper-fe:
  image: keycloak/keycloak-gatekeeper:latest
  command: /keycloak-proxy --config /opt/keycloak-gatekeeper/gatekeeper.conf
  volumes:
    - ./gatekeeper-fe.conf:/opt/keycloak-gatekeeper/gatekeeper.conf
  networks:
    network-bo-network:
      ipv4_address: "173.200.1.3"
network-bo-nginx:
  image: nginx:1.17
  ports:
    - "83:80"
  volumes:
    - ./nginx-conf.d:/etc/nginx/conf.d
  networks:
    network-bo-network:
      ipv4_address: "173.200.1.5"
===========================================
cat gatekeeper-fe.conf
ClientID is the client id
client-id: client-bo-app
## ClientSecret is the secret for AS
client-secret: xxxxxxxxxxxxxxxxxxx
## DiscoveryURL is the url for the keycloak server
discovery-url: https://xxxxxxxxxxxxxxxxxxxx
## SkipOpenIDProviderTLSVerify skips the tls verification for openid provider communication
skip-openid-provider-tls-verify: true
## EnableDefaultDeny indicates we should deny by default all requests
enable-default-deny: true
## EnableRefreshTokens indicate's you wish to ignore using refresh tokens and re-auth on expiration of access token
enable-refresh-tokens: true
## EncryptionKey is the encryption key used to encrypt the refresh token
encryption-key: xxxxxxxxxxxxxxxxxxxxxxxxxxxx
## Listen is the binding interface
listen: :8081
## Upstream is the upstream endpoint i.e whom were proxying to
upstream-url: http://173.200.1.1:8082
## EnableLogging indicates if we should log all the requests
enable-logging: true
## EnableJSONLogging is the logging format
enable-json-logging: true
## PreserveHost preserves the host header of the proxied request in the upstream request
preserve-host: true
## NoRedirects informs we should hand back a 401 not a redirect
no-redirects: true
## AddClaims is a series of claims that should be added to the auth headers
add-claims:
- email
- given_name
- family_name
- name
## Resources configuration
resources:
- uri: /api/v1/metadata
  methods:
    - GET
  white-listed: true
==================================================
# docker-compose up
WARNING: Found orphan containers (network-bo-dev_network-bo-postgres_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
network-bo-dev_network-bo-nginx_1 is up-to-date
Creating network-bo-dev_gatekeeper-fe_1 ... error
ERROR: for network-bo-dev_gatekeeper-fe_1 Cannot start service gatekeeper-fe: oci runtime error: container_linux.go:235: starting container process caused "container init exited prematurely"
ERROR: for gatekeeper-fe Cannot start service gatekeeper-fe: oci runtime error: container_linux.go:235: starting container process caused "container init exited prematurely"
ERROR: Encountered errors while bringing up the project.
You should provide a minimal reproducible example (https://stackoverflow.com/help/minimal-reproducible-example); the provided docker-compose doesn't have correct syntax.
A few obvious errors:
- the gatekeeper binary in the image lives at /opt/keycloak-gatekeeper, not /keycloak-proxy, but see the next point
- the image uses entrypoint=/opt/keycloak-gatekeeper, so command just needs the part after the binary, e.g.: --config /opt/keycloak-gatekeeper/gatekeeper.conf
- the first line in gatekeeper-fe.conf should be a comment
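Putting those points together, a corrected service definition might look like this sketch (assuming the rest of the compose file and the mounted config path stay as they are):
gatekeeper-fe:
  image: keycloak/keycloak-gatekeeper:latest
  # the image's entrypoint already invokes the gatekeeper binary,
  # so the command only carries the arguments:
  command: --config /opt/keycloak-gatekeeper/gatekeeper.conf
  volumes:
    - ./gatekeeper-fe.conf:/opt/keycloak-gatekeeper/gatekeeper.conf
  networks:
    network-bo-network:
      ipv4_address: "173.200.1.3"
And the first line of gatekeeper-fe.conf would become a comment, e.g. ## ClientID is the client id.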

cURL error 7: Failed to connect to test.localhost port 80

I use laradock.
I created multiple projects; one project is one microservice.
I need to use Guzzle HTTP to get data from one microservice in another.
In laradock/nginx/sites I configured all the virtual hosts.
Each project (microservice) works fine separately.
But when I try to get data from one container in another:
$url = "test.localhost/users";
$client = new \GuzzleHttp\Client();
$request = $client->get($url);
$response = $request->getBody();
return json_decode($response->getContents(), true);
I got an error:
GuzzleHttp \ Exception \ ConnectException
cURL error 7: Failed to connect to test.localhost port 80: Connection timed out (see https://curl.haxx.se/libcurl/c/libcurl-errors.html)
I think this is a problem with Docker.
I tried:
networks:
  frontend:
    aliases:
      - test.localhost
      - test2.localhost
  backend:
    aliases:
      - test.localhost
      - test2.localhost
But it didn't help.

psql: could not translate host name "somePostgres" to address: Name or service not known

I am building a Java Spring MVC application in Docker, and the Dockerfile build involves interacting with a postgres container. Whenever I run docker-compose up, the step in the Dockerfile which interacts with postgres sometimes fails with an exception:
psql: could not translate host name "somePostgres" to address: Name or service not known
FAILED
FAILURE: Build failed with an exception.
docker-compose file:
abcdweb:
  links:
    - abcdpostgres
  build: .
  ports:
    - "8080:8080"
  volumes:
    - .:/abcd-myproj
  container_name: someWeb
abcdpostgres:
  image: postgres
  environment:
    - POSTGRES_PASSWORD=postgres
    - POSTGRES_USER=postgres
  container_name: somePostgres
The somePostgres container seems to start very quickly, so this is not a case of the postgres container loading late. Currently I am running this in a VirtualBox VM created by docker-machine. I'm unable to pin down the error as it's not persistent.
PS: Added Dockerfile
FROM java:7
RUN apt-get update && apt-get install -y postgresql-client-9.4
ADD . ./abcd-myproj
WORKDIR /abcd-myproj
RUN ./gradlew build -x test
RUN sh db/importdata.sh
CMD ./gradlew jettyRun
Basically, what this error means is that psql was unable to resolve the host name; try using the IP address instead.
https://github.com/postgres/postgres/blob/313f56ce2d1b9dfd3483e4f39611baa27852835a/src/interfaces/libpq/fe-connect.c#L2275-L2285
case CHT_HOST_NAME:
    ret = pg_getaddrinfo_all(ch->host, portstr, &hint,
                             &conn->addrlist);
    if (ret || !conn->addrlist)
    {
        appendPQExpBuffer(&conn->errorMessage,
                          libpq_gettext("could not translate host name \"%s\" to address: %s\n"),
                          ch->host, gai_strerror(ret));
        goto keep_going;
    }
    break;
https://github.com/postgres/postgres/blob/8255c7a5eeba8f1a38b7a431c04909bde4f5e67d/src/common/ip.c#L57-L75
int
pg_getaddrinfo_all(const char *hostname, const char *servname,
                   const struct addrinfo *hintp, struct addrinfo **result)
{
    int rc;

    /* not all versions of getaddrinfo() zero *result on failure */
    *result = NULL;

#ifdef HAVE_UNIX_SOCKETS
    if (hintp->ai_family == AF_UNIX)
        return getaddrinfo_unix(servname, hintp, result);
#endif

    /* NULL has special meaning to getaddrinfo(). */
    rc = getaddrinfo((!hostname || hostname[0] == '\0') ? NULL : hostname,
                     servname, hintp, result);

    return rc;
}
I think links are discouraged these days.
But if you want the services to communicate over a network explicitly, here is the config: you define a network and attach both services to it. Note that a top-level networks: key belongs to version 2+ of the compose file format, so the services also move under a services: key. It looks something like this:
version: "2"
networks:
  network:
    external: true
services:
  abcdweb:
    links:
      - abcdpostgres
    build: .
    ports:
      - "8080:8080"
    volumes:
      - .:/abcd-myproj
    container_name: someWeb
    networks:
      network: null
  abcdpostgres:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_USER=postgres
    container_name: somePostgres
    networks:
      network: null
This way the services communicate over that network, using the service names as addresses.
I had to set my secret_key_base in secrets.yml.
With the incorrect key, my app did not have permission to resolve the database domain.
I'm running a Rails app in Docker that makes use of secret_key_base. The problem was that I was running the app against the production database using the development environment, which meant the development secret_key_base was used. Once I began using the correct key, I could connect to the database.
The error showed up in my Rails container logs as:
Raven 2.13.0 configured not to capture errors: No host specified, no public_key specified, no project_id specified
See this question for how to set the secret_key_base in secrets.yml.
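For reference, a minimal sketch of the standard Rails secrets.yml layout, with one key per environment (the values here are placeholders, not from the original post):
development:
  secret_key_base: replace_with_your_development_key
production:
  secret_key_base: <%= ENV["SECRET_KEY_BASE"] %>   # keep the production key out of the repo
Running against the production database while booting in the development environment picks up the development key, which matches the mismatch described above.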
