Installing PostgreSQL within a docker container - docker

I've been following several different tutorials as well as the official one; however, whenever I try to install PostgreSQL within a container I get the following message afterwards:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
I've looked through several questions here on SO and throughout the internet but no luck.

The problem is that your application/project is trying to access the postgres socket file on the HOST machine (not in the docker container).
To solve it you either have to explicitly ask for a TCP/IP connection, using the -p flag to publish a port for the postgres container, or share the unix socket with the HOST machine using the -v flag.
NOTE:
Using the -v or --volume= flag means you are sharing some space between the HOST machine and the docker container. That means that if you have postgres installed and running on your host machine, you will probably run into issues.
Below I demonstrate how to run a postgres container that is accessible both over TCP/IP and through the unix socket. I am also naming the container postgres.
docker run -p 5432:5432 -v /var/run/postgresql:/var/run/postgresql -d --name postgres postgres
There are other solutions, but I find this one the most suitable. Finally if the application/project that needs access is also a container, it is better to just link them.
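As a rough sketch (the application image and network name below are just placeholders), the container started above can then be reached either way from the host, and an application container can be attached to the same user-defined network instead of relying on links:
# connect from the host over TCP/IP (port published with -p)
psql -h localhost -p 5432 -U postgres
# or connect through the shared unix socket directory
psql -h /var/run/postgresql -U postgres
# if the client is itself a container, a user-defined network is an alternative to --link
docker network create app-net                 # hypothetical network name
docker network connect app-net postgres
docker run --network app-net my-app-image     # "my-app-image" is a placeholder; it can reach host "postgres" on 5432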

By default psql tries to connect to the server using a UNIX socket. That's why we see /var/run/postgresql/.s.PGSQL.5432 - the location of the UNIX-socket descriptor.
If you run postgresql-server in docker with a port binding, then you have to tell psql to use a TCP socket. Just add the host param (--host or -h):
psql -h localhost [any other params]
UPD. Or share the UNIX socket descriptor with the host (where psql will be started), as shown in the main answer. But I prefer to use a TCP socket as the more easily managed approach.

FROM postgres:9.6
RUN apt-get update && apt-get install -q -y postgresql-9.6 postgresql-client-9.6 postgresql-contrib-9.6 postgresql-client-common postgresql-common
RUN echo postgres:postgres | chpasswd
RUN pg_createcluster 9.6 main --start
# start the server and change the password in the same RUN step, because each RUN
# executes in its own layer and the service does not stay up between layers
RUN /etc/init.d/postgresql start && \
    su -c "psql -c \"ALTER USER postgres PASSWORD 'postgres';\"" postgres

Here are instructions for fixing that error that should also work for your docker container: PostgreSQL error 'Could not connect to server: No such file or directory'
If that doesn't work for any reason, there are many off-the-shelf postgresql docker containers you can look at for reference on the Docker Index: https://index.docker.io/search?q=postgresql
Many of the containers are built from trusted repos on github. So if you find one that seems like it meets your needs, you can review the source.
The Flynn project has also included a postgresql appliance that might be worth checking out: https://github.com/flynn/flynn-postgres

Run the command below to create a new container with PostgreSQL running in it, which can be accessed from other containers/applications.
docker run --name postgresql-container -p 5432:5432 -e POSTGRES_PASSWORD=somePassword -d postgres
Now, export the connection string or DB credentials from your .env and use it in the application.
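For example, a hypothetical .env entry matching the container started above (the database name and host are assumptions):
# .env - hypothetical values based on the docker run command above
DATABASE_URL=postgresql://postgres:somePassword@localhost:5432/postgres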
Reference: detailed installation and running

Related

How to export docker process results to a local file - while connecting to the host through ssh?

Well, considering I have a docker container (with postgres) running, I could dump the data using pg_dump:
sudo docker exec <DOCKERNAME> pg_dump --data-only --table=some_table some_db
I could further send this to a file by adding > export.sql
sudo docker exec <DOCKERNAME> pg_dump --data-only --table=some_table some_db > export.sql
Finally, this works fine in an (interactive) ssh session.
However, when using ssh the file is stored on the remote host instead of on my local system; I wish to get the file locally instead. I know I can send a command directly to the ssh shell so that the output is written on the local host:
ssh -p 226 USER@HOST 'command' > local.sql
E.g.:
ssh -p 226 USER@HOST 'echo test' > local.sql
However, when I try to combine both commands I get an error:
ssh -p 226 USER@HOST 'sudo docker exec <DOCKERNAME> pg_dump --data-only --table=some_table some_db' > local.sql
sudo: no tty present and no askpass program specified
And if I dare remove sudo (which would be silly) I get: sh: docker: command not found. How do I solve this? How can I export the pg dump direct to my local pc? With a simple command? Or at least without first creating a copy of the file on the remote system?
I'd avoid sudo or docker exec for this setup.
First, make sure that your database container has a port published to the host. In Docker Compose, for example:
version: '3.8'
services:
  db:
    image: postgres
    ports:
      - '127.0.0.1:11111:5432'
The second port number must be the ordinary PostgreSQL port 5432; the first port number can be anything that doesn't conflict; the 127.0.0.1 setting makes the published port only accessible on the local system.
Then, when you connect to the remote system, you can use ssh to set up a port forward:
ssh -L 22222:localhost:11111 -N me@remote.example.com
ssh -L sets up a port forward from your local system to the remote system; ssh -N says to not run a command, just do the port forward.
Now on your local system, you can run psql and other similar client tools. Locally, localhost:22222 connects to the ssh tunnel; that forwards to localhost:11111 on the remote system; and that forwards to port 5432 in the container.
pg_dump --port=22222 --data-only --table=some_table some_db > export.sql
If you have the option of directly connecting to the database host, you could remove 127.0.0.1 from the ports: setting, and then pg_dump --host=remote.example.com --port=11111, without the ssh tunnel. (But I'm guessing it's there for a reason.)
You could forward socket connections over ssh then connect to the container from your host if you have docker installed:
ssh -n -N -T -L ${PWD}/docker.sock:/var/run/docker.sock user@host &
docker -H unix://${PWD}/docker.sock exec ...
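Putting the two together for this question's case (the container name and table are the placeholders from above), something like this should write the dump straight to a local file:
# forward the remote Docker socket to a local socket file
ssh -n -N -T -L ${PWD}/docker.sock:/var/run/docker.sock user@host &
# run pg_dump inside the remote container via the forwarded socket;
# stdout comes back through the local docker client, so the file lands locally
docker -H unix://${PWD}/docker.sock exec <DOCKERNAME> \
  pg_dump --data-only --table=some_table some_db > local.sql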

multiple versions of neo4j server on the same machine

I downloaded 2 versions of neo4j on Ubuntu 18.04: "neo4j-community-3.5.12" and "neo4j-community-3.5.8".
I run 3.5.8 with default settings and I can see it on the web at http://localhost:7474/.
For 3.5.12 I changed the conf/neo4j.conf file and set some other port numbers so as not to conflict with the default ones.
The 3.5.8 version runs fine on :7474. When I start 3.5.12, the log says it is running, but when I check from the browser it is not running. I tried 2 different port settings; none worked. Below is the log file.
Why is it not running?
I see that many people recommended using docker. I also tried that.
I set up a docker container with the command
sudo docker run --name db1 -p7474:7474 -p7687:7687 -d -v /db1/data:/data -v /db1/logs:/logs -v /db1/conf:/conf --env NEO4J_AUTH=none neo4j
Here I have an existing /db1/data/databases/graph.db folder. When I go to localhost:7474 it is fine; it shows me the existing database.
I set up another docker container with the command
sudo docker run --name db2 -p3001:7474 -p3002:7473 -p3003:7687 -d -v /db2/data:/data -v /db2/logs:/logs -v /db2/conf:/conf --env NEO4J_AUTH=none neo4j
Here I expect to see an EMPTY database, but I see the already existing database again. When I go to the data folder inside db2, I see that it created some files there. WHY do I see the same database?
Also note that when I go to see the databases, the headers of the web pages show they are using the same bolt port.
Can I copy the neo4j image and use different images to generate containers? Does that help?
I recognized that multiple databases are running and active but somehow I'm not able to reach the second one through a browser.
Considering the docker commands-
cmd1: sudo docker run --name db1 -p7474:7474 -p7687:7687 -d -v /db1/data:/data -v /db1/logs:/logs -v /db1/conf:/conf --env NEO4J_AUTH=none neo4j
cmd2: sudo docker run --name db2 -p3001:7474 -p3002:7473 -p3003:7687 -d -v /db2/data:/data -v /db2/logs:/logs -v /db2/conf:/conf --env NEO4J_AUTH=none neo4j
The container ports are exposed on the default host ports for the db1 instance, whereas for the db2 instance the 3xxx series has been used.
To browse the UI provided by neo4j, you can use either the 7474 or the 3001 host port; both are mapped to container port 7474.
The Neo4j browser uses the defaults (from neo4j.conf) to connect to the neo4j server. The default setting is
bolt://<machineip>:7687, and the db1 instance has already exposed its container port on host port 7687.
So a running instance is found on port 7687, and the browser initiates its WebSocket connection to it for both db1 and db2 - which is why you see the same database twice.
How to connect to an appropriate instance?
Either:
1. Use :server disconnect and :server connect with the appropriate bolt://<machineip>:port connection string, or
2. Map the db1 instance's bolt container port to a different host port (i.e. other than 7687), so that no default is available, or
3. (Preferred) set the same hostport:containerport combination for db2, e.g.
cmd2: sudo docker run --name db2 -p3001:7474 -p3002:7473 -p3003:3003 -d -v /db2/data:/data -v /db2/logs:/logs -v /db2/conf:/conf --env NEO4J_AUTH=none neo4j
In this case, a volume has to be mapped to provide a neo4j.conf with the updated value dbms.connector.bolt.listen_address=:3003, as sketched below.
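A minimal sketch of that preferred option, assuming the host paths from the commands above (the conf key applies to Neo4j 3.x):
# append the new bolt listen address to the conf directory mounted into db2
echo 'dbms.connector.bolt.listen_address=:3003' | sudo tee -a /db2/conf/neo4j.conf
sudo docker restart db2
# the browser can then be pointed at bolt://<machineip>:3003 via :server connect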
In case anybody still needs it: Here is how to run two neo4j databases neo4j_01 and neo4j_02 in two different docker containers, saving the data in different directories and accessing them on different ports.
docker container 1: neo4j_01
docker run \
--name neo4j_01 \
-p1474:7474 -p1687:7687 \
-d \
-v $HOME/neo4j_01/neo4j/data:/data \
-v $HOME/neo4j_01/neo4j/logs:/logs \
-v $HOME/neo4j_01/neo4j/import:/var/lib/neo4j/import \
-v $HOME/neo4j_01/neo4j/plugins:/plugins \
--env NEO4J_AUTH=username/enterpasswordhere \
neo4j:latest
docker container 2: neo4j_02
docker run \
--name neo4j_02 \
-p2474:7474 -p2687:7687 \
-d \
-v $HOME/neo4j_02/neo4j/data:/data \
-v $HOME/neo4j_02/neo4j/logs:/logs \
-v $HOME/neo4j_02/neo4j/import:/var/lib/neo4j/import \
-v $HOME/neo4j_02/neo4j/plugins:/plugins \
--env NEO4J_AUTH=username/enterpasswordhere \
neo4j:latest
After executing the code above, e.g. neo4j_01 can be reached on port 1474 (when logging in, you need to change the bolt port to 1687 in the first line and then enter the username and password in the second and third lines).
You can stop a container with docker kill neo4j_01 and restart it with docker start neo4j_01. Data will still be there; it is saved in $HOME/neo4j_01/neo4j/data.
Doing it like this, I did not encounter any problems with ports or with accessing the wrong database, etc.
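As a quick sanity check (a hypothetical query; the credentials are whatever was set via NEO4J_AUTH above), each container can be queried over its own bolt endpoint:
# inside each container bolt is still 7687; from the host the mapped ports differ
docker exec -it neo4j_01 cypher-shell -a bolt://localhost:7687 -u username -p enterpasswordhere "MATCH (n) RETURN count(n);"
docker exec -it neo4j_02 cypher-shell -a bolt://localhost:7687 -u username -p enterpasswordhere "MATCH (n) RETURN count(n);"
# from the host instead: bolt://localhost:1687 for neo4j_01 and bolt://localhost:2687 for neo4j_02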
After a lot of effort, my solution is not to use docker.
Go and download a community server from here: https://neo4j.com/download-center/#community. It will give you a compressed file. Extract it. You will have a folder named something like neo4j-community-3.5.14. Make a copy of THAT FOLDER - one copy for each server instance.
Inside the folder there is a conf folder which has a file named neo4j.conf. Open that file. By changing some settings inside this file, you can run many neo4j servers. Change the settings below.
To accept non-local connections, uncomment this line:
dbms.connectors.default_listen_address=0.0.0.0
Change some port numbers so that they won't conflict with the ones already in use:
dbms.connector.bolt.listen_address=:3003
dbms.connector.https.listen_address=:3002
dbms.connector.http.listen_address=:3001
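A brief sketch of the end result, assuming two copies of the extracted folder (the folder names below are made up; the second copy uses the edited conf):
# each copied folder is a self-contained server; start them independently
./neo4j-community-3.5.14/bin/neo4j start            # default ports 7474 / 7473 / 7687
./neo4j-community-3.5.14-second/bin/neo4j start     # ports 3001 / 3002 / 3003 from the edited conf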

run Mysql and Wordpress on Docker, got message Error establishing a database connection

I am a newbie to Docker. I ran the commands below in a terminal:
docker run --name mysql-cont -e MYSQL_ROOT_PASSWORD=qwerty -d mysql
docker run --name wp-cont --link mysql-cont:mysql -p 8080:80 -d wordpress
if I access
127.0.0.1:8080
or
localhost:8080
it displays
Error establishing a database connection
It looks like the configuration file for mysql does not work well.
I searched for a docker-compose.yml / docker-compose.yaml file
but could not find one.
Your comments are welcome.
First of all, you didn't pass the database connection string to the WordPress container, as shown here.
If you need a docker-compose.yml file, you can use this official documentation to help you implement your project.
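A rough sketch of that missing piece, using the official WordPress image's WORDPRESS_DB_* environment variables (the root user is used here only for brevity; the password matches the one given to mysql-cont above):
docker run --name mysql-cont -e MYSQL_ROOT_PASSWORD=qwerty -d mysql
docker run --name wp-cont --link mysql-cont:mysql \
  -e WORDPRESS_DB_HOST=mysql \
  -e WORDPRESS_DB_USER=root \
  -e WORDPRESS_DB_PASSWORD=qwerty \
  -p 8080:80 -d wordpress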

Superset for Clickhouse in docker with SQLAlchemy

I'm trying to setup Apache Superset for Clickhouse.
My understanding so far is that I need to install SQLAlchemy for Clickhouse
https://github.com/xzkostyan/clickhouse-sqlalchemy
I'm in Ubuntu 16.04 LTS, and using the Docker vanilla version of Clickhouse and of Superset:
https://store.docker.com/community/images/yandex/clickhouse-server
https://hub.docker.com/r/amancevice/superset/
without special settings
Any idea how I can bridge the two docker containers with clickhouse-sqlalchemy?
Where and how, in that case, do I install it?
(If you have a sample command line that I can reuse, that would be great.)
You don't need to bridge them: what you want is a superset server (that you happen to be running via docker) to connect to a clickhouse database (that you also happen to be running via docker).
You also shouldn't need to install SQLAlchemy for Clickhouse: looking at the dockerfile at https://hub.docker.com/r/amancevice/superset/~/dockerfile/ , that image already has sqlalchemy-clickhouse installed for you.
Your steps should be as follows:
When you docker run --detach --name superset [options] amancevice/superset you should have your superset instance running at http://localhost:8088/
Similarly, when you run $ docker run -d --name some-clickhouse-server --ulimit nofile=262144:262144 -v /path/to/your/config.xml:/etc/clickhouse-server/config.xml yandex/clickhouse-server you should end up with a clickhouse instance that you can access via SQLAlchemy at something like clickhouse://default:@some-clickhouse-server/test
You'd need to modify that connection URI based on your config.xml - and you should be able to double-check that it works by connecting to it in your python console.
You should then be able to connect superset to your clickhouse db in the same way you'd connect to any other DB: by navigating into Superset's menu > Sources > Databases > [new]
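One practical detail: the two containers need to be able to reach each other so that the hostname in that URI resolves. A minimal sketch, assuming the container names above and a made-up user-defined network:
docker network create ch-net
docker run -d --name some-clickhouse-server --network ch-net \
  --ulimit nofile=262144:262144 yandex/clickhouse-server
docker run -d --name superset --network ch-net -p 8088:8088 amancevice/superset
# in Superset, the SQLAlchemy URI can then use the container name as the host,
# e.g. clickhouse://default:@some-clickhouse-server:8123/default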
Consider using the already prepared and configured docker-compose.yml which is included in Apache Superset (see https://github.com/apache/superset/blob/master/docker-compose.yml).
To work with ClickHouse, a SQLAlchemy driver needs to be installed. There are two of them:
clickhouse-sqlalchemy by xzkostyan
sqlalchemy-clickhouse by cloudflare.
I recommend using clickhouse-sqlalchemy because it is actively supported and evolving, and it supports both of the protocols available for interacting with ClickHouse - HTTP and TCP (the native protocol).
Let's connect to one of the public ClickHouse servers:
either Demo Yandex CH
docker run -it --rm yandex/clickhouse-client:latest \
--host gh-api.clickhouse.tech --user explorer -s
or Demo Altinity.Cloud CH
docker run -it --rm yandex/clickhouse-client:latest \
--host github.demo.trial.altinity.cloud -s --user demo --password demo
download source code from repo https://github.com/apache/superset
execute the commands
cd superset-master
docker-compose up
# open the new terminal
docker-compose exec superset bash /app/docker/docker-init.sh
docker-compose exec superset pip install clickhouse-sqlalchemy
docker-compose restart
wait for containers to be started and the web app to be built (see the console output, webpack should finish its work)
browse URL http://localhost:8088 (use credentials admin / admin)
add the database using one of the connection strings:
# connection string for Demo Yandex ClickHouse
clickhouse+native://explorer@gh-api.clickhouse.tech/default?secure=true
# connection string for Demo Altinity.Cloud CH
clickhouse+native://demo:demo@github.demo.trial.altinity.cloud/default?secure=true
See also https://stackoverflow.com/a/66006784/303298.

How to persist configuration & analytics across container invocations in Sonarqube docker image

The official Sonarqube docker image does not persist any configuration changes such as creating users, changing the root password, or even installing new plugins.
Once the container is restarted, all the configuration changes disappear and the installed plugins are lost. Even the projects' keys and their previous QA analytics data are unavailable after a restart.
How can we persist the data when using Sonarqube's official docker image?
The Sonarqube image comes with an embedded H2 database engine which is not recommended for production and doesn't persist across container restarts.
We need to set up a database of our own and point Sonarqube to it when starting the container.
The Sonarqube docker image exposes two volumes, "$SONARQUBE_HOME/data" and "$SONARQUBE_HOME/extensions", as seen in the Sonarqube Dockerfile.
Since we want to persist the data across invocations, we need to make sure that a production-grade database is set up and linked to Sonarqube, and that the extensions directory is created and mounted as a volume on the host machine, so that all the downloaded plugins are available across container invocations and can be used by multiple containers (if required).
Database Setup:
create database sonar;
grant all on sonar.* to `sonar`@`%` identified by "SOME_PASSWORD";
flush privileges;
# since we do not know the container's IP beforehand, we use '%' for the sonarqube host IP.
It is not necessary to create tables, Sonarqube creates them if it doesn't find them.
Starting up Sonarqube container:
# create a directory on host
mkdir /server_data/sonarqube/extensions
mkdir /server_data/sonarqube/data # this will be useful in saving startup time
# Start the container
docker run -d \
--name sonarqube \
-p 9000:9000 \
-e SONARQUBE_JDBC_USERNAME=sonar \
-e SONARQUBE_JDBC_PASSWORD=SOME_PASSWORD \
-e SONARQUBE_JDBC_URL="jdbc:mysql://HOST_IP_OF_DB_SERVER:PORT/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance" \
-v /server_data/sonarqube/data:/opt/sonarqube/data \
-v /server_data/sonarqube/extensions:/opt/sonarqube/extensions \
sonarqube
Hi @VanagaS and others landing here.
I just wanted to provide an alternative to the above. Maybe some would even consider it an easier one.
Notice this line SONARQUBE_HOME in the Dockerfile for the docker-sonarqube image. We can control this environment variable.
When using docker run, simply do:
docker run -d \
...
...
-e SONARQUBE_HOME=/sonarqube-data \
-v /PERSISTENT_DISK/sonarqubeVolume:/sonarqube-data
This will make Sonarqube create the conf, data and other folders under that path and store its data therein, as needed.
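For instance, a fuller (hypothetical) version of the abbreviated command above could be:
docker run -d \
  --name sonarqube \
  -p 9000:9000 \
  -e SONARQUBE_HOME=/sonarqube-data \
  -v /PERSISTENT_DISK/sonarqubeVolume:/sonarqube-data \
  sonarqube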
Or with Kubernetes, in your deployment YAML file, do:
...
...
env:
  - name: SONARQUBE_HOME
    value: /sonarqube-data
...
...
volumeMounts:
  - name: app-volume
    mountPath: /sonarqube-data
And the name in the volumeMounts property points to a volume in the volumes section of the Kubernetes deployment YAML file.
This again will make Sonarqube use the /sonarqube-data mountPath for creating the extensions, conf and other folders, and save its data therein.
And voilà, your Sonarqube data is thereby persisted.
I hope this will help others.
N.B. Notice that the YAML and Docker run examples are not exhaustive. They focus on the issue of persisting Sonarqube data.
Since Sonarqube v7.9, MySQL is no longer supported; one needs to use PostgreSQL. Install PostgreSQL and configure it to listen on a host IP rather than localhost (a private IP is preferred).
Reference: https://www.digitalocean.com/community/tutorials/how-to-install-and-use-postgresql-on-ubuntu-18-04
postgres=# create database sonar;
postgres=# create user sonar with encrypted password 'mypass';
postgres=# grant all privileges on database sonar to sonar;
create a directory on host
mkdir /server_data/sonarqube/extensions
mkdir /server_data/sonarqube/data # this will be useful in saving startup time
Start the container
docker run -d \
  --name sonarqube \
  -p 9000:9000 \
  -e SONARQUBE_JDBC_USERNAME=sonar \
  -e SONARQUBE_JDBC_PASSWORD=mypass \
  -e SONARQUBE_JDBC_URL=jdbc:postgresql://{host/private ip only}:5432/sonar \
  -v /server_data/sonarqube/data:/opt/sonarqube/data \
  -v /server_data/sonarqube/extensions:/opt/sonarqube/extensions \
  sonarqube
You may face this error when you do "docker logs container_id"
ERROR: [1] bootstrap checks failed [1]: max virtual memory areas
vm.max_map_count [65530] is too low, increase to at least [262144]
This is the fix, run on your host
sysctl -w vm.max_map_count=262144
In order to set the hostname / listen address, edit /etc/postgresql/10/main/postgresql.conf.
In order to add docker as a client for postgres, edit /etc/postgresql/10/main/pg_hba.conf.
(10 is the postgres version used here.) A sketch of both edits follows.
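A rough sketch of those two edits (the listen address and the docker subnet below are made-up examples; adjust them to your network and PostgreSQL version):
# postgresql.conf: listen on the host's private IP instead of localhost only
sudo sed -i "s/^#\?listen_addresses.*/listen_addresses = '10.0.0.5'/" /etc/postgresql/10/main/postgresql.conf
# pg_hba.conf: allow the docker bridge network to authenticate as the sonar user
echo 'host    sonar    sonar    172.17.0.0/16    md5' | sudo tee -a /etc/postgresql/10/main/pg_hba.conf
sudo systemctl restart postgresql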
