I am trying to get a db dump with the command
docker exec container-name sh -c 'exec mysqldump --all-databases -uroot -p""' > db-backups/some-dump-name.sql
and I am getting
Got error: 2002: "Can't connect to local MySQL server through socket '/opt/bitnami/mysql/tmp/mysql.sock' (2)" when trying to connect
Magento runs on this image. Any ideas what could be wrong? I can provide more details if needed.
Bitnami Engineer here,
You also need to set the hostname of the database when backing up the databases. The Magento container doesn't include a database server, it uses an external one.
You probably specified that using the MARIADB_HOST env variable. If you used the docker-compose.yml file we provide, that hostname is mariadb.
exec mysqldump --all-databases -uroot -h HOSTNAME -p""
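Putting that together with the original command, a sketch (assuming the compose-provided hostname mariadb and the empty root password from your command) would be:
docker exec container-name sh -c 'exec mysqldump --all-databases -uroot -h mariadb -p""' > db-backups/some-dump-name.sql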
How to recreate my problem
Creating the MonetDB Container
I have this setup (using windows with Docker Desktop).
Create the official MonetDB Docker container with the following command:
docker run -v $HOME/Desktop/monetdbtest:/monetdbtest -e 'MONET_DATABASE=docker' -e 'MONETDB_PASSWORD=docker' -p 50000:50000 -d topaztechnology/monetdb:latest
Explanation of what the command does:
It creates a MonetDB container with a database called 'docker' and applies the password 'docker' to the default user called 'monetdb'. It also mounts my directory monetdbtest/ into the container.
Testing the container with DBeaver
I test the connection using DBeaver with the following credentials:
JDBC URL: jdbc:monetdb://localhost:50000/docker
host: localhost
port: 50000
Database/schema: docker
username: monetdb
password: docker
This works fine; I am able to connect and can execute SQL queries with DBeaver.
Using mclient within the container to send queries
1. I enter the container as root with the following command:
docker exec -u root -t -i nostalgic_hodgkin /bin/bash
(replace nostalgic_hodgkin with your randomly generated container name)
2. I navigate to my mounted directory:
cd monetdbtest
Then I test the connection with mclient:
mclient -h localhost -p 50000 -d docker
I get asked for user and password; for the user I enter monetdb and for the password I enter docker. It works and I am in the mclient shell, able to execute SQL queries.
3. Since I don't want to always enter the username and password, I create a .monetdb file in the monetdbtest/ directory. It looks like this:
user=monetdb
password=docker
Now I should be able to use the mclient command without entering user information. So I type this command:
mclient -h localhost -p 50000 -d docker
However, I get the message:
InvalidCredentialsException:checkCredentials:invalid credentials for user 'monetdb'
I did everything according to the mclient manual. Maybe I missed something?
You may need to export the environment variable DOTMONETDBFILE with value /monetdbtest/.monetdb. See the man page for mclient, especially the paragraph before the OPTIONS heading.
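For example, inside the container, a minimal sketch would be:
export DOTMONETDBFILE=/monetdbtest/.monetdb
mclient -h localhost -p 50000 -d docker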
Well, considering I have a Docker container (with Postgres) running, I can dump the data using pg_dump:
sudo docker exec <DOCKERNAME> pg_dump --data-only --table=some_table some_db
I could further send this to a file by adding > export.sql
sudo docker exec <DOCKERNAME> pg_dump --data-only --table=some_table some_db > export.sql
Finally this works fine in an (interactive) ssh session.
However, when using ssh, the file is stored on the remote host instead of on my local system. I want the file locally, not on the remote. I know I can send a command directly over ssh, in which case the output is written on the local host:
ssh -p 226 USER@HOST 'command' > local.sql
For example:
ssh -p 226 USER@HOST 'echo test' > local.sql
However, when I try to combine both commands I get an error:
ssh -p 226 USER@HOST 'sudo docker exec <DOCKERNAME> pg_dump --data-only --table=some_table some_db' > local.sql
sudo: no tty present and no askpass program specified
And if I dare remove sudo (which would be silly) I get: sh: docker: command not found. How do I solve this? How can I export the pg dump directly to my local PC, with a simple command, or at least without first creating a copy of the file on the remote system?
I'd avoid sudo or docker exec for this setup.
First, make sure that your database container has a port published to the host. In Docker Compose, for example:
version: '3.8'
services:
  db:
    image: postgres
    ports:
      - '127.0.0.1:11111:5432'
The second port number must be the ordinary PostgreSQL port 5432; the first port number can be anything that doesn't conflict; the 127.0.0.1 setting makes the published port only accessible on the local system.
Then, when you connect to the remote system, you can use ssh to set up a port forward:
ssh -L 22222:localhost:11111 -N me@remote.example.com
ssh -L sets up a port forward from your local system to the remote system; ssh -N says to not run a command, just do the port forward.
Now on your local system, you can run psql and other similar client tools. Locally, localhost:22222 connects to the ssh tunnel; that forwards to localhost:11111 on the remote system; and that forwards to port 5432 in the container.
pg_dump --host=localhost --port=22222 --data-only --table=some_table some_db > export.sql
If you have the option of directly connecting to the database host, you could remove 127.0.0.1 from the ports: setting, and then pg_dump --host=remote.example.com --port=11111, without the ssh tunnel. (But I'm guessing it's there for a reason.)
You could forward socket connections over ssh then connect to the container from your host if you have docker installed:
ssh -n -N -T -L ${PWD}/docker.sock:/var/run/docker.sock user@host &
docker -H unix://${PWD}/docker.sock exec ...
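For example, reusing the pg_dump command from the question (a sketch, assuming Docker is installed locally and <DOCKERNAME> is the container from above):
docker -H unix://${PWD}/docker.sock exec <DOCKERNAME> pg_dump --data-only --table=some_table some_db > local.sql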
I am not able to connect with MongoDB and PostgreSQL. I am using the command below:
docker exec -it todomvc-mongodb mongo -user wolkenkit -p 576085646aa24f4670b929f0c47032ebf149e48f admin
It shows the following result:
2018-08-14T11:48:20.592+0000 E QUERY [thread1] Error: Authentication failed.
I have tried to reproduce your issue. What I have done:
I cloned the wolkenkit-todomvc sample application.
I started it using wolkenkit start.
This gave me the (randomly created) shared key 4852f4430d67990c28354b6fafae449c4f82e9ab (please note that this value is different each time you run wolkenkit start, unless you set it explicitly to a value of your choice, so YMMV).
Then I tried:
$ docker exec -it todomvc-mongodb mongo -user wolkenkit -p 4852f4430d67990c28354b6fafae449c4f82e9ab admin
It actually doesn't work. The reason for this is that the parameter -user does not exist; it has to be either -u or --username. If you run:
$ docker exec -it todomvc-mongodb mongo -u wolkenkit -p 4852f4430d67990c28354b6fafae449c4f82e9ab admin
Then, things work as expected.
Hope this helps 😊
I am actually trying to connect to an MS SQL Server on Azure, from Python, via the module pymssql which relies on FreeTDS. I just can't make it work. I found the command line tool tsql which is supposedly for testing FreeTDS connections. And also, I can't connect with tsql. Regarding this, I have one very specific question.
How do I specify which "database" to use in the tsql tool? For example, if I use DBeaver, I must specify the database, "ava-iot". Using man tsql does not tell me how to specify another database.
When I try:
$ tsql -H uepbua32ii.database.windows.net -p 1433 -U Azure_SQL_Reader_Temporary -P XXXXXX
I get:
"The server principal "Azure_SQL_Reader_Temporary" is not able to access the database "master" under the current security context."
This tells me that it is specifically trying to connect to a database named master. So how do I tell it to try the database ava-iot?
The reason this is happening is that your user Azure_SQL_Reader_Temporary has its default database set to master. You can change that as well. But to answer your question: with tsql, you specify the database using the -D parameter.
tsql -H uepbua32ii.database.windows.net -p 1433 -D dbname -U Azure_SQL_Reader_Temporary -P XXXXXX
Good luck!
I've been following several different tutorials as well as the official one; however, whenever I try to install PostgreSQL within a container I get the following message afterwards:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
I've looked through several questions here on SO and throughout the internet but no luck.
The problem is that your application/project is trying to access the Postgres socket file on the HOST machine (not in the Docker container).
To solve it, either explicitly ask for a TCP/IP connection, using the -p flag to publish a port for the Postgres container, or share the UNIX socket with the HOST machine using the -v flag.
NOTE:
Using the -v or --volume= flag means you are sharing some space between the HOST machine and the Docker container. That means that if you have Postgres installed and running on your host machine, you will probably run into issues.
Below I demonstrate how to run a Postgres container that is accessible through both TCP/IP and the UNIX socket. I am also naming the container postgres.
docker run -p 5432:5432 -v /var/run/postgresql:/var/run/postgresql -d --name postgres postgres
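With that container running, you can verify both access paths from the host, for example (a sketch, assuming psql is installed on the host and no local Postgres occupies /var/run/postgresql):
psql -h localhost -p 5432 -U postgres    # TCP/IP via the published port
psql -U postgres                         # UNIX socket via the shared directory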
There are other solutions, but I find this one the most suitable. Finally, if the application/project that needs access is also a container, it is better to just link the two.
By default psql tries to connect to the server using a UNIX socket. That's why we see /var/run/postgresql/.s.PGSQL.5432, the location of the UNIX socket descriptor.
If you run the PostgreSQL server in Docker with port binding, you have to tell psql to use a TCP socket. Just add the host param (--host or -h):
psql -h localhost [any other params]
UPD: Or share the UNIX socket descriptor with the host (where psql will be started), as shown in the main answer. But I prefer the TCP socket as the more easily managed approach.
FROM postgres:9.6
RUN apt-get update && apt-get install -q -y postgresql-9.6 postgresql-client-9.6 postgresql-contrib-9.6 postgresql-client-common postgresql-common
RUN echo postgres:postgres | chpasswd
RUN pg_createcluster 9.6 main --start
# Each RUN executes in a fresh layer, so the server must be started in the same step that runs psql
RUN /etc/init.d/postgresql start && su -c "psql -c \"ALTER USER postgres PASSWORD 'postgres';\"" postgres
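To try it out, a minimal sketch (custom-postgres is just an example tag):
docker build -t custom-postgres .
docker run -p 5432:5432 custom-postgres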
Here are instructions for fixing that error that should also work for your docker container: PostgreSQL error 'Could not connect to server: No such file or directory'
If that doesn't work for any reason, there are many off-the-shelf PostgreSQL Docker containers you can look at for reference on the Docker Index: https://index.docker.io/search?q=postgresql
Many of the containers are built from trusted repos on github. So if you find one that seems like it meets your needs, you can review the source.
The Flynn project has also included a postgresql appliance that might be worth checking out: https://github.com/flynn/flynn-postgres
Run the command below to create a new container with PostgreSQL running in it, which can be accessed from other containers/applications.
docker run --name postgresql-container -p 5432:5432 -e POSTGRES_PASSWORD=somePassword -d postgres
Now, export the connection string or DB credentials from your .env and use it in the application.
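For example, the resulting connection string would look like this (a sketch, assuming the default postgres user and database):
postgres://postgres:somePassword@localhost:5432/postgres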
Reference: detailed installation and running