How to specify which database (not "master") in tsql (FreeTDS command line tool) - freetds

I am actually trying to connect to an MS SQL Server on Azure, from Python, via the module pymssql, which relies on FreeTDS. I just can't make it work. I found the command-line tool tsql, which is supposedly for testing FreeTDS connections, and I can't connect with tsql either. Regarding this, I have one very specific question.
How do I specify which "database" in the tsql tool? E.g., if I use DBeaver, I must specify the database, "ava-iot". man tsql does not tell me how to specify another database.
When I try:
$ tsql -H uepbua32ii.database.windows.net -p 1433 -U Azure_SQL_Reader_Temporary -P XXXXXX
I get:
"The server principal "Azure_SQL_Reader_Temporary" is not able to access the database "master" under the current security context."
This tells me that it is specifically trying to connect to a database named master. So how do I tell it to try the database ava-iot?

This is happening because your user Azure_SQL_Reader_Temporary has its default database set to master. You can change that as well. But to answer your question: with tsql, you specify the database using the -D parameter.
tsql -H uepbua32ii.database.windows.net -p 1433 -D dbname -U Azure_SQL_Reader_Temporary -P XXXXXX
Good luck!
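Since the underlying goal was connecting from Python via pymssql, here is a minimal sketch of the equivalent connection with the database named explicitly (server, user, password, and database values are taken from the question; this is untested against a live server):

```python
import pymssql

# Name the database explicitly so the login does not fall back to "master",
# analogous to tsql's -D flag.
conn = pymssql.connect(
    server="uepbua32ii.database.windows.net",
    port=1433,
    user="Azure_SQL_Reader_Temporary",
    password="XXXXXX",
    database="ava-iot",
)
cursor = conn.cursor()
cursor.execute("SELECT 1")
print(cursor.fetchone())
conn.close()
```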

Related

psql: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed: FATAL: database "myname" does not exist

I have a Rails app running in my local environment using PostgreSQL. This morning I spun up a new one and, after installing the pg gem, etc., I am running into the following error when trying to run
psql
psql: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed: FATAL: database "jackcollins" does not exist
What's strange is that the db name "jackcollins" is from my other Rails app.
I ran
pgrep -l postgres
and the output was
20902 postgres
20919 postgres
20920 postgres
20921 postgres
20922 postgres
20923 postgres
20924 postgres
I'm unsure how to proceed so that these apps can both run their own postgres instance.
I had the same problem as you. After attempts to reinstall, rm -rf xxx.pid, etc., I ended up executing the following and was eventually able to connect to the PostgreSQL database:
brew install postgresql
If an error message appears, run createdb (on macOS, installing PostgreSQL does not create a database named after your user), then execute psql to connect successfully.
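The steps above can be sketched as a command sequence (assuming Homebrew on macOS; psql defaults to a database named after the current user):

```shell
brew install postgresql          # (re)install the server
brew services start postgresql   # make sure it is running
createdb "$(whoami)"             # create the per-user database psql expects
psql                             # should now connect successfully
```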
Regarding the error: you are trying to connect to jackcollins. Can you test connecting using the database flag?
psql -d your_database_name
If you type
psql --help
in the terminal, you will see that the default database is your username. Unless a database with that name exists in PostgreSQL, you will get an error. Instead, you can run the command
psql -d your_database_name
Also, if you want to log in as a specific user (by default it is the current user):
psql -d your_database_name -U your_username -W
The last flag, -W, prompts for the password. Hope it helps!
In the absence of -d <database_name>, psql will use the OS user name as the database name. As in many things, it is better to be explicit rather than implicit, especially when working on a new instance. In addition, use -h <host_name>, -p <port_number> and -U <user_name>. Then you know who you are connecting as, as well as how. It is spelled out here: psql, and, since psql is a libpq program, in more detail here: Key words.
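Putting those flags together, a fully explicit connection looks like this (host, port, user, and database below are placeholder values):

```shell
psql -h localhost -p 5432 -U your_username -d your_database_name
```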

Magento, database dump

I am trying to get a db dump with the command
docker exec container-name sh -c 'exec mysqldump --all-databases -uroot -p""' > db-backups/some-dump-name.sql
and I am getting
Got error: 2002: "Can't connect to local MySQL server through socket '/opt/bitnami/mysql/tmp/mysql.sock' (2)" when trying to connect
Magento runs on this image. Any ideas what could be wrong? I can provide more details if needed.
Bitnami Engineer here,
You also need to set the hostname of the database when backing up the databases. The Magento container doesn't include a database server, it uses an external one.
You probably specified that using the MARIADB_HOST env variable. If you used the docker-compose.yml file we provide, that hostname is mariadb.
exec mysqldump --all-databases -uroot -h HOSTNAME -p""
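Combined with the original command, the full invocation would be (assuming the hostname mariadb from the provided docker-compose.yml):

```shell
docker exec container-name sh -c 'exec mysqldump --all-databases -uroot -h mariadb -p""' > db-backups/some-dump-name.sql
```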

Unable to restore complete database from pg_dump

I ran the following command to backup my PostgreSQL database:
pg_dump -U postgres -h localhost -W -F t crewdb > /home/chris1/Documents/crewcut/crewdb/crewdb_bak.tar
This file was later saved to a USB.
After installing PostgreSQL on a new Ubuntu 18.04 system I ran the following command to restore the database from the USB:
psql -U postgres -d crewdb < /media/chh1/1818-305D/crewdb_bak.tar
The structure of the database has been recovered (tables, views, etc.), but the actual data in the tables has not.
Has anyone got an idea why this is and how to solve it?
I do not know if the command you ran to restore your data is correct; in any case, try pg_restore, which the official documentation describes as the tool to "restore a PostgreSQL database from an archive file created by pg_dump". That's the correct way to do it.
In my case I use pg_dumpall -U user > backup.sql, then cat backup.sql | psql -U user database.
I recommend you check the flags you're using with your pg_dump.
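For a tar-format archive created with pg_dump -F t, the restore step would use pg_restore instead of psql, e.g. (paths and names taken from the question):

```shell
pg_restore -U postgres -h localhost -W -d crewdb /media/chh1/1818-305D/crewdb_bak.tar
```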

Rancher with postgresql

I'm trying to run Rancher as a container using a PostgreSQL database instead of the Rancher database. The documentation (http://docs.rancher.com/rancher/installing-rancher/installing-server/) says you can use an external database, but it mentions only MySQL. I was wondering if it is possible to use another external database like PostgreSQL.
So, I tried starting the container with the command below, pointing it to the PostgreSQL database running on the same host as the container:
docker run -d --restart=always -p 8080:8080 -e CATTLE_DB_CATTLE_MYSQL_HOST=127.0.0.1 -e CATTLE_DB_CATTLE_MYSQL_PORT=5432 -e CATTLE_DB_CATTLE_MYSQL_NAME=db_name -e CATTLE_DB_CATTLE_USERNAME=db_user -e CATTLE_DB_CATTLE_PASSWORD=some_password rancher/server
The above results in the container starting up, but without using the PostgreSQL database I'm telling it to use. It uses the Rancher database instead.
I also tried the below, but with the same results:
docker run -d --restart=always -p 8080:8080 -e CATTLE_DB_CATTLE_HOST=127.0.0.1 -e CATTLE_DB_CATTLE_PORT=5432 -e CATTLE_DB_CATTLE_NAME=db_name -e CATTLE_DB_CATTLE_USERNAME=db_user -e CATTLE_DB_CATTLE_PASSWORD=some_password rancher/server
I'm thinking that either the arguments I've passed are wrong, or Rancher supports only MySQL as an external database.
Any ideas/suggestions ?
Thank you,
MySQL is the only supported database. (and H2, for integration tests)
All the database interaction goes through abstractions that are theoretically database-agnostic, but we don't package the driver for, or ever test against, Postgres. According to this old PR there are a bunch of small issues, such as the column type names, that would have to be addressed. To attempt to use it you would have to set CATTLE_DB_CATTLE_DATABASE=postgres; what you're doing now is trying to connect the MySQL (actually MariaDB) client to a Postgres port, and they have no idea how to talk to each other.
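For completeness, such an (unsupported) attempt would combine the variables from the second command in the question with the database selector mentioned above; this is a sketch, not a working or supported configuration:

```shell
docker run -d --restart=always -p 8080:8080 \
  -e CATTLE_DB_CATTLE_DATABASE=postgres \
  -e CATTLE_DB_CATTLE_HOST=127.0.0.1 \
  -e CATTLE_DB_CATTLE_PORT=5432 \
  -e CATTLE_DB_CATTLE_NAME=db_name \
  -e CATTLE_DB_CATTLE_USERNAME=db_user \
  -e CATTLE_DB_CATTLE_PASSWORD=some_password \
  rancher/server
```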

Installing PostgreSQL within a docker container

I've been following several different tutorials as well as the official one however whenever I try to install PostgreSQL within a container I get the following message afterwards
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
I've looked through several questions here on SO and throughout the internet but no luck.
The problem is that your application/project is trying to access the Postgres socket file on the HOST machine (not the Docker container).
To solve it, one would either have to explicitly ask for a TCP/IP connection, using the -p flag to set up a port for the Postgres container, or share the Unix socket with the HOST machine using the -v flag.
NOTE:
Using the -v or --volume= flag means you are sharing some space between the HOST machine and the Docker container. That means that if you have Postgres installed and running on your host machine, you will probably run into issues.
Below I demonstrate how to run a Postgres container that is accessible both via TCP/IP and via the Unix socket. I am also naming the container postgres.
docker run -p 5432:5432 -v /var/run/postgresql:/var/run/postgresql -d --name postgres postgres
There are other solutions, but I find this one the most suitable. Finally if the application/project that needs access is also a container, it is better to just link them.
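With the container above running, both connection paths can be checked from the host (assuming a local psql client is installed):

```shell
# TCP/IP, via the published port
psql -h localhost -p 5432 -U postgres
# Unix socket, via the shared /var/run/postgresql directory
psql -U postgres
```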
By default, psql tries to connect to the server using a Unix socket. That's why we see /var/run/postgresql/.s.PGSQL.5432, the location of the Unix-socket descriptor.
If you run postgresql-server in Docker with port binding, you have to tell psql to use a TCP socket. Just add the host param (--host or -h):
psql -h localhost [any other params]
UPD: Or share the Unix socket descriptor with the host (where psql will be started), as shown in the main answer. But I prefer the TCP socket as the more easily managed approach.
FROM postgres:9.6
RUN apt-get update && apt-get install -q -y postgresql-9.6 postgresql-client-9.6 postgresql-contrib-9.6 postgresql-client-common postgresql-common
RUN echo postgres:postgres | chpasswd
# A started server does not persist between RUN layers, so create/start the
# cluster and set the password in a single RUN step.
RUN pg_createcluster 9.6 main --start && \
    /etc/init.d/postgresql start && \
    su -c "psql -c \"ALTER USER postgres PASSWORD 'postgres';\"" postgres
Here are instructions for fixing that error that should also work for your docker container: PostgreSQL error 'Could not connect to server: No such file or directory'
If that doesn't work for any reason, there are many off-the-shelf PostgreSQL Docker containers you can look at for reference on the Docker Index: https://index.docker.io/search?q=postgresql
Many of the containers are built from trusted repos on github. So if you find one that seems like it meets your needs, you can review the source.
The Flynn project has also included a postgresql appliance that might be worth checking out: https://github.com/flynn/flynn-postgres
Run the command below to create a new container with PostgreSQL running in it, which can be accessed from other containers/applications.
docker run --name postgresql-container -p 5432:5432 -e POSTGRES_PASSWORD=somePassword -d postgres
Now, export the connection string or DB credentials from your .env and use it in the application.
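A hypothetical .env entry matching the container started above (the database name postgres is the image default):

```shell
# connection string for the postgresql-container above
export DATABASE_URL="postgres://postgres:somePassword@localhost:5432/postgres"
```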
Reference: detailed installation and running