WordPress error establishing a database connection - docker

I keep getting this error saying "Error establishing a database connection". WordPress is in a Docker container but MySQL is on the host machine; that could be the problem, but I'm not sure. I'm using a Pterodactyl web host egg.
I tried using this guide for the database part only: https://ubuntu.com/tutorials/install-and-configure-wordpress#5-configure-database but I still get this error. These are the DB commands I ran:
MariaDB [(none)]> CREATE DATABASE wordpress;
MariaDB [(none)]> CREATE USER wordpress@localhost IDENTIFIED BY 'password';
Query OK, 0 rows affected (0.007 sec)
MariaDB [(none)]> GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP,ALTER
-> ON wordpress.*
-> TO wordpress@localhost;
Query OK, 0 rows affected (0.002 sec)
MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.007 sec)
MariaDB [(none)]> quit
Bye
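Note that because WordPress runs inside a container, localhost from its point of view is the container itself, not the host, so a user created only for localhost cannot be reached by it. A minimal sketch of granting access over the default Docker bridge network instead — the 172.17.0.% network, the docker0 address 172.17.0.1 and the bind-address remark are assumptions about a default Docker/MariaDB setup, not something stated in the question:
# on the host: allow connections from containers on the default bridge network (assumed 172.17.0.0/16)
sudo mysql -e "CREATE USER 'wordpress'@'172.17.0.%' IDENTIFIED BY 'password';"
sudo mysql -e "GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP,ALTER ON wordpress.* TO 'wordpress'@'172.17.0.%'; FLUSH PRIVILEGES;"
# MariaDB must also listen on an address the container can reach (bind-address is often 127.0.0.1 by default),
# and wp-config.php's DB_HOST should then point at the docker0 bridge IP (often 172.17.0.1) rather than localhost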

Related

ERROR 1698 (28000): Access denied for user 'root'@'localhost'

The command:
mysql -u root -p
gives the error:
ERROR 1698 (28000): Access denied for user 'root'@'localhost'
But running it with sudo works:
sudo mysql -u root -p
Is it possible to get rid of the sudo requirement? It prevents me from opening the database in IntelliJ. I tried the following, as in the answer to this question: Connect to local MySQL server without sudo:
sudo chmod -R 755 /var/lib/mysql/
which did not help. The question above reports a different error.
Only the root user needs sudo to log in to mysql. I resolved this by creating a new user and granting access to the required databases:
CREATE USER 'newuser'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON database_name.* TO 'newuser'@'localhost';
Now newuser can log in without sudo:
mysql -u newuser -p
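To double-check that the new account took effect, a quick SHOW GRANTS run as that user (just a sanity check, not part of the original answer) should list the privileges granted above:
mysql -u newuser -p -e "SHOW GRANTS FOR CURRENT_USER();"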
You need to change the authentication plugin. The following worked for me:
mysql > ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY '';
mysql > FLUSH PRIVILEGES;
You can use the same root user, or a new user, and remove the need for sudo. The example below shows how to connect using root, without sudo.
Connect to MySQL using sudo
sudo mysql -u root
Delete the current Root User from the User Table
DROP USER 'root'@'localhost';
Create a new ROOT user (You can create a different user if needed)
CREATE USER 'root'@'%' IDENTIFIED BY '';
Grant permissions to new User (ROOT)
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
Flush privileges, so that the Grant tables get reloaded immediately. (Why do we need to flush privileges?)
FLUSH PRIVILEGES;
Now it's all good. Just in case, check whether the new root user was created.
SELECT User,Host FROM mysql.user;
+------------------+-----------+
| User             | Host      |
+------------------+-----------+
| root             | %         |
| debian-sys-maint | localhost |
| mysql.session    | localhost |
| mysql.sys        | localhost |
+------------------+-----------+
4 rows in set (0.00 sec)
Exit mysql (type exit or press CTRL + D). Connect to MySQL without sudo
mysql -u root
Hope this will help!
First, log in to your mysql with sudo.
Then use this code to change the "plugin" column value from "unix_socket" or "auth_socket" to "mysql_native_password" for the root user.
UPDATE mysql.user SET plugin = 'mysql_native_password' WHERE user = 'root' AND plugin IN ('unix_socket', 'auth_socket');
FLUSH PRIVILEGES;
Finally, restart the mysql service. That's it.
If you want more info, check this link.
UPDATE:
In new versions of MySQL or MariaDB you can use:
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password USING PASSWORD('your-password');
FLUSH PRIVILEGES;
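To see which authentication plugin the root account is currently using (a quick check that works on both MySQL and MariaDB, not part of the original answer):
sudo mysql -u root -e "SELECT user, host, plugin FROM mysql.user WHERE user = 'root';"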
I solved this problem using the following commands.
CREATE USER 'username'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON *.* TO 'username'@'localhost';
FLUSH PRIVILEGES;
Here,
username = any user name you like.
and password = any password you like.
You can use the below query:
GRANT ALL PRIVILEGES ON *.* TO 'username'@'localhost' IDENTIFIED BY 'password';
This query is enough.
This answer needs to be slightly adapted for MariaDB instead of MySQL.
First login as root using sudo:
$ sudo mysql -uroot
Then alter the mariadb root user:
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password USING PASSWORD('mypassword');
FLUSH PRIVILEGES;
From now on, sudo is no longer needed:
$ mysql -uroot -p
Version used:
mysql Ver 15.1 Distrib 10.4.13-MariaDB, for osx10.15 (x86_64) using readline 5.1
Log in to mysql with sudo:
sudo mysql -u root -p
After that, delete the current root@localhost account:
~ MariaDB [(none)]> DROP USER 'root'@'localhost';
~ MariaDB [(none)]> CREATE USER 'root'@'localhost' IDENTIFIED BY 'password';
~ MariaDB [(none)]> GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION;
~ MariaDB [(none)]> FLUSH PRIVILEGES;
In the comments of the question you referenced, it reads:
Ok, just try to analyze all of the directories down in the path of the
socket file, they need to have o+rx and the sock file too (it's not a
good idea to make it modifiable by others).
You can also try to remove mysql.sock and then restart mysqld, the
file should be created by the daemon with proper privileges.
This seemed to work for that question (the one you said you looked at), so it may work for you as well.
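As a concrete illustration of that comment, checking the permissions along the socket path could look like this — the /var/run/mysqld location is the Debian/Ubuntu default and an assumption here:
# list the permissions of every directory in the socket path and of the socket itself
ls -ld /var /var/run /var/run/mysqld /var/run/mysqld/mysqld.sock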
The error message:
"ERROR 1698 (28000): Access denied for user 'root'@'localhost'"
means that the server does not allow a connection for this user, not that mysql can't access the socket.
Try this to solve the problem:
Log in to your DB
sudo mysql -u root -p
Then make these modifications:
MariaDB []>use mysql;
MariaDB [mysql]>update user set plugin='' where User='root';
MariaDB [mysql]>flush privileges;
MariaDB [mysql]>exit
Try to log in again without sudo.

Running a Chainlink Node - Can't connect to database

Using docker-desktop on macOS.
I'm trying to run a node following the instructions on this page.
The database name is node, which is the same as the username: node. The user has access to the database and can log in using psql client.
Connection strings I've tried in the .env file:
postgresql://node@localhost/node
postgresql://node:password@localhost/node
postgresql://node:password@localhost:5432/node
postgresql://node:password@127.0.0.1:5432/node
postgresql://node:password@127.0.0.1/node
When I run the start command: cd ~/.chainlink-kovan && docker run -p 6688:6688 -v ~/.chainlink-kovan:/chainlink -it --env-file=.env smartcontract/chainlink local n , using docker-desktop on macOS, I get the following stack trace:
2020-09-15T14:24:41Z [INFO] Starting Chainlink Node 0.8.15 at commit a904730bd62c7174b80a2c4ccf885de3e78e3971 cmd/local_client.go:50
2020-09-15T14:24:41Z [INFO] SGX enclave *NOT* loaded cmd/enclave.go:11
2020-09-15T14:24:41Z [INFO] This version of chainlink was not built with support for SGX tasks cmd/enclave.go:12
2020-09-15T14:24:41Z [INFO] Locking postgres for exclusive access with 500ms timeout orm/orm.go:69
2020-09-15T14:24:41Z [ERROR] unable to lock ORM: dial tcp 127.0.0.1:5432: connect: connection refused logger/default.go:139 stacktrace=github.com/smartcontractkit/chainlink/core/logger.Error
/chainlink/core/logger/default.go:117
...
Does anyone know how I can resolve this?
The problem is probably caused by the fact that your chainlink database has been locked with an exclusive lock, and that lock was never removed before the node stopped.
What you can do in this situation (this is what worked for me) is use the pgAdmin UI or a similar tool to find all locks, then find the exclusive lock held on the chainlink database and note down its process id or ids (if there are multiple exclusive locks on the chainlink DB).
Log in with your pg client and run SELECT pg_terminate_backend(<pid>) or SELECT pg_cancel_backend(<pid>); enter the PID of those locks without quotes, and meanwhile keep refreshing pgAdmin to see whether those processes have stopped. Once they have stopped, rerun your Chainlink node.
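If you prefer the command line to pgAdmin, a query along these lines lists the sessions holding locks on the database (the database name node is taken from the question; adjust it to your setup):
psql -U node -d node -c "SELECT l.pid, l.mode, a.query FROM pg_locks l JOIN pg_stat_activity a ON a.pid = l.pid WHERE a.datname = 'node';"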
The problem is with docker networking.
Add --network host to the docker run command (before the image name) so that it is:
cd ~/.chainlink-kovan && docker run --network host -p 6688:6688 -v ~/.chainlink-kovan:/chainlink -it --env-file=.env smartcontract/chainlink local n
This fixes the issue.
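Note that on Docker Desktop for macOS, --network host applies to the Linux VM rather than to the Mac itself, so an alternative that often works there (an addition to the original answer, not part of it) is keeping the run command unchanged and pointing the connection string at the hostname Docker Desktop provides for the host:
postgresql://node:password@host.docker.internal:5432/node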

How to connect some graphical sql tool to database via ssh AND docker

I would like to connect my GUI database tool to my SQL Database via ssh and docker.
Currently I can connect to the database via the terminal using ssh user@host and some docker-compose exec mydb ... command. But then, it's of course command-line access to the db.
My needs / my question
Is there a way to have my GUI database tool connect to that DB that way?
To be more explicit, I would like to connect to that DB without any server change of any kind (a really important point), so with only local configuration changes. Maybe we can use the same path I already use by hand?
What I tried
I already tried some ssh configuration like ProxyCommand in my ssh config file, but those commands are executed on my computer... so I found no way to succeed with this.
I also searched many times for anyone with the same need, without success.
Does somebody have a good idea?
There are some things to consider here.
Make sure the port mapping is valid in the docker-compose.yaml file; it should be something like:
services:
  ...
  database:
    ...
    ports:
      - "3306:3306" # 3306 is the default Non-SSL port for MySQL database images
Make sure the machine you are running the database container on has no firewall rule blocking incoming traffic.
Make sure that in the database, the user you are trying to connect with has a whitelisted host.
For example, if you want to be able to connect as root from anywhere, you need to check the user table in the mysql database, and it should look something like:
mysql> select User, Host from user;
+------------------+-----------+
| User             | Host      |
+------------------+-----------+
| root             | %         |
| mysql.infoschema | localhost |
| mysql.session    | localhost |
| mysql.sys        | localhost |
| root             | localhost |
+------------------+-----------+
5 rows in set (0.00 sec)
mysql>
In this case, the root user can connect to this particular DB from anywhere, because of the wildcard (%) host.
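To address the "via ssh" part of the question: once the container's port is published on the host as above, a local SSH tunnel is usually all a GUI client needs, with no further server-side change. A sketch, assuming the MySQL port from the compose snippet and the same user@host login mentioned in the question:
# forward local port 3306 to the MySQL port published on the remote host
ssh -L 3306:127.0.0.1:3306 user@host
# then point the GUI tool at localhost:3306 on your own machine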

Db2 docker container exited with error code 0

I am trying to run a DB2 docker container using docker-compose, but the problem is that the container exits every time with code 0. Below is the docker-compose file:
version: "2"
services:
db:
container_name: db2
image: ibmcom/db2express-c:latest
environment:
DB2INST1_PASSWORD: samplepassword!
LICENSE: accept
ports:
- "50000:50000"
tty: true
volumes:
- "./db_create.sh:/opt/db_create.sh"
command:
- "/opt/db_create.sh"
I added tty: true after seeing some solutions suggested on Stack Overflow, but it's not working for me. Below is the docker log:
Starting db2 ... done
Attaching to db2
db2 | Changing password for user db2inst1.
db2 | New password: Retype new password: passwd: all authentication tokens updated successfully.
db2 | libnuma: Warning: /sys not mounted or invalid. Assuming one node: No such file or directory
db2 | SQL1063N DB2START processing was successful.
db2 |
db2 | Creating database "SLIM"...
db2 | Existing "SLIM" database found...
db2 | Dropping and recreating database "SLIM"...
db2 | Connecting to database "SLIM"...
db2 | Creating tables and data in schema "DB2INST1"...
db2 | Creating tables with XML columns and XML data in schema "DB2INST1"...
db2 |
db2 | 'db2sampl' processing complete.
db2 |
db2 exited with code 0
Not sure why the container is stopping, even though the log doesn't show anything. Does anyone know how to keep the container up and running?
Thanks
A container may exit with code 0 when it has finished all its processing. So if your /opt/db_create.sh script runs its commands and then doesn't keep a foreground process running (not one in the background as a daemon), the container will exit.
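A minimal sketch of that idea, assuming /opt/db_create.sh currently just runs its setup commands and returns (the tail trick is one common way to hold the container open, not the only one):
#!/bin/bash
# ... existing setup commands from db_create.sh ...
# keep a foreground process running so the container does not exit with code 0 when setup finishes
exec tail -f /dev/null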

Inheriting from postgres docker container - doesn't keep the daemon alive?

I'm writing a Dockerfile that pulls in an external dump, then loads it - https://github.com/scala-eveapi/postgres-sde/commit/95d2ed70dff8326c9acc75c56c9a7b8c8f6bbc73 - docker build works fine. When running it, it restored the DB, but after running the .sql, it just exits, instead of keeping the postgres server alive.
The file:
FROM postgres:latest
ADD https://www.fuzzwork.co.uk/dump/latest/postgres-20161114-TRANQUILITY.dmp.bz2 sde.bz2
# ADD postgres-20161114-TRANQUILITY.dmp.bz2 sde.bz2
RUN bunzip2 sde.bz2
COPY load-sde.sh /docker-entrypoint-initdb.d/01-load-sde.sh
COPY add-constraints.sql /docker-entrypoint-initdb.d/02-add-constraints.sql
The other two files are:
#!/bin/bash
set -e
pg_restore -d "${POSTGRES_DB:-$POSTGRES_USER}" -U "$POSTGRES_USER" sde
And the SQL:
alter table "mapSolarSystems"
alter column "solarSystemName" set not null;
alter table "invTypes"
alter column "typeName" set not null;
alter table "staStations"
alter column "stationName" set not null;
alter table "staStations"
alter column "solarSystemID" set not null;
Logs:
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ...
WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
ok
Success. You can now start the database server using:
pg_ctl -D /var/lib/postgresql/data -l logfile start
****************************************************
WARNING: No password has been set for the database.
This will allow anyone with access to the
Postgres port to access your database. In
Docker's default configuration, this is
effectively any other container on the same
system.
Use "-e POSTGRES_PASSWORD=password" to set
it in "docker run".
****************************************************
waiting for server to start....LOG: database system was shut down at 2016-12-06 12:05:18 UTC
LOG: MultiXact member wraparound protections are now enabled
LOG: database system is ready to accept connections
LOG: autovacuum launcher started
done
server started
ALTER ROLE
/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/01-load-sde.sh
ERROR: role "yaml" does not exist
STATEMENT: ALTER TABLE "agtAgentTypes" OWNER TO yaml;
[...] pg_restore errors
WARNING: errors ignored on restore: 89
Apparently the errors caused it to stop. I added the yaml role and now it works properly.
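Because the init script runs with set -e, a pg_restore that reports errors can abort the image's entrypoint and stop the container. A sketch of load-sde.sh with that fix folded in (the role name yaml comes from the error messages above; treat this as an illustration, not the exact script used):
#!/bin/bash
set -e
# create the role the dump's OWNER TO statements expect, so pg_restore no longer reports errors
psql -v ON_ERROR_STOP=1 -U "$POSTGRES_USER" -d "${POSTGRES_DB:-$POSTGRES_USER}" -c 'CREATE ROLE yaml;'
pg_restore -d "${POSTGRES_DB:-$POSTGRES_USER}" -U "$POSTGRES_USER" sde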
Look at the last three lines on the initial Dockerfile (the one you're inheriting from: https://github.com/docker-library/postgres/blob/edd455e5b1dbfddc280beb244228054374f2f3dd/9.6/Dockerfile):
ENTRYPOINT ["/docker-entrypoint.sh"]
EXPOSE 5432
CMD ["postgres"]
You're extending that Dockerfile, but you're not setting the command to run... So yea, the container stops and it'll be marked as exited.
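A quick way to confirm what entrypoint and command the built image actually ends up with (the image tag postgres-sde below is a made-up placeholder for whatever you tagged your build as):
docker inspect --format '{{json .Config.Entrypoint}} {{json .Config.Cmd}}' postgres-sde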

Resources