PostgreSQL wrong ownership - ruby-on-rails

I'll preface this by saying I haven't used PostgreSQL much. I tried using it with RoR, but since Rails works through an ORM, I never got why PostgreSQL was the flavor of choice.
After fighting to get the damn thing installed on Ubuntu 14.04 (I need it for cloning a repo that depends on it), and about 30 minutes of trying a few things, I discovered:
$ /usr/lib/postgresql/9.4/bin/postgres -d 3 -D /var/lib/postgresql/9.4/main -c config_file=/etc/postgresql/9.4/main/postgresql.conf
LOG: skipping missing configuration file "/var/lib/postgresql/9.4/main/postgresql.auto.conf"
2015-02-14 21:05:01 PST [7665-2] FATAL: data directory "/var/lib/postgresql/9.4/main" has wrong ownership
2015-02-14 21:05:01 PST [7665-3] HINT: The server must be started by the user that owns the data directory.
2015-02-14 21:05:01 PST [7665-4] DEBUG: shmem_exit(1): 0 before_shmem_exit callbacks to make
2015-02-14 21:05:01 PST [7665-5] DEBUG: shmem_exit(1): 0 on_shmem_exit callbacks to make
2015-02-14 21:05:01 PST [7665-6] DEBUG: proc_exit(1): 0 callbacks to make
2015-02-14 21:05:01 PST [7665-7] DEBUG: exit(1)
One, I don't know what this auto.conf file is that it's looking for, since I'm already specifying the conf file.
However... (edited to what I think are the appropriate line[s])
$ sudo gedit /etc/postgresql/9.4/main/pg_hba.conf
local all postgres 127.0.0.1 peer
(I added the local IP after nothing else worked. Still doesn't work.)
And (/etc/postgresql/9.4/main/)
-rw-r--r-- 1 postgres postgres 315 Feb 14 20:20 environment
-rw-r--r-- 1 postgres postgres 143 Feb 14 20:20 pg_ctl.conf
-rw-r----- 1 postgres postgres 4641 Feb 14 20:55 pg_hba.conf
-rw-r----- 1 postgres postgres 4641 Feb 14 20:20 pg_hba.conf~
-rw-r----- 1 postgres postgres 1636 Feb 14 20:20 pg_ident.conf
-rw-r--r-- 1 postgres postgres 21461 Feb 14 20:20 postgresql.conf
-rw-r--r-- 1 postgres postgres 378 Feb 14 20:20 start.conf
Seems to me the configuration files are owned by postgres. What gives?
Update (9:30p)
Running the following command (as postgres) gives the same result.
$ su - postgres; /usr/lib/postgresql/9.4/bin/postgres -d 3 -D /var/lib/postgresql/9.4/main -c config_file=/etc/postgresql/9.4/main/postgresql.conf

Judging from the error message, ownership of the data directory is misconfigured. If so, fix it (as a privileged system user) with:
chown postgres:postgres /var/lib/postgresql/9.4
chown postgres:postgres /var/lib/postgresql/9.4/main
Use the "recursive" option -R if anything inside those directories is owned by different users.
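To make the fix concrete, here is a hedged sketch; the chown and su lines are commented out because they need root, and all paths are taken from the question. Note in passing that `su - postgres; command` (as in the update above) runs command only after su exits, i.e. as the original user again.

```shell
# 1) Check who owns the data directory; the server must be started by this user:
#      stat -c '%U:%G' /var/lib/postgresql/9.4/main    # expect postgres:postgres
# 2) If it is not postgres:postgres, fix it as root:
#      chown -R postgres:postgres /var/lib/postgresql/9.4
# 3) To truly run the server as postgres, use su -c (a trailing ";" will not do):
#      su - postgres -c '/usr/lib/postgresql/9.4/bin/postgres -d 3 \
#        -D /var/lib/postgresql/9.4/main \
#        -c config_file=/etc/postgresql/9.4/main/postgresql.conf'

# Demonstration of the ownership check itself, on a throwaway directory:
demo=$(mktemp -d)
stat -c '%U:%G' "$demo"    # prints owner:group of the directory
rmdir "$demo"
```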

Related

Permission denied error when starting Elasticsearch as Singularity container

I am trying to run a single-node Elasticsearch instance on an HPC cluster. To do this, I am converting the Elasticsearch Docker container into a Singularity container. When I launch the container I get the following error:
$ singularity exec --overlay overlay.img elastic.sif /usr/share/elasticsearch/bin/elasticsearch
Could not create auto-configuration directory
Exception in thread "main" java.lang.RuntimeException: starting java failed with [1]
output:
[0.000s][error][logging] Error opening log file 'logs/gc.log': Permission denied
[0.000s][error][logging] Initialization of output 'file=logs/gc.log' using options 'filecount=32,filesize=64m' failed.
error:
Invalid -Xlog option '-Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m', see error log for details.
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
at org.elasticsearch.server.cli.JvmOption.flagsFinal(JvmOption.java:113)
at org.elasticsearch.server.cli.JvmOption.findFinalOptions(JvmOption.java:80)
at org.elasticsearch.server.cli.MachineDependentHeap.determineHeapSettings(MachineDependentHeap.java:59)
at org.elasticsearch.server.cli.JvmOptionsParser.jvmOptions(JvmOptionsParser.java:132)
at org.elasticsearch.server.cli.JvmOptionsParser.determineJvmOptions(JvmOptionsParser.java:90)
at org.elasticsearch.server.cli.ServerProcess.createProcess(ServerProcess.java:211)
at org.elasticsearch.server.cli.ServerProcess.start(ServerProcess.java:106)
at org.elasticsearch.server.cli.ServerProcess.start(ServerProcess.java:89)
at org.elasticsearch.server.cli.ServerCli.startServer(ServerCli.java:213)
at org.elasticsearch.server.cli.ServerCli.execute(ServerCli.java:90)
at org.elasticsearch.common.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:54)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:85)
at org.elasticsearch.cli.Command.main(Command.java:50)
at org.elasticsearch.launcher.CliToolLauncher.main(CliToolLauncher.java:64)
If I understand correctly, Elasticsearch is trying to create a log file in /var/log/elasticsearch but does not have the correct permissions. So I created the following recipe, which creates the folders and sets the permissions so that any process can write into the log directory:
Bootstrap: docker
From: elasticsearch:8.3.1
%files
elasticsearch.yml /usr/share/elasticsearch/config/
%post
mkdir -p /var/log/elasticsearch
chown -R elasticsearch:elasticsearch /var/log/elasticsearch
chmod -R 777 /var/log/elasticsearch
mkdir -p /var/data/elasticsearch
chown -R elasticsearch:elasticsearch /var/data/elasticsearch
chmod -R 777 /var/data/elasticsearch
The elasticsearch.yml file has the following content:
cluster.name: "docker-cluster"
network.host: 0.0.0.0
discovery.type: single-node
ingest.geoip.downloader.enabled: false
After building this recipe the directory /var/log/elasticsearch seems to get created correctly:
$ singularity exec elastic.sif ls -alh /var/log/
total 569K
drwxr-xr-x 4 root root 162 Jul 8 14:43 .
drwxr-xr-x 12 root root 172 Jul 8 14:43 ..
-rw-r--r-- 1 root root 7.7K Jun 29 17:29 alternatives.log
drwxr-xr-x 2 root root 69 Jun 29 17:29 apt
-rw-r--r-- 1 root root 58K May 31 11:43 bootstrap.log
-rw-rw---- 1 root utmp 0 May 31 11:43 btmp
-rw-r--r-- 1 root root 187K Jun 29 17:30 dpkg.log
drwxrwxrwx 2 elasticsearch elasticsearch 3 Jul 8 14:43 elasticsearch
-rw-r--r-- 1 root root 32K Jun 29 17:30 faillog
-rw-rw-r-- 1 root utmp 286K Jun 29 17:30 lastlog
-rw-rw-r-- 1 root utmp 0 May 31 11:43 wtmp
But when I launch the container I get the permission denied error listed above.
What is missing here? What permissions is Elasticsearch expecting?
The following workaround seems to be working for me now:
When launching the Singularity container, the elasticsearch process is executed inside the container with the same UID as my own (the user launching the container with singularity exec). The Elasticsearch image, however, is configured to run elasticsearch as a separate elasticsearch user that exists inside the container. The issue is that Singularity (unlike Docker) runs every process inside the container with my own UID rather than the elasticsearch UID, resulting in the error above.
To work around this, I created a base Ubuntu Singularity image and then installed Elasticsearch into the container following these installation instructions (https://www.elastic.co/guide/en/elasticsearch/reference/current/targz.html). Because the installation was performed with my system user and UID, the entire Elasticsearch installation belongs to my system user rather than a separate elasticsearch user, and I can then launch the Elasticsearch service inside the container.
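A quick way to confirm the UID behaviour described above (the singularity line is illustrative and assumes the elastic.sif image from the question):

```shell
# Your UID on the host:
id -u
# The same check inside the container would print the SAME number:
#   singularity exec elastic.sif id -u
# Unlike Docker, Singularity does not switch to the image's elasticsearch user.
```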

After mounting a volume to the oracle 11g XE container from dockerhub it is not possible to connect

I can run and connect to the default Oracle setup like so:
docker run -d \
--name oracleXE \
-e ORACLE_ALLOW_REMOTE=true \
-e ORACLE_ENABLE_XDB=true \
-p 49161:1521 \
-p 49162:8080 \
oracleinanutshell/oracle-xe-11g
However, when I try to mount volumes to persist the data, I run into problems. I have tried mounting just /u01/app/oracle/oradata (as answered here: Persisting data in docker's volume for Oracle database), but then I get a "minus one from a read call" connection error:
IO Error: Got minus one from a read call, connect lapse 1 ms., Authentication lapse 0 ms. Got minus one from a read call
And when I mount all the volumes (as asked here: Is there a better way to run oracle database with docker in a development environment?), I get the famous listener error:
Listener refused the connection with the following error:
ORA-12528, TNS:listener: all appropriate instances are blocking new connections:
# Create a folder in a known location for you
mkdir -p .data/oragle11gXE/admin
mkdir -p .data/oragle11gXE/diag
mkdir -p .data/oragle11gXE/fast_recovery_area
mkdir -p .data/oragle11gXE/oradata
docker run -d \
--name oracleXE \
-e ORACLE_ALLOW_REMOTE=true \
-e ORACLE_ENABLE_XDB=true \
-v `pwd`/.data/oragle11gXE/admin:/u01/app/oracle/admin \
-v `pwd`/.data/oragle11gXE/diag:/u01/app/oracle/diag \
-v `pwd`/.data/oragle11gXE/fast_recovery_area:/u01/app/oracle/fast_recovery_area \
-v `pwd`/.data/oragle11gXE/oradata:/u01/app/oracle/oradata \
-p 49161:1521 \
-p 49162:8080 \
oracleinanutshell/oracle-xe-11g
How am I supposed to persist data?
I have even tried to copy the entire /u01/app directory over to my local machine and mount that as a volume, but this results in the -1 IO error as well.
EDIT 1:
I have tried to copy only the XE folder to my local host and mount it using -v ${pwd}/.data/oragle11gXE:/u01/app/oracle/oradata
Then I get the error message
[08006][1033] ORA-01033: ORACLE initialization or shutdown in progress
I can log in to the container, start SQL*Plus, and see that the database is mounted and active, but I cannot alter it open; the command just fails, pointing to a log file which is a binary.
SQL> select status, database_status from v$instance;
STATUS DATABASE_STATUS
------------ -----------------
MOUNTED ACTIVE
SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-00314: log 1 of thread 1, expected sequence# 3 doesn't match 1
ORA-00312: online log 1 thread 1:
'/u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_g6c5nhsl_.log'
When I mount the local path to an alternate /u01/app/oracle/oradata2, I cannot see any difference:
root@b615ff50b724:/u01/app/oracle# ls -l oradata/XE
total 1182052
-rw-r----- 1 oracle dba 9748480 Apr 20 17:29 control.dbf
-rw-r----- 1 oracle dba 671096832 Apr 20 17:23 sysaux.dbf
-rw-r----- 1 oracle dba 377495552 Apr 20 17:23 system.dbf
-rw-r----- 1 oracle dba 20979712 Apr 20 17:24 temp.dbf
-rw-r----- 1 oracle dba 26222592 Apr 20 17:23 undotbs1.dbf
-rw-r----- 1 oracle dba 104865792 Apr 20 17:23 users.dbf
root@b615ff50b724:/u01/app/oracle# ls -l oradata2/XE
total 1182040
-rw-r----- 1 oracle dba 9748480 Apr 20 17:28 control.dbf
-rw-r----- 1 oracle dba 671096832 Apr 20 17:23 sysaux.dbf
-rw-r----- 1 oracle dba 377495552 Apr 20 17:23 system.dbf
-rw-r----- 1 oracle dba 20979712 Apr 20 17:24 temp.dbf
-rw-r----- 1 oracle dba 26222592 Apr 20 17:23 undotbs1.dbf
-rw-r----- 1 oracle dba 104865792 Apr 20 17:23 users.dbf
The image is delivered with a set of initial DB files. When you overwrite them with the mount, nothing works and you have to start from scratch: connect as sysdba, define the tablespaces, and so on.
I saved those files by starting from scratch, then shutting down, then copying the oradata files into the external folder.
But then the database starts in a stopped state.
startup; (sys as sysdba/oracle) causes
ORA-03113: end-of-file on communication channel -- after the mount...
A second startup; gives:
ORA-24324: service handle not initialized
ORA-01041: internal error. hostdef extension doesn't exist
Ah, more info... working on it.
It's antique Oracle, don't forget. They didn't know about containers back then,
only about service contracts and licenses ;)
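The save-then-mount workflow described above could be sketched like this. This is untested against real Oracle XE; the container/image names and paths come from the question, and the SQL shutdown step is abbreviated. The ORA-00314/ORA-00312 errors earlier also suggest the online redo logs under fast_recovery_area must be saved and mounted together with oradata, or the log sequences will not match.

```shell
# Local folder that will hold the persisted datafiles:
mkdir -p "$PWD/.data/oragle11gXE/oradata"

# 1) First run WITHOUT the volume so the image initializes a fresh database:
#      docker run -d --name oracleXE -p 49161:1521 oracleinanutshell/oracle-xe-11g
# 2) Shut the database down cleanly inside the container:
#      docker exec -it oracleXE bash   # then: sqlplus / as sysdba; SHUTDOWN IMMEDIATE
# 3) Copy the datafiles out to the host:
#      docker cp oracleXE:/u01/app/oracle/oradata/. .data/oragle11gXE/oradata/
# 4) Remove the container and start a new one WITH the volume:
#      docker rm -f oracleXE
#      docker run -d --name oracleXE \
#        -v "$PWD/.data/oragle11gXE/oradata":/u01/app/oracle/oradata \
#        -p 49161:1521 -p 49162:8080 oracleinanutshell/oracle-xe-11g
```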

Docker - committed MySQL image not starting (data directory is unusable)

I used docker commit and now the container won't run properly. Should I:
specify a new data folder within the container, so it will get deleted when I delete the container
delete the host folder contents under /var/lib/mysql?
What I did was I started a Docker container:
docker run -p 33069:3306 --name some-mysql -e MYSQL_ROOT_PASSWORD=test -d mysql:8.0.26 mysqld --default-authentication-plugin=mysql_native_password
Configured some stuff like remote root login, inserted some data into it. Then I wanted to back it up and did:
docker commit -p 6b836bfdf062 backup-mysql8
Which went OK:
root@server:/home/user# docker images | grep mysql
backup-mysql8 latest 1effec593a03 45 minutes ago 514MB
Then I stopped and removed the old container. And tried to start a new one from the backup:
docker run -p 33069:3306 --name some-mysql -e MYSQL_ROOT_PASSWORD=test -d mysql:8.0.26 mysqld --default-authentication-plugin=mysql_native_password -d backup-mysql8
After a few seconds, it would just die.
root@server:/var/lib/mysql# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
13b17d3af8f7 mysql:8.0.26 "docker-entrypoint.s…" 21 minutes ago Exited (1) 21 minutes ago some-mysql
I looked at the logs:
docker logs 13b17d3af8f7
And found this:
2021-09-10T15:15:37.074480Z 0 [ERROR] [MY-013236] [Server] The designated data directory /var/lib/mysql/ is unusable. You can remove all files that the server added to it.
I used inspect and saw that this new container is using my host folder /var/lib/mysql; is that what this means?
docker inspect 13b17d3af8f7
The problem is that that folder on my host machine is already in use, and I don't think it was used by the previous container.
root@server:/var/lib/mysql# ls -l
total 110652
-rw-r----- 1 mysql mysql 56 feb 13 2020 auto.cnf
-rw------- 1 mysql mysql 1676 feb 13 2020 ca-key.pem
-rw-r--r-- 1 mysql mysql 1112 feb 13 2020 ca.pem
-rw-r--r-- 1 mysql mysql 1112 feb 13 2020 client-cert.pem
-rw------- 1 mysql mysql 1680 feb 13 2020 client-key.pem
-rw-r--r-- 1 mysql mysql 0 iul 28 06:01 debian-5.7.flag
-rw-r----- 1 mysql mysql 291 feb 13 2020 ib_buffer_pool
-rw-r----- 1 mysql mysql 12582912 feb 13 2020 ibdata1
-rw-r----- 1 mysql mysql 50331648 feb 13 2020 ib_logfile0
-rw-r----- 1 mysql mysql 50331648 feb 13 2020 ib_logfile1
drwxr-x--- 2 mysql mysql 4096 feb 13 2020 mysql
drwxr-x--- 2 mysql mysql 4096 feb 13 2020 performance_schema
-rw------- 1 mysql mysql 1680 feb 13 2020 private_key.pem
-rw-r--r-- 1 mysql mysql 452 feb 13 2020 public_key.pem
-rw-r--r-- 1 mysql mysql 1112 feb 13 2020 server-cert.pem
-rw------- 1 mysql mysql 1676 feb 13 2020 server-key.pem
drwxr-x--- 2 mysql mysql 12288 feb 13 2020 sys
What should I do, and how?
If you need the data stored persistently, you should map /var/lib/mysql to a host folder instead, e.g.:
docker run -p 33069:3306 --name some-mysql -v "$(pwd)/mydata":/var/lib/mysql -e MYSQL_ROOT_PASSWORD=test -d mysql:8.0.26 mysqld --default-authentication-plugin=mysql_native_password
(docker run needs an absolute host path for -v, hence $(pwd) rather than ./mydata.)
Update:
The docker inspect output just reflects the VOLUME ["/var/lib/mysql"] line in the image's Dockerfile; that creates an anonymous Docker volume, not a bind mount of your host's /var/lib/mysql.
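For completeness, a hedged sketch combining both points. Note that the failing docker run above still names mysql:8.0.26 as the image, so "-d backup-mysql8" was passed through to mysqld as options; the committed image name belongs in the image position. Also, docker commit does not capture volume contents, so a host bind mount is the reliable way to keep the data:

```shell
# Host folder for the data (docker -v needs an absolute path):
mkdir -p "$PWD/mydata"

# Illustrative run from the committed image (requires the backup-mysql8
# image from the question, hence commented out):
#   docker run -p 33069:3306 --name some-mysql \
#     -v "$PWD/mydata":/var/lib/mysql \
#     -e MYSQL_ROOT_PASSWORD=test \
#     -d backup-mysql8 \
#     mysqld --default-authentication-plugin=mysql_native_password
```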

could not connect to server: "/var/run/postgresql/.s.PGSQL.5432"?

I have a simple web page in Ruby on Rails with a PostgreSQL database, but when I run the server I get this error and I don't know what to do.
I use PostgreSQL because Heroku requires the application to use PostgreSQL.
I work on Ubuntu 13.10.
The error is:
PG::ConnectionBad
could not connect to server: No such file or directory Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
I need help. Thanks.
Just create a soft link like this; it works for me:
$ sudo ln -s /tmp/.s.PGSQL.5432 /var/run/postgresql/.s.PGSQL.5432
You can also force stop and then force start the server:
sudo service postgresql stop --force
sudo service postgresql start --force
I had this error due to an improperly configured database connection, specifically a syntax error in setting up environment variables, i.e.
MY_APP_DB= $MY_APP_DB_NAME (note the stray space), which is wrong, instead of MY_APP_DB=$MY_APP_DB_NAME
sudo systemctl start postgresql
First, you need to make sure the socket file is located at /var/run/postgresql/.s.PGSQL.5432. To check that (a socket is not a regular file, so list it rather than cat it):
$ ls -l /var/run/postgresql/.s.PGSQL.5432
If the file is listed, the problem is something else. But if the file is not there, you need to check the /tmp dir (especially for OSX Homebrew users):
$ cd /tmp
$ ls -lah
total 16
drwxrwxrwt 7 root wheel 224B Mar 11 08:03 .
drwxr-xr-x 6 root wheel 192B Jan 23 18:35 ..
-rw-r--r-- 1 root wheel 65B Nov 7 22:59 .BBE72B41371180178E084EEAF106AED4F350939DB95D3516864A1CC62E7AE82F
srwxrwxrwx 1 shiva wheel 0B Mar 11 08:03 .s.PGSQL.5432
-rw------- 1 shiva wheel 57B Mar 11 08:03 .s.PGSQL.5432.lock
drwx------ 3 shiva wheel 96B Mar 10 17:11 com.apple.launchd.C1tUB2MvF8
drwxr-xr-x 2 root wheel 64B Mar 10 17:10 powerlog
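The leading s in srwxrwxrwx above marks .s.PGSQL.5432 as a socket, which is why cat is not a useful check; a script can test for it directly with the shell's -S operator (paths from the question):

```shell
# -S is true only if the path exists and is a socket:
if [ -S /tmp/.s.PGSQL.5432 ]; then
    echo "socket present in /tmp"
else
    echo "no socket in /tmp"
fi
```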
Now, there are two ways you can solve the error
Solution One
You can change the application configuration to look for the socket at /tmp/.s.PGSQL.5432.
For Rails Users
# config/database.yml
default: &default
  adapter: postgresql
  pool: 5
  # port:
  timeout: 5000
  encoding: utf8
  # min_messages: warning
  socket: /tmp/.s.PGSQL.5432
Solution Two
You can create symlinks to the expected location
$ sudo mkdir /var/pgsql_socket
$ sudo ln -s /tmp/.s.PGSQL.5432 /var/pgsql_socket/
Then the error should go away.
Hope this helps.
Try this, it works for me:
sudo systemctl start postgresql
Review the postgresql.log file
Mine said FATAL: lock file "postmaster.pid" already exists
So I removed the postmaster.pid file and restarted the service, now it's working
λ /usr/local/var/postgres/ rm postmaster.pid
λ /usr/local/var/postgres/ brew services restart postgresql
What you have to do to solve the problem of .s.PGSQL.5432:
First, change the unix_socket_directories path in postgresql.conf from /tmp to another directory, then restart PostgreSQL so it picks up the change. If you want, you can put the original path back afterwards and restart PostgreSQL again.
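As a concrete sketch of that last suggestion: on PostgreSQL 9.3 and later, unix_socket_directories accepts a comma-separated list, so instead of moving the socket you can have the server create it in both places (the config path is an assumption; adjust to your install):

```ini
# /etc/postgresql/<version>/main/postgresql.conf (path is an assumption)
unix_socket_directories = '/var/run/postgresql, /tmp'
```

Then restart the server, and clients looking in /tmp (Homebrew-style) as well as those looking in /var/run/postgresql (Debian-style) will both find the socket.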

postgresql socket file vanishes when assets:precompile

UPDATE: Server crashed after two hours of troubleshooting, and on restart, all assets compiled fine. But if anyone sees this and understands it better than me, any comments would still be appreciated.
When running RAILS_ENV=production rake assets:precompile, I get the following:
rake aborted!
could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
So I check the directory, and the file doesn't exist. When I run service postgresql restart, the socket file appears in the expected directory. It looks like this:
/var/run/postgresql$ ll
total 8
drwxrwsr-x 2 postgres postgres 100 Sep 12 18:24 ./
drwxr-xr-x 23 root root 760 Sep 12 17:29 ../
-rw------- 1 postgres postgres 5 Sep 12 18:24 9.1-main.pid
srwxrwxrwx 1 postgres postgres 0 Sep 12 18:24 .s.PGSQL.5432=
-rw------- 1 postgres postgres 70 Sep 12 18:24 .s.PGSQL.5432.lock
But as soon as I run rake again, the rake fails and when I check the directory, the socket file has vanished.
Remove the .lock file, then restart the server; that may help you.
