OrientDB " Error on moving existent database" - docker

I'm trying to set up an OrientDB distributed configuration with Docker, but I'm getting an error when starting the second node:
2015-10-09 17:14:14:066 WARNI [node1444321499719]->[[node1444321392311]] requesting deploy of database 'testDB' on local server... [OHazelcastPlugin]
2015-10-09 17:14:14:117 INFO [node1444321499719]<-[node1444321392311] received updated status node1444321499719.testDB=SYNCHRONIZING [OHazelcastPlugin]
2015-10-09 17:14:14:119 INFO [node1444321499719]<-[node1444321392311] received updated status node1444321392311.testDB=SYNCHRONIZING [OHazelcastPlugin]
2015-10-09 17:14:15:935 WARNI [node1444321499719] moving existent database 'testDB' located in '/orientdb/databases/testDB' to '/orientdb/databases/../backup/databases/testDB' and get a fresh copy from a remote node... [OHazelcastPlugin]
2015-10-09 17:14:15:936 SEVER [node1444321499719] error on moving existent database 'testDB' located in '/orientdb/databases/testDB' to '/orientdb/databases/../backup/databases/testDB'. Try to move the database directory manually and retry [OHazelcastPlugin]
[node1444321499719] Error on starting distributed plugin
com.orientechnologies.orient.server.distributed.ODistributedException: Error on moving existent database 'testDB' located in '/orientdb/databases/testDB' to '/orientdb/databases/../backup/databases/testDB'. Try to move the database directory manually and retry
at com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin.backupCurrentDatabase(OHazelcastPlugin.java:1007)
at com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin.requestDatabase(OHazelcastPlugin.java:954)
at com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin.installDatabase(OHazelcastPlugin.java:893)
at com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin.installNewDatabases(OHazelcastPlugin.java:1426)
at com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin.startup(OHazelcastPlugin.java:184)
at com.orientechnologies.orient.server.OServer.registerPlugins(OServer.java:979)
at com.orientechnologies.orient.server.OServer.activate(OServer.java:346)
at com.orientechnologies.orient.server.OServerMain.main(OServerMain.java:41)
I don't get this error when I start the OrientDB cluster without Docker.
I can also move the directory manually inside the container:
[root@64f6cc1eba61 orientdb]# mv -v /orientdb/databases/testDB /orientdb/databases/../backup/databases/testDB
'/orientdb/databases/testDB' -> '/orientdb/databases/../backup/databases/testDB'
'/orientdb/databases/testDB/distributed-config.json' -> '/orientdb/databases/../backup/databases/testDB/distributed-config.json'
removed '/orientdb/databases/testDB/distributed-config.json'
removed directory: '/orientdb/databases/testDB'
[root@64f6cc1eba61 orientdb]# ls -l /orientdb/databases/../backup/databases/testDB
total 4
-rw-r--r--. 1 root root 455 Oct 9 11:32 distributed-config.json
I'm using OrientDB version 2.1.3

This was reported and fixed:
https://github.com/orientechnologies/orientdb/issues/4891
Set the 'distributed.backupDirectory' variable to a specific directory and the issue should be gone.
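For example, a minimal sketch of that setting, assuming global settings can be overridden in the <properties> section of orientdb-server-config.xml (the /orientdb/backup path is just an illustration, point it wherever you like):
<properties>
    <!-- keep distributed backups out of the databases tree (illustrative path) -->
    <entry name="distributed.backupDirectory" value="/orientdb/backup"/>
</properties>
Make sure that directory exists (or is mounted) in the container so the plugin can write to it.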
By the way, in our experience running OrientDB distributed in Docker is currently a no-go:
- Docker does not support multicast yet. You can work around it (see the sketch after this list), but it's painful. The bigger problem:
- Docker doesn't reuse IP addresses on restart, so a restarted container gets a new IP address, and that messes up your cluster badly.
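If you do want to try the multicast workaround, the usual approach is to disable multicast in Hazelcast and list the members statically. A rough sketch, assuming you edit the config/hazelcast.xml bundled with OrientDB; the container IPs and the 2434 port below are purely illustrative:
<network>
  <port auto-increment="true">2434</port>
  <join>
    <!-- turn multicast off and enumerate the cluster members explicitly -->
    <multicast enabled="false"/>
    <tcp-ip enabled="true">
      <member>172.17.0.2:2434</member>
      <member>172.17.0.3:2434</member>
    </tcp-ip>
  </join>
</network>
This is exactly where the second point bites, though: hard-coded member addresses go stale as soon as a container restarts with a new IP.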
We abandoned OrientDB distributed with Docker until Docker is fixed on both issues (I believe both are on the roadmap).
If you experience otherwise, I'm happy to hear your thoughts.

Related

How to get postgres to start on Big Sur?

I'm attempting to launch a Rails server on Big Sur (M1 chip) and Postgres is giving the following error:
ActiveRecord::ConnectionNotEstablished (could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
):
I've seen and tried several fixes but none have worked, including the following:
Reinstalling postgres via homebrew.
Reinstalling the pg gem.
brew services restart.
Trying to delete a postmaster.pid file (none exists). The path "/usr/local/var/postgres/postmaster.pid" does not exist on my machine.
My postgres.log file contains the following line repeating:
could not open directory "pg_notify": No such file or directory LOG: database system is shut down
While Genetic's answer works, a quicker solution would be to delete the partially created database (assuming you have just installed postgres and there's no data to be lost) and then run initdb as listed in brew info postgresql to recreate the database:
brew services stop postgresql
rm -rf "$(brew --prefix)/var/postgres"
initdb --locale=C -E UTF-8 "$(brew --prefix)/var/postgres"
brew services start postgresql
The original error on the console didn't change until I entered the following command:
brew services restart -vvv postgresql
After doing this, the errors updated. It then displayed the other directories and sub-directories that were missing. Once I added everything, all was fine.
The solution by anonymus_rex in the comments worked for me. To elaborate a bit, here are the exact steps I needed to take in case they help anyone; I was stuck on this for way too long.
I tried almost all of the answers in this question and this other one, and this is what finally worked for me to get Postgres to start.
Tail the logs for Postgres.
The path needs to be updated depending on where Postgres is installed and which version you have. I am using postgresql@14 on an M1 running Monterey and installed it with Homebrew.
I finally found the path I needed to look at using this article.
tail /opt/homebrew/var/log/postgresql@14.log
The output shows this:
2023-02-03 15:33:49.294 CST [82651] FATAL: could not open directory "pg_notify": No such file or directory
2023-02-03 15:33:49.294 CST [82651] LOG: database system is shut down
Go to the / directory and cd opt/homebrew/var/postgresql@14.
Create the missing directory (it may be a different directory for you):
mkdir pg_notify
Repeat this process for all missing directories.
I needed to mkdir pg_tblspc, pg_replslot, pg_twophase, pg_stat_tmp, pg_logical/snapshots, pg_logical/mappings, pg_commit_ts, and pg_snapshots as well, but I recommend you run the tail command each time to make sure you are not missing different directories and files than I was.
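If you'd rather create them all in one go, here is a small sketch, assuming the same postgresql@14 data directory as above (the list is the set of directories I was missing; yours may differ):
# recreate the subdirectories initdb normally provides (adjust the path and list for your install)
cd /opt/homebrew/var/postgresql@14
mkdir -p pg_notify pg_tblspc pg_replslot pg_twophase pg_stat_tmp pg_commit_ts pg_snapshots pg_logical/snapshots pg_logical/mappings
Then tail the log again to confirm nothing else is missing before restarting the service.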
Finally, after running the tail command again once each missing directory was created, I got this output:
2023-02-03 15:49:18.909 CST [85772] LOG: redo done at 0/17211D8 system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
2023-02-03 15:49:18.914 CST [85771] LOG: database system is ready to accept connections
I was then able to create and migrate my db in my project ・ᴗ・

fail2ban won't start using nextcloud.log with jail

I have Nextcloud installed and working fine in Docker but want fail2ban to monitor the log files for brute-force attempts. I know Nextcloud has its own protection baked in, but it only throttles login attempts and I would like to ban them outright (I have this problem with other containers as well). The docker-compose file is set to create the nextcloud.log file at /mnt/nextcloud/log/nextcloud.log. I followed this guide to create the jail:
https://www.c-rieger.de/nextcloud-installation-guide-ubuntu/#c06
fail2ban is running on the host machine; however, it fails to start with:
[447]: ERROR Failed during configuration: Have not found any log file for nextcloud jail
[447]: ERROR Async configuration of server failed
Thinking it was simply a permission issue, I chowned everything to root and tried to start it again, but the service still won't start. What am I doing wrong?
Thanks for the help!
The docker-compose is set to create the nextcloud.log file to /mnt/nextcloud/log/nextcloud.log
Be sure this file really exists and your jail.local has the correct logpath entry:
[nextcloud]
...
logpath = /mnt/nextcloud/log/nextcloud.log
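For reference, a fuller jail entry might look like the following. This is only a sketch: it assumes you created a matching filter at /etc/fail2ban/filter.d/nextcloud.conf (as in the linked guide), and the port, maxretry and bantime values are illustrative:
[nextcloud]
enabled  = true
port     = http,https
filter   = nextcloud
logpath  = /mnt/nextcloud/log/nextcloud.log
maxretry = 3
bantime  = 86400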
You can also check the resulting config using a dump:
fail2ban-client -d | grep 'nextcloud.*logpath'
But I'm still not sure the error message you provided was thrown by fail2ban, because its error messages look different; see https://github.com/fail2ban/fail2ban/commit/27947407bc7910f0f50972113218ebc73c4a22c7
It should be something like:
-have not found a log file for nextcloud log
+Have not found any log file for nextcloud jail

Rocker/Shiny not seeing apps after reboot

What could be the reason for rocker/shiny not recognizing Shiny apps on the host OS?
I'm using rocker/shiny to experiment with Shiny apps on Windows.
This is how I start the image:
docker run -d -p 80:3838 -v D:/Projects/DockedShiny/apps/:/srv/shiny-server/ -v D:/Projects/DockedShiny/logs:/var/log/shiny-server/ rocker/shiny
I can see shiny was started:
C:\Users\Honza>docker logs gallant_thompson
*** warning - no files are being watched ***
[2019-02-08T06:49:04.860] [INFO] shiny-server - Shiny Server v1.5.9.1 (Node.js v8.11.3)
[2019-02-08T06:49:04.887] [INFO] shiny-server - Using config file "/etc/shiny-server/shiny-server.conf"
[2019-02-08T06:49:04.927] [WARN] shiny-server - Running as root unnecessarily is a security risk! You could be running more securely as non-root.
[2019-02-08T06:49:04.931] [INFO] shiny-server - Starting listener on http://[::]:3838
Edit: Does anybody have any hints on what *** warning - no files are being watched *** means exactly? I suspect this might be a clue.
By inspecting the configuration I can see it points to the files and folders I specified during container startup:
root@778e307632ab:/etc/shiny-server# more shiny-server.conf
# Instruct Shiny Server to run applications as the user "shiny"
run_as shiny;
# Define a server that listens on port 3838
server {
  listen 3838;
  # Define a location at the base URL
  location / {
    # Host the directory of Shiny Apps stored in this directory
    site_dir /srv/shiny-server;
    # Log all Shiny output to files in this directory
    log_dir /var/log/shiny-server;
    # When a user visits the base URL rather than a particular application,
    # an index of the applications available in this directory will be shown.
    directory_index on;
  }
}
Despite this, the Shiny web directory listing is empty, and when specifying an app by name (e.g. http://localhost/myapp) I get 'page not found'. There are no new logs in the Shiny log location.
The exact same scenario worked just fine for months. I recently needed to reboot my host machine and now I cannot make Shiny recognize my apps.
The server is complaining about running as root; I'd start by adding --user shiny. Also, you're not pinning a particular version but using latest, and I can see the rocker/shiny image was updated a few days ago, so it could be broken or behave differently than the previous version. I'd pin to 3.5.1 or whatever version you prefer, like rocker/shiny:3.5.1.
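Putting both suggestions together, the run command would look roughly like this (a sketch: rocker/shiny:3.5.1 is just one pinned tag, and the volume mounts are copied from your question):
docker run -d -p 80:3838 --user shiny -v D:/Projects/DockedShiny/apps/:/srv/shiny-server/ -v D:/Projects/DockedShiny/logs:/var/log/shiny-server/ rocker/shiny:3.5.1
If the mounted log directory was created by root on a previous run, the shiny user may not be able to write to it, so check its permissions as well.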
Long story short: the issue was a recently changed username on the affected machine. Docker was caching the old one, and it needed to be explicitly reset in Docker Settings -> Shared Drives.
More details: https://github.com/rocker-org/shiny/issues/59

Unable to start any container when Volumes are enabled Docker Toolbox

I am running Docker Toolbox v1.13.1a on Windows 7 Pro Service Pack 1 x64, with VirtualBox version 5.1.14 r112924.
When I try to run any Docker image (e.g. the official postgres image from Docker Hub) with volumes disabled, it works fine! But when I enable volumes, it fails.
I tried everything in the official documentation.
The VM has the shared folder as required and has full access to it (see the shared folder screenshot).
In the case of my postgresql example, it crashes with the following log:
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... LOG: could not link file "pg_xlog/xlogtemp.27" to "pg_xlog/000000010000000000000001": Operation not permitted
FATAL: could not open file "pg_xlog/000000010000000000000001": No such file or directory
child process exited with exit code 1
initdb: removing contents of data directory "/var/lib/postgresql/data"
I know it's a problem with folder permissions, but I'm kind of stuck!
Thanks a ton in advance.
I've been busy with this problem all day, and my conclusion is that it's currently simply not possible to run postgresql inside a Docker container while keeping your data persistent in a separate volume.
I even tried running the container without linking to a volume, copying the data that was originally in /var/lib/postgresql into a folder on my host OS (Windows 10 Home), and then copying that into the folder that then got linked to the container itself.
Alas, I got the following error:
FATAL: data directory "/var/lib/postgresql/data/pgadmin" has wrong ownership
HINT: The server must be started by the user that owns the data directory.
In conclusion: something is going wrong with the ownership of the data directory, and to be able to fix it you'd need a Unix command line on Windows that can talk to Docker (something currently not possible with Bash on Ubuntu on Windows, which runs Ubuntu 16.04 binaries).
Maybe, in the future, you'll be able to run the needed commands (found here, under Arbitrary --user Notes), but these are *nix commands and PowerShell (started by Kitematic) can't run them. Bash on Ubuntu on Windows could run them, but that shell has no connection to the Docker daemon/service on Windows...
TL;DR: Lost a day of work: It is currently impossible on Windows.
I have been trying to fix this issue as well.
At first I thought it was a symlink problem (because the first error fails with "could not link ... operation not permitted").
To be sure symlinks are permitted you have to:
share a folder in VirtualBox
run VirtualBox as administrator (if your account is in the administrator group): right-click virtualbox.exe and select Run as Administrator
if your account is not an administrator, add the symlink privilege with secpol.msc > "Local Policies > User Rights Assignment" and add your user to "Create symbolic links"
enable symlinks for your shared folder in VirtualBox:
VBoxManage setextradata VM_NAME VBoxInternal2/SharedFoldersEnableSymlinksCreate/SHARED_FOLDER_NAME 1
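With Docker Toolbox the VM created by docker-machine is usually named "default" and the Users share is "c/Users", so assuming those defaults the call would be:
VBoxManage setextradata default VBoxInternal2/SharedFoldersEnableSymlinksCreate/c/Users 1
Restart the VM afterwards (docker-machine restart default) so the setting takes effect.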
Alternatively you can also use the C:\Users\username folder, which is shared and symlink-enabled by default by the Docker Toolbox installation.
Now I can create symlinks in the shared folder from the Docker container, but I still get the same error: "could not link ... operation not permitted".
So the reason must be somewhere else, in the file permissions as you said, but I do not see why.

Could not connect to a primary node for replica set <Moped::Cluster nodes=[<Moped::Node resolved_address="127.0.0.1:27017">]>

I'm following along with the RailsApps tutorial with Devise and Mongoid (http://railsapps.github.io/tutorial-rails-mongoid-devise.html) and am encountering the following error when I get to 'rake db:seed' in the 'Set Up a Database Seed File' section.
Could not connect to a primary node for replica set <Moped::Cluster nodes=[<Moped::Node resolved_address="127.0.0.1:27017">]>
I've tried the instructions from nixoncd on this page, but it has not fixed the issue. It tells me 'file exists' and 'Already loaded': https://groups.google.com/forum/#!topic/mongodb-user/Hhh8iNCciMk
I get this if I type 'mongod' in the terminal:
ERROR: could not read from config file
Any help welcome. I'm on Mac OS X Mountain Lion with Mongoid installed using Homebrew, though MongoDB was installed using the download package from mongodb.org.
MongoDB shell version: 2.4.6
Thanks
EDIT: I'm not sure if this issue is related or not, but I'm also having issues launching MongoDB. I also posted the issue here:
mongoDB, could not read from config file -- config in different folder / Uninstall it?
First, see if your database is running by running mongo. If yes, use these commands:
sudo rm /var/lib/mongodb/mongod.lock
mongod --repair
sudo service mongodb start
Your database will work.
Installing MongoDB solved this for me:
sudo apt-get install mongodb-server
The answers above will work for you in the majority of the cases where this error occurs.
However, I would like to note that you can also get the Could not connect to a primary node for replica set error when trying to write exceptionally large batches of records to MongoDB in one request. I have encountered this error when writing more than 200,000 1 KB documents to a remote MongoDB server in one request. The remote server had 8 GB of memory and would handle several requests at once. This error can be avoided by cutting down the batch size of your requests.
