RMySQL not working with a cnf file - connection

I am trying to connect to a MySQL server through R and it works perfectly with the following line:
con <- dbConnect(MySQL(), user="user", password="password",dbname="dbname", host="localhost", port=3306)
But I would like to use a cnf file so that my user/password credentials do not appear in my code, so I tried the following:
rmysql.settingsfile<-"mydefault.cnf"
rmysql.db<-"test_db"
drv<-dbDriver("MySQL")
con<-dbConnect(drv,default.file=rmysql.settingsfile,group=rmysql.db)
And this is how my cnf file looks:
[test_db]
user=user
password=password
database=dbname
host=localhost
port=3306
It is in the same folder as my R script, which is my current working directory. But I run into the following error:
Error in mysqlNewConnection(drv, ...) :
RS-DBI driver: (Failed to connect to database: Error: Access denied for user 'ODBC'@'localhost' (using password: NO)
)
Any suggestions, please?
Thanks so much

I had this problem very recently. RMySQL looks in the root directory for these files, so you need to fully qualify the location of the file, e.g.:
rmysql.settingsfile<-"/home/MD-Tech/mydefault.cnf"
or
rmysql.settingsfile<-"c:\Users\MD-Tech\rfiles\mydefault.cnf"

Two things could be going on.
First, the CNF file should be encrypted, so the password line should read password = ****. The MySQL documentation shows how to create such a CNF file; the command below would create one that works with your code:
shell> mysql_config_editor set --login-path=test_db --host=localhost --user=user --password
Press Enter without typing the password; you will then be prompted to enter it.
The second thing is that user = NULL and password = NULL are missing, as referenced in the src_mysql documentation:
rmysql.settingsfile <- "~/.mylogin.cnf"
rmysql.db <- "test_db"
drv <- dbDriver("MySQL")
con <- dbConnect(drv, default.file = rmysql.settingsfile, group = rmysql.db, user = NULL, password = NULL)
When you add these and run the code, you should be set.
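To double-check what mysql_config_editor stored for that login path, you can print it back (the password is shown masked):
shell> mysql_config_editor print --login-path=test_db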

Found something working at: https://www.r-bloggers.com/mysql-and-r/
It does not use a configuration file... but it works.
con <- dbConnect(MySQL(),
user="me", password="nuts2u",
dbname="my_db", host="localhost")

Ya, getting this set up for the first time can be like pulling cats' teeth! Here is what I did while running R on a Droplet (Ubuntu 16.04, MySQL 5.7.16).
First, make sure you can at least log in successfully to MySQL through the terminal:
mysql -u kevin -p
Next, run R and verify that you can log in directly with dbConnect() using a user name and password:
drv = dbDriver("MySQL")
mydb = dbConnect(drv, user='kevin', password='ilovecats', dbname='catnapdb', host='127.0.0.1', port=3306)
Edit your mysql.cnf text file and at the bottom add a new group (exact name of this file and its location will depend on operating system and versions).
[whiskerpatrol]
user = kevin
password = ilovecats
host = 127.0.0.1
port = 3306
database = catnapdb
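With that group in place, the R side would look something like this (a sketch; the ~/.my.cnf location is an assumption, so adjust it to wherever your system keeps the option file):
# sketch: connect using the [whiskerpatrol] group defined above
mydb = dbConnect(MySQL(),
                 default.file = path.expand("~/.my.cnf"),
                 group = "whiskerpatrol",
                 user = NULL, password = NULL)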

Related

Cannot run Invoice Ninja using Docker

This is my first post here on SO. I want to run Invoice Ninja on my machine using Docker and access it locally.
These are the steps I followed:
Cloned the repo https://github.com/invoiceninja/dockerfiles.git
Generated the APP_KEY
Question: when I generated the app key I got these error messages. Are they supposed to appear?
In Connection.php line 678:
SQLSTATE[HY000] [2002] No such file or directory (SQL: select * from information_schema.tables where table_schema = ninja and table_name = accounts and table_type = 'BASE TABLE')
In Exception.php line 18:
SQLSTATE[HY000] [2002] No such file or directory
In PDOConnection.php line 38:
SQLSTATE[HY000] [2002] No such file or directory
Edited the env file with the APP_KEY=base64:...
Ran the chown -R command
I want to access Invoice Ninja locally, so I changed the APP_URL to http://in5.test:8000
Changed the IP address in the config/hosts file
Finally, I ran docker-compose up; when I enter in5.test in the browser I get a "page not found", but if I enter in5.test.localhost I get this page with errors:
https://ibb.co/Zzs5w9b
Question: in the compose file there are some lines with
extra_hosts:
- "in5.localhost:192.168.0.124 " #host and ip
I changed the IP address to match my local IP, but when I do that and go to in5.test.localhost I get a 502 Bad Gateway from nginx.
Can someone tell me what I am doing wrong?
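For what it's worth, for in5.test to resolve on the host machine, the machine's own hosts file usually needs an entry along these lines (a sketch; the IP is the one from the question, so adjust it to your setup):
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows)
192.168.0.124   in5.test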

pyhdfs.HdfsIOException: Failed to find datanode, suggest to check cluster health. excludeDatanodes=null

I am trying to run Hadoop using the Docker setup provided here:
https://github.com/big-data-europe/docker-hadoop
I use the following command:
docker-compose up -d
to bring up the services, and I am able to access them and browse the file system at localhost:9870. The problem arises whenever I try to use pyhdfs to put a file on HDFS. Here is my sample code:
from pyhdfs import HdfsClient

hdfs_client = HdfsClient(hosts = 'localhost:9870')
# Determine the output_hdfs_path
output_hdfs_path = 'path/to/test/dir'
# Does the output path exist? If not then create it
if not hdfs_client.exists(output_hdfs_path):
    hdfs_client.mkdirs(output_hdfs_path)
hdfs_client.create(output_hdfs_path + 'data.json', data = 'This is test.', overwrite = True)
If the test directory does not exist on HDFS, the code successfully creates it, but when it gets to the .create part it throws the following exception:
pyhdfs.HdfsIOException: Failed to find datanode, suggest to check cluster health. excludeDatanodes=null
What surprises me is that my code is able to create the empty directory but fails to put the file on HDFS. My docker-compose.yml file is exactly the same as the one provided in the GitHub repo. The only change I've made is in the hadoop.env file, where I changed:
CORE_CONF_fs_defaultFS=hdfs://namenode:9000
to
CORE_CONF_fs_defaultFS=hdfs://localhost:9000
I have seen this other post on SO and tried the following command:
hdfs dfs -mkdir hdfs:///demofolder
which works fine in my case. Any help is much appreciated.
I would keep the default CORE_CONF_fs_defaultFS=hdfs://namenode:9000 setting.
It works fine for me after adding a forward slash to the paths:
import pyhdfs
fs = pyhdfs.HdfsClient(hosts="namenode")
output_hdfs_path = '/path/to/test/dir'
if not fs.exists(output_hdfs_path):
    fs.mkdirs(output_hdfs_path)
fs.create(output_hdfs_path + '/data.json', data = 'This is test.')
# check that it's present
list(fs.walk(output_hdfs_path))
[('/path/to/test/dir', [], ['data.json'])]

Configuring Backup gem in Rails 5.2 - Performing backup of PostgreSQL database

I would like to perform a regular backup of a PostgreSQL database; my current intention is to use the Backup and Whenever gems. I am relatively new to Rails and Postgres, so there is every chance I am making a very simple mistake...
I am currently trying to set up the process on my development machine (a Mac), but keep getting an error when trying to connect to the database.
In the terminal window, I have performed the following to check the details of my database and connection:
psql -d my_db_name
my_db_name=# \conninfo
You are connected to database "my_db_name" as user "my_MAC_username" via socket in "/tmp" at port "5432".
\q
I have also manually created a backup of the database:
pg_dump -U my_MAC_username -p 5432 my_db_name > name_of_backup_file
However, when I try to repeat this within db_backup.rb (created by the Backup gem) I get the following error:
[2018/10/03 19:59:00][error] Model::Error: Backup for Description for db_backup (db_backup) Failed!
--- Wrapped Exception ---
Database::PostgreSQL::Error: Dump Failed!
Pipeline STDERR Messages:
(Note: may be interleaved if multiple commands returned error messages)
pg_dump: [archiver (db)] connection to database "my_db_name" failed: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/pg.sock/.s.PGSQL.5432"?
The following system errors were returned:
Errno::EPERM: Operation not permitted - 'pg_dump' returned exit code: 1
The contents of my db_backup.rb:
Model.new(:db_backup, 'Description for db_backup') do
  ##
  # PostgreSQL [Database]
  #
  database PostgreSQL do |db|
    # To dump all databases, set `db.name = :all` (or leave blank)
    db.name = "my_db_name"
    db.username = "my_MAC_username"
    #db.password = ""
    db.host = "localhost"
    db.port = 5432
    db.socket = "/tmp/pg.sock"
    # When dumping all databases, `skip_tables` and `only_tables` are ignored.
    # db.skip_tables = ["skip", "these", "tables"]
    # db.only_tables = ["only", "these", "tables"]
    # db.additional_options = ["-xc", "-E=utf8"]
  end
end
Please could you suggest what I need to do to resolve this issue and perform the same backup through the db_backup.rb code?
In case someone else gets stuck in a similar situation, the key to unlocking this problem was the lines:
psql -d my_db_name
my_db_name=# \conninfo
I realised that I needed to change db.socket = "/tmp/pg.sock" to db.socket = "/tmp", which seems to have resolved the issue.
However, I don't understand why the path on my computer differs from the default, as I didn't do anything to customise the installation of any gems or the Postgres app.
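For reference, a minimal sketch of the adjusted database block (the values come from the question; the key point is that db.socket is the directory reported by \conninfo, not a file inside it):
database PostgreSQL do |db|
  db.name     = "my_db_name"
  db.username = "my_MAC_username"
  db.host     = "localhost"
  db.port     = 5432
  db.socket   = "/tmp"  # the directory containing .s.PGSQL.5432
end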

Erlang :ssh authentication error. How to connect to ssh using identity file

I'm getting an authentication error when trying to connect to an ssh host.
The goal is to connect to the host using local forwarding. The command below is an example using the Dropbear ssh client to connect to the host with local forwarding:
dbclient -N -i /opt/private-key-rsa.dropbear -L 2002:1.2.3.4:2006 -p 2002 -l test_user 11.22.33.44
I have this code so far, which returns an empty connection:
ip = "11.22.33.44"
user = "test_user"
port = 2002
ssh_config = [
user_interaction: false,
silently_accept_hosts: true,
user: String.to_charlist(user),
user_dir: String.to_charlist("/opt/")
]
# returns aunthentication error
{:ok, conn} = :ssh.connect(String.to_charlist(ip), port, ssh_config)
This is the error I'm seeing:
Server: 'SSH-2.0-OpenSSH_5.2'
Disconnects with code = 14 [RFC4253 11.1]: Unable to connect using the available authentication methods
State = {userauth,client}
Module = ssh_connection_handler, Line = 893.
Details:
User auth failed for: "test_user"
I'm a newbie to Elixir and have been reading the Erlang ssh documentation for 2 days. I did not find any examples in the documentation, which makes it difficult to understand.
You are using a non-default key name, private-key-rsa.dropbear. Erlang by default looks for this set of names:
From ssh module docs:
Optional: one or more User's private key(s) in case of publickey authorization. The default files are
id_dsa and id_dsa.pub
id_rsa and id_rsa.pub
id_ecdsa and id_ecdsa.pub
To verify this is the reason, try renaming private-key-rsa.dropbear to id_rsa. If this works, the next step would be to add a key_cb callback to the ssh_config, which should return the correct key file name.
One example implementation of a similar feature is labzero/ssh_client_key_api.
The solution was to convert the Dropbear key to an OpenSSH key. I used this link as a reference.
Here is the command to convert the Dropbear key to an OpenSSH key:
/usr/lib/dropbear/dropbearconvert dropbear openssh /opt/private-key-rsa.dropbear /opt/id_rsa
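With the converted key sitting at /opt/id_rsa, the original ssh_config should then authenticate, since :ssh picks the key up from user_dir (a sketch reusing the values from the question):
# sketch: same config as in the question, now that /opt/id_rsa exists
ssh_config = [
  user_interaction: false,
  silently_accept_hosts: true,
  user: String.to_charlist("test_user"),
  user_dir: String.to_charlist("/opt/")
]
{:ok, conn} = :ssh.connect(String.to_charlist("11.22.33.44"), 2002, ssh_config)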

IMAP Error: Login failed - Roundcube

I'm trying to log in to Roundcube, only the program won't let me.
I can log in to the said account from the shell, and mail is set up and working correctly on my server for user 'admin'. It's RC that is the problem. If I check my logs:
/usr/local/www/roundcube/logs/errors
they show:
[21-Sep-2013 17:19:02 +0100]: IMAP Error: Login failed for admin from ip.ip.ip.ip. Could not connect to ip.ip.ip.ip:143:
Connection refused in /usr/local/www/roundcube/program/lib/Roundcube/rcube_imap.php on line 184
(POST /roundcube/?_task=login&_action=login)
which doesn't give me many clues really, just leads me to:
public function connect($host, $user, $pass, $port=143, $use_ssl=null) {}
from
rcube_imap.php
Stuff I've tried, editing:
/usr/local/www/roundcube/config/main.inc.php
with:
// IMAP AUTH type (DIGEST-MD5, CRAM-MD5, LOGIN, PLAIN or null to use
// best server supported one)
//$rcmail_config['imap_auth_type'] = LOGIN;
$rcmail_config['imap_auth_type'] = null;
// Log IMAP conversation to <log_dir>/imap or to syslog
$rcmail_config['imap_debug'] = '/var/log/imap';
With a failed login attempt
/var/log/imap
doesn't even get written to, leaving me no clues. I'm using Dovecot and Sendmail on a FreeBSD box with full root access. It's not an incorrect username/password combination, for sure.
Several Googles on the string 'Roundcube: Connection to storage server failed' are fruitless.
EDIT:
I needed an entry in
/etc/rc.conf
dovecot_enable="YES"
Schoolboy error.
I had the same problem with a Let's Encrypt certificate and resolved it by disabling peer name verification:
$config['imap_conn_options'] = array(
  'ssl' => array('verify_peer' => true, 'verify_peer_name' => false),
  'tls' => array('verify_peer' => true, 'verify_peer_name' => false),
);
Afterwards you can set the connection string like this (starttls):
$config['default_host'] = 'tls://your-host.tld';
$config['default_port'] = '143';
$config['smtp_server'] = 'tls://your-host.tld';
$config['smtp_port'] = '25';
Or like this (ssl approach):
$config['default_host'] = 'ssl://your-host.tld';
$config['default_port'] = '993';
$config['smtp_server'] = 'ssl://your-host.tld';
$config['smtp_port'] = '587';
Make sure you use the fully qualified hostname of the certificate in the connection string (like your-host.tld) and not an internal hostname (like localhost).
Hope that helps someone else.
Change the maildir to whatever your system uses.
Change Dovecot mail_location setting to
mail_location = maildir:~/Mail
Change Postfix home_mailbox setting to
home_mailbox = Mail/
Restart services and away you go
Taken from this fedoraforum post
If you run fail2ban, then dovecot might get banned following failed Roundcube login attempts. This has happened to me twice already...
First, check if this is indeed the case:
sudo fail2ban-client status dovecot
If you get an output similar to this:
Status for the jail: dovecot
|- Filter
| |- Currently failed: 1
| |- Total failed: 8
| `- File list: /var/log/mail.log
`- Actions
|- Currently banned: 1
|- Total banned: 2
`- Banned IP list: X.X.X.X
i.e. the Currently banned number is higher than 0, then fail2ban was a bit overeager and you have to "unban" dovecot.
Run the fail2ban client in interactive mode:
sudo fail2ban-client -i
and at the fail2ban> prompt enter the following:
set dovecot unbanip X.X.X.X
where X.X.X.X is the IP address of your Dovecot server.
Exit from the interactive client and run sudo fail2ban-client status dovecot again. The Currently banned: field now should have a value of 0. What's more important, RoundCube should work again :-)
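The same unban can also be done non-interactively, and if this keeps happening you can whitelist the server's address in the jail configuration (a sketch; X.X.X.X is your Dovecot server's IP, and the jail.local path assumes a standard layout):
sudo fail2ban-client set dovecot unbanip X.X.X.X

# /etc/fail2ban/jail.local
[dovecot]
ignoreip = X.X.X.X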
The issue is in your mail server.
Check the ports on your mail server and reset them if necessary:
Port 25 (and 587) must be open for SMTP
Port 143 (and 993) must be open for IMAP
Port 110 must be open for POP3
Also open those ports in your firewall settings.
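A quick way to confirm whether a port is actually reachable is a connection test along these lines (a sketch; mail.example.com stands in for your mail server's hostname):
nc -vz mail.example.com 143   # IMAP
nc -vz mail.example.com 25    # SMTP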
sudo dovecot should solve the problem.
If not, restart dovecot:
sudo service dovecot restart
