I'm trying out the Dynamic Security module for mosquitto and everything seems to work fine as long as I never systemctl restart mosquitto.service. After installing mosquitto and enabling the Dynamic Security module, I ran these two commands:
mosquitto_ctrl dynsec init /etc/mosquitto/dynamic-security.json steve
systemctl restart mosquitto.service
Then I was able to create a user and a role, and subscribe and publish to a topic like this:
mosquitto_ctrl -u steve -P Pass1234 dynsec createClient john0
mosquitto_ctrl -u steve -P Pass1234 dynsec createRole role0
mosquitto_ctrl -u steve -P Pass1234 dynsec addClientRole john0 role0 1
mosquitto_ctrl -u steve -P Pass1234 dynsec addRoleACL role0 publishClientSend pizza allow
mosquitto_ctrl -u steve -P Pass1234 dynsec addRoleACL role0 subscribeLiteral pizza allow
mosquitto_sub -u john0 -P Pass1234 -t pizza
# then open a second terminal window and do this:
mosquitto_pub -u john0 -P Pass1234 -t pizza -m 'hi'
# result is the word `hi` appears in the first/original terminal window
I can repeatedly publish and subscribe to topics with the john0 user on the pizza topic.
However, the moment I reboot my server or run systemctl restart mosquitto.service, the john0 client no longer exists.
How do I prevent the john0 user and all the roles and access privileges from disappearing after a systemctl restart mosquitto.service?
EDIT
Here's my /etc/mosquitto/mosquitto.conf
persistence true
persistence_location /var/lib/mosquitto/
log_dest file /var/log/mosquitto/mosquitto.log
include_dir /etc/mosquitto/conf.d
allow_anonymous false
per_listener_settings false
plugin /usr/lib/x86_64-linux-gnu/mosquitto_dynamic_security.so
plugin_opt_config_file /etc/mosquitto/dynamic-security.json
Also, in my /etc/mosquitto/dynamic-security.json, the only record that exists is the one for steve. I do not see any other clients in the dynamic-security.json file.
EDIT
Also, it seems that if I manually edit /etc/mosquitto/dynamic-security.json, the change does NOT immediately take effect. I need to run systemctl restart mosquitto.service in order for the changes to take effect.
So I guess now my question is specifically how do I add clients and roles such that it meets all these criteria:
I can add them during run time and they immediately take effect without a systemctl restart mosquitto.service.
After a systemctl restart mosquitto.service, the clients and roles still exist (i.e. they are not deleted)
Mosquitto was configured to store its dynamic security state in /etc/mosquitto/dynamic-security.json.
Unfortunately, /etc/mosquitto is frequently not writable by mosquitto, for security reasons. State is generally meant to be stored in /var/lib/mosquitto, which Mosquitto is able to write to.
To fix this, change the configuration to read:
plugin_opt_config_file /var/lib/mosquitto/dynamic-security.json
If you have an existing dynamic-security.json file in /etc/mosquitto you can move it to /var/lib/mosquitto and retain whatever is currently in it:
mv /etc/mosquitto/dynamic-security.json /var/lib/mosquitto
chown mosquitto /var/lib/mosquitto/dynamic-security.json
chmod 700 /var/lib/mosquitto/dynamic-security.json
The chown line makes sure it's owned by the user mosquitto - if you run mosquitto as a different user, change this line to be the user you run it as.
The chmod line makes sure that only the file's owner (and root) can read the file. Even though the passwords in the file are hashed, we don't want to make it any easier than necessary for an attacker to access it.
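To confirm the move worked, here is a quick check (a sketch, reusing the steve/Pass1234 admin credentials from the question): create a throwaway client, restart the broker, and list clients again; the client should survive the restart.
mosquitto_ctrl -u steve -P Pass1234 dynsec createClient testclient
systemctl restart mosquitto.service
# if persistence works, testclient is still listed after the restart
mosquitto_ctrl -u steve -P Pass1234 dynsec listClients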
This happens due to permission issues for the mosquitto user.
You can simply run
chown mosquitto /etc/mosquitto/dynamic-security.json
After this, changes you make with mosquitto_ctrl commands will be visible in the JSON file.
I'm in the process of trying to find out or reset my PostgreSQL password, which is unknown to me, but which I need in order to migrate my database from SQLite3 to PostgreSQL.
I was trying to locate the pg_hba.conf file, so in the terminal I entered:
ps aux | grep postgres
and I found that the directory I needed to find was:
/Library/PostgreSQL/9.6/bin/postmaster -D/Library/PostgreSQL/9.6/data
My problem now is that it is not possible to locate this file because it apparently doesn't exist! When I cd to Library I'm unable to go any further because there is no PostgreSQL folder listed.
This is a bit of a dead end for me as I have no idea why PostgreSQL is not there. PSQL came with my version of Rails, and I updated it. When I type: 'psql -V' in the terminal, the answer is 'psql (PostgreSQL) 9.6.3'.
Help would be much appreciated, thanks :-)
From the library folder, if I run 'sudo su' then enter ls, I get the following:
.localized              Calendars      Dictionaries       Internet Plug-Ins   Maps              Saved Application State   WebKit
Accounts                CallServices   Favorites          Keyboard            Messages          Screen Savers             com.apple.nsurlsessiond
Address Book Plug-Ins   ColorPickers   FontCollections    Keyboard Layouts    Metadata          Services                  iMovie
Application Scripts     Colors         Fonts              KeyboardServices    Passes            Sharing
Application Support     Compositions   GameKit            Keychains           PreferencePanes   Sounds
Assistant               Containers     Google             LanguageModeling    Preferences       Spelling
Assistants              Cookies        Group Containers   LaunchAgents        Printers          Suggestions
Audio                   CoreData       IdentityServices   Logs                PubSub            SyncedPreferences
Caches                  CoreFollowUp   Input Methods      Mail                Safari            Voices
and if I enter ps I get this:
PID TTY TIME CMD
359 ttys000 0:00.02 login -pfl robertosullivan /bin/bash -c exec -la bash /bin/bash
3267 ttys000 0:00.02 sudo su
3269 ttys000 0:00.01 su
3270 ttys000 0:00.00 sh
3271 ttys000 0:00.00 ps
If I try 'sudo find / -name psql' - I get:
find: /dev/fd/Library: No such file or directory
find: /dev/fd/Library: No such file or directory
/Library/PostgreSQL/9.6/bin/psql
/usr/local/bin/psql
/usr/local/Cellar/postgresql/9.6.3/bin/psql
When I try 'sudo find /Library/PostgreSQL/9.6/data -name *.conf' I get:
/Library/PostgreSQL/9.6/data/pg_hba.conf
/Library/PostgreSQL/9.6/data/pg_ident.conf
/Library/PostgreSQL/9.6/data/postgresql.auto.conf
/Library/PostgreSQL/9.6/data/postgresql.conf
The installation procedure creates a user account called postgres that is associated with the default Postgres role. In order to use Postgres, we can log into that account.
You can run the command you'd like as the postgres account directly with sudo:
sudo -u postgres psql
This will prompt for the password for the postgres user.
If you don't have the password for this postgres user, follow the below steps:
sudo vim /etc/postgresql/9.3/main/pg_hba.conf
Around line 84 or 85, change the local authentication line to:
# Database administrative login by Unix domain socket
local all all trust
Then restart the PostgreSQL service:
sudo /etc/init.d/postgresql restart
Now you can run the command to log in to the postgres account directly with sudo:
sudo -u postgres psql
You will now be dropped into the PostgreSQL terminal. Once you have successfully logged in as postgres, you can change the password with the command
\password
and enter the new password for the default postgres user. After successfully changing the password, go back to pg_hba.conf and revert the change to "md5".
Around line 84 or 85, change the line back to:
# Database administrative login by Unix domain socket
local all all md5
Then restart the PostgreSQL service again:
sudo /etc/init.d/postgresql restart
Now you can log in with
psql -U postgres
using your new password.
Please let me know if you have any issues.
Since we identified your data_dir as /Library/PostgreSQL/9.6/data/ and found the configuration files in it, and since we identified that your postgres cluster is running (pg_ctl: server is running (PID: 88) /Library/PostgreSQL/9.6/bin/postgres "-D/Library/PostgreSQL/9.6/data), in order to reset your password do:
become the postgres user: sudo su - postgres
log in locally with psql
reset the password with alter user USERNAME password 'NEW_PASSWORD'
after that you can connect as that user with psql -U USERNAME -h localhost using your new password (a sketch of the full session follows below)
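Put together, the whole session might look roughly like this (a sketch; myuser and NEW_PASSWORD are placeholders, not values from your setup):
sudo su - postgres
psql
-- inside psql, reset the password and quit:
ALTER USER myuser PASSWORD 'NEW_PASSWORD';
\q
# back in your normal shell, connect with the new password:
psql -U myuser -h localhost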
I am new to Rails.
I want to take a backup of a Postgres database from DigitalOcean to my local machine. How do I take a dump of it and migrate it to my local machine?
To use pg_dump:
First, on the target machine (the remote machine with the database you want to dump), two steps are needed so that the machine accepts pg_dump requests:
1. Add or edit the following line in your postgresql.conf (in my experience, the location may be /etc/postgresql/9.3/main/postgresql.conf; replace 9.3 with your psql version. If nobody has changed the file before, add the line below to the end of the file):
listen_addresses = '*'
2. Add the following line as the first line of pg_hba.conf (in my experience, the location is something like /etc/postgresql/9.3/main/pg_hba.conf). It allows access to all databases for all users with an encrypted password:
# TYPE  DATABASE  USER  CIDR-ADDRESS  METHOD
host    all       all   all           md5
After those two steps, restart PostgreSQL so the changes take effect:
/etc/init.d/postgresql restart
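Before attempting the dump, you can verify from your local machine that the server now accepts remote connections (a sketch; host_ip is a placeholder for your droplet's IP):
# pg_isready ships with the PostgreSQL client tools
pg_isready -h host_ip -p 5432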
At last, on your local machine, you should figure out the target database's user (or owner) who can read it.
You can achieve this by using ssh to connect to that machine and stepping into the psql console:
sudo su - postgres
psql
and type
\l
to see the db owner.
Finally you can use pg_dump on your local machine to dump the database, like:
pg_dump -f dump_name -h host_ip -d database_name -U database_user -p 5432 -W
then input the user's password, and wait (possibly a long time) for the db to dump.
Hope you make it~
First you need to create the backup on the server, then download the dump from DigitalOcean, and then run these commands on the console (the full flow is sketched below).
Download the dump using scp.
1. pg_dump -Fc dbname > outfile.dump
2. pg_restore --verbose --clean --jobs=4 --disable-triggers --no-acl --no-owner -h localhost -U user_name -d database_name outfile.dump
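Putting it together, the whole flow might look roughly like this (a sketch; the droplet IP, user, and database names are placeholders, not values from your setup):
# on the DigitalOcean droplet: create a custom-format dump
pg_dump -Fc -U db_user -f /tmp/my_database.dump my_database
# on your local machine: download the dump
scp root@droplet_ip:/tmp/my_database.dump .
# restore into a local database (create it first if it doesn't exist)
createdb -U local_user my_database
pg_restore --verbose --clean --no-acl --no-owner -h localhost -U local_user -d my_database my_database.dump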
In my CI chain I execute end-to-end tests after a "docker-compose up". Unfortunately my tests often fail because even if the containers are properly started, the programs contained in my containers are not.
Is there an elegant way to verify that my setup is completely started before running my tests?
You could poll the required services to confirm they are responding before running the tests.
curl has inbuilt retry logic or it's fairly trivial to build retry logic around some other type of service test.
#!/bin/bash
# await polls the given URL until it responds, giving up after ${seconds} seconds
await(){
  local url=${1}
  local seconds=${2:-30}
  # --max-time bounds each attempt; --retry/--retry-delay keep trying;
  # --retry-max-time caps the total time spent retrying
  curl --max-time 5 --retry 60 --retry-delay 1 \
       --retry-max-time ${seconds} "${url}" \
       || exit 1
}
docker-compose up -d
await http://container_ms1:3000
await http://container_ms2:3000
run-ze-tests
The alternative to polling is an event-based system.
If all your services push notifications to an external service (scaeda gave the example of a log file, or you could use something like Amazon SNS), your services can emit a "started" event. Then you can subscribe to those events and run whatever you need once everything has started.
Docker 1.12 did add the HEALTHCHECK build command. Maybe this is available via Docker Events?
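As a sketch of using that (assuming your image defines a HEALTHCHECK; container_ms1 is a placeholder name), you could poll the health status Docker exposes and only start the tests once it reports healthy:
# wait until Docker reports the container as healthy (requires a HEALTHCHECK in the image)
until [ "$(docker inspect -f '{{.State.Health.Status}}' container_ms1)" = "healthy" ]; do
  sleep 2
done
run-ze-tests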
If you have control over the docker engine in your CI setup you could execute docker logs [Container_Name] and read out the last line which could be emitted by your application.
RESULT=$(docker logs [Container_Name] 2>&1 | grep [Search_String])
logs output example:
Agent pid 13
Enter passphrase (empty for no passphrase): Enter same passphrase again: Identity added: id_rsa (id_rsa)
#host SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.6
#host SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.6
parse specific line:
RESULT=$(docker logs ssh_jenkins_test 2>&1 | grep Enter)
result:
Enter passphrase (empty for no passphrase): Enter same passphrase again: Identity added: id_rsa (id_rsa)
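To turn that into a wait rather than a one-shot check, one possible sketch (container name and search string taken from the example above) is to keep grepping the logs until the expected line shows up or a timeout is reached:
# poll the container logs until the ready message appears (give up after ~60s)
for i in $(seq 1 30); do
  if docker logs ssh_jenkins_test 2>&1 | grep -q "Enter passphrase"; then
    echo "container is ready"
    break
  fi
  sleep 2
done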
I'm trying to get a docker-machine up and running on an Ubuntu 14.04 LTS server in our network. I have installed docker + docker-machine on the server and I'm able to create the docker-machine on the server with this command from my computer:
docker-machine create --driver generic --generic-ip-address 10.10.3.76 --generic-ssh-key "/Users/username/Documents/keys/mysshkey.pem" --generic-ssh-user ubuntuuser dockermachinename
The command above creates the docker-machine and I'm able to list it with
docker-machine ls
I'm able to SSH to it by running
docker-machine ssh dockermachinename
but when I try to connect to the server with (-D for debug information)
docker-machine -D env dockermachinename
I get the following message
Docker Machine Version: 0.5.2 ( 0456b9f )
Found binary path at /usr/local/bin/docker-machine-driver-generic
Launching plugin server for driver generic
Plugin server listening at address 127.0.0.1:54213
() Calling .GetVersion
Using API Version 1
() Calling .SetConfigRaw
() Calling .GetMachineName
(dockermachinename) Calling .GetState
(dockermachinename) Calling .GetURL
Reading CA certificate from /Users/username/.docker/machine/certs/ca.pem
Reading server certificate from /Users/username/.docker/machine/machines/dockermachinename/server.pem
Reading server key from /Users/username/.docker/machine/machines/dockermachinename/server-key.pem
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host "10.10.3.76:2376": dial tcp 10.10.3.76:2376: i/o timeout
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which will stop running containers.
I really need to solve this so all help is appreciated!
On Ubuntu you will need to do the following steps:
1. Create a user which doesn't require a password
sudo visudo
At the end of the file add the following line (make sure to specify your username):
username ALL=(ALL:ALL) NOPASSWD: ALL
Then save and exit. After that add your username to the docker group like this (replace username with your actual username):
sudo usermod -aG docker username
2. Edit the docker config to open ports 2375 and 2376
sudo systemctl edit docker.service
Add the following snippet to that file:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2376 -H tcp://0.0.0.0:2375
Then save and exit. After that reload the config and restart the docker daemon with:
sudo systemctl daemon-reload
sudo systemctl restart docker.service
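Before moving on, it can help to check that the daemon is actually listening on those TCP ports (a quick sanity-check sketch, run on the server):
# confirm dockerd is listening on 2375/2376
sudo ss -tlnp | grep dockerd
# or query the unauthenticated port for the engine version
curl http://127.0.0.1:2375/version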
3. Create docker-machine
Remove the existing machine which is failing with:
docker-machine rm machine1
and try to create it one more time like this:
docker-machine create -d generic --generic-ip-address ip --generic-ssh-key ~/.ssh/key --generic-ssh-user username --generic-ssh-port 22 machine1
Please replace ip, key, username and machine1 with your actual values.
If this produces an error like this:
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host "192.168.0.26:2376": tls: oversized record received with length 20527
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which might stop running containers.
then SSH to your machine and cd into the following directory:
cd /etc/systemd/system/docker.service.d/
list all files in it with:
ls -l
you will probably have something like this:
-rw-r--r-- 1 root root 274 Jul 2 17:47 10-machine.conf
-rw-r--r-- 1 root root 101 Jul 2 17:46 override.conf
You will need to delete all files except 10-machine.conf with sudo rm; a sketch of that follows below.
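A minimal sketch, assuming override.conf is the only extra file there (reusing the reload/restart commands from step 2):
sudo rm /etc/systemd/system/docker.service.d/override.conf
sudo systemctl daemon-reload
sudo systemctl restart docker.service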
After that remove the machine you created and create it again. It should now work. I hope this helps. Maybe you already did steps 1 and 2; if so, skip them and just try to remove the override.conf file or any file in that dir which is not 10-machine.conf.
I'm working on a RabbitMQ distributed POC and I'm stuck at the basics of clustering the nodes.
I'm trying to follow the rabbit's tutorial on clustering so this is my reference.
After installing erlang (R14B04) and rabbit (2.8.2-1) I've copied the .erlang.cookie file contents from one node to the other two.
I wasn't sure how to get erlang to notice this change, so I had to restart the machines themselves (pretty brute force, but I don't know erlang at all).
In addition I opened port 4369 and 5 additional ports for communications in iptables, and placed under
/usr/lib64/erlang/bin/sys.config the following config:
{kernel,[{inet_dist_listen_min, XX00},{inet_dist_listen_max,XX05}]}]
Then another restart (dumb I know) to verify erlang takes these into consideration but still when I run:
rabbitmqctl cluster rabbit@HostName1
I get:
Clustering node rabbit@HostName2 with [rabbit@HostName1] ...
Error: {no_running_cluster_nodes,[rabbit@HostName1],
                                 [rabbit@HostName1]}
There is a chance my fiddling with the erlang.cookie or with the ports did not succeed but I don't know how to check them. I tried typing erl in the cmd and then erl_epmd:names() or other commands to get more information but I'm probably way off in erlang land.
Would truly appreciate any help
Update:
I tried pinging two erlang nodes manually and got pang back.
I did the following:
I connected to the two nodes, stopped rabbitmq (wasn't sure if it was needed, but just to be sure), and started erlang like so (erl -sname dilbert and erl -sname dilbert2). When the erlang command line started I ran node(). on each of them and got dilbert@HostName1 and dilbert2@HostName2 respectively. I then tried to run net_adm:ping('dilbert'). and net_adm:ping('dilbert@HostName1'). with the single quotes and without them, from both nodes (changing the names of course), and in all 8 cases got pang.
When I ran nodes(). on one of the machines I got back an empty array.
I've also tried allowing all traffic in the firewall (script) and then running the above commands (don't worry, the rules are back on now), and still got back pang.
Update2:
For some reason I had a cookie mismatch which I needed to resolve (thanks @kjw0188 for the suggestion [I ran erlang:get_cookie(). in the erlang command line]).
This did not help and I needed to stop iptables completely (not sure why, but I'll figure it out soon) and load the erlang node with -name dilbert@my-ip because my rackspace servers have no dns-name. This finally enabled me to get a pong and see the nodes see each other (nodes(). returns a non-empty array after the ping).
The problem I'm facing now is how to instruct RabbitMQ to use -name instead of -sname when starting erlang.
So I had multiple issues with connecting my two RabbitMQ nodes-
I'll add that my nodes are hosted on rackspace, and so don't have a default exposable hostname, and require iptables since there is no DMZ or built in security group concept like amazon.
Problems:
1. Cookie - Not sure how or why, but I had multiple instances of .erlang.cookie (in /root, in my home directory and in /var/lib/rabbitmq/). I kept only the one in /var/lib/rabbitmq/ and verified all nodes have the same cookie.
2. IPTables - In order for the nodes to communicate I needed to open the epmd port and the range of ports for the actual communication (inet_dist_listen_min to inet_dist_listen_max).
/sbin/iptables -A INPUT -i eth1 -p tcp --dport ${epmd} -s ${otherNode} -j ACCEPT
/sbin/iptables -A INPUT -i eth1 -p tcp --dport ${inet_dist_listen_min}:${inet_dist_listen_max} -s ${otherNode} -j ACCEPT
epmd is the usual 4369 port, and for the other range use whatever range you want.
${otherNode} is the ip of my other node.
I also needed to configure erlang through rabbitmq to use these ports (see config file at end)
3. HostName - Seeing as I don't have a hostname, I needed to edit the rabbit scripts to use -name and not -sname (the first tells erlang to take the whole name, the latter stands for short name and thus appends an @ symbol and the hostname).
This was accomplished by editing:
/usr/lib/rabbitmq/bin/rabbitmqctl
Added at the beginning the definition of the RABBITMQ_NODE_IP_ADDRESS property
DEFAULT_NODE_IP_ADDRESS=auto
DEFAULT_NODE_PORT=5672
[ "x" = "x$RABBITMQ_NODE_IP_ADDRESS" ] && RABBITMQ_NODE_IP_ADDRESS=${NODE_IP_ADDRESS}
[ "x" = "x$RABBITMQ_NODE_PORT" ] && RABBITMQ_NODE_PORT=${NODE_PORT}
[ "x" = "x$RABBITMQ_NODE_IP_ADDRESS" ] && [ "x" != "x$RABBITMQ_NODE_PORT" ] && RABBITMQ_NODE_IP_ADDRESS=${DEFAULT_NODE_IP_ADDRESS}
[ "x" != "x$RABBITMQ_NODE_IP_ADDRESS" ] && [ "x" = "x$RABBITMQ_NODE_PORT" ] && RABBITMQ_NODE_PORT=${DEFAULT_NODE_PORT}
and in the actual erl command I changed
-sname ${RABBITMQ_NODENAME} \ to
-name ${RABBITMQ_NODENAME}@${RABBITMQ_NODE_IP_ADDRESS}\.
This made rabbitmq listen only on the specified ip address (specified in the config file at the end) and load with that ip instead of the usual hostname.
edited /usr/lib/rabbitmq/bin/rabbitmq-server
Changed the actual erl command from -sname ${RABBITMQ_NODENAME} \ to -name ${RABBITMQ_NODENAME}@${RABBITMQ_NODE_IP_ADDRESS}\
Added a rabbit conf (/etc/rabbitmq/rabbitmq-env.conf) file with-
#the ip address which rabbit should use, this is to limit rabbit to only use internal rackspace communication and not publicly accessible ports
NODE_IP_ADDRESS=myIpAdress
#had to change the nodename because otherwise rabbitmq used rabbit@Hostname and not only rabbit
NODENAME=myCompany
#This instructed rabbit to instruct erlang which ports it should use for its communications with other nodes
export SERVER_ERL_ARGS="$SERVER_ERL_ARGS -kernel inet_dist_listen_min somePort -kernel inet_dist_listen_max someOtherBiggerPort"
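To check that these port settings were actually picked up (which, per the edit below, is worth verifying), one possible sketch on each node is:
# show which node names epmd knows about and their distribution ports
epmd -names
# confirm beam is listening inside the configured inet_dist port range
sudo netstat -tlnp | grep beam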
Some resources which helped me along the way:
RabbitMQ Clustering Guide
Clustering RabbitMQ servers for High Availability
rabbitmq-env.conf(5) manual page
Node communication by public IP address erlang mailing list (The middle post)
Configuring RabbitMQ Cluster on Cloud
Hope this will help anyone else.
EDIT:
Not sure how I was mistaken but it seemed my erlang-rabbit port instructions were not taken into consideration or were not enough. Ended up having to allow all communications between the two nodes...
One thing to really watch out for is whitespace of any kind in the erlang cookie file, especially line breaks AFTER the contents of the cookie. So long as both are identical, things are okay, but when one has a line break and the other doesn't, things won't work.
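A quick way to spot that kind of invisible difference (a sketch, assuming the default cookie location) is to dump the file byte by byte on each node and compare:
# show every byte, including trailing newlines, on each node
od -c /var/lib/rabbitmq/.erlang.cookie
# or compare checksums across the nodes
md5sum /var/lib/rabbitmq/.erlang.cookie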
Background: I was facing the same issue while setting up a Rabbitmq cluster. I was using 2 docker containers running on my host machine, which is equivalent to 2 separate nodes, and I could not create a cluster of the two.
Solution: 1. Make sure you have the same erlang cookie on all your cluster nodes; the default location is /var/lib/rabbitmq/.erlang.cookie. This file is used for authentication, so make sure it is the same on all the nodes. After changing the .erlang.cookie, restart your rabbitmq service.
2. Make sure that the nodes are accessible from one another; use ping or telnet to check the connection.
3. Check that /etc/hosts has correct entries; for example, if rabbit2 wants to join the cluster on rabbit1, /etc/hosts of rabbit2 should contain:
172.68.1.6 rabbit1
172.68.1.7 rabbit2
4. Now stop the app using rabbitmqctl stop_app, followed by rabbitmqctl join_cluster rabbit@rabbit1, then start the app with rabbitmqctl start_app and check rabbitmqctl cluster_status to see whether you have joined the cluster or not. The full sequence is sketched below.
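Run on the joining node (rabbit2 in this example), the sequence from step 4 looks like this:
# on rabbit2: stop the app, join rabbit1, start again and verify
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@rabbit1
rabbitmqctl start_app
rabbitmqctl cluster_status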
I followed the rabbitmq official documentation to setup the cluster.
To change RabbitMQ sname/name behaviour you can edit the scripts:
rabbitmq-multi
rabbitmq-server
rabbitmqctl
Example
In script rabbitmqctl there is the following piece of code:
exec erl \
-pa "${RABBITMQ_HOME}/ebin" \
-noinput \
-hidden \
${RABBITMQ_CTL_ERL_ARGS} \
-sname rabbitmqctl$$ \
-s rabbit_control \
-nodename $RABBITMQ_NODENAME \
-extra "$@"
You have to change it to:
exec erl \
-pa "${RABBITMQ_HOME}/ebin" \
-noinput \
-hidden \
${RABBITMQ_CTL_ERL_ARGS} \
-name rabbitmqctl$$ \
-s rabbit_control \
-nodename $RABBITMQ_NODENAME \
-extra "$@"
http://pearlin.info/?p=1672
So you need to copy the cookie from the node you are trying to connect to.
Example: rabbit@node1 and rabbit@node2.
Go to rabbit@node1 and copy the cookie from cat /var/lib/rabbitmq/.erlang.cookie.
Go to rabbit@node2, remove the current cookie and paste the new one.
Then, on that same node:
/usr/sbin/rabbitmqctl stop_app
/usr/sbin/rabbitmqctl reset
/usr/sbin/rabbitmqctl cluster rabbit@node1
should do it.
The same is documented here:
http://pearlin.info/?p=1672