Mosquitto Dynamic Security Plugin fails creating dynamic-security.json - mosquitto

I get the following error when trying to create the dynamic-security file, even though the filesystem at that path still has plenty of free space.
$ mosquitto_ctrl dynsec init dynamic-security.json user
New password for user:
Reenter password for user:
dynsec init: Out of memory.
Error: Out of memory.
$ df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/nvme0n1p3 433G 116G 295G 29% /
How could I debug this further? Thanks!
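One way to dig further (a suggestion, not from the original thread): mosquitto_ctrl can report "Out of memory" for failures other than exhausted RAM, so tracing its file-related system calls shows which call actually fails and with which errno. Assuming strace is available:
$ strace -f -e trace=file mosquitto_ctrl dynsec init dynamic-security.json user
Look near the end of the trace for an open/openat or write on dynamic-security.json returning an error such as EACCES, which would point at a permissions problem rather than memory.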

Related

docker mount - Error response from daemon: invalid mount config for type "bind"

I am facing an issue with mounting a host directory into a Docker container, with both the -v and --mount options.
Using --mount:
docker run --mount type=bind,source=/home/myuser/docker_test/out_dir,target=/home/out_dir --user 12345:1000 -it docker-name:0.1 bash
docker: Error response from daemon: invalid mount config for type "bind": stat /home/myuser/docker_test/out_dir: permission denied.
But I am able to stat this directory myself:
stat /home/myuser/docker_test/out_dir
File: '/home/myuser/docker_test/out_dir'
Size: 4096 Blocks: 8 IO Block: 32768 directory
Device: 33h/51d Inode: 9275022755226025350 Links: 2
Access: (0770/drwxrwx---) Uid: (12345/ myuser) Gid: ( 1000/ hercules)
Access: 2022-12-01 02:12:54.430582000 -0500
Modify: 2022-12-01 02:12:38.239629000 -0500
Change: 2022-12-01 02:12:38.239629000 -0500
Birth: -
Using -v:
docker run -v /home/myuser/docker_test/out_dir:/home/out_dir --user 12345:1000 -it docker-name:0.1 bash
docker: Error response from daemon: error while creating mount source path '/home/myuser/docker_test/out_dir': mkdir /home/myuser/docker_test: permission denied.
ERRO[0000] error waiting for container: context canceled
I don't know why it's trying to mkdir: /home/myuser/docker_test already exists and is writable by the current user.
Am I missing something here?
BTW, /home is an NFS-mounted directory.
EDIT: mounting /tmp worked. So this means it is related to the NFS mounted directory /home.
EDIT 2
I am working on a network machine where I don’t have root (sudo) access.
The docker service is installed by root user.
/home/myuser/docker_test/out_dir has 700 (rwx------) permissions. If I change the permission to 755, it will work. But I can’t change the directory permissions.
My question is: why is stat failing when the user starting Docker has permission to access the source directory?
Is the stat being called by the docker executable as some ‘other’ user?
Use:
sudo docker run -v /home/myuser/docker_test/out_dir:/home/out_dir --user 12345:1000 -it docker-name:0.1 bash
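A hedged explanation (not part of the original answer): the bind-mount source is resolved by the Docker daemon, which runs as root, not as the user who typed docker run. On an NFS mount exported with root_squash (the default), root on the client is mapped to an unprivileged user, so the daemon's stat fails on a 700 directory even though your own user can read it. If you control the NFS server, exporting the share with no_root_squash would let the daemon through, e.g. in /etc/exports (path and client subnet are placeholders):
/home  192.168.1.0/24(rw,sync,no_root_squash)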

Cannot open vfio device in docker container as non-root user

I have enabled virtualization in the BIOS and enabled the IOMMU on the kernel command line (intel_iommu=on).
I bound a Solarflare NIC to the vfio-pci driver and added a udev rule to ensure the vfio device is accessible by my non-root user (e.g., /etc/udev/rules.d/10-vfio-docker-users.rules):
SUBSYSTEM=="vfio", OWNER="myuser", GROUP="myuser"
I've launched my container with -u 1000 and mapped /dev (-v /dev:/dev). Running in an interactive shell in the container, I am able to verify that the device is there with the permissions set by my udev rule:
bash-4.2$ whoami
whoami: unknown uid 1000
bash-4.2$ ls -al /dev/vfio/35
crw-rw---- 1 1000 1000 236, 0 Jan 25 00:23 /dev/vfio/35
However, if I try to open it (e.g., python -c "open('/dev/vfio/35', 'rb')"), I get IOError: [Errno 1] Operation not permitted: '/dev/vfio/35'. Yet the same command works outside the container as the normal non-root user with user-id 1000!
It seems that there are additional security measures that are not allowing me to access the vfio device within the container. What am I missing?
Docker drops a number of privileges by default, including the ability to access most devices. You can explicitly grant access to a device using the --device flag, which would look something like:
docker run --device /dev/vfio/35 ...
Alternately, you can ask Docker not to drop any privileges:
docker run --privileged ...
You'll note that in both of the above examples it was not necessary to explicitly bind-mount /dev; in the first case, the device(s) you have exposed with --device will show up, and in the second case you see the host's /dev by default.
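As a fuller hedged example (the group number 35 is from the question; the image name is a placeholder): VFIO programs typically also open the container node /dev/vfio/vfio alongside the group node, so it is worth passing both:
docker run --device /dev/vfio/vfio --device /dev/vfio/35 -u 1000 -it myimage bash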

Docker container (Kubernetes): Mysql user access denied

Hi, I have followed some k8s tutorials on setting up a local DB + WordPress installation, but the user can't connect to MySQL within my cluster.
(everything else seems ok - in Kubernetes Dashboard Web UI)
Error:
[15:40:55][~]# kubectl logs -f website-56677747c7-c7lb6
[21-Nov-2019 11:07:17 UTC] PHP Warning: mysqli::__construct(): php_network_getaddresses: getaddrinfo failed: Name or service not known in Standard input code on line 22
[21-Nov-2019 11:07:17 UTC] PHP Warning: mysqli::__construct(): (HY000/2002): php_network_getaddresses: getaddrinfo failed: Name or service not known in Standard input code on line 22
MySQL Connection Error: (2002) php_network_getaddresses: getaddrinfo failed: Name or service not known
[21-Nov-2019 11:07:20 UTC] PHP Warning: mysqli::__construct(): (HY000/1045): Access denied for user 'websiteu5er'@'10.1.0.35' (using password: YES) in Standard input code on line 22
MySQL Connection Error: (1045) Access denied for user 'websiteu5er'@'10.1.0.35' (using password: YES)
MySQL Connection Error: (1045) Access denied for user 'websiteu5er'@'10.1.0.35' (using password: YES)
MySQL Connection Error: (1045) Access denied for user 'websiteu5er'@'10.1.0.35' (using password: YES)
My Dockerfile (which I used to create the image, pushed it to Docker Hub, then pulled it into the k8s service + deployment):
FROM mysql:5.7
# This should create the following default root + user?
ENV MYSQL_ROOT_PASSWORD=hello123
ENV MYSQL_DATABASE=website
ENV MYSQL_USER=websiteu5er
ENV MYSQL_PASSWORD=hello123
RUN /etc/init.d/mysql start \
&& mysql -u root --password='hello123' -e "GRANT ALL PRIVILEGES ON *.* TO 'websiteu5er'@'%' IDENTIFIED BY 'hello123';"
FROM wordpress:5.2.4-php7.3-apache
# Copy wp-config file over
COPY configs/wp-config.php .
RUN chown -R www-data:www-data *
COPY ./src/wp-content/themes/bam /var/www/html/wp-content/themes/bam
The standard Docker Hub mysql image has the ability to run arbitrary SQL scripts on the very first startup of the database only. It can also set up an initial database user with a known password, again on the first startup only. Details are in the linked Docker Hub page.
In a Kubernetes context I’d use just the environment variables, and specify them in my pod spec.
containers:
  - name: mysql
    image: mysql:5.7 # not a custom image
    env:
      - name: MYSQL_USER
        value: websiteu5er
      - name: MYSQL_PASSWORD
        value: hello123
If you did need more involved setup, I’d create a ConfigMap that contained SQL scripts, and then mount that into the container in /docker-entrypoint-initdb.d.
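For example, a minimal sketch of that approach (the names and the SQL are illustrative, not from the original answer):
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-initdb
data:
  init.sql: |
    GRANT ALL PRIVILEGES ON *.* TO 'websiteu5er'@'%' IDENTIFIED BY 'hello123';
Then in the pod spec, mount it where the mysql entrypoint looks for init scripts:
  volumeMounts:
    - name: initdb
      mountPath: /docker-entrypoint-initdb.d
  volumes:
    - name: initdb
      configMap:
        name: mysql-initdb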
There are two things going on in your Dockerfile. One is that, when you have multiple FROM lines, you're actually executing a multi-stage build; the image you get out at the end is only the WordPress image, and the MySQL parts before it are skipped. The second is that you can't create an image FROM mysql that contains any database-level configuration or content: the MySQL data directory is declared as a VOLUME, so anything your GRANT statement writes during the build is discarded, and the first-stage image ends up with the environment variables set but without your user's privileges.
I’d just delete everything before the last FROM line and not try to build a derived MySQL image; use the /docker-entrypoint-initdb.d mechanism at startup time instead.
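In other words, the Dockerfile reduced to just the WordPress stage (taken from the question's own lines):
FROM wordpress:5.2.4-php7.3-apache
# Copy wp-config file over
COPY configs/wp-config.php .
RUN chown -R www-data:www-data *
COPY ./src/wp-content/themes/bam /var/www/html/wp-content/themes/bam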
I see you are trying to start the mysql database and then grant privileges to your user, but you are doing it wrong.
After running:
RUN /etc/init.d/mysql start \
&& mysql -u root --password='hello123' -e "GRANT ALL PRIVILEGES ON *.* TO 'websiteu5er'@'%' IDENTIFIED BY 'hello123';"
mysql starts, but the query after && runs only if the start command exits successfully, and here it never does, so your user never gets its privileges.
Look here for an explanation of how && works in the shell.
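A minimal illustration of that short-circuit behavior (not from the original answer):
true && echo "this runs because the left command succeeded"
false && echo "this is skipped because the left command failed"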
What you want to do is run this command after mysql starts. You can do it in several ways, but probably the best in your case is a PostStart hook in Kubernetes:
spec:
  containers:
    - name: test
      image: someimage
      lifecycle:
        postStart:
          exec:
            command: ["/bin/sh", "-c", "mysql -u root --password='hello123' -e \"GRANT ALL PRIVILEGES ON *.* TO 'websiteu5er'@'%' IDENTIFIED BY 'hello123';\""]
You may also want to add a sleep of a few seconds before you run the query, to make sure the server has actually started before you connect to it.
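For example (the 10-second delay is an arbitrary guess, not from the original answer):
command: ["/bin/sh", "-c", "sleep 10; mysql -u root --password='hello123' -e \"GRANT ALL PRIVILEGES ON *.* TO 'websiteu5er'@'%' IDENTIFIED BY 'hello123';\""]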
Also take a look at the Kubernetes documentation and read more about lifecycle hooks.
Let me know if it was helpful.

Running 'docker-compose up' throws permission denied when trying the official Docker sample

I am using Docker 1.13 Community Edition on a CentOS 7 x64 machine. When I was following a Docker Compose sample from the official Docker tutorial, all things were OK until I added these lines to the docker-compose.yml file:
volumes:
- .:/code
After adding it, I faced the following error:
can't open file 'app.py': [Errno 13] Permission denied. It seems that the problem is due to an SELinux restriction. Following this post, I ran the following command:
su -c "setenforce 0"
to solve the problem temporarily, but running this command:
chcon -Rt svirt_sandbox_file_t /path/to/volume
didn't help.
Finally I found the correct rule to add to SELinux:
# ausearch -c 'python' --raw | audit2allow -M my-python
# semodule -i my-python.pp
I found it when I opened the SELinux Alert Browser and clicked the 'Details' button on the row related to this error. The more detailed information from SELinux:
SELinux is preventing /usr/local/bin/python3.4 from read access on the file app.py.

***** Plugin catchall (100. confidence) suggests **************************

If you believe that python3.4 should be allowed read access on the app.py file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do allow this access for now by executing:
ausearch -c 'python' --raw | audit2allow -M my-python
semodule -i my-python.pp
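Worth noting (not part of the original answer): Docker also supports the :z and :Z volume options, which relabel bind-mounted content so SELinux allows container access without loosening policy. In docker-compose.yml that would look like:
volumes:
  - .:/code:z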

check_disk not generating alerts: nagios

I am new to Nagios.
I am trying to configure the check_disk service for one host, but I am not getting the expected results.
I should get emails when disk usage goes beyond 80%.
There is already a service defined for this task with multiple hosts, as below:
define service{
        use                     local-service   ; Name of service template to use
        host_name               localhost, host1, host2, host3, host4, host5, host6
        service_description     Root Partition
        check_command           check_local_disk!20%!10%!/
        contact_groups          unix-admins,db-admins
}
The issue:
I then tried to test a single host, host2. The current usage on host2 is as follows:
# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rootvg-rootvol 94G 45G 45G 50% /
So to get instant emails, I wrote another service as below, with warning set to <60% free and critical set to <40% free.
define service{
        use                     local-service
        host_name               host2
        service_description     Root Partition again
        check_command           check_local_disk!60%!40%!/
        contact_groups          dev-admins
}
But I still do not receive any emails for it.
Where is it going wrong?
The "check_local_disk" command is defined as below:
define command{
        command_name    check_local_disk
        command_line    $USER1$/check_disk -w $ARG1$ -c $ARG2$ -p $ARG3$
}
Your command definition is currently set up to check only your Nagios server's disk, not the remote hosts (such as host2). You need to define a new command that executes check_disk on the remote host via NRPE (Nagios Remote Plugin Executor).
On Nagios server, define the following:
define command {
        command_name    check_remote_disk
        command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -c check_disk -a $ARG1$ $ARG2$ $ARG3$
        register        1
}
define service{
        use                     generic-service
        host_name               host1, host2, host3, host4, host5, host6
        service_description     Root Partition
        check_command           check_remote_disk!20%!10%!/
        contact_groups          unix-admins,db-admins
}
Restart the Nagios service.
On the remote host:
Ensure you have NRPE plugin installed.
Instructions for Ubuntu: http://tecadmin.net/install-nrpe-on-ubuntu/
Instructions for CentOS / RHEL: http://sharadchhetri.com/2013/03/02/how-to-install-and-configure-nagios-nrpe-in-centos-and-red-hat/
Ensure there is a command defined for check_disk on the remote host. This is usually included in nrpe.cfg, but commented-out. You'd have to un-comment the line.
Ensure you have the check_disk plugin installed on the remote host. Mine is located at: /usr/lib64/nagios/plugins/check_disk
Ensure that allowed_hosts field of nrpe.cfg includes the IP address / hostname of your Nagios server.
Ensure that the dont_blame_nrpe field of nrpe.cfg is set to 1 to allow command-line arguments to NRPE commands: dont_blame_nrpe=1 (a minimal nrpe.cfg sketch follows this list).
If you made any changes, restart the nrpe service.
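A minimal sketch of the relevant nrpe.cfg lines on the remote host (the Nagios server IP and plugin path are examples; adjust to your setup):
allowed_hosts=127.0.0.1,192.168.1.10
dont_blame_nrpe=1
command[check_disk]=/usr/lib64/nagios/plugins/check_disk -w $ARG1$ -c $ARG2$ -p $ARG3$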
