Graylog Collector Sidecar as non-root

I would like to achieve an architecture like the one shown on the left side of this picture (because I want to use NXLog): http://docs.graylog.org/en/2.1/_images/sidecar_overview.png.
I have already installed Graylog2 on my Red Hat server and I'm currently working on the configuration of collector-sidecar. As I'm working as non-root, I had to change several directories in the configuration files of collector-sidecar and NXLog. Now to the problem: every time I try to start collector-sidecar, I get INFO/ERROR messages:
[gunge@bsul0959 bin]$ ./graylog-collector-sidecar -c /opt/ansible/sidecar/etc/graylog/collector-sidecar/collector_sidecar.yml
INFO[0000] Using collector-id: 13a3d80f-cb69-4391-8520-7a760b9b964e
INFO[0000] Fetching configurations tagged by: [linux apache syslog]
ERRO[0000] stat /var/run/graylog/collector-sidecar: no such file or directory
INFO[0000] Trying to create directory for: /var/run/graylog/collector-sidecar/nxlog.run
ERRO[0000] Not able to create directory path: /var/run/graylog/collector-sidecar
INFO[0000] Starting collector supervisor
ERRO[0010] [UpdateRegistration] Sending collector status failed. Disabling `send_status` as fallback! PUT http://127.0.0.1:12900/plugins/org.graylog.plugins.collector/collectors/13a3d80f-cb69-4391-8520-7a760b9b964e: 400 Unable to map property tags.
Known properties include: operating_system
After this start procedure, a collector appears in my Graylog web interface, but if I abort the start procedure, the collector disappears again.
During the start procedure, it tries to create a path under /var/run/graylog/collector-sidecar, but as I am not root, it can't. As a consequence, it can't create nxlog.run in that directory. I already tried to change the path to a place where I don't need root permissions, but I think there is no configuration file where I can do this. So I looked into the source of collector-sidecar and found this:
func (nxc *NxConfig) ValidatePreconditions() bool {
    if runtime.GOOS == "linux" {
        if !common.IsDir("/var/run/graylog/collector-sidecar") {
            err := common.CreatePathToFile("/var/run/graylog/collector-sidecar/nxlog.run")
            if err != nil {
                return false
            }
        }
    }
    return true
}
It seems that the path is hard-coded into the application and there is no way to configure another path.
Do you see a solution besides getting root permissions?

By default the Sidecar uses the root account. Creating a new user such as "collector", giving it ownership of the files it needs, and switching the service to that user will solve the issue.
Create the user and grant the ownership/permissions:
# useradd -r collector
# chown -R collector /etc/graylog
# chown -R collector /var/cache/graylog
# chown -R collector /var/log/graylog
# setfacl -m u:collector:r /var/log/*
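Depending on the setup you may also have to pre-create the runtime directory the Sidecar complains about above, since /var/run is typically a root-owned tmpfs. A hedged sketch (the tmpfiles.d entry only exists so the directory survives reboots; adjust user and paths to your install):
# mkdir -p /var/run/graylog/collector-sidecar
# chown -R collector /var/run/graylog
# echo 'd /run/graylog/collector-sidecar 0755 collector collector -' > /etc/tmpfiles.d/collector-sidecar.conf
# systemd-tmpfiles --create /etc/tmpfiles.d/collector-sidecar.conf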
Tell systemd about the new user:
# vim /etc/systemd/system/collector-sidecar.service
[Service]
User=collector
Group=collector
# systemctl daemon-reload
# systemctl restart collector-sidecar
From now on the backends (NXLog or Beats) will run as the collector user. Hope that it works for you!
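A quick, hedged way to confirm the switch is to look at which user the backend process actually runs as, e.g.:
$ ps -o user,pid,cmd -C nxlog
(the process name nxlog is an assumption; substitute whichever backend you configured).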

Currently this is a fixed path, as you saw in the code. To run it as a normal user you also have to make some more changes to the default NXLog configuration file. At the moment I would recommend writing your own NXLog configuration and using it without the Sidecar in between; a rough sketch of such a setup is below. But you can create a GH issue so that we can add the needed option.
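All paths in this sketch are assumptions (it reuses the prefix from your command above; any user-writable location works), and the actual Input/Output/Route blocks are left to your setup:
$ cat > /opt/ansible/sidecar/etc/nxlog/nxlog.conf <<'EOF'
define ROOT /opt/ansible/sidecar
# Point Moduledir at wherever your NXLog package installed its modules
Moduledir /usr/lib/nxlog/modules
CacheDir  %ROOT%/var/spool/nxlog
Pidfile   %ROOT%/var/run/nxlog/nxlog.pid
SpoolDir  %ROOT%/var/spool/nxlog
LogFile   %ROOT%/var/log/nxlog/nxlog.log

# <Input>/<Output>/<Route> blocks towards your Graylog input go here
EOF
$ mkdir -p /opt/ansible/sidecar/var/{spool,log}/nxlog /opt/ansible/sidecar/var/run/nxlog
$ nxlog -f -c /opt/ansible/sidecar/etc/nxlog/nxlog.conf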
Cheers,
Marius

Related

VSCode docker dev container can't access ~/.ssh

I need access to /home/vscode/.ssh/ so the tools I am using can use my SSH key; however, I can't seem to change the permissions.
I build the container like so:
# other steps
COPY id_rsa /home/vscode/.ssh/id_rsa
RUN chmod 600 /home/vscode/.ssh/id_rsa \
&& touch /home/vscode/.ssh/known_hosts \
&& ssh-keyscan bitbucket.org >> /home/vscode/.ssh/known_hosts
# other steps
This grabs my id_rsa from my local directory and adds it into the Docker container so I can keep using my existing SSH key.
I then try to use a tool like Terraform to execute a command that clones some code from a repository that is set up with my SSH key.
$ terraform plan
ERRO[0001] 1 error occurred:
* error downloading 'ssh://git@bitbucket.org/example/example.git?ref=master': /usr/bin/git exited with 128: Cloning into '/workspaces/example'...
Failed to add the RSA host key for IP address 'xxx.xxx.xxx.xxx' to the list of known hosts (/home/vscode/.ssh/known_hosts).
Load key "/home/vscode/.ssh/id_rsa": Permission denied
git@bitbucket.org: Permission denied (publickey).
fatal: Could not read from remote repository.
How can I correctly set up access in my Dockerfile? I can't even run ssh-keygen!
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/vscode/.ssh/id_rsa):
/home/vscode/.ssh/id_rsa already exists.
Overwrite (y/n)? y
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Saving key "/home/vscode/.ssh/id_rsa" failed: Permission denied
The first error is occurring because the command is executed from a directory (/workspaces) that is not writable by non-admin users. For some reason, VS Code creates this directory automatically and uses it as the start-up directory when you open a new terminal session. The pwd command can be used to find out which directory the session is currently in.
If the permissions on the user's home directory are correctly set up, the command should succeed by first changing the directory:
$ cd $HOME
$ terraform plan
In the case of the second command (ssh-keygen), the path is already pointing to the user's home directory, so there might be another issue. The command might try to save temporary files to the current directory, or the permissions in the home directory could be wrong.
As @rasmus91 pointed out, it is usually not a good idea to use root privileges, even if only for development. This kind of practice can creep into production just "because it works".
EDIT: To take a guess, the Dockerfile is probably built with root privileges and the user vscode is changed only after the Dockerfile has been processed. Hence the file /home/vscode/.ssh/id_rsa might end up having the admin user as the owner.
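If that's the case, one hedged fix is to give the vscode user ownership of its .ssh directory while the build still runs as root (e.g. in a RUN step placed after the COPY above):
chown -R vscode:vscode /home/vscode/.ssh \
  && chmod 700 /home/vscode/.ssh \
  && chmod 600 /home/vscode/.ssh/id_rsa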
After several attempts at overriding user permissions in the Dockerfile I came to find out the user is set from the devcontainer.json file.
Simply go to the bottom, where it sets "remoteUser": "vscode", and comment it out.
This will now set the user as root. For local development this is perfect and should cause no issues. I can now utilise the SSH key as well as anything else previously locked off.
// .devcontainer/devcontainer.json
{
  "name": "Ubuntu",
  "build": {
    "dockerfile": "Dockerfile",
    // Update 'VARIANT' to pick an Ubuntu version: focal, bionic
    "args": { "VARIANT": "focal" }
  },
  "settings": {},
  "extensions": [],
  // "remoteUser": "vscode"
}

Change default volume mount point for docker rootless?

I saw this post with different solutions for standard docker installation:
How to change the default location for "docker create volume" command?
At first glance I struggle to repeat the steps to change the default mount point for the rootless installation.
Should it be the same? What would be the procedure?
I just got it working. I had some issues because I had the service running while trying to change configurations. Key takeaways:
The config file is indeed stored in ~/.config/docker/. One must create a daemon.json file there in order to change preferences. We would like to change the data-root option (and possibly storage-driver, in case the drive does not have the capabilities the default driver needs); a sketch follows below.
To start and stop the rootless service one runs systemctl --user [start | stop] docker.
a. Running the system-wide service starts a parallel and separate instance of Docker, which is not rootless.
b. When stopping, make sure to stop docker.socket first.
Sources: the rootless mode documentation (see its Usage section) and the daemon configuration file documentation.
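A minimal, hedged sketch of the whole procedure (the data-root path is just an example, pick a directory your user can write to):
$ systemctl --user stop docker.socket docker.service
$ mkdir -p ~/.config/docker
$ cat > ~/.config/docker/daemon.json <<'EOF'
{
  "data-root": "/home/ubuntu/docker-data"
}
EOF
$ systemctl --user start docker.service
$ docker info --format '{{ .DockerRootDir }}'
The last command should print the new location if the daemon picked up the change.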
We ended up with an indirect solution. We identified the directory where the volumes are mounted by default and created a symbolic link which points to the place where we actually want to store the data. In our case it was enough. Something like this:
sudo ln -s /data /home/ubuntu/.local/share/docker/volumes

Can I specify a custom location for docker temporary files?

I'm trying to run docker in a partially locked-down environment, with /etc on a read-only mount point and a "/data" folder in a read/write mount point. I've added an /etc/docker/daemon.json file:
{
"data-root": "/data/docker"
}
but dockerd is failing on startup with this error:
failed to start daemon: Error saving key file: open /etc/docker/.tmp-key.json128868007: read-only file system
Can I stop dockerd from trying to write into /etc? Are there best practices for running docker on a host with read-only mounts?
EDIT: Turns out there was only one file being written: /etc/docker/key.json, which is talked about in detail here. The .tmp-key.json bit is likely part of some atomic file write code.
Looks like only the "key.json" file is written to /etc. After some digging, I found this PR which talks about making it configurable. As of Docker 19.03.6, the option is still available for use in the daemon.json file as "deprecated-key-path": "/path/to/file".
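For example, combined with the data-root setting from the question, the resulting daemon.json (however it gets provisioned onto the read-only /etc) might look like this; the key location under /data is an assumption:
$ cat /etc/docker/daemon.json
{
  "data-root": "/data/docker",
  "deprecated-key-path": "/data/docker/key.json"
}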

How to audit the selinux denial inside a docker container

I have a Docker container; when SELinux is disabled, it works well,
but when SELinux is enabled (i.e. the Docker daemon is started with --selinux-enabled), it cannot start up.
So the failure should be caused by an SELinux denial, but this is not shown in the SELinux audit log. When I use "ausearch -m XXX | audit2allow ..." to generate the policy, it does not include any denial info.
I want to know how to get the SELinux denial info that occurred inside the container, so that I can use it to generate my policy file.
PS: I checked the label info of the accessed files; they seem right, but access (ls) is denied:
# ls -dlZ /usr/bin
dr-xr-xr-x. root root system_u:object_r:container_file_t:s0:c380,c857 /usr/bin
# ls /usr/bin
ls: cannot open directory /usr/bin: Permission denied
More: the selected answer answered the question, but now the problem is that the audit log shows the denied access as a read on "unlabeled_t", while "ls -dZ /usr/bin" shows "container_file_t". I put this in a separate question:
Why does SELinux deny access to container-internal files and claim they are "unlabeled_t"?
The policy likely contains dontaudit rules. Dontaudit rules do not allow access, but suppress logging for the specific access.
You can disable dontaudit rules with semanage:
semanage dontaudit off
After solving the issue, you probably want to turn the dontaudit rules back on to reduce log noise.
It is also possible to search for possible dontaudit rules with sesearch:
sesearch --dontaudit -t container_file_t
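Putting it together, one possible workflow looks like this (the module name my-container is just an example):
# semanage dontaudit off
# docker start <your-container>
# ausearch -m AVC,USER_AVC -ts recent | audit2allow -M my-container
# semodule -i my-container.pp
# semanage dontaudit on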

Unable to start any container when Volumes are enabled Docker Toolbox

I am running Docker Toolbox v1.13.1a on Windows 7 Pro Service Pack 1 x64,
with VirtualBox version 5.1.14 r112924.
When I try to run any Docker image, e.g. the official postgres image from Docker Hub, with volumes disabled, it works fine!
But when I enable volumes it fails.
I tried all the official documentation.
The VM has the shared folder as required and also has full access to it:
shared folder screenshot
In the case of my postgresql example, it crashes with the following log:
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... LOG: could not link file "pg_xlog/xlogtemp.27" to "pg_xlog/000000010000000000000001": Operation not permitted
FATAL: could not open file "pg_xlog/000000010000000000000001": No such file or directory
child process exited with exit code 1
initdb: removing contents of data directory "/var/lib/postgresql/data"
I know it's a problem with folder permissions, but I'm kinda stuck!
A ton of thanks in advance.
I've been busy with this problem all day and my conclusion is that it's currently simply not possible to run postgresql inside a Docker container while keeping your data persistent in a separate volume.
I even tried running the container without linking it to a volume, copying the data that was originally in /var/lib/postgresql into a folder on my host OS (Windows 10 Home), and then copying that into the folder that then got linked to the container itself.
Alas, I got the following error:
FATAL: data directory "/var/lib/postgresql/data/pgadmin" has wrong ownership
HINT: The server must be started by the user that owns the data directory.
In conclusion: something is going wrong with the ownership and which user owns the data directory, and to be able to fix it you'd need a Unix command line on Windows that is able to run Docker (something currently not possible with Bash on Ubuntu on Windows, which runs Ubuntu 16.04 binaries).
Maybe, in the future, you'll be able to run the needed commands (found here, under the Arbitrary --user notes), but these are *nix commands and PowerShell (started by Kitematic) can't run them. Bash on Ubuntu on Windows could run them, but that shell has no connection to the Docker daemon/service on Windows...
TL;DR: Lost a day of work: It is currently impossible on Windows.
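For reference, the commands under those Arbitrary --user notes are roughly of this shape (a hedged sketch with example paths and password; exactly the kind of *nix-only invocation meant above):
$ docker run --rm \
    --user "$(id -u):$(id -g)" \
    -v /etc/passwd:/etc/passwd:ro \
    -v "$PWD/pgdata:/var/lib/postgresql/data" \
    -e POSTGRES_PASSWORD=mysecret \
    postgres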
I have been trying to fix this issue as well.
At first I thought it was a symlink problem (because the first error fails on "could not link ... Operation not permitted").
To be sure symlinking is permitted you have to:
share a folder in VirtualBox
run VirtualBox as administrator (if your account is in the administrator group): right-click virtualbox.exe and select Run as Administrator
if your account is not an administrator, add the symlink privilege with secpol.msc > Local Policies > User Rights Assignments and add your user to "Create symbolic links"
enable symlinks for your shared folder in VirtualBox:
VBoxManage setextradata VM_NAME VBoxInternal2/SharedFoldersEnableSymlinksCreate/SHARED_FOLDER_NAME 1
Alternatively you can also use the C:\Users\username folder, which is shared and symlink-enabled by the default Docker Toolbox installation.
Now I can create symlinks in the shared folder from the Docker container... but I still have the same error "could not link ... Operation not permitted".
So the reason must be somewhere else... in the file permissions, as you said, but I do not see why.
