I'm trying to use rsyslog's imfile module to send the logs contained in Jenkins log files to a Graylog server. I added the root user to the jenkins group, but I still get permission errors when rsyslog tries to read the files.
Here is the rsyslog configuration:
module(load="imfile")
ruleset(name="infiles") {
    action(type="omfwd"
           target="graylog.server"
           protocol="tcp" port="1514")
}
input(type="imfile" tag="jenkinsJobs"
      file="/var/lib/jenkins/jobs/*/builds/*/log"
      ruleset="infiles")
And I get the following error:
imfile: poll_tree cannot stat file '/var/lib/jenkins/jobs/test/builds/legacyIds' - ignored: Permission denied [v8.1901.0]
I also tried letting the jenkins user run the script, but it can't send the logs back to rsyslog, since it doesn't have the permissions.
Check the SELinux context using ls -lZ on the target files. You can disable SELinux if it is not required.
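Rather than disabling SELinux entirely, you can relabel the Jenkins log directory so rsyslog is allowed to read it. A rough sketch (the paths match the question; the target type var_log_t is an assumption and may need adjusting for your policy):

```shell
# Inspect the current context of the files rsyslog fails to read
ls -lZ /var/lib/jenkins/jobs/*/builds/*/log

# Persistently assign a type rsyslog may read, then relabel
semanage fcontext -a -t var_log_t "/var/lib/jenkins/jobs(/.*)?"
restorecon -Rv /var/lib/jenkins/jobs
```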
I need access to /home/vscode/.ssh/ so the tools I am using can use my SSH key; however, I can't seem to change the permissions.
I build the container like so:
# other steps
COPY id_rsa /home/vscode/.ssh/id_rsa
RUN chmod 600 /home/vscode/.ssh/id_rsa \
&& touch /home/vscode/.ssh/known_hosts \
&& ssh-keyscan bitbucket.org >> /home/vscode/.ssh/known_hosts
# other steps
This grabs my id_rsa from my local directory and adds it into the docker container so I can keep using my existing SSH key.
I then try to use a tool like Terraform to execute a command that clones some code from a repository that is set up with my SSH key.
$ terraform plan
ERRO[0001] 1 error occurred:
* error downloading 'ssh://git@bitbucket.org/example/example.git?ref=master': /usr/bin/git exited with 128: Cloning into '/workspaces/example'...
Failed to add the RSA host key for IP address 'xxx.xxx.xxx.xxx' to the list of known hosts (/home/vscode/.ssh/known_hosts).
Load key "/home/vscode/.ssh/id_rsa": Permission denied
git@bitbucket.org: Permission denied (publickey).
fatal: Could not read from remote repository.
How can I correctly set up access in my Dockerfile? I can't even run ssh-keygen!
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/vscode/.ssh/id_rsa):
/home/vscode/.ssh/id_rsa already exists.
Overwrite (y/n)? y
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Saving key "/home/vscode/.ssh/id_rsa" failed: Permission denied
The first error is occurring because the command is executed from a directory (/workspaces) that is not writable by non-admin users. For some reason, VS Code creates this directory automatically and uses it as the start-up directory when you open a new terminal session. The pwd command can be used to find out which directory the session is currently in.
If the permissions on the user's home directory are correctly set up, the command should succeed by first changing the directory:
$ cd $HOME
$ terraform plan
In the case of the second command (ssh-keygen), the path is already pointing to the user's home directory, so there might be another issue. The command might try to save temporary files to the current directory, or the permissions in the home directory could be wrong.
As @rasmus91 pointed out, it is usually not a good idea to use root privileges, even if only for development. These kinds of practices can creep into production just "because it works".
EDIT: To take a guess, the Dockerfile is probably built with root privileges, and the user is switched to vscode only after the Dockerfile has been processed. Hence the file /home/vscode/.ssh/id_rsa may end up owned by root.
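If that is the case, one possible fix is to hand ownership of the key to the non-root user at build time. A sketch, assuming the base image already defines a vscode user:

```dockerfile
# Copy the key directly with the right owner
COPY --chown=vscode:vscode id_rsa /home/vscode/.ssh/id_rsa
RUN chmod 700 /home/vscode/.ssh \
 && chmod 600 /home/vscode/.ssh/id_rsa \
 && chown -R vscode:vscode /home/vscode/.ssh
```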
After several attempts at overriding user permissions in the Dockerfile I came to find out the user is set from the devcontainer.json file.
Simply go to the bottom, where it sets "remoteUser": "vscode", and comment it out.
This will now set the user as root. For local development this is perfect and should cause no issues. I can now utilise the SSH key as well as anything else previously locked off.
// .devcontainer/devcontainer.json
{
"name": "Ubuntu",
"build": {
"dockerfile": "Dockerfile",
// Update 'VARIANT' to pick an Ubuntu version: focal, bionic
"args": { "VARIANT": "focal" }
},
"settings": {},
"extensions": [],
// "remoteUser": "vscode"
}
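If you would rather keep the non-root user, a possible alternative (an untested sketch; it assumes sudo is available to the vscode user inside the image) is to leave "remoteUser" in place and fix the ownership after the container is created:

```json
{
  "remoteUser": "vscode",
  "postCreateCommand": "sudo chown -R vscode:vscode /home/vscode/.ssh && chmod 700 /home/vscode/.ssh"
}
```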
I have a Docker container. When SELinux is disabled, it works well;
but when SELinux is enabled (i.e. the Docker daemon is started with --selinux-enabled), the container cannot start up.
So the failure should be caused by an SELinux denial, but this is not shown in the SELinux audit log: when I use "ausearch -m XXX | audit2allow ..." to generate the policy, the output does not include any denial info.
I want to know how to get the SELinux denial info that occurred inside the container, so that I can use it to generate my policy file.
PS: I checked the label info of the accessed files, and the labels seem right, but access (ls) is still denied:
# ls -dlZ /usr/bin
dr-xr-xr-x. root root system_u:object_r:container_file_t:s0:c380,c857 /usr/bin
# ls /usr/bin
ls: cannot open directory /usr/bin: Permission denied
More: the selected answer answered the question, but now the problem is that the audit log shows the access is to read "unlabeled_t", while "ls -dZ /usr/bin" shows the file is a "container_file_t". I put this in a separate question:
Why does SELinux deny access to container-internal files and claim they are "unlabeled_t"?
The policy likely contains dontaudit rules. Dontaudit rules do not allow access, but they suppress logging for the specific access.
You can disable dontaudit rules with semanage:
semanage dontaudit off
After solving the issue, you probably want to turn the dontaudit rules back on to reduce log noise.
It is also possible to search for possible dontaudit rules with sesearch:
sesearch --dontaudit -t container_file_t
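Putting the steps above together, a possible workflow looks like this (the commands require root, and the module name mycontainer is just a placeholder):

```shell
# Temporarily disable dontaudit rules so the suppressed denials are logged
semanage dontaudit off

# Reproduce the failure in the container, then turn fresh AVC denials into a module
ausearch -m AVC,USER_AVC -ts recent | audit2allow -M mycontainer

# Review mycontainer.te before installing the compiled policy
semodule -i mycontainer.pp

# Re-enable dontaudit rules to reduce log noise
semanage dontaudit on
```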
I am trying to transfer all .sh files from one Unix server to another using Jenkins.
The files are getting transferred, but they end up in my Unix home directory; I need to transfer them to the sudo user's directory.
for example:
The source server name is "a" and the target server name is "u".
We are using sell4 as the sudo user on the target server.
The files should end up in the home directory of the sell4 user.
I have used the below command
Building in workspace /var/lib/jenkins/workspace/EDB-ExtractFilefromSVN
SSH: Connecting from host [a]
SSH: Connecting with configuration [u] ...
SSH: EXEC: STDOUT/STDERR from command [sudo scp *.sh sell4@u:/usr/app/TomcatDomain/ScoringTools_ACCDomain04/] ...
sudo: scp: command not found
SSH: EXEC: completed after 201 ms
SSH: Disconnecting configuration [u] ...
ERROR: Exception when publishing, exception message [Exec exit status not zero. Status [1]]
Gitcolony notification failed - java.lang.IllegalArgumentException: Invalid url:
Finished: UNSTABLE
Can you please suggest where I am going wrong here?
Ah, so it's some kind of plugin. It seems you want to use local sudo to log in as a remote server user. It won't work this way: you can't open the door to the bathroom and expect to walk into a garden.
sudo changes your local user to root; it does nothing on the remote server.
Do not use sudo with scp command but rather follow these answers:
https://unix.stackexchange.com/questions/66021/changing-user-while-scp
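In practice that usually means a two-step copy: scp to a directory the SSH user can write to, then use sudo on the remote side to move the files into place. A sketch using the names from the question (it assumes sell4 can log in to u directly and has sudo rights there):

```shell
# 1) Copy to the sell4 user's home directory on the target server
scp *.sh sell4@u:~/

# 2) Move the files into the final location with sudo on the remote host
ssh -t sell4@u 'sudo mv ~/*.sh /usr/app/TomcatDomain/ScoringTools_ACCDomain04/'
```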
Upon executing a deploy to a server for a specific application, the process is interrupted at this stage:
DEBUG [88db4789] Command: ( export RBENV_ROOT="$HOME/.rbenv" RBENV_VERSION="2.3.4" ; /usr/bin/env mkdir -p /var/www/v4/shared /var/www/v4/releases )
DEBUG [88db4789] mkdir:
DEBUG [88db4789] cannot create directory '/var/www'
DEBUG [88db4789] : Permission denied
Note: this occurs only for this particular application. Another application that deploys to the same server gets past this stage.
I have attempted to change ownership as suggested here, but that fails
chown: cannot access '/var/www/': No such file or directory
so I am led to believe a configuration issue is the culprit. Aside from the environment data
server 'xx.xxx.xxx.xxx', user: 'deploy', roles: %w{db web app}
where have I missed something?
Your server instance does not have the folder /var/www, so you can create it manually: SSH to that server as the user deploy and try to make the folder yourself.
I think it will fail again, because the deploy user does not have write access to the /var folder. Try to change the ownership by following the guide you already have.
While yeuem1vannam's answer is valid, this use case actually had a different problem, in the deploy.rb file: the path specified there had an error in the user name, hence the permission error when creating the folder upon deploy.
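For completeness, the manual server-side fix described above can be sketched as follows (run as a user with sudo rights; the deploy user and /var/www/v4 path come from the question):

```shell
# Create the deploy root and hand it to the deploy user
sudo mkdir -p /var/www/v4
sudo chown -R deploy:deploy /var/www/v4
```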
I would like to achieve an architecture like the one shown on the left side of this picture (because I want to use NXLog): http://docs.graylog.org/en/2.1/_images/sidecar_overview.png
I have already installed Graylog2 on my RedHat server and I am currently working on the configuration of collector-sidecar. As I am working as non-root, I had to change several directories in the configuration files of collector-sidecar and NXLog. Now to the problem: every time I try to start collector-sidecar, I get INFO/ERRO messages:
[gunge#bsul0959 bin]$ ./graylog-collector-sidecar -c /opt/ansible/sidecar/etc/graylog/collector-sidecar/collector_sidecar.yml
INFO[0000] Using collector-id: 13a3d80f-cb69-4391-8520-7a760b9b964e
INFO[0000] Fetching configurations tagged by: [linux apache syslog]
ERRO[0000] stat /var/run/graylog/collector-sidecar: no such file or directory
INFO[0000] Trying to create directory for: /var/run/graylog/collector-sidecar/nxlog.run
ERRO[0000] Not able to create directory path: /var/run/graylog/collector-sidecar
INFO[0000] Starting collector supervisor
ERRO[0010] [UpdateRegistration] Sending collector status failed. Disabling `send_status` as fallback! PUT http://127.0.0.1:12900/plugins/org.graylog.plugins.collector/collectors/13a3d80f-cb69-4391-8520-7a760b9b964e: 400 Unable to map property tags.
Known properties include: operating_system
After this start procedure a collector appears in my Graylog web interface, but if I abort the start procedure, the collector disappears again.
During the start procedure it tries to create a path in /var/run/graylog/collector-sidecar, but as I am not root, it can't. As a consequence, it can't create nxlog.run in that directory. I already tried to change the path to a place where I don't need root permissions, but I think there is no configuration file where I can do this. So I looked into the binary of collector-sidecar and found this:
func (nxc *NxConfig) ValidatePreconditions() bool {
    if runtime.GOOS == "linux" {
        if !common.IsDir("/var/run/graylog/collector-sidecar") {
            err := common.CreatePathToFile("/var/run/graylog/collector-sidecar/nxlog.run")
            if err != nil {
                return false
            }
        }
    }
    return true
}
It seems that the path is hard-coded into the application and there is no way to configure another path.
Do you see a solution besides getting root permissions?
By default the Sidecar uses the root account. Creating a new user "collector", giving it access to the files it needs, and switching to that user will solve the issue.
Create the user and grant the ownership/permissions:
# useradd -r collector
# chown -R collector /etc/graylog
# chown -R collector /var/cache/graylog
# chown -R collector /var/log/graylog
# setfacl -m u:collector:r /var/log/*
Tell systemd about the new user:
# vim /etc/systemd/system/collector-sidecar.service
[Service]
User=collector
Group=collector
# systemctl daemon-reload
# systemctl restart collector-sidecar
From now on the backends (NXLog or Beats) will run as the user collector. Hope that it works for you!
Currently this is a fixed path, as you saw in the code. To run it as a normal user you also have to make some more changes in the default NXLog configuration file. At the moment I would recommend writing your own NXLog configuration and using it without the Sidecar in between. But you can create a GitHub issue so that we can add the needed option.
Cheers,
Marius