I'm running Rails 3.1 on Ubuntu 10.04 with Nginx and Passenger.
In my logs I see a lot of entries like the following:
cache error: Permission denied - /var/www/redmeetsblue/releases/20120221032538/tmp/cache/B27
I solved the problem by changing the owner of the cache directory (following advice I found on Google), but I'm unsure of the security implications. Who is nobody, and is this secure?
/var/www/redmeetsblue/current/tmp/cache
total 16K
drwxr-xr-x 4 www-data root 4.0K 2012-02-20 22:27 .
drwxr-xr-x 3 root root 4.0K 2012-02-20 22:26 ..
drwxr-xr-x 54 www-data root 4.0K 2012-02-20 22:27 assets
drwxr-xr-x 3 www-data root 4.0K 2012-02-20 22:27 sass
root@y:/var/www/redmeetsblue/current/tmp# cd b27
-bash: cd: b27: No such file or directory
root@y:/var/www/redmeetsblue/current/tmp# cd B27
-bash: cd: B27: No such file or directory
root@y:/var/www/redmeetsblue/current/tmp# chown -R nobody cache
root@y:/var/www/redmeetsblue/current/tmp# ls -alh /var/www/redmeetsblue/current/tmp/cache
total 16K
drwxr-xr-x 4 nobody root 4.0K 2012-02-20 22:27 .
drwxr-xr-x 3 root root 4.0K 2012-02-20 22:26 ..
drwxr-xr-x 54 nobody root 4.0K 2012-02-20 22:27 assets
drwxr-xr-x 3 nobody root 4.0K 2012-02-20 22:27 sass
After changing the owner, my cache is working, but I'm not sure if it's safe. See the working cache:
cache: [GET /assets/grid.png] stale, valid, store
cache: [GET /dashboards] miss
cache: [GET /assets/grid.png] stale, valid, store
The nobody user is commonly used as the owner of Unix daemons so that they have enough permissions to do their job, but not so many that they can do potentially destructive things. Running a daemon under such an unprivileged account means it cannot, for example, write to the syslogs. Running it under a privileged account such as root gives the process permission to do that, but also to do everything else, so if your daemon's process is compromised, an attacker has far more freedom to own your server. A server may also start as root (necessary, for example, to bind to TCP port 80) and then drop its privileges to the nobody user.
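In your case, rather than handing tmp/cache to nobody, a tighter option is to give it to whichever user Passenger actually spawns your application as (often the owner of config.ru, e.g. www-data; that exact user is an assumption here, so check first). A minimal sketch:
# Check which user the Passenger/Rails worker processes actually run as
ps aux | grep -i -e passenger -e rails | grep -v grep
# Hand only tmp/ to that user (www-data is an assumed placeholder; use what ps reports)
chown -R www-data:www-data /var/www/redmeetsblue/current/tmp
That keeps the rest of the release owned by root or your deploy user, while giving the application write access only where it needs it.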
Related
I'm running Tomcat (tomcat:9-jre11) on Docker. When launching it, it logs the following and then crashes:
Cannot find /usr/local/tomcat/bin/setclasspath.sh
This file is needed to run this program
My first issue was actually getting inside the container, because I can't use docker exec on a crashed container, but I managed it by setting the entrypoint to /bin/bash in Rancher.
Now, setclasspath.sh is very much present in /usr/local/tomcat/bin/ inside the container. It already had read and execute permissions; I set it to 777 just to be sure and still have the same issue. The same goes for changing the owner (Tomcat seems to be running as root, even when I launch catalina.sh manually as another user after changing the file owner). I took the heavy-handed approach and set the whole folder to 777, and still the same:
drwxrwxrwx 1 root root 4096 Jun 29 14:53 .
drwxr-xr-x 1 root root 4096 Jun 29 14:31 ..
-rwxrwxrwx 1 root root 34699 Jun 2 21:08 bootstrap.jar
-rwxrwxrwx 1 root root 25523 Jun 29 14:00 catalina.sh
-rwxrwxrwx 1 root root 1664 Jun 2 21:08 catalina-tasks.xml
-rwxrwxrwx 1 root root 2007 Jun 28 03:01 ciphers.sh
-rwxrwxrwx 1 root root 25410 Jun 2 21:08 commons-daemon.jar
-rwxrwxrwx 1 root root 211777 Jun 2 21:08 commons-daemon-native.tar.gz
-rwxrwxrwx 1 root root 1932 Jun 28 03:01 configtest.sh
-rwxrwxrwx 1 root root 9110 Jun 28 03:01 daemon.sh
-rwxrwxrwx 1 root root 1975 Jun 28 03:01 digest.sh
-rwxrwxrwx 1 root root 3392 Jun 28 03:01 makebase.sh
-rwxrwxrwx 1 root root 3718 Jun 28 03:01 setclasspath.sh
-rwxrwxrwx 1 root root 1912 Jun 28 03:01 shutdown.sh
-rwxrwxrwx 1 root root 1914 Jun 28 03:01 startup.sh
-rwxrwxrwx 1 root root 46898 Jun 2 21:08 tomcat-juli.jar
-rwxrwxrwx 1 root root 5550 Jun 28 03:01 tool-wrapper.sh
-rwxrwxrwx 1 root root 1918 Jun 28 03:01 version.sh
I've looked at the catalina.sh script; the part that causes the issue is the following:
if [ -r "$CATALINA_HOME"/bin/setclasspath.sh ]; then
  . "$CATALINA_HOME"/bin/setclasspath.sh
else
  echo "Cannot find $CATALINA_HOME/bin/setclasspath.sh"
  echo "This file is needed to run this program"
fi
The -r test inside that condition is what fails. As I understand it, -r checks that the file exists and is readable, and this file meets both conditions. I added elif branches with -a and -f tests, and those do return true, so the file clearly exists; only the readability check fails, whether or not the permissions are set to 777. I also added a whoami inside the script, and it prints root, so it's not an ownership issue.
The startup.sh script has a similar issue with a -x test, where it cannot find catalina.sh ...
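For illustration, the debugging described above can be reproduced standalone inside the container, roughly like this (a sketch, not the actual catalina.sh patch):
whoami
F=/usr/local/tomcat/bin/setclasspath.sh
[ -f "$F" ] && echo "-f: regular file exists"
[ -r "$F" ] && echo "-r: readable" || echo "-r: readability test still fails"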
We just stumbled over this very problem today.
We have an Ubuntu 18.04 server that was upgraded from 16.04. The versions of the docker packages read:
docker-ce/now 5:19.03.1~3-0~ubuntu-xenial amd64
docker-ce-cli/now 5:19.03.1~3-0~ubuntu-xenial amd64
docker-compose/bionic,bionic,now 1.17.1-2 all
Kernel is: 4.15.0-154-generic x86_64
On this machine, running a current version of tomcat:9-jre11 [0] results in the same problem as described in your question.
To narrow it down, we just started a bash like this:
docker run -it --rm --entrypoint=/bin/bash tomcat:9-jre11
Now here comes the strange behavior you observed, which is completely unrelated to tomcat:
root@f338debf92f6:/usr/local/tomcat# [[ -r /bin/bash ]]
root@f338debf92f6:/usr/local/tomcat# echo $?
1
On any other machine we tested, the result is as expected, e.g.:
root@0083a80a9ec2:/usr/local/tomcat# [[ -r /bin/bash ]]
root@0083a80a9ec2:/usr/local/tomcat# echo $?
0
Unfortunately I was not able to reproduce the behavior using a freshly installed Ubuntu 18.04. I even downgraded the kernel version and installed docker from the xenial repo.
While trying to google a solution, I found:
https://github.com/alpinelinux/docker-alpine/issues/156#issuecomment-912645029
So I tried strace, and here the problem is visible:
On our Ubuntu 18.04:
...
read(255, "#!/bin/bash\n[[ -r /bin/bash ]]\n", 31) = 31
faccessat2(AT_FDCWD, "/bin/bash", R_OK, AT_EACCESS) = -1 EPERM (Operation not permitted)
read(255, "", 31) = 0
...
And on any other machine I tested:
...
read(255, "#!/bin/bash\n[[ -r /bin/bash ]]\n", 31) = 31
faccessat2(AT_FDCWD, "/bin/bash", R_OK, AT_EACCESS) = -1 ENOSYS (Function not implemented)
faccessat(AT_FDCWD, "/bin/bash", R_OK) = 0
read(255, "", 31)
...
Researching the faccessat2 system call shows that it should not return EPERM [1]. I could not quite pinpoint where this behavior is introduced - somewhere between glibc, Docker's seccomp profile and libseccomp - but it all boils down to the container runtime on the host being too old to know this new syscall.
Here are the solutions we came up with:
1. Upgrade your machine - this might not be feasible, though :)
2. Use a Tomcat image based on an older version of Debian/Ubuntu. For us, tomcat:9.0.64-jre11-openjdk-slim-bullseye worked fine.
3. Run the container using the --privileged switch. This circumvents the syscall permission problem, but is generally a bad idea.
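As a side note (an assumption on my part, not something we tested extensively): if --privileged feels too broad, disabling just the seccomp profile should also get past the EPERM from faccessat2, e.g.:
docker run --rm --security-opt seccomp=unconfined tomcat:9-jre11
This still avoids the extra capabilities that --privileged would grant, but it removes syscall filtering entirely, so it should only be a stop-gap until the host is upgraded.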
References
[0] tomcat:9-jre11, digest sha256:f0c2eb420166a7d609c0031699e0778e11256f280cc2bfb5bfd61cde7ae45c61
[1] https://man7.org/linux/man-pages/man2/faccessat.2.html
The problem is described here:
https://github.com/docker-library/tomcat/issues/269
The base image (Eclipse Temurin) of the Tomcat container was updated to an Ubuntu 22.04 LTS (Jammy) based Temurin image.
If you use an old Docker version and an old libseccomp on your host, you will run into the problem with the "-r" flag in bash.
Our solution was to use the tomcat:9-jdk11-temurin-focal image.
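For example, pinning that image in an existing setup might look like this (the port mapping is just an assumption for a default Tomcat deployment):
docker run -d --name tomcat -p 8080:8080 tomcat:9-jdk11-temurin-focal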
Updating Docker to the latest version helped me launch Tomcat.
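For reference, on Ubuntu/Debian hosts where Docker was installed from Docker's own apt repository (an assumption; adjust to however Docker was installed on your machine), the upgrade is roughly:
sudo apt-get update
sudo apt-get install --only-upgrade docker-ce docker-ce-cli containerd.io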
I had the same problem running a tomcat:9-jdk8 image on a Debian 10.3 VM that was no longer up to date.
Upgrading the whole system with
sudo apt-get update
sudo apt upgrade
and then rebooting the VM solved the problem. Current versions: Docker client 20.10.17, Docker Engine 19.03.9, kernel 4.19.0-21-amd64.
Interestingly, the problem only occurred when running the image that was built on this outdated system. The 'same' Tomcat image built on our Jenkins server started without problems on my local outdated VM.
I couldn't start a container because of some issues with volumes, so I tried the following to make sure I understand how volumes work. Something strange is happening here: two files should be present in the /data directory, but instead I see one folder named after one of the files on the source machine. I'm doing this on Windows 10.
PS C:\Users\Piotrek\source\repos\fluentd> dir
Directory: C:\Users\Piotrek\source\repos\fluentd
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a---- 06.01.2019 18:50 7 abc.txt
-a---- 06.01.2019 18:50 80 test.conf
PS C:\Users\Piotrek\source\repos\fluentd> docker run -ti --rm -v ${PWD}:/data ubuntu ls -alR /data
/data:
total 4
drwxr-xr-x 3 1000 root 60 Jan 6 16:48 .
drwxr-xr-x 1 root root 4096 Jan 6 17:53 ..
drwxr-xr-x 2 1000 root 40 Jan 6 16:48 test.conf
/data/test.conf:
total 0
drwxr-xr-x 2 1000 root 40 Jan 6 16:48 .
drwxr-xr-x 3 1000 root 60 Jan 6 16:48 ..
Problem solved.
I went to Docker settings and, under "Shared Drives", clicked Reset Credentials.
I had enabled drive sharing some time ago, but afterwards I changed my password to an empty one. It looks like Docker doesn't ask you to enable drive sharing again when your password is empty; it does ask when you change your password, but not when you change it to an empty one.
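After resetting the credentials (and re-sharing the drive if Docker prompts for it), re-running the same test from the question should list both files instead of an empty test.conf directory:
docker run -ti --rm -v ${PWD}:/data ubuntu ls -alR /data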
Versions
Host OS: Debian 4.9.110
Docker Version: 18.06.1-ce
Scenario
I have a directory where multiple users (user-a and user-b) have read/write access through a common group membership (shared), set up via chown:
/media/disk-a/shared/$ ls -la
drwxrwsr-x 4 user-a shared 4096 Oct 7 22:21 .
drwxrwxr-x 7 root root 4096 Oct 1 19:58 ..
drwxrwsr-x 5 user-a shared 4096 Oct 7 22:10 folder-a
drwxrwsr-x 3 user-a shared 4096 Nov 10 22:10 folder-b
UIDs & GIDs are as following:
uid=1000(user-a) gid=1000(user-a) groups=1000(user-a),1003(shared)
uid=1002(user-b) gid=1002(user-b) groups=1002(user-b),1003(shared)
Relevant /etc/group looks like this:
shared:x:1003:user-a,user-b
When su-ing to either user, files can be created as expected within the shared directory.
The shared directory is attached to a Docker container via a bind mount to /shared/. The Docker container runs as user-b (using the --user "1002:1002" parameter).
$ ps aux | grep user-b
user-b 1347 0.2 1.2 1579548 45740 ? Ssl 17:47 0:02 entrypoint.sh
id from within the container prints the following result, which looks okay to me:
I have no name!@7a5d2cc27491:/$ id
uid=1002 gid=1002
Also ls -la mirrors its host system equivalent perfectly:
I have no name!@7a5d2cc27491:/shared$ ls -la
total 16
drwxrwsr-x 4 1000 1003 4096 Oct 7 20:21 .
drwxr-xr-x 1 root root 4096 Oct 8 07:58 ..
drwxrwsr-x 5 1000 1003 4096 Oct 7 20:10 folder-a
drwxrwsr-x 3 1000 1003 4096 Nov 10 20:10 folder-b
Problem
From within the container, I cannot write anything to the shared directory. For touch test, for example, I get the following:
I have no name!@7a5d2cc27491:/shared$ touch test
touch: cannot touch 'test': Permission denied
I can write to a directory that is directly owned by user-b (user and group) and mounted into the container... The group membership simply seems not to be respected at all.
I have looked into things like user namespace remapping, but those seemed to be solutions for problems that don't apply here. What am I missing?
Your container user has gid=1002 but is not a member of the group shared with gid=1003.
In addition to --user "1002:1002", you need --group-add 1003.
Then the container user is allowed to access the shared folder with gid=1003.
id should show:
I have no name!@7a5d2cc27491:/$ id
uid=1002 gid=1002 groups=1003
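A minimal sketch of the adjusted run command (the image name and mount source are placeholders; everything else stays as in your setup):
docker run --user 1002:1002 --group-add 1003 \
    -v /media/disk-a/shared:/shared \
    your-image
With the supplementary group added, the group rwx bits on the bind-mounted directories apply to the container user just as they do for user-b on the host.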
I have an executable written in Go; it starts and runs fine when started from the Linux prompt. As you can see below, the executable needs an XML file when started. But when started inside a Docker environment, I get this error message:
standard_init_linux.go:190: exec user process caused "no such file or directory"
Let me tell you what I tried. First, this is my Dockerfile:
FROM alpine:latest
MAINTAINER Bert Verhees "xxxxx"
ADD archibold_ucum_service /archibold_ucum_service
ADD data/ucum-essence.xml /data/ucum-essence.xml
ENTRYPOINT ["/archibold_ucum_service", "-ucumfile=/data/ucum-essence.xml"]
I build it this way:
docker build -t=ucum_micro_service .
Then I start it this way:
docker run --name=ucum_micro_service -i -t ucum_micro_service /bin/sh
When I do this, I get the error message displayed above. Then I tried commenting out the ENTRYPOINT line; the image then builds fine and drops me at the shell prompt, so I can inspect what is inside.
The executable is there, and so is the data file. The executable also has the right attributes (it is executable inside the Docker container).
Then I try to start the executable from the shell prompt inside the running container, and again I get a message that the file is not found:
/ # ./archibold_ucum_service
/bin/sh: ./archibold_ucum_service: not found
For completeness, here is part of the directory structure in the container:
/ # ls -l
total 17484
-rwxrwxr-x 1 root root 17845706 Aug 3 13:21 archibold_ucum_service
drwxr-xr-x 2 root root 4096 Jul 5 14:47 bin
drwxr-xr-x 2 root root 4096 Aug 3 14:29 data
drwxr-xr-x 5 root root 360 Aug 4 20:27 dev
drwxr-xr-x 15 root root 4096 Aug 4 20:27 etc
drwxr-xr-x 2 root root 4096 Jul 5 14:47 home
drwxr-xr-x 5 root root ........
.......
So, what can the problem be? I have been trying to solve this for over a day now. Thanks for your support.
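One hypothesis worth checking (an assumption, not something confirmed above): on Alpine, a "not found" from /bin/sh for a binary that is clearly present usually means the binary's dynamic loader is missing, because Alpine ships musl rather than glibc. If the Go binary was built with cgo enabled on a glibc system, it will look for /lib64/ld-linux-x86-64.so.2, which does not exist in alpine:latest. Assuming the service does not need cgo, rebuilding it as a static binary before the docker build is a quick way to test that theory:
CGO_ENABLED=0 GOOS=linux go build -o archibold_ucum_service .
docker build -t=ucum_micro_service .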
Okay, so this question must have been asked a couple dozen times already, but I honestly went through all similar questions and none of them relate to my issue.
So a little bit of history and configuration
Rails 3 app, Passenger + Nginx 3 as production server
I am currently deploying my production Rails 3 app via a bash script that basically clones the git repo every time and does some magic; it had its issues, so we decided to migrate to Capistrano.
I wrote the deploy.rb script, specified the shared folders, started it all up on a test server first, and managed to get it all up and running smoothly.
Now I'm doing the same for the production server. I deployed via Capistrano to a folder separate from my manual bash-script one so they won't conflict in any way, and changed the nginx root from
root /var/www/public;
to
root /var/fruby/current/public;
After restarting nginx, I get a 403 error and the following record in the logs.
2014/06/08 18:28:32 [error] 5239#0: *1 directory index of "/var/fruby/current/public/" is forbidden, client: 109.187.177.116, server: example.com, request: "GET / HTTP/1.1", host: "example.com", referrer: ""
Since the Passenger configuration didn't change, it's safe to assume that the problem is somewhere in the folder permissions, but I honestly can't identify what the issue is. The permissions and the owner look the same to me; perhaps you can point me in the right direction?
It has to be something to do with Passenger, because if I manually start the application with rails s, it starts up beautifully.
/opt/nginx/conf/nginx.conf
Inside http block:
passenger_root /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.18;
passenger_ruby /usr/local/bin/ruby;
proxy_read_timeout 640;
server block:
server {
    listen 443;
    server_name example.com;
    ssl on;
    ssl_certificate /opt/nginx/conf/certs/example.com.crt;
    ssl_certificate_key /opt/nginx/conf/certs/example.com.key.nopass;
    charset utf-8;
    #root /var/www/public; # Old directory my bash script deployed to
    root /var/fruby/current/public; # New directory, capistrano deploys to
    passenger_enabled on;
    rails_env production;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}
Now, the fruby folder has these permissions:
drwxr-xr-x 5 root root 4096 Jun 8 18:56 fruby/
Inside the fruby folder:
drwxr-xr-x 5 root root 4096 Jun 8 18:56 ./
drwxr-xr-x 17 root root 4096 Jun 8 17:31 ../
lrwxrwxrwx 1 root root 34 Jun 8 18:56 current -> /var/fruby/releases/20140608145412/
drwxr-xr-x 4 root root 4096 Jun 8 18:54 releases/
drwxr-xr-x 7 root root 4096 Jun 8 17:46 repo/
-rw-r--r-- 1 root root 170 Jun 8 18:56 revisions.log
drwxr-xr-x 7 root root 4096 Jun 8 17:47 shared/
Inside the shared folder:
drwxr-xr-x 7 root root 4096 Jun 8 17:47 ./
drwxr-xr-x 5 root root 4096 Jun 8 18:56 ../
drwxr-xr-x 2 root root 4096 Jun 8 17:49 bin/
drwxr-xr-x 3 root root 4096 Jun 8 17:47 bundle/
drwxr-xr-x 2 root root 4096 Jun 8 17:51 log/
drwxr-xr-x 10 root root 4096 Jun 8 17:24 public/
drwxr-xr-x 6 nobody nogroup 4096 Jun 8 18:56 tmp/
Everything seems to be fine, and the permissions are pretty much the same as on the production server.
Let me know if you need any more output.
Any help is very much appreciated!
I managed to resolve this issue by updating Passenger to 4.0.44 and recompiling nginx (running passenger-install-nginx-module again); apparently this was the only difference from the test server I was testing on at first.
Commands I ran to resolve the issue:
user@host-$: chmod 777 -R /tmp
user@host-$: chmod o+t -R /tmp
user@host-$: gem install passenger
user@host-$: passenger-install-nginx-module
The first two commands are courtesy of this answer (Getting remove_entry_secure error while using ruby application).
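If you want to confirm which Passenger version the running nginx was actually compiled against after the reinstall, something like this should work (the nginx binary path is inferred from the config location above and is an assumption):
passenger-config --version
/opt/nginx/sbin/nginx -V 2>&1 | grep -o 'passenger[^ ]*'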