How do I bypass the login page on RStudio? - docker

I am trying to bypass the login page on RStudio. We run it in a Docker container, and this step is unnecessary because we authenticate users before letting them launch the container.
I am using the Rocker implementation of RStudio for Docker. We are running on CentOS 7.
I'm fairly new to SO, so please let me know what information would be helpful for answering the question.

I figured it out.
When you start rserver, add the flag --auth-none=1, so my final CMD in my Dockerfile looked like:
USER rstudio
CMD ["/usr/lib/rstudio-server/bin/rserver","--server-daemonize=0","--auth-none=1"]
One word of caution though: the first time I did it, I ran the command with sudo -E in front, and it logged me into RStudio as root! (This was also because I had altered /etc/rstudio/rserver.conf with the setting auth-minimum-user-id=0 while trying to make an error go away, which it did.)
The USER instruction above switches to the user 'rstudio' before running the command, which logs you straight in as rstudio.
Hope that helps someone out there, I know I spent the better portion of my day finding a work-around!

To bypass the login page you also need to define the environment variable USER:
"need to set system environmental variable USER=rstudio in order for --auth-none 1"
-- https://github.com/rstudio/rstudio/issues/1663
Here is a Dockerfile snippet that runs the RStudio server and logs you in as the user rstudio.
ENV USER="rstudio"
CMD ["/usr/lib/rstudio-server/bin/rserver", "--server-daemonize", "0", "--auth-none", "1"]
When the container is run, the login page is not displayed, and we can check that the server and the session are running as the rstudio user.
# Run the container
docker run --name rstudio --rm -p 8787:8787 -d rstudio
# Check processes
docker exec -it rstudio ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
rstudio+ 1 0.1 0.3 210792 13844 ? Ssl 21:10 0:00 /usr/lib/rstudi
rstudio 49 0.7 2.3 555096 82312 ? Sl 21:10 0:03 /usr/lib/rstudi
root 570 0.0 0.1 45836 3744 pts/0 Rs+ 21:18 0:00 ps aux
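Alternatively, if you prefer not to bake the variable into the image, USER can be supplied at run time instead (same image name and port as above):
docker run --name rstudio --rm -p 8787:8787 -e USER=rstudio -d rstudio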

Related

Pulumi does not perform graceful shutdown of kubernetes pods

I'm using pulumi to manage kubernetes deployments. One of the deployments runs an image which intercepts SIGINT and SIGTERM signals to perform a graceful shutdown like so (this example is running in my IDE):
{"level":"info","msg":"trying to activate gcloud service account","time":"2021-06-17T12:19:25-05:00"}
{"level":"info","msg":"env var not found","time":"2021-06-17T12:19:25-05:00"}
{"Namespace":"default","TaskQueue":"main-task-queue","WorkerID":"37574#Paymahns-Air#","level":"error","msg":"Started Worker","time":"2021-06-17T12:19:25-05:00"}
{"Namespace":"default","Signal":"interrupt","TaskQueue":"main-task-queue","WorkerID":"37574#Paymahns-Air#","level":"error","msg":"Worker has been stopped.","time":"2021-06-17T12:19:27-05:00"}
{"Namespace":"default","TaskQueue":"main-task-queue","WorkerID":"37574#Paymahns-Air#","level":"error","msg":"Stopped Worker","time":"2021-06-17T12:19:27-05:00"}
Notice the "Signal":"interrupt" with a message of Worker has been stopped.
I find that when I alter the source code (which alters the docker image) and run pulumi up the pod doesn't gracefully terminate based on what's described in this blog post. Here's a screenshot of logs from GCP:
The highlighted log line in the image above is the first log line emitted by the app. Note that the shutdown messages aren't logged above the highlighted line which suggests to me that the pod isn't given a chance to perform a graceful shutdown.
Why might the pod not go through the graceful shutdown mechanisms that kubernetes offers? Could this be a bug with how pulumi performs updates to deployments?
EDIT: after doing more investigation I found that this problem happens because starting a Docker container with go run /path/to/main.go actually ends up creating two processes, like so (after exec'ing into the container):
root@worker-ffzpxpdm-78b9797dcd-xsfwr:/gadic# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.3 0.3 2046200 30828 ? Ssl 18:04 0:12 go run src/cmd/worker/main.go --temporal-host temporal-server.temporal.svc.cluster.local --temporal-port 7233 --grpc-port 6789 --grpc-hos
root 3782 0.0 0.5 1640772 43232 ? Sl 18:06 0:00 /tmp/go-build2661472711/b001/exe/main --temporal-host temporal-server.temporal.svc.cluster.local --temporal-port 7233 --grpc-port 6789 --
root 3808 0.1 0.0 4244 3468 pts/0 Ss 19:07 0:00 /bin/bash
root 3817 0.0 0.0 5900 2792 pts/0 R+ 19:07 0:00 ps aux
If I run kill -TERM 1, the signal isn't forwarded to the underlying binary, /tmp/go-build2661472711/b001/exe/main, which means the graceful shutdown logic isn't executed. However, if I run kill -TERM 3782, the graceful shutdown logic is executed.
It seems that go run spawns a subprocess, and this blog post suggests that signals are only delivered to PID 1. On top of that, go run unfortunately doesn't forward signals to the subprocess it spawns.
The solution I found is to add RUN go build -o worker /path/to/main.go to my Dockerfile and then start the Docker container with ./worker --arg1 --arg2 instead of go run /path/to/main.go --arg1 --arg2.
Doing it this way ensures go doesn't spawn any subprocesses, which in turn ensures signals are handled properly within the Docker container.
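As an illustration, a multi-stage Dockerfile sketch of that approach could look like the following; the Go version, base images, and paths are placeholders (and it assumes a go.mod at the repository root), not the poster's actual file:
# Build stage: compile the binary at image build time instead of using `go run`
FROM golang:1.17 AS build
WORKDIR /src
COPY . .
RUN go build -o /worker ./src/cmd/worker/main.go

# Runtime stage: the compiled binary runs as PID 1, so SIGTERM reaches it directly
FROM debian:bullseye-slim
COPY --from=build /worker /usr/local/bin/worker
CMD ["/usr/local/bin/worker", "--arg1", "--arg2"]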

how to change configuration of freeradius-server in docker container?

I'm trying to build a FreeRADIUS server using Docker and pulled the image "freeradius/freeradius-server". The first time, I used the given command
docker run --name my-radius -t -d freeradius/freeradius-server -X
to create a container and successfully started debug mode. But I didn't know how to quit, so I used Ctrl+C to stop the container. Then I used the commands below to get into the container, wanting to start debug mode again so that I could change the configuration or parameters.
docker start my-radius
docker exec -it my-radius /bin/bash
I got into the container and ran freeradius -X, but it failed with:
Failed binding to auth address 127.0.0.1 port 18120 bound to server inner-tunnel: Address already in use
/etc/freeradius/sites-enabled/inner-tunnel[33]: Error binding to port for 127.0.0.1 port 18120
I searched Google for solutions but found none. I guess this means the RADIUS server started automatically, so the address 127.0.0.1 and port 18120 were already in use. But I don't know how to stop it inside the container.
The official FreeRADIUS docker image will start FreeRADIUS when the container starts. This means that if you start the container and then exec a shell into it, FreeRADIUS will already be running.
The container will exit as soon as the FreeRADIUS process stops, meaning it is not possible to start the container in this way, stop FreeRADIUS running, and then continue to use the container.
In this situation, trying to run FreeRADIUS a second time in another shell will fail because the ports are already open, as you have discovered.
This can be seen as follows:
$ docker run --name my-radius -d freeradius/freeradius-server
106cdbc81e8e5c0257f22bebad221ed1b4ba0a14f40ce1e4110ec388380c7e62
$ docker exec -it my-radius /bin/bash
root@106cdbc81e8e:/# ps -ef
UID PID PPID C STIME TTY TIME CMD
freerad 1 0 1 23:10 ? 00:00:00 freeradius -f
root 12 0 1 23:10 pts/0 00:00:00 /bin/bash
root 22 12 0 23:10 pts/0 00:00:00 ps -ef
root@106cdbc81e8e:/# exit
exit
$ docker stop my-radius
my-radius
$ docker rm my-radius
my-radius
$
To be able to run FreeRADIUS yourself you can do two things. Firstly, don't start the container in the background, but start it in the foreground with FreeRADIUS in debug mode. The docker entrypoint will let you pass arguments directly to the daemon. This is the easiest way if you don't need to actually do anything inside the container, but just run FreeRADIUS in debug mode:
$ docker run --name my-radius -it freeradius/freeradius-server -X
FreeRADIUS Version 3.0.21
Copyright (C) 1999-2019 The FreeRADIUS server project and contributors
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE
...
Listening on auth address * port 1812 bound to server default
Listening on acct address * port 1813 bound to server default
Listening on auth address :: port 1812 bound to server default
Listening on acct address :: port 1813 bound to server default
Listening on auth address 127.0.0.1 port 18120 bound to server inner-tunnel
Listening on proxy address * port 38640
Listening on proxy address :: port 49445
Ready to process requests
^C$
(Note: hit Ctrl-C to quit.)
The alternative is to start it in the background, but instead of running FreeRADIUS run some other process. You can then exec into the container and run FreeRADIUS manually. This means you get a full shell inside the container without FreeRADIUS already running. For instance:
$ docker run --name my-radius -d freeradius/freeradius-server sleep 999999999999
23b5ddd4825a31a8fb417e1594028c6533267be4ff20a448d3844203b805dbd9
$ docker exec -it my-radius /bin/bash
root@23b5ddd4825a:/# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 23:16 ? 00:00:00 sleep 999999999999
root 7 0 0 23:17 pts/0 00:00:00 /bin/bash
root 17 7 0 23:17 pts/0 00:00:00 ps -ef
root@23b5ddd4825a:/# freeradius -X
FreeRADIUS Version 3.0.21
Copyright (C) 1999-2019 The FreeRADIUS server project and contributors
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE
...
Listening on proxy address * port 46662
Listening on proxy address :: port 40284
Ready to process requests
^Croot@23b5ddd4825a:/# exit
exit
$ docker container kill my-radius
my-radius
$ docker container rm my-radius
my-radius
The sleep command used here will obviously exit at some point, so use a number large enough that it keeps running for as long as you need; when that process exits, the container will shut down.
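If the base image's sleep supports it (GNU coreutils does), sleep infinity or tail -f /dev/null avoids having to pick an arbitrarily large number:
$ docker run --name my-radius -d freeradius/freeradius-server sleep infinity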

who and w commands in CentOS 8 Docker container

While playing with CentOS 8 in a Docker container, I found that the output of the who and w commands is always empty.
[root@9e24376316f1 ~]# who
[root@9e24376316f1 ~]# w
01:01:50 up 7:38, 0 users, load average: 0.00, 0.04, 0.00
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
This happens even when I'm logged in as a different user in a second terminal.
When I try to write to that user, it shows
[root@9e24376316f1 ~]# write test
write: test is not logged in
Is this because of Docker? Maybe it works in a way that prevents sessions from seeing each other?
Or maybe it's some other issue. I would really appreciate an explanation.
These utilities obtain the information about current logins from the utmp file (/var/run/utmp). You can easily check that in ordinary circumstances (e.g. on the desktop system) this file contains something like the following string (here qazer is my login and tty7 is a TTY where my desktop environment runs):
$ cat /var/run/utmp
tty7:0qazer:0�o^�
while in the container this file is (usually) empty:
$ docker run -it centos
[root@5e91e9e1a28e /]# cat /var/run/utmp
[root@5e91e9e1a28e /]#
Why?
The utmp file is usually modified by programs which authenticate the user and start the session: login(1), sshd(8), lightdm(1). However, the container engine cannot rely on them, as they may be absent from the container file system, so "logging in" and "executing on behalf of" are implemented in the most primitive and straightforward manner, avoiding any reliance on anything inside the container.
When any container is started or any command is exec'd inside it, the container engine just spawns a new process, arranges some security settings, calls setgid(2)/setuid(2) to forcibly (without any authentication) alter the process' UID/GID, and then executes the required binary (the entry point, the command, and so on) within this process.
Say, I start the CentOS container running its main process on behalf of UID 42:
docker run -it --user 42 centos
and then try to execute sleep 1000 inside it:
docker exec -it $CONTAINER_ID sleep 1000
The container engine will perform something like this:
[pid 10170] setgid(0) = 0
[pid 10170] setuid(42) = 0
...
[pid 10170] execve("/usr/bin/sleep", ["sleep", "1000"], 0xc000159740 /* 4 vars */) = 0
There will be no writes to /var/run/utmp, thus it will remain empty, and who(1)/w(1) will not find any logins inside the container.
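You can confirm this on the example container above: the exec'd process simply runs under the forced UID, and no login is ever recorded:
$ docker exec -it $CONTAINER_ID id                  # reports uid=42, set without any authentication
$ docker exec -it $CONTAINER_ID cat /var/run/utmp   # still empty, so who/w have nothing to show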

Where exactly do the logs of kubernetes pods come from (at the container level)?

I'm looking to redirect some logs from a command run with kubectl exec to that pod's logs, so that they can be read with kubectl logs <pod-name> (or really, /var/log/containers/<pod-name>.log). I can see the logs I need as output when running the command, and they're stored inside a separate log directory inside the running container.
Redirecting the output (i.e. >> logfile.log) to the file which I thought was mirrored by kubectl logs <pod-name> does not update that container's logs, and neither does redirecting to stdout.
When calling kubectl logs <pod-name>, my understanding is that the kubelet gets them from its internal /var/log/containers/ directory. But what determines which logs are stored there? Is it the same process as the way logs get stored inside any other Docker container?
Is there a way to examine/trace the logging process, or determine where these logs are coming from?
Logs from the STDOUT and STDERR of containers in the pod are captured and stored inside files in /var/log/containers. This is what is presented when kubectl logs is run.
In order to understand why output from commands run via kubectl exec is not shown by kubectl logs, let's have a look at how it all works with an example:
First, launch a pod running ubuntu that sleeps forever:
$> kubectl run test --image=ubuntu --restart=Never -- sleep infinity
Exec into it
$> kubectl exec -it test bash
Seen from inside the container, it is the STDOUT and STDERR of PID 1 that are captured. When you kubectl exec into the container, a new process is created, living alongside PID 1:
root@test:/# ps -auxf
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 7 0.0 0.0 18504 3400 pts/0 Ss 20:04 0:00 bash
root 19 0.0 0.0 34396 2908 pts/0 R+ 20:07 0:00 \_ ps -auxf
root 1 0.0 0.0 4528 836 ? Ss 20:03 0:00 sleep infinity
Redirecting to STDOUT does not work because /dev/stdout is a symlink to the stdout of the process accessing it (/proc/self/fd/1 rather than /proc/1/fd/1).
root@test:/# ls -lrt /dev/stdout
lrwxrwxrwx 1 root root 15 Nov 5 20:03 /dev/stdout -> /proc/self/fd/1
In order to see the logs from commands run with kubectl exec the logs need to be redirected to the streams that are captured by the kubelet (STDOUT and STDERR of pid 1). This can be done by redirecting output to /proc/1/fd/1.
root@test:/# echo "Hello" > /proc/1/fd/1
Exiting the interactive shell and checking the logs using kubectl logs should now show the output:
$> kubectl logs test
Hello
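The same redirection works for longer-running commands launched via kubectl exec; point both streams of the command at PID 1's file descriptors (some-command below is just a placeholder):
$> kubectl exec test -- sh -c 'some-command >> /proc/1/fd/1 2>> /proc/1/fd/2'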

SCADA LTS - HTTP Status 404

After starting a SCADA LTS Docker container as suggested on https://github.com/SCADA-LTS/Scada-LTS with the following command:
docker run -it -e DOCKER_HOST_IP=$(docker-machine ip) -p 81:8080 scadalts/scadalts /root/start.sh
...The container works well for some time and then suddenly a "HTTP Status 404" error is shown, like the following:
http://[IP]/ScadaBR/
HTTP Status 404 - /ScadaBR/
type Status report
message /ScadaBR/
description The requested resource is not available.
Apache Tomcat/7.0.85
Where [IP] is the default Docker IP address and port, which most of the time is localhost:81.
Any idea how to solve it?
Thank you in advance!
TL;DR
After running for some time, the MySQL service dies. It is necessary to restart it manually with:
docker exec scada service mysql restart
docker exec scada killall tail
DETAILED REPORT
When the error is shown, you can check whether all the services are running in the container (in this case named 'scada'):
>docker exec scada ps -A
PID TTY TIME CMD
1 ? 00:00:00 start.sh
790 ? 01:00:22 java
791 ? 00:01:27 tail
858 ? 00:00:00 ps
As can be seen, no MySQL service is running. This explains why Tomcat is running but SCADA-LTS isn't.
You can restart MySQL service inside the container with:
docker exec scada service mysql restart
After that, SCADA-LTS is still down, and you have to restart Tomcat, which can be done this way:
docker exec scada killall tail
After a minute or less, all the services are running:
>docker exec scada ps -A
PID TTY TIME CMD
1 ? 00:00:00 start.sh
43 ? 00:00:00 mysqld_safe
398 ? 00:00:00 mysqld
481 ? 00:00:31 java
482 ? 00:00:00 sleep
618 ? 00:00:00 ps
Now SCADA-LTS is running!
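If the MySQL service dies regularly, the two recovery commands can be wrapped in a small script run from the host (a sketch; 'scada' is the container name used above):
#!/bin/sh
# Restart MySQL inside the SCADA-LTS container, then kill tail to trigger a Tomcat restart
docker exec scada service mysql restart
docker exec scada killall tail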
