who and w commands in CentOS 8 Docker container

While playing with CentOS 8 in a Docker container I found out that the output of the who and w commands is always empty.
[root@9e24376316f1 ~]# who
[root@9e24376316f1 ~]# w
 01:01:50 up  7:38,  0 users,  load average: 0.00, 0.04, 0.00
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
Even when I'm logged in as a different user in a second terminal. When I try to write to this user, it shows:
[root@9e24376316f1 ~]# write test
write: test is not logged in
Is this because of Docker? Maybe it works in some way that prevents sessions from seeing each other?
Or maybe it's some other issue. I would really appreciate some explanation.

These utilities obtain information about current logins from the utmp file (/var/run/utmp). You can easily check that under ordinary circumstances (e.g. on a desktop system) this file contains something like the following (here qazer is my login and tty7 is the TTY where my desktop environment runs):
$ cat /var/run/utmp
tty7:0qazer:0�o^�
while in the container this file is (usually) empty:
$ docker run -it centos
[root@5e91e9e1a28e /]# cat /var/run/utmp
[root@5e91e9e1a28e /]#
Why?
The utmp file is usually modified by programs which authenticate the user and start the session: login(1), sshd(8), lightdm(1). However, the container engine cannot rely on them, as they may be absent from the container file system, so "logging in" and "executing on behalf of" are implemented in the most primitive and straightforward manner, without relying on anything inside the container.
When any container is started or any command is exec'd inside it, the container engine just spawns a new process, arranges some security settings, calls setgid(2)/setuid(2) to forcibly (without any authentication) alter the process's UID/GID, and then executes the required binary (the entry point, the command, and so on) within this process.
Say I start a CentOS container running its main process as UID 42:
docker run -it --user 42 centos
and then try to execute sleep 1000 inside it:
docker exec -it $CONTAINER_ID sleep 1000
The container engine will perform something like this:
[pid 10170] setgid(0) = 0
[pid 10170] setuid(42) = 0
...
[pid 10170] execve("/usr/bin/sleep", ["sleep", "1000"], 0xc000159740 /* 4 vars */) = 0
There will be no writes to /var/run/utmp, thus it will remain empty, and who(1)/w(1) will not find any logins inside the container.
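To see this for yourself, you can check the size of the utmp file and the number of logins reported by who(1) in a fresh container. This is a minimal check using the centos image from the question; any image with coreutils will behave the same way:
# utmp is empty inside the container, so who(1) reports zero logins
docker run --rm centos sh -c 'stat -c "%n: %s bytes" /var/run/utmp; who | wc -l'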

Related

Get the PID on the host from inside the container?

Before Linux Kernel v4, I was able to obtain the host PID from inside the docker container from the process scheduling information.
For instance, if I run the sleep command inside the container and its local PID is 37, then I can check the actual PID on the host via:
root@helloworld-595777cb8b-gjg4j:/# head /proc/37/sched
sleep (27062, #threads: 1)
I can verify on the host that the PID 27062 corresponds to the process within the container.
root 27062 0.0 0.0 4312 352 pts/0 S 16:29 0:00 sleep 3000
I have tried this on RHEL 7 (kernel: Linux 3.10) with Docker version 17.09.0-ce.
I am not able to reproduce the same result on RHEL 8 (kernel: Linux 4.18) with Docker version 20.10. In fact, I always get the local PID from the scheduling information.
/ # head /proc/8/sched
sleep (8, #threads: 1)
I might be wrong, but my assumption is that something changed in the kernel which prevents obtaining the host PID.
So the question is how to obtain the host PID from within the container?
The bug (or "feature" if you prefer) that allowed the host PID to be discovered from /proc/PID/sched in the container was fixed (or "broken" if you prefer) in Linux kernel 4.14 by commit 74dc3384fc79 ("sched/debug: Use task_pid_nr_ns in /proc/$pid/sched").
As a result of the change, the container cannot get the host PID of a process (at least via /proc/PID/sched) if PID namespaces are in use.
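If you do need the host PID, two common alternatives still work; this is a sketch, and the container name my_container below is hypothetical:
# From the host: ask Docker for the host PID of the container's main process
docker inspect --format '{{.State.Pid}}' my_container
# or list all processes in the container together with their host PIDs
docker top my_container
# From inside the container: share the host's PID namespace at start time
# (this trades away PID isolation, so /proc then shows host PIDs directly)
docker run --rm -it --pid=host centos ls /proc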

Cannot open vfio device in docker container as non-root user

I have enabled virtualization in the BIOS and enabled the IOMMU on kernel command line (intel_iommu=on).
I bound a solarflare NIC to the vfio-pci device and added a udev rule to ensure the vfio device is accessible by my non-root user (e.g., /etc/udev/rules.d/10-vfio-docker-users.rules):
SUBSYSTEM=="vfio", OWNER="myuser", GROUP=="myuser"
I've launched my container with -u 1000 and mapped /dev (-v /dev:/dev). Running in an interactive shell in the container, I am able to verify that the device is there with the permissions set by my udev rule:
bash-4.2$ whoami
whoami: unknown uid 1000
bash-4.2$ ls -al /dev/vfio/35
crw-rw---- 1 1000 1000 236, 0 Jan 25 00:23 /dev/vfio/35
However, if I try to open it (e.g., python -c "open('/dev/vfio/35', 'rb')"), I get IOError: [Errno 1] Operation not permitted: '/dev/vfio/35'. However, the same command works outside the container as the normal non-root user with user-id 1000!
It seems that there are additional security measures that are not allowing me to access the vfio device within the container. What am I missing?
Docker drops a number of privileges by default, including the ability to access most devices. You can explicitly grant access to a device using the --device flag, which would look something like:
docker run --device /dev/vfio/35 ...
Alternately, you can ask Docker not to drop any privileges:
docker run --privileged ...
You'll note that in both of the above examples it was not necessary to explicitly bind-mount /dev; in the first case, the device(s) you have exposed with --device will show up, and in the second case you see the host's /dev by default.
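For example, combining the --device approach with the question's setup might look like the following sketch (myimage is a placeholder; the device number 35 and the UID come from the question):
# expose only the vfio group device, keeping the non-root UID from the question
docker run -it -u 1000 --device /dev/vfio/35 myimage \
  python -c "open('/dev/vfio/35', 'rb')"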

Issue accessing vespa outside docker container

I installed Docker on a Mac and am trying to run Vespa on Docker following the steps specified in the following link:
https://docs.vespa.ai/documentation/vespa-quick-start.html
I didn't have any issues until step 4. I see the vespa container running after step 2, and step 3 returned a 200 OK response.
But step 5 failed to return a 200 OK response. Below is the command I ran in my terminal:
curl -s --head http://localhost:8080/ApplicationStatus
I keep getting
curl: (52) Empty reply from server whenever I run it without the -s option.
So I tried to check the listening ports inside my vespa container. I don't see anything for 8080, but I can see 19071 (used in step 3):
➜ ~ docker exec vespa bash -c 'netstat -vatn| grep 8080'
➜ ~ docker exec vespa bash -c 'netstat -vatn| grep 19071'
tcp 0 0 0.0.0.0:19071 0.0.0.0:* LISTEN
The doc below has info related to Vespa ports:
https://docs.vespa.ai/documentation/reference/files-processes-and-ports.html
I'm assuming port 8080 should be active after docker run (step 2 of the quick start link) and accessible outside the container since port mapping is done.
But I don't see port 8080 active inside the container in the first place.
Am I missing something? Do I need to perform any additional steps beyond those mentioned in the quick start? FYI, I installed Jenkins inside Docker and was able to access it outside the container via port mapping, but I'm not sure why it's not working with Vespa. I have been trying for quite some time without progress. Please advise if I'm missing something here.
You have too little memory for your Docker container: "Minimum 6GB memory dedicated to Docker (the default is 2GB on Macs)." See https://docs.vespa.ai/documentation/vespa-quick-start.html
The deadlock detector warnings and the failure to get configuration from the configuration server (which is likely OOM-killed) indicate that you are too low on memory.
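A quick way to confirm the memory situation from the command line (standard Docker CLI commands, used here as a sketch):
# total memory available to the Docker VM, in bytes
docker info --format '{{.MemTotal}}'
# current memory usage of the vespa container
docker stats --no-stream vespa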
My guess is that your jdisc container has not finished initializing or did not initialize properly. Did you try checking the log?
docker exec vespa bash -c '/opt/vespa/bin/vespa-logfmt /opt/vespa/logs/vespa/vespa.log'
This should tell you if there was something wrong. When it is ready to receive requests you would see something like this:
[2018-12-10 06:30:37.854] INFO : container Container.org.eclipse.jetty.server.AbstractConnector Started SearchServer@79afa369{HTTP/1.1,[http/1.1]}{0.0.0.0:8080}
[2018-12-10 06:30:37.857] INFO : container Container.org.eclipse.jetty.server.Server Started @10280ms
[2018-12-10 06:30:37.857] INFO : container Container.com.yahoo.container.jdisc.ConfiguredApplication Switching to the latest deployed set of configurations and components. Application switch number: 0
[2018-12-10 06:30:37.859] INFO : container Container.com.yahoo.container.jdisc.ConfiguredApplication Initializing new set of configurations and components. Application switch number: 1
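Once Docker's memory limit has been raised and the container restarted, you can wait for the jdisc container to come up by polling the endpoint from the question. A simple sketch (the roughly two-minute timeout is an arbitrary choice):
# poll /ApplicationStatus until it answers 200 OK, or give up after ~120 s
for i in $(seq 1 60); do
  if curl -s --head http://localhost:8080/ApplicationStatus | grep -q "200 OK"; then
    echo "vespa is up"; break
  fi
  sleep 2
done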

How do I bypass the login page on RStudio?

I am trying to bypass the login page on RStudio, as we are running it in a Docker container and this step is not necessary because we authenticate users before they launch the container.
I am using the Rocker implementation of RStudio for Docker. We are running on CentOS 7.
I'm fairly new to SO, so please let me know what information would be helpful for answering the question.
I figured it out.
When you start rserver, add the flag --auth-none=1, so my final CMD in my Dockerfile looked like:
USER rstudio
CMD ["/usr/lib/rstudio-server/bin/rserver","--server-daemonize=0","--auth-none=1"]
I will caution, though: the first time I did it, I ran with sudo -E in front of the command and it logged into RStudio as ROOT! (This was also because I had altered /etc/rstudio/rserver.conf with the setting auth-minimum-user-id=0 while trying to get an error to go away, which it did.)
The above code will switch to the user 'rstudio' before running the command, which logs you straight in as rstudio.
Hope that helps someone out there, I know I spent the better portion of my day finding a work-around!
To bypass the login page you also need to define the USER environment variable.
need to set system environmental variable USER=rstudio in order for --auth-none 1
-- https://github.com/rstudio/rstudio/issues/1663
Here is a Dockerfile snippet that runs the RStudio server and logs you in as the user rstudio.
ENV USER="rstudio"
CMD ["/usr/lib/rstudio-server/bin/rserver", "--server-daemonize", "0", "--auth-none", "1"]
When it's run, the login page is not displayed, and we can check that the server and the session are running as the rstudio user.
# Run the container
docker run --name rstudio --rm -p 8787:8787 -d rstudio
# Check processes
docker exec -it rstudio ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
rstudio+ 1 0.1 0.3 210792 13844 ? Ssl 21:10 0:00 /usr/lib/rstudi
rstudio 49 0.7 2.3 555096 82312 ? Sl 21:10 0:03 /usr/lib/rstudi
root 570 0.0 0.1 45836 3744 pts/0 Rs+ 21:18 0:00 ps aux
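As a quick smoke test (assuming the port mapping above), you can check that the root URL now serves the IDE directly instead of redirecting to the sign-in page:
# with --auth-none=1 this should print 200 rather than a redirect to the sign-in page
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8787/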

How to know if my program has completely started inside my Docker container with Compose

In my CI chain I execute end-to-end tests after a "docker-compose up". Unfortunately my tests often fail because even if the containers are properly started, the programs contained in my containers are not.
Is there an elegant way to verify that my setup is completely started before running my tests?
You could poll the required services to confirm they are responding before running the tests.
curl has built-in retry logic, or it's fairly trivial to build retry logic around some other type of service test.
#!/bin/bash
await(){
  local url=${1}
  local seconds=${2:-30}
  curl --max-time 5 --retry 60 --retry-delay 1 \
    --retry-max-time "${seconds}" "${url}" \
    || exit 1
}
docker-compose up -d
await http://container_ms1:3000
await http://container_ms2:3000
run-ze-tests
The alternative to polling is an event-based system.
If all your services push notifications to an external service (scaeda gave the example of a log file, or you could use something like Amazon SNS), your services can emit a "started" event. Then you can subscribe to those events and run whatever you need once everything has started.
Docker 1.12 added the HEALTHCHECK Dockerfile instruction. Maybe this is available via Docker events?
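If you can modify the images, a health-check-based sketch of the same idea would be a HEALTHCHECK line in the Dockerfile (the port and endpoint are placeholders, and curl must be present in the image):
# report the service as healthy once it answers on its port
HEALTHCHECK --interval=5s --timeout=3s --retries=30 \
  CMD curl -f http://localhost:3000/ || exit 1
docker-compose can then gate dependent services on it via depends_on with condition: service_healthy (supported in the 2.1+ file format and again in the current Compose Specification), and a script can poll docker inspect --format '{{.State.Health.Status}}' <container> until it reports healthy.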
If you have control over the Docker engine in your CI setup, you could execute docker logs [Container_Name] and read out the last line, which could be emitted by your application.
RESULT=$(docker logs [Container_Name] 2>&1 | grep [Search_String])
logs output example:
Agent pid 13
Enter passphrase (empty for no passphrase): Enter same passphrase again: Identity added: id_rsa (id_rsa)
#host SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.6
#host SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.6
parse specific line:
RESULT=$(docker logs ssh_jenkins_test 2>&1 | grep Enter)
result:
Enter passphrase (empty for no passphrase): Enter same passphrase again: Identity added: id_rsa (id_rsa)
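Wrapped in a loop, this becomes a simple wait before running the tests (the container name my_app and the marker string "Server started" below are hypothetical):
# block until the application prints its "ready" marker to the logs
until docker logs my_app 2>&1 | grep -q "Server started"; do
  sleep 1
done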
