Why is setuid dropped on execve in an alpine container? - docker

In an alpine container only: When running a setuid binary which starts another executable (execve(2)), BusyBox[1] seems to drop the privileges acquired by setuid. I think this might be by design because of security implications.
Question: I would like to understand why this is happening and what is responsible for this?
I am working on a one-shot setuid runner called kamikaze, written in Rust. kamikaze is a very simple binary that unlinks itself (unlink(2)) and then starts a new process using fork(2) and execve(2).
The main components are:
src/main.rs [a47dedc]: Implements the unlink(2) and process spawning.
use std::env;
use std::fs;
use std::process::{Command, exit};

fn usage() {
    println!("usage: kamikaze <command> <arguments>");
    exit(1);
}

fn main() {
    // Kill myself
    fs::remove_file(
        env::current_exe().expect("failed to get path to executable")
    ).expect("kamikaze failed");

    let mut args: Vec<String> = env::args().collect();
    match args.len() {
        0 => usage(),
        1 => usage(),
        _ => {
            args.remove(0);
            let mut child = Command::new(args.remove(0))
                .args(&args)
                .spawn()
                .expect("failed to execute process");
            exit(
                child
                    .wait()
                    .expect("wait failed")
                    .code().unwrap()
            );
        },
    }
}
install.sh [a47dedc]: A simple installer which downloads kamikaze, changes ownership to root and sets the setuid bit.
#!/usr/bin/env sh
set -euo pipefail

REPO="Enteee/kamikaze"
INSTALL="install -m 755 -o root kamikaze-download kamikaze && chmod u+s kamikaze"

curl -s "https://api.github.com/repos/${REPO}/releases/latest" \
    | grep "browser_download_url" \
    | cut -d '"' -f 4 \
    | xargs -n1 curl -s -L --output kamikaze-download
trap 'rm kamikaze-download' EXIT

if [[ $(id -u) -ne 0 ]]; then
    sudo sh -c "${INSTALL}"
else
    eval "${INSTALL}"
fi
When I run kamikaze outside a container[2]:
$ curl https://raw.githubusercontent.com/Enteee/kamikaze/master/install.sh | sh
$ ./kamikaze ps -f
UID PID PPID C STIME TTY TIME CMD
root 3223 9587 0 08:17 pts/0 00:00:00 ./kamikaze ps -f
root 3224 3223 0 08:17 pts/0 00:00:00 ps -f
I get the expected behavior. The child process (PID=3224) runs as root. On the other hand, inside a container[2]:
$ docker build -t kamikaze - <<EOF
FROM alpine
RUN set -exuo pipefail \
&& apk add curl \
&& curl https://raw.githubusercontent.com/Enteee/kamikaze/master/install.sh | sh
USER nobody
CMD ["/kamikaze", "ps"]
EOF
$ docker run kamikaze
PID USER TIME COMMAND
1 root 0:00 /kamikaze ps
6 nobody 0:00 ps
ps runs as nobody.
[1] I first thought that this was because of some security mechanism implemented by Docker and the Linux kernel. But after a deep dive into Docker Security, NO_NEW_PRIVILEGES and seccomp(2), I finally realized that BusyBox is simply dropping the privileges.
[2] kamikaze [1.0.0] fixed and changed this behavior, so this example no longer works. To reproduce the example, use the kamikaze [0.0.0] release.

BusyBox, which implements the ps command in Alpine, drops the privileges acquired by setuid by setting the effective user id back to the real user id.
libbb/appletlib.c [b097a84]:
} else if (APPLET_SUID(applet_no) == BB_SUID_DROP) {
    /*
     * Drop all privileges.
     *
     * Don't check for errors: in normal use, they are impossible,
     * and in special cases, exiting is harmful. Example:
     * 'unshare --user' when user's shell is also from busybox.
     *
     * 'unshare --user' creates a new user namespace without any
     * uid mappings. Thus, busybox binary is setuid nobody:nogroup
     * within the namespace, as that is the only user. However,
     * since no uids are mapped, calls to setgid/setuid
     * fail (even though they would do nothing).
     */
    setgid(rgid);
    setuid(ruid);
}
procps/ps.c [b097a84]: Defines BB_SUID_DROP.
// APPLET_NOEXEC:name main location suid_type help
//applet:IF_PS( APPLET_NOEXEC(ps, ps, BB_DIR_BIN, BB_SUID_DROP, ps))
//applet:IF_MINIPS(APPLET_NOEXEC(minips, ps, BB_DIR_BIN, BB_SUID_DROP, ps))
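A quick way to confirm this (a sketch of my own, not part of the original post): in an Alpine image, ps is just a symlink to the multi-call busybox binary, so the BB_SUID_DROP branch above runs before the ps applet's main:
# ps resolves to busybox; calling busybox bare prints its version banner
docker run --rm alpine sh -c 'ls -l /bin/ps && busybox | head -n 1'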
The fix for this was simple: kamikaze just has to set the real user id to the effective user id before calling execve(2).
src/main.rs [f4c5501]:
extern crate exec;
extern crate users;

use std::env;
use std::fs;
use std::process::exit;

use users::{get_effective_uid, get_effective_gid};
use users::switch::{set_current_uid, set_current_gid};

fn usage() {
    println!("usage: kamikaze <command> <arguments>");
}

fn main() {
    // Kill myself
    fs::remove_file(
        env::current_exe().expect("failed to get path to executable")
    ).expect("kamikaze failed");

    set_current_uid(
        get_effective_uid()
    ).expect("failed setting current uid");
    set_current_gid(
        get_effective_gid()
    ).expect("failed setting current gid");

    let mut args: Vec<String> = env::args().collect();
    match args.len() {
        0 => usage(),
        1 => usage(),
        _ => {
            args.remove(0);
            let err = exec::Command::new(args.remove(0))
                .args(&args)
                .exec();
            println!("Error: {}", err);
        },
    }

    // Should never get here
    exit(1);
}
With the newly released kamikaze [1.0.0] we now get:
$ docker build -t kamikaze - <<EOF
FROM alpine
RUN set -exuo pipefail \
&& apk add curl \
&& curl https://raw.githubusercontent.com/Enteee/kamikaze/master/install.sh | sh
USER nobody
CMD ["/kamikaze", "ps"]
EOF
$ docker run kamikaze
PID USER TIME COMMAND
1 root 0:00 ps
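A quick sanity check (my own sketch, overriding the image's CMD): both the real and effective uid of the child should now be 0, which id(1) can confirm directly:
$ docker run kamikaze /kamikaze id
# expect something like: uid=0(root) gid=0(root) groups=0(root),...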

Related

How to Import Streamsets pipeline in Dockerfile without container exiting

I am trying to import a pipeline into streamsets, during container start up, by using the Docker CMD command in Dockerfile. The image builds, but while creating the container there is no error but it exits with code 0. So it never comes up. Here is what I did:
Dockerfile:
FROM streamsets/datacollector:3.18.1
COPY myPipeline.json /pipelinejsonlocation/
EXPOSE 18630
ENTRYPOINT ["/bin/sh"]
CMD ["/opt/streamsets-datacollector-3.18.1/bin/streamsets","cli","-U", "http://localhost:18630", \
"-u", \
"admin", \
"-p", \
"admin", \
"store", \
"import", \
"-n", \
"myPipeline", \
"--stack", \
"-f", \
"/pipelinejsonlocation/myPipeline.json"]
Build image:
docker build -t cmp/sdc .
Run image:
docker run -p 18630:18630 -d --name sdc cmp/sdc
This outputs the container id. But the container is in the Exited status as shown below.
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
537adb1b05ab cmp/sdc "/bin/sh /opt/stream…" 5 seconds ago Exited (0) 3 seconds ago sdc
When I do not specify the CMD command in the Dockerfile, the StreamSets container spins up, and when I then run the streamsets import command in a shell inside the running container, it works. But how do I get it done during provisioning itself? Is there something I am missing in the Dockerfile?
In your Dockerfile you overwrite the default CMD and ENTRYPOINT from the StreamSets Data Collector Dockerfile. So the container only executes your command during startup and exits without errors afterwards. This is the reason why your container is in Exited (0) status.
In general this is good and expected behavior. If you want to keep your container alive you need to execute another command in the foreground which never ends. But unfortunately, you cannot run multiple CMDs in your Dockerfile.
I dug a little deeper. The default entry point of the image is ENTRYPOINT ["/docker-entrypoint.sh"]. This script sets up a few things and starts the Data Collector.
It is required that the Data Collector is running before the pipeline is imported. So a solution could be to copy the default docker-entrypoint.sh and modify it to start the Data Collector and import the pipeline afterwards. You could do it like this:
Dockerfile:
FROM streamsets/datacollector:3.18.1
COPY myPipeline.json /pipelinejsonlocation/
# Replace docker-entrypoint.sh
COPY docker-entrypoint.sh /docker-entrypoint.sh
EXPOSE 18630
docker-entrypoint.sh (https://github.com/streamsets/datacollector-docker/blob/master/docker-entrypoint.sh):
#!/bin/bash
#
# Copyright 2017 StreamSets Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
set -e
# We translate environment variables to sdc.properties and rewrite them.
set_conf() {
  if [ $# -ne 2 ]; then
    echo "set_conf requires two arguments: <key> <value>"
    exit 1
  fi

  if [ -z "$SDC_CONF" ]; then
    echo "SDC_CONF is not set."
    exit 1
  fi

  grep -q "^$1" ${SDC_CONF}/sdc.properties && sed 's|^#\?\('"$1"'=\).*|\1'"$2"'|' -i ${SDC_CONF}/sdc.properties || echo -e "\n$1=$2" >> ${SDC_CONF}/sdc.properties
}

# support arbitrary user IDs
# ref: https://docs.openshift.com/container-platform/3.3/creating_images/guidelines.html#openshift-container-platform-specific-guidelines
if ! whoami &> /dev/null; then
  if [ -w /etc/passwd ]; then
    echo "${SDC_USER:-sdc}:x:$(id -u):0:${SDC_USER:-sdc} user:${HOME}:/sbin/nologin" >> /etc/passwd
  fi
fi

# In some environments such as Marathon $HOST and $PORT0 can be used to
# determine the correct external URL to reach SDC.
if [ ! -z "$HOST" ] && [ ! -z "$PORT0" ] && [ -z "$SDC_CONF_SDC_BASE_HTTP_URL" ]; then
  export SDC_CONF_SDC_BASE_HTTP_URL="http://${HOST}:${PORT0}"
fi

for e in $(env); do
  key=${e%=*}
  value=${e#*=}
  if [[ $key == SDC_CONF_* ]]; then
    lowercase=$(echo $key | tr '[:upper:]' '[:lower:]')
    key=$(echo ${lowercase#*sdc_conf_} | sed 's|_|.|g')
    set_conf $key $value
  fi
done

# MODIFICATIONS:
#exec "${SDC_DIST}/bin/streamsets" "$@"

check_data_collector_status () {
  watch -n 1 ${SDC_DIST}/bin/streamsets cli -U http://localhost:18630 ping | grep -q 'version' && echo "Data Collector has started!" && import_pipeline
}

function import_pipeline () {
  sleep 1
  echo "Start to import pipeline"
  ${SDC_DIST}/bin/streamsets cli -U http://localhost:18630 -u admin -p admin store import -n myPipeline --stack -f /pipelinejsonlocation/myPipeline.json
  echo "Finished importing pipeline"
}

# Start checking if Data Collector is up (in background) and start Data Collector
check_data_collector_status & ${SDC_DIST}/bin/streamsets "$@"
I commented out the last line exec "${SDC_DIST}/bin/streamsets" "$@" of the default docker-entrypoint.sh and added two functions. check_data_collector_status () pings the Data Collector service until it is available. import_pipeline () imports your pipeline.
check_data_collector_status () runs in the background and ${SDC_DIST}/bin/streamsets "$@" is started in the foreground as before. So the pipeline is imported after the Data Collector service is started.
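Assuming the modified docker-entrypoint.sh sits next to the Dockerfile and myPipeline.json, the usage stays the same as before (a sketch):
docker build -t cmp/sdc .
docker run -p 18630:18630 -d --name sdc cmp/sdc
docker logs -f sdc    # wait for "Finished importing pipeline"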
Run this image with a sleep command:
docker run -p 18630:18630 -d --name sdc cmp/sdc sleep 300
300 is the time to sleep in seconds.
Then exec your script manually within the docker container and find out what's wrong.
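For example (a sketch of that debugging loop):
docker exec -it sdc /bin/sh
# ...then, inside the container, run the import command from your CMD by hand
# and read its error output directly.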

issues in accessing docker environment variables in systemd service files

1) I am running a docker container with the following cmd (passing a few env variables with the -e option):
$ docker run --name=xyz -d -e CONTAINER_NAME=xyz -e SSH_PORT=22 -e NWMODE=HOST -e XDG_RUNTIME_DIR=/run/user/0 --net=host -v /mnt:/mnt -v /dev:/dev -v /etc/sysconfig/network-scripts:/etc/sysconfig/network-scripts -v /:/hostroot/ -v /etc/hostname:/etc/host_hostname -v /etc/localtime:/etc/localtime -v /var/run/docker.sock:/var/run/docker.sock --privileged=true cf3681e04bfb
2) After running the container as above, I check the env variable NWMODE inside the container, and it shows correctly as shown below:
$ docker exec -it xyz bash
$ env | grep NWMODE
NWMODE=HOST
3) Now, I created a sample service 'b' shown below which executes a script b.sh (where I try to access NWMODE):
root@ubuntu16:/etc/systemd/system# cat b.service
[Unit]
Description=testing service b
[Service]
ExecStart=/bin/bash /etc/systemd/system/b.sh
root@ubuntu16:/etc/systemd/system# cat b.sh
#!/bin/bash
systemctl import-environment
echo "NWMODE:" $NWMODE
4) Now if I start service 'b' and look at its logs, it shows that it is not able to access the NWMODE env variable:
$ systemctl start b
$ journalctl -fu b
...
systemd[1]: Started testing service b.
bash[641]: NWMODE:    // blank for $NWMODE here
5) Now, rather than having 'systemctl import-environment' in b.sh, if I do the following then the b.service logs show the correct value of the NWMODE env variable:
$ systemctl import-environment
$ systemctl start b
Though step 5 above works, I can't go for it, as all the services in my system will be started automatically by systemd. In that case, can anyone please let me know how I can access the environment variables (passed using the 'docker run...' cmd above) in a service file (say, for example, in b.sh above)? Can this be achieved somehow with systemctl import-environment, or is there some other way?
systemd unsets all environment variables to provide a clean environment. Afaik that is intended to be a security feature.
Workaround: Create a file /etc/systemd/system.conf.d/myenvironment.conf:
[Manager]
DefaultEnvironment=CONTAINER_NAME=xyz NWMODE=HOST XDG_RUNTIME_DIR=/run/user/0
systemd will set the environment variables declared in this file.
You can set up an ENTRYPOINT script that automatically creates this file before running systemd. Example:
RUN echo '#! /bin/bash \n\
echo "[Manager] \n\
DefaultEnvironment=$(while read -r Line; do echo -n "$Line " ; done < <(env)) \n\
" >/etc/systemd/system.conf.d/myenvironment.conf \n\
exec /lib/systemd/systemd \n\
' >/usr/bin/setmyenv && chmod +x /usr/bin/setmyenv
ENTRYPOINT /usr/bin/setmyenv
Instead of creating the script within Dockerfile you can store it outside and add it with COPY:
#! /bin/bash
echo "[Manager]
DefaultEnvironment=$(while read -r Line; do echo -n "$Line " ; done < <(env))
" >/etc/systemd/system.conf.d/myenvironment.conf
exec /lib/systemd/systemd
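A hedged way to check that this works (the image name my-systemd-image is just a placeholder): run the container with the variable set, then look at the generated drop-in and at the service's log.
docker run -d --name xyz -e NWMODE=HOST --privileged my-systemd-image
docker exec xyz cat /etc/systemd/system.conf.d/myenvironment.conf
docker exec xyz journalctl -u b --no-pager | tail -n 2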
TL;DR
Run the command using bash: first store the docker environment variables to a file (or just pipe them to awk), extract and export the variable, and finally run your main script.
ExecStart=/bin/bash -c "cat /proc/1/environ | tr '\0' '\n' > /home/env_file; export MY_ENV_VARIABLE=$(awk -F= -v key=MY_ENV_VARIABLE '$1==key {print $2}' /home/env_file); /usr/bin/python3 /usr/bin/my_python_script.py"
What @mviereck says is true; still, I have found another solution to this problem.
My use case is to pass an environment variable to my systemd container in the docker run command (docker run -e MY_ENV_VARIABLE="some_val") and use that in the python script that is run through the systemd unit file.
According to this post (https://forums.docker.com/t/where-are-stored-the-environment-variables/65762), the container environment variables can be found in the running process /proc/1/environ inside the container. Performing a cat does show that the environment variable MY_ENV_VARIABLE=some_val does exist, though in some mangled form.
$ cat /proc/1/environ
HOSTNAME=271fbnd986bdMY_ENV_VARIABLE=some_valcontainer=dockerLC_ALL=CDEBIAN_FRONTEND=noninteractiveHOME=/rootroot@271fb0d986bd
The main task now would be to extract the MY_ENV_VARIABLE="some_val" value and pass it to the ExecStart directive in the systemd unit file.
(extraction code referenced from How to grep for value in a key-value store from plain text)
# this outputs a nice key,value pair
$ cat /proc/1/environ | tr '\0' '\n'
HOSTNAME=861f23cd1b33
MY_ENV_VARIABLE=some_val
container=docker
LC_ALL=C
DEBIAN_FRONTEND=noninteractive
HOME=/root
# we can store this in a file for use, too
$ cat /proc/1/environ | tr '\0' '\n' > /home/env_file
# we can then reuse the file to extract the value of interest against a key
$ awk -F= -v key="MY_ENV_VARIABLE" '$1==key {print $2}' /home/env_file
some_val
Now in the ExecStart directive in the systemd unit file we can do this:
[Service]
Type=simple
ExecStart=/bin/bash -c "cat /proc/1/environ | tr '\0' '\n' > /home/env_file; export MY_ENV_VARIABLE=$(awk -F= -v key=MY_ENV_VARIABLE '$1==key {print $2}' /home/env_file); /usr/bin/python3 /usr/bin/my_python_script.py"
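After editing the unit file, remember to reload systemd and restart the service (the unit name my_python_script.service is illustrative):
systemctl daemon-reload
systemctl restart my_python_script.service
journalctl -u my_python_script.service -f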

Check mem_limit within a docker container

After some crashes with a docker container with a too low mem_limit, how can I check the mem_limit of this container from inside the container? I want to print an error message on startup and exit if the mem_limit is set too low.
The memory limit is enforced via cgroups. Therefore you need to use cgget to find out the memory limit of the given cgroup.
To test this you can run a container with a memory limit:
docker run --memory 512m --rm -it ubuntu bash
Run this within your container:
apt-get update
apt-get install cgroup-bin
cgget -n --values-only --variable memory.limit_in_bytes /
# will report 536870912
Docker 1.13 mounts the container's cgroup to /sys/fs/cgroup (this could change in future versions). You can check the limit using:
cat /sys/fs/cgroup/memory/memory.limit_in_bytes
On the host you can run docker stats to get a top like monitor of your running containers. The output looks like:
$ docker stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
729e4e0db0a9 dev 0.30% 2.876GiB / 3.855GiB 74.63% 25.3MB / 4.23MB 287kB / 16.4kB 77
This is how I discovered that docker run --memory 4096m richardbronosky/node_build_box npm run install was not getting 4G of memory, because Docker was configured to limit memory to 2G. (In the example above this has been corrected.) Without that insight I was totally lost as to why my process was ending with simply "killed".
Worked for me in the container, thanks for the ideas Sebastian
#!/bin/bash
function memory_limit
{
    awk -F: '/^[0-9]+:memory:/ {
        filepath="/sys/fs/cgroup/memory"$3"/memory.limit_in_bytes";
        getline line < filepath;
        print line
    }' /proc/self/cgroup
}

if [[ $(memory_limit) -lt 419430400 ]]; then
    echo "Memory limit was set too small. Minimum 400m."
    exit 1
fi
Previously /sys/fs/cgroup/memory/memory.limit_in_bytes worked for me, but on my Ubuntu machine with kernel 5.8.0-53-generic it seems that the correct path is now /sys/fs/cgroup/memory.max (cgroup v2) to read the memory limit from inside the container.
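A small sketch that covers both layouts (the standard mount points are assumed; adjust if your runtime mounts cgroups elsewhere):
#!/bin/sh
if [ -r /sys/fs/cgroup/memory.max ]; then
    # cgroup v2: prints the limit in bytes, or "max" if unlimited
    cat /sys/fs/cgroup/memory.max
elif [ -r /sys/fs/cgroup/memory/memory.limit_in_bytes ]; then
    # cgroup v1
    cat /sys/fs/cgroup/memory/memory.limit_in_bytes
fi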
You have to check all values along the path defined in /proc/self/cgroup (example: /sys/fs/cgroup/memory/user.slice/user-1501.slice/session-99.scope) up to /sys/fs/cgroup/memory and look for the minimum. Here is the script:
#!/bin/bash

function memory_limit {
    [ -r /proc/self/cgroup ] || { echo >&2 "Cannot read /proc/self/cgroup" ; return 1; }
    path=$(grep -Poh "memory:\K.*" /proc/self/cgroup)
    [ -n "$path" ] || { echo >&2 "Cannot get memory constraints from /proc/self/cgroup" ; return 1; }
    full_path="/sys/fs/cgroup/memory${path}"
    cd "$full_path" || { echo >&2 "cd $full_path failed" ; return 1; }
    [ -r memory.limit_in_bytes ] || { echo >&2 "Cannot read 'memory.limit_in_bytes' at $(pwd)" ; return 1; }
    min=$(cat memory.limit_in_bytes)
    while [[ $(pwd) != /sys/fs/cgroup/memory ]]; do
        cd .. || { echo >&2 "cd .. failed in $(pwd)" ; return 1; }
        [ -r memory.limit_in_bytes ] || { echo >&2 "Cannot read 'memory.limit_in_bytes' at $(pwd)" ; return 1; }
        val=$(cat memory.limit_in_bytes)
        (( val < min )) && min=$val
    done
    echo "$min"
}
memory_limit
21474836480
In my situation, I have
cat /proc/self/cgroup
3:memory:/user.slice/user-1501.slice/session-99.scope
cat /sys/fs/cgroup/memory/user.slice/user-1501.slice/session-99.scope/memory.limit_in_bytes
9223372036854771712
cat /sys/fs/cgroup/memory/user.slice/user-1501.slice/memory.limit_in_bytes
21474836480 <= actual limit
cat /sys/fs/cgroup/memory/user.slice/memory.limit_in_bytes
9223372036854771712
cat /sys/fs/cgroup/memory/memory.limit_in_bytes
9223372036854771712
Thanks to Mandragor for the original idea.

How to workaround "the input device is not a TTY" when using grunt-shell to invoke a script that calls docker run?

When issuing grunt shell:test, I'm getting the warning "the input device is not a TTY" and don't want to have to use -f (force):
$ grunt shell:test
Running "shell:test" (shell) task
the input device is not a TTY
Warning: Command failed: /bin/sh -c ./run.sh npm test
the input device is not a TTY
Use --force to continue.
Aborted due to warnings.
Here's the Gruntfile.js command:
shell: {
    test: {
        command: './run.sh npm test'
    }
}
Here's run.sh:
#!/bin/sh
# should use the latest available image to validate, but not LATEST
if [ -f .env ]; then
RUN_ENV_FILE='--env-file .env'
fi
docker run $RUN_ENV_FILE -it --rm --user node -v "$PWD":/app -w /app yaktor/node:0.39.0 "$@"
Here's the relevant package.json scripts with command test:
"scripts": {
"test": "mocha --color=true -R spec test/*.test.js && npm run lint"
}
How can I get grunt to make docker happy with a TTY? Executing ./run.sh npm test outside of grunt works fine:
$ ./run.sh npm test
> yaktor@0.59.2-pre.0 test /app
> mocha --color=true -R spec test/*.test.js && npm run lint
[snip]
105 passing (3s)
> yaktor@0.59.2-pre.0 lint /app
> standard --verbose
Remove the -t from the docker run command:
docker run $RUN_ENV_FILE -i --rm --user node -v "$PWD":/app -w /app yaktor/node:0.39.0 "$@"
The -t tells docker to configure a tty, which won't work if you don't have a tty yourself and try to attach to the container (the default when you don't pass -d).
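If you still want a TTY when one is actually available (interactive runs), one common pattern is to request it conditionally; here is a sketch based on the run.sh above:
#!/bin/sh
if [ -f .env ]; then
    RUN_ENV_FILE='--env-file .env'
fi
# only ask docker for a TTY when stdin really is a terminal
TTY_FLAG='-i'
if [ -t 0 ]; then
    TTY_FLAG='-it'
fi
docker run $RUN_ENV_FILE $TTY_FLAG --rm --user node -v "$PWD":/app -w /app yaktor/node:0.39.0 "$@"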
This solved an annoying issue for me. The script had these lines:
docker exec **-it** $( docker ps | grep mysql | cut -d' ' -f1) mysql --user= ..... > /var/tmp/temp.file
mutt -s "File is here" someone@somewhere.com < /var/tmp/temp.file
The script would run great if run directly, and the mail would come with the correct output. However, when run from cron (crontab -e), the mail would come with no content. I tried many things around permissions, shells, paths etc. However, no joy!
Finally found this:
*/20 * * * * scriptblah.sh > $HOME/cron.log 2>&1
And on that cron.log file found this output:
the input device is not a TTY
Search led me here. And after I removed the -t, it's working great now!
docker exec **-i** $( docker ps | grep mysql | cut -d' ' -f1) mysql --user= ..... > /var/tmp/temp.file

Share files between host system and docker container using specific UID

I'm trying to share files with a Docker guest using volume sharing. In order to get the same UID, and therefore interoperability with those files, I would like to create a user in the Docker guest with the same UID as my own user.
In order to test out the idea, I wrote the following simplistic Dockerfile:
FROM phusion/baseimage
RUN touch /root/uid-$UID
Testing it with docker build -t=docktest . and then docker run docktest ls -al /root reveals that the file is simply named uid-.
Is there a means to share host environment variables with Docker during the guest build process?
While researching a solution to this problem, I have found the following article to be a great resource: https://medium.com/@mccode/understanding-how-uid-and-gid-work-in-docker-containers-c37a01d01cf
In my scripts, the solution boiled down to the following :
docker run --user $(id -u):$(id -g) -v /hostdirectory:/containerdirectory -v /etc/passwd:/etc/passwd myimage
Of course, id -u can be replaced by other means of retrieving a uid, such as stat -c "%u" /somepath
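An equivalent sketch that keys off the owner of the shared directory instead of the invoking user (the :ro on /etc/passwd is my addition):
docker run --user $(stat -c "%u:%g" /hostdirectory) \
    -v /hostdirectory:/containerdirectory -v /etc/passwd:/etc/passwd:ro myimage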
The environment is not shared; you could use the -e / --env options to set environment variables in the container.
I usually use this approach when I want to have the same owner as the mapped volume: I check the uid & gid of the directory in the container and then create a corresponding user. Here is my script (setuser.sh) which creates a user for a directory:
#!/bin/bash

setuser() {
    if [ -z "$1" ]; then
        echo "Usage: $0 <path>"
        return
    fi
    CURRENT_UID=`id -u`
    DEST_UID=`stat -c "%u" $1`
    if [ $CURRENT_UID = $DEST_UID ]; then
        return
    fi
    DEST_GID=`stat -c "%g" $1`
    if [ -e /home/$DEST_UID ]; then
        return
    fi
    groupadd -g $DEST_GID $DEST_GID
    useradd -u $DEST_UID -g $DEST_GID $DEST_UID
    mkdir -p /home/$DEST_UID
    chown $DEST_UID:$DEST_GID /home/$DEST_UID
}

setuser $1
And this is the wrapper script which runs commands as that user, where the directory with the permissions is specified either as $USER_DIR or in /etc/user_dir:
#!/bin/bash

if [ -z "$USER_DIR" ]; then
    if [ -e /etc/user_dir ]; then
        export USER_DIR=`head -n 1 /etc/user_dir`
    fi
fi

if [ -n "$USER_DIR" ]; then
    if [ ! -d "$USER_DIR" ]; then
        echo "Please mount $USER_DIR before running this script"
        exit 1
    fi
    . `dirname $BASH_SOURCE`/setuser.sh $USER_DIR
fi

if [ -n "$USER_DIR" ]; then
    cd $USER_DIR
fi

if [ -e /etc/user_script ]; then
    . /etc/user_script
fi

if [ $CURRENT_UID = $DEST_UID ]; then
    "$@"
else
    su $DEST_UID -p -c "$@"
fi
P.S. Alleo suggested a different approach: map the user and group files into the container and specify the uid and gid. Then your container does not depend on built-in users/groups and you can use it without additional scripts.
This is not possible and will probably never be possible because of the design philosophy of keeping builds independent of machines. Issue 6822.
I slightly modified @ISanych's answer:
#!/usr/bin/env bash

user_exists() {
    id -u $1 > /dev/null 2>&1
}

group_exists() {
    id -g $1 > /dev/null 2>&1
}

setuser() {
    if [[ "$#" != 3 ]]; then
        echo "Usage: $0 <path> <user> <group>"
        return
    fi
    local dest_uid=$(stat -c "%u" $1)
    local dest_gid=$(stat -c "%g" $1)
    if user_exists $dest_uid; then
        id -nu $dest_uid
        return
    fi
    local dest_user=$2
    local dest_group=$3
    if user_exists $dest_user; then
        userdel $dest_user
    fi
    if group_exists $dest_group; then
        groupdel $dest_group
    fi
    groupadd -g $dest_gid $dest_group
    useradd -u $dest_uid -g $dest_gid -s $DEFAULT_SHELL -d $DEFAULT_HOME -G root $dest_user
    chown -R $dest_uid:$dest_gid $DEFAULT_HOME
    id -nu $dest_user
}

REAL_USER=$(setuser $SRC_DIR $DEFAULT_USER $DEFAULT_GROUP)
REAL_USER=$(setuser $SRC_DIR $DEFAULT_USER $DEFAULT_GROUP)
The setuser function accepts the user and group names that you want to assign to the uid and gid of the provided directory. If a user with that uid already exists, it simply returns the login corresponding to that uid; otherwise it creates the user and group and returns the login originally passed to the function.
So you get the login of the user that owns the destination directory.
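The returned login can then be used to drop privileges before handing off to the real entrypoint, for example (a sketch; gosu is only one option, su-exec or plain su work as well):
exec gosu "$REAL_USER" "$@"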
