Docker is "too verbose" in npm-cli-adduser - docker

When my Dockerfile does the RUN command:
RUN npm-cli-adduser -r https://$PRIVATE_REPOSITORY_URL/repository/npm-read -u $PRIVATE_USERNAME -p "$PRIVATE_PASSWORD" -e service-foobar@example.com
It echoes the private environment variables in the terminal. I would like to hide these variables, so that users who read the terminal output do not see them.
Is this possible?

You can use Docker build secrets, but as far as I understand you will need to store the variable in a file.
Like this:
Dockerfile
FROM node:12
RUN --mount=type=secret,id=mysecret,dst=/foobar echo $(cat /foobar) >> test.txt
# In your case
# RUN npm-cli-adduser -r https://$PRIVATE_REPOSITORY_URL/repository/npm-read -u $PRIVATE_USERNAME -p $(cat /foobar) -e service-foobar@example.com
Build command
docker build --secret id=mysecret,src=mysecret.txt . -t docker-debug
mysecret.txt content:
YOURPASSWORD
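One extra note from my side (not part of the original answer): the --mount=type=secret flag requires BuildKit, so on older Docker versions you may need to add the syntax directive as the first line of the Dockerfile and enable BuildKit when building, roughly like this:
# syntax=docker/dockerfile:1   (first line of the Dockerfile)
DOCKER_BUILDKIT=1 docker build --secret id=mysecret,src=mysecret.txt . -t docker-debug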

Trying to copy a script into a detached Docker container, and execute it with docker exec

Right now I am setting my Docker instance running with:
sudo docker run --name docker_verify --rm \
-t -d daoplays/rust_v1.63
so that it runs in detached mode in the background. I then copy a script to that instance:
sudo docker cp verify_run_script.sh docker_verify:/.
and I want to be able to execute that script with what I expected to be:
sudo docker exec -d docker_verify bash \
-c "./verify_run_script.sh"
However, this doesn't seem to do anything. If from another terminal I run
sudo docker container logs -f docker_verify
nothing is shown. If I attach myself to the Docker instance then I can run the script myself but that sort of defeats the point of running in detached mode.
I assume I am just not passing the right arguments here, but I am really not clear what I should be doing!
When you run a command in a container, you also need to allocate a pseudo-TTY if you want to see the results.
Your command should be:
sudo docker exec -t docker_verify bash \
-c "./verify_run_script.sh"
(note the -t flag)
Steps to reproduce it:
# create a dummy script
cat > script.sh <<EOF
echo This is running!
EOF
# run a container to work with
docker run --rm --name docker_verify -d alpine:latest sleep 3000
# copy the script
docker cp script.sh docker_verify:/
# run the script
docker exec -t docker_verify sh -c "chmod a+x /script.sh && /script.sh"
# clean up
docker container rm -f docker_verify
You should see This is running! in the output.
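If you specifically want the output to show up in docker container logs while keeping the exec detached with -d, one workaround sketch (my addition, not part of the answer above) is to redirect the script's output to the main process's stdout, since that is the stream the log command follows:
# PID 1's stdout/stderr is what `docker container logs` shows
docker exec -d docker_verify sh -c "sh /script.sh > /proc/1/fd/1 2>&1"
docker container logs docker_verify   # should now include: This is running!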

Replacing Docker environment variables while running the image locally

Docker Image : test
Following are default value in Dockerfile:
ENV users=2
ENV rampup=10
ENV duration=120
ENV environment=DEV
Following is entrypoint
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh :
bash ./bin/jmeter.sh -n -t -Jenvironment=${environment} -Jusers=${users} -Jrampup=${rampup} -Jduration=${duration} -j ${workspace}report.log
Now I want to run it locally by replacing the environment variables:
docker run test -e environment=STG -e users=20 -e rampup=10 -e duration=120
But, somehow the values are not getting replaced. What is that I am doing incorrectly, can someone please help?
Any docker run options (like -e to set environment variables) need to go before the image name in the docker run command. Anything after the image name is interpreted as the command you'd like the container to run, and when you also have an entrypoint, it gets passed as parameters to the entrypoint. (If you edit your script to include the line echo "$@" you'll see those -e options.)
docker run \
-e environment=STG -e users=20 -e rampup=10 -e duration=120 \
test
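If you want to see this for yourself, here is a small sketch (it reuses the test image tag from the question; the throwaway entrypoint is mine) that makes the entrypoint print whatever it receives:
# demo only: an entrypoint that just echoes its arguments
cat > demo-entrypoint.sh <<'EOF'
#!/bin/sh
echo "entrypoint args: $@"
EOF
chmod +x demo-entrypoint.sh
docker run --rm -v "$PWD/demo-entrypoint.sh":/demo.sh --entrypoint /demo.sh test -e environment=STG
# prints: entrypoint args: -e environment=STG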
Your Dockerfile and docker run command look fine; I am pretty sure the issue is not with Docker. Here is the simplest example you can try.
FROM alpine
ENV users=2
ENV rampup=10
ENV duration=120
ENV environment=DEV
#COPY entrypoint.sh /entrypoint.sh
RUN printf '#!/bin/ash\necho "env is -Jenvironment=${environment} -Jusers=${users} -Jrampup=${rampup} -Jduration=${duration} -j ${workspace}report.log"\n' > /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
Build
docker build -t testenv .
Run
docker run -e rampup=10 -e users=test -t testenv
So you can try to change your entrypoint.sh to
#!/bin/bash
./bin/jmeter.sh -n -t -Jenvironment=${environment} -Jusers=${users} -Jrampup=${rampup} -Jduration=${duration} -j ${workspace}report.log
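To double-check that the values actually reach the container, one quick sanity check (a sketch using the test image tag from the question) is to bypass the entrypoint and dump the environment:
# should print the overridden values when the -e flags come before the image name
docker run --rm -e environment=STG -e users=20 --entrypoint env test | grep -E '^(environment|users)='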

How to get environment variables injected via the -e flag in an entrypoint script?

I need to launch my app on a port that is set via the -e flag in the docker run command.
I run my app from an ENTRYPOINT script and try to read the $PORT environment variable, but none of the variables set via -e are visible there.
Serving the app in Dockerfile
ENTRYPOINT ["sh", "entrypoint.sh"]
entrypoint.sh script:
#!/bin/bash
func start --port $PORT
Docker run command:
docker run -d -p 20937:8081 --name queue_0_middleware -e WEBSITE_CORS_ALLOWED_ORIGINS=https://functions.azure.com,https://functions-staging.azure.com,https://functions-next.azure.com -e PORT=8081
If I run this command locally I add the image name like this: sudo docker run -p 15615:8081 30c7bb13d4b4 --name queue_2_middleware -e PORT=8081
That won't do what you expect, the docker command line is order sensitive. Everything after the image name is used to replace the value of CMD inside your image. With the entrypoint defined, those are just args to your entrypoint script. In other words, the docker command looks like:
docker run ${args_to_run} ${image_name} ${cmd_override}
The fix is to reorder your command with the args to run placed before the image name:
sudo docker run -p 15615:8081 --name queue_2_middleware -e PORT=8081 30c7bb13d4b4
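As a small defensive addition on my part (the 8081 default is borrowed from the question), entrypoint.sh can also fall back to a default when PORT is not injected:
#!/bin/bash
# use the value passed with -e PORT=..., otherwise fall back to 8081
func start --port "${PORT:-8081}"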

Connect to docker container as user other than root

By default, when you run
docker run -it [myimage]
OR
docker attach [mycontainer]
you connect to the terminal as the root user, but I would like to connect as a different user. Is this possible?
For docker run:
Simply add the option --user <user> to change to another user when you start the docker container.
docker run -it --user nobody busybox
For docker attach or docker exec:
Since these commands attach to or execute inside an existing process, they use the current user of that process directly.
docker run -it busybox # CTRL-P/Q to quit
docker attach <container id> # then you have root user
/ # id
uid=0(root) gid=0(root) groups=10(wheel)
docker run -it --user nobody busybox # CTRL-P/Q to quit
docker attach <container id>
/ $ id
uid=99(nobody) gid=99(nogroup)
If you really want to attach as a specific user, then either start the container with that user (docker run --user <user>, or set USER in your Dockerfile), or change the user inside the container using su.
You can run a shell in a running docker container using a command like:
docker exec -it --user root <container id> /bin/bash
As an updated answer from 2020: the --user / -u option takes a username or UID (format: <name|uid>[:<group|gid>]). It works for me like this:
docker exec -it -u root:root container /bin/bash
Reference: https://docs.docker.com/engine/reference/commandline/exec/
You can specify USER in the Dockerfile. All subsequent actions will be performed using that account. You can specify USER one line before the CMD or ENTRYPOINT if you only want to use that user when launching a container (and not when building the image). When you start a container from the resulting image, you will attach as the specified user.
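A minimal sketch of that pattern (the base image, user name, and command are placeholders of mine):
FROM ubuntu
RUN useradd --create-home appuser   # everything above the USER line still runs as root
USER appuser
CMD ["bash"]
# docker run -it <image> now attaches as appuser instead of root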
The only way I am able to make it work is by:
docker run -it -e USER=$USER -v /etc/passwd:/etc/passwd -v `pwd`:/siem mono bash
su - magnus
So I have to both pass the $USER environment variable and bind-mount the /etc/passwd file. This way I can compile in the /siem folder and keep the files there owned by my user instead of root.
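A closely related variant (a sketch of the same idea, not the exact commands from this answer) skips the interactive su step by passing the UID/GID directly and mounting /etc/passwd read-only:
docker run -it --rm \
-u "$(id -u):$(id -g)" \
-v /etc/passwd:/etc/passwd:ro \
-v "$PWD":/siem \
mono bash
# files created under /siem are then owned by the invoking user, not root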
My solution:
#!/bin/bash
user_cmds="$#"
GID=$(id -g $USER)
UID=$(id -u $USER)
RUN_SCRIPT=$(mktemp -p $(pwd))
(
cat << EOF
addgroup --gid $user_gid $USER
useradd --no-create-home --home /cmd --gid $user_gid --uid $user_uid $USER
cd /cmd
runuser -l $USER -c "${user_cmds}"
EOF
) > $RUN_SCRIPT
trap "rm -rf $RUN_SCRIPT" EXIT
docker run -v $(pwd):/cmd --rm my-docker-image "bash /cmd/$(basename ${RUN_SCRIPT})"
This allows the user to run arbitrary commands using the tools provided by my-docker-image. Note how the user's current working directory is volume-mounted to /cmd inside the container.
I am using this workflow to allow my dev team to cross-compile C/C++ code for the arm64 target, whose BSP I maintain (my-docker-image contains the cross-compiler, sysroot, make, cmake, etc.). With this, a user can simply do something like:
cd /path/to/target_software
cross_compile.sh "mkdir build; cd build; cmake ../; make"
Where cross_compile.sh is the script shown above. The addgroup/useradd machinery allows user-ownership of any files/directories created by the build.
While this works for us, it seems sort of hacky. I'm open to alternative implementations ...
For docker-compose, in docker-compose.yml:
version: '3'
services:
  app:
    image: ...
    user: ${UID:-0}
    ...
In .env:
UID=1000
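A usage sketch (the app service name comes from the snippet above; generating .env this way is just one possible workflow):
# record the host UID once, then bring the stack up
echo "UID=$(id -u)" > .env
docker-compose up -d
docker-compose exec app id   # should report your host uid instead of 0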
Execute command as www-data user: docker exec -t --user www-data container bash -c "ls -la"
This solved my use case, which was: "Compile webpack stuff in a nodejs container on Windows running Docker Desktop with WSL2 and have the built assets owned by the currently logged-in user."
docker run -u 1000 -v "$PWD":/build -w /build node:10.23 /bin/sh -c 'npm install && npm run build'
Based on the answer by eigenfield. Thank you!
Also this material helped me understand what is going on.

Running a script in a Docker container by accepting an -e parameter

I have a very simple script called myscript.sh
echo "this is test " > /tmp/myfile.txt
echo $TEST >> /tmp/myfile.txt
I have stored this script on disk and I plan to pass it to the container as a volume, like this:
docker run -d --name test \
-v /home/docker/test/myscript.sh:/tmp/myscript.sh \
-e TESTING=just-a-test \
test
The Dockerfile looks like this below
FROM ubuntu
CMD ["bash", "/tmp/myscript.sh"]
So the thought process is to get this script executed and produce the file myfile.txt, which would contain the value passed with -e.
Instead I am getting:
docker@boot2docker:~/test$ docker exec -it test /bin/bash Error
response from daemon: Container test is not running
This means that even this simple program did not keep running as a container.
I could not figure it out.
The container ran, executed the script, then exited. A container only runs as long as its main process. When that stops, the container stops.
A simpler test would be to change your test script to:
#!/bin/bash
echo $TEST
I would change your Dockerfile to copy the file in and remove the "bash" part of the CMD instruction:
FROM ubuntu
COPY myscript.sh /myscript.sh
CMD /myscript.sh
Now rebuild and run:
$ docker build -t test .
...
$ docker run -e TEST=VAL test
...
The container should echo the value of the test variable and exit. (I haven't tested any of this, so apologies for any mistakes).
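One detail I would add (an assumption on my part, since the script in the question has no shebang): give the script a shebang line and the executable bit so that CMD /myscript.sh can run it directly:
FROM ubuntu
COPY myscript.sh /myscript.sh
RUN chmod +x /myscript.sh
CMD ["/myscript.sh"]
# myscript.sh itself should start with #!/bin/bash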
The answer to this question is to use ENTRYPOINT instead of CMD.
I did some research and came up with a solution that looks like this:
ENTRYPOINT ["bash", "<script>"]
To run the script just use
docker run -d --name <name> [--privileged] -p <host_port>:<container_port> \
-v /script.sh:/tmp/script.sh \
<image>
where -v takes <host_path>:<container_path>. You can also use wget to fetch the script, like most people do, and execute it at runtime.
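A concrete version of that pattern, reusing the names from the question above (test image, myscript.sh, and the -e value); note that the variable name must match what the script reads ($TEST):
Dockerfile
FROM ubuntu
ENTRYPOINT ["bash", "/tmp/myscript.sh"]
Run command
docker run -d --name test \
-v /home/docker/test/myscript.sh:/tmp/myscript.sh \
-e TEST=just-a-test \
test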
I appreciate all the people who tried to solve the query.
