I'm trying to find a way to use hosts defined in my user's ~/.ssh/config file to define a docker context.
My ~/.ssh/config file contains:
Host my-server
HostName 10.10.10.10
User remoteuser
IdentityFile /home/me/.ssh/id_rsa-mykey.pub
IdentitiesOnly yes
I'd like to create a docker context as follows:
docker context create \
--docker host=ssh://my-server \
--description="remoteuser on 10.10.10.10" \
my-server
Issuing the docker --context my-server ps command throws an error stating:
... please make sure the URL is valid ... Could not resolve hostname my-server: Name or service not known
From what I could figure out, the docker command uses the sudo mechanism to elevate its privileges. So I guess it searches /root/.ssh/config, since ssh doesn't use the $HOME variable.
I tried to symlink the user's config as the root one:
sudo ln -s /home/user/.ssh/config /root/.ssh/config
But this throws another error:
... please make sure the URL is valid ... Bad owner or permissions on /home/user/.ssh/config
The same happens when creating the /root/.ssh/config file simply containing:
Include /home/*/.ssh/config
Does someone have an idea how to get my user's .ssh/config file parsed by ssh when it is invoked via sudo?
Thank you.
Have you confirmed your (probably correct) theory that docker is running as root, by just directly copying your user's ~/.ssh/config contents into /root/.ssh/config? If that doesn't work, you're back to square one...
Otherwise, either the symlink or the Include ought to work just fine (a symlink inherits the permissions of the file it is pointing at).
Another possibility is that your permissions actually are bad -- don't forget you have to change the permissions on both ~/.ssh AND ~/.ssh/config.
chmod 700 /home/user/.ssh
chmod 600 /home/user/.ssh/config
And maybe even:
chmod 700 /root/.ssh
chmod 600 /root/.ssh/config
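If neither the symlink nor the Include gets past the ownership check, a fallback is to copy the Host block from the question directly into a root-owned config (root also needs read access to the identity file itself):
# /root/.ssh/config -- must be owned by root, mode 600
Host my-server
    HostName 10.10.10.10
    User remoteuser
    # note: IdentityFile normally points at the private key;
    # the .pub path below is kept verbatim from the question
    IdentityFile /home/me/.ssh/id_rsa-mykey.pub
    IdentitiesOnly yes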
I'm trying to get my hands on Docker and write my own Gitea image without using docker compose.
The snippet of code that doesn't seem to be working is below, the second line is what returns an error.
RUN useradd -u 8877 test
RUN wget -O /tmp/gitea <link> && mv /tmp/gitea /home/test && chmod +x /home/test/gitea
However, when moving the file from /tmp to the "test" user's home directory (the Dockerfile runs as root), it claims that the gitea file doesn't exist. I can't see any issue with the paths; I expect it all to run without a problem since, in my eyes, that directory exists.
Error message:
cannot access '/home/test/gitea': Not a directory
Is my pathing wrong or have I gone wrong somewhere else?
Edit from answer: Downloading directly to that directory throws the same error "No such file or directory"
Solved!
Turns out creating a user doesn't create a home directory for them. Adding the flags -rm -d /home/test to useradd creates the home directory (-m creates it, -d sets its path), and therefore the path now exists.
Why not download directly?
RUN wget -O /home/test/gitea <link> && chmod +x /home/test/gitea
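Combining the accepted fix with the direct download, the relevant lines might look like this (with <link> still standing in for the real download URL):
# -m creates the home directory, -d sets its path
RUN useradd -rm -d /home/test -u 8877 test
RUN wget -O /home/test/gitea <link> && chmod +x /home/test/gitea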
I want to set up a bind mount (volume) between JupyterLab and the VM. The only problem is permissions: the folder I create (/home/jovyan/work) always has root permissions, and I cannot create anything inside it. I tried many solutions. In a Dockerfile I tried this one:
FROM my_image
RUN if [[ -d "$HOME/toto" ]] ; then echo OK ; else mkdir "$HOME/toto" ; fi
RUN chmod 777 $HOME/toto
==> I still got no permissions on the mounted folder
ARG NB_DIR=/tmp/lab_share
RUN mkdir "$NB_DIR"
RUN chown :100 "$NB_DIR"
RUN chmod g+rws "$NB_DIR"
RUN apt-get update && apt-get install -y acl
RUN setfacl -d -m g::rwx "$NB_DIR"
USER root
==> The problem here is that setfacl is not recognized in the container; I tried to install it just before, but that still didn't work.
I tried to add GRANT_SUDO in the JupyterHub service:
extraEnv:
GRANT_SUDO: "yes"
==> The problem here is that extraEnv is not recognized
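If this is the Zero to JupyterHub Helm chart (an assumption on my part), extraEnv is nested under the singleuser key rather than sitting at the top level, which might be why it was not recognized; a sketch of that layout:
singleuser:
  uid: 0                  # GRANT_SUDO reportedly also needs the container to start as root
  extraEnv:
    GRANT_SUDO: "yes"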
I tried to create a hook in the jupyterhub_config file, just after the binding code:
# imports and logger needed by this snippet
import logging
import os

logger = logging.getLogger(__name__)

notebook_dir = os.environ.get('DOCKER_NOTEBOOK_DIR') or '/home/jovyan/work'
c.DockerSpawner.notebook_dir = notebook_dir
c.DockerSpawner.volumes = {"/tmp/{username}": notebook_dir}
c.DockerSpawner.remove_containers = True

##### CREATE HOOK
def create_dir_hook(spawner):
    username = spawner.user.name  # get the username
    logger.info(f"### USERNAME {username}")
    volume_path = os.path.join('/tmp', username)
    logger.info(f"### VOLUME PATH {volume_path}")
    if not os.path.exists(volume_path):
        # create a directory with mode 0o777:
        # hub and container user must have the same UID for it to be writable,
        # while staying readable by other users on the system
        logger.info(f"FOLDER FOR USER {username} not existing")
        logger.info(f"CREATING A FOLDER IN {volume_path}")
        os.mkdir(volume_path, 0o777)
    # now do whatever you think your user needs
    # ...
    logger.info("coucou")

# attach the hook function to the spawner
c.Spawner.pre_spawn_hook = create_dir_hook
In this solution the interpreter doesn't read the whole if block, not even the else block. I found in the Docker docs that this is a well-known issue in Docker, but I didn't find its solution. I really need your help; if you have any solution, I would appreciate it. Thanks a lot.
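One likely culprit for the unread if block: [[ ... ]] is bash syntax, while RUN uses /bin/sh by default, so the test never behaves as expected. A hedged workaround, assuming bash is available in my_image, is to switch the build shell:
FROM my_image
# use bash instead of the default /bin/sh for the RUN lines below
SHELL ["/bin/bash", "-c"]
RUN if [[ -d "$HOME/toto" ]] ; then echo OK ; else mkdir "$HOME/toto" ; fi
RUN chmod 777 "$HOME/toto"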
My Dockerfile extends from php:8.1-apache. The following happens while developing:
The application creates log files (as www-data, 33:33)
I create files (as the image's default user root, 0:0) within the container
These files are mounted on my host where I'm acting as user (1000:1000). Of course I'm running into file permission issues now. I'd like to update/delete files created in the container on my host and vice versa.
My current solution is to set the image's user to www-data. In that way, all created files belong to it. Then, I change its user and group id from 33 to 1000. That solves my file permission issues.
However, this leads to another problem:
I'm prepending sudo -E to the entrypoint and command. I'm doing that because they're normally running as root and my custom entrypoint requires root permissions. But in that way the stop signal stops working and the container has to be killed when I want it to stop:
~$ time docker-compose down
Stopping test_app ... done
Removing test_app ... done
Removing network test_default
real 0m10,645s
user 0m0,167s
sys 0m0,004s
Here's my Dockerfile:
FROM php:8.1-apache AS base
FROM base AS dev
COPY entrypoint.dev.sh /usr/local/bin/custom-entrypoint.sh
ARG user_id=1000
ARG group_id=1000
RUN set -xe \
# Create a home directory for www-data
&& mkdir --parents /home/www-data \
&& chown --recursive www-data:www-data /home/www-data \
# Make www-data's user and group id match my host user's ones (1000 and 1000)
&& usermod --home /home/www-data --uid $user_id www-data \
&& groupmod --gid $group_id www-data \
# Add sudo and let www-data execute it without asking for a password
&& apt-get update \
&& apt-get install --yes --no-install-recommends sudo \
&& rm --recursive --force /var/lib/apt/lists/* \
&& echo "www-data ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/www-data
USER www-data
# Run entrypoint and command as sudo, as my entrypoint does some config substitution and both normally run as root
ENTRYPOINT [ "sudo", "-E", "custom-entrypoint.sh" ]
CMD [ "sudo", "-E", "apache2-foreground" ]
Here's my custom-entrypoint.sh
#!/bin/sh
set -e
sed --in-place 's#^RemoteIPTrustedProxy.*#RemoteIPTrustedProxy '"$REMOTEIP_TRUSTED_PROXY"'#' $APACHE_CONFDIR/conf-available/remoteip.conf
exec docker-php-entrypoint "$@"
What do I need to do to make the container catch the stop signal (it is SIGWINCH for the Apache server) again? Or is there a better way to handle the file permission issues, so I don't need to run the entrypoint and command with sudo -E?
What do I need to do to make the container catch the stop signal (it is SIGWINCH for the Apache server) again?
First, get rid of sudo. If you need to be root in your container, run it as root with USER root in your Dockerfile. There's little value added by sudo in a container, since it should be an environment to run one app, not a multi-user general-purpose Linux host.
Or is there a better way to handle the file permission issues, so I don't need to run the entrypoint and command with sudo -E?
The pattern I go with is to have developers launch the container as root, and have the entrypoint detect the uid/gid of the mounted volume, and adjust the uid/gid of the user in the container to match that id before running gosu to drop permissions and run as that user. I've included a lot of this logic in my base image example (note the fix-perms script that tweaks the uid/gid). Another example of that pattern is in my jenkins-docker image.
You'll still need to either configure root's login shell to automatically run gosu inside the container, or remember to always pass -u www-data when you exec into your image, but now that uid/gid will match your host.
This is primarily for development. In production, you probably don't want host volumes, use named volumes instead, or at least hardcode the uid/gid of the user in the image to match the desired id on the production hosts. That means the Dockerfile would still have USER www-data but the docker-compose.yml for developers would have user: root that doesn't exist in the compose file in production. You can find a bit more on this in my DockerCon 2019 talk (video here).
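For concreteness, here is a stripped-down sketch of that entrypoint pattern, assuming gosu is installed in the image and /var/www/html is the bind-mounted directory (both assumptions; the fix-perms script in the linked base image is more thorough):
#!/bin/sh
set -e
# uid/gid of the bind-mounted code, e.g. 1000:1000 when mounted from the host
uid=$(stat -c '%u' /var/www/html)
gid=$(stat -c '%g' /var/www/html)
if [ "$uid" != "0" ]; then
    # realign www-data with the host user that owns the mount
    usermod  --non-unique --uid "$uid" www-data
    groupmod --non-unique --gid "$gid" www-data
fi
# drop privileges and hand off PID 1, so Apache still receives SIGWINCH
exec gosu www-data apache2-foreground
Because the container starts as root, usermod and groupmod work without sudo, and because gosu execs the final process, the stop signal reaches Apache directly.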
You can use user namespaces to map a different user/group in your Docker container to your user on the host.
For example, the group www-data/33 in the container could be the group docker-www-data/100033 on the host; you just have to be in that group to access the log files.
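A minimal sketch of that remapping, assuming the daemon is restarted afterwards and the standard dockremap entries exist in /etc/subuid and /etc/subgid:
# /etc/docker/daemon.json
{
  "userns-remap": "default"
}

# /etc/subuid and /etc/subgid, giving the 100000 offset used above:
# dockremap:100000:65536
With that offset, uid/gid 33 (www-data) inside the container shows up as 100033 on the host.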
I am trying to run Metricbeat using Docker on a Windows machine, and I have changed metricbeat.yml per my requirements.
docker run -v /c/Users/someuser/docker/metricbeat.yml:/usr/share/metricbeat/metricbeat.yml docker.elastic.co/beats/metricbeat:5.6.0
but I am getting this error:
metricbeat2017/09/17 10:13:19.285547 beat.go:346: CRIT Exiting: error loading config file: config file ("metricbeat.yml") can only be writable by the owner but the permissions are "-rwxrwxrwx" (to fix the permissions use: 'chmod go-w /usr/share/metricbeat/metricbeat.yml')
Exiting: error loading config file: config file ("metricbeat.yml") can only be writable by the owner but the permissions are "-rwxrwxrwx" (to fix the permissions use: 'chmod go-w /usr/share/metricbeat/metricbeat.yml')
Why am I getting this?
What is the right way to make a permanent change to a file in a Docker container (as I don't want to change the configuration file each time the container starts)?
Edit:
A container is not meant to be edited/changed. If necessary, Docker volume management is available to externalize all configuration-related work. Thanks.
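Following that advice, one way to make the change permanent is a small derived image that bakes the config in at build time (a sketch; the tag and paths mirror the question):
FROM docker.elastic.co/beats/metricbeat:5.6.0
COPY metricbeat.yml /usr/share/metricbeat/metricbeat.yml
# satisfy the owner-only-writable check from the error message
RUN chmod go-w /usr/share/metricbeat/metricbeat.yml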
So there are two options you can use here, I think.
The first is that you can ensure the file has the proper permissions:
chmod 644 metricbeat.yml
Or you can run your docker command with -strict.perms=false, which tells Metricbeat not to care about the permissions on the metricbeat.yml file:
docker run \
  --volume="/c/Users/someuser/docker/metricbeat.yml:/usr/share/metricbeat/metricbeat.yml" \
  docker.elastic.co/beats/metricbeat:5.6.0 \
  -strict.perms=false
(Note that docker options such as --volume have to come before the image name; anything after the image is passed to Metricbeat itself, which is where -strict.perms=false belongs.)
You can see more documentation about that flag in the link below:
https://www.elastic.co/guide/en/beats/metricbeat/current/command-line-options.html#global-flags
I'm currently running OpenShift, but I am running into a problem when I try to build/deploy my custom Docker container. The container works properly on my local machine, but once it gets built in OpenShift and I try to deploy it, I get the error message below. I believe the problem is that I am trying to run commands inside the container as root.
(13)Permission denied: AH00058: Error retrieving pid file /run/httpd/httpd.pid
The Dockerfile I am deploying looks like this:
FROM centos:7
MAINTAINER me<me@me>
RUN yum update -y
RUN yum install -y git https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
RUN yum install -y ansible && yum clean all -y
RUN git clone https://github.com/dockerFileBootstrap.git
RUN ansible-playbook "-e edit_url=andrewgarfield edit_alias=emmastone site_url=testing.com" dockerAnsible/dockerFileBootstrap.yml
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
COPY supervisord.conf /usr/etc/supervisord.conf
RUN rm -rf supervisord.conf
VOLUME [ "/sys/fs/cgroup" ]
EXPOSE 80 443
#CMD ["/usr/bin/supervisord"]
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
I've run into a similar problem multiple times, where it says things like Permission Denied on file /supervisord.log or something similar.
How can I set things up so that my container doesn't run all of the commands as root? That seems to be causing all of the problems I am having.
OpenShift has a strict security policy regarding custom Docker builds.
Have a look at this: OpenShift Application Platform
In particular, see point 4 in the FAQ section, quoted here:
4. Why doesn't my Docker image run on OpenShift?
Security! Origin runs with the following security policy by default:
Containers run as a non-root unique user that is separate from other system users
They cannot access host resources, run privileged, or become root
They are given CPU and memory limits defined by the system administrator
Any persistent storage they access will be under a unique SELinux label, which prevents others from seeing their content
These settings are per project, so containers in different projects cannot see each other by default
Regular users can run Docker, source, and custom builds
By default, Docker builds can (and often do) run as root. You can control who can create Docker builds through the builds/docker and builds/custom policy resource.
Regular users and project admins cannot change their security quotas.
Many Docker containers expect to run as root (and therefore edit all the contents of the filesystem). The Image Author's guide gives recommendations on making your image more secure by default:
Don't run as root
Make directories you want to write to group-writable and owned by group id 0
Set the net-bind capability on your executables if they need to bind to ports <1024
Otherwise, you can see the security documentation for descriptions on how to relax these restrictions.
I hope it helps.
Although you don't have access to root, your OpenShift container, by default, is a member of the root group. You can change some dir/file permissions to avoid the Permission Denied errors.
If you're using a Dockerfile to deploy an image to OpenShift, you can add the following RUN command to your Dockerfile:
RUN chgrp -R 0 /run && chmod -R g=u /run
This will change the group for everything in the /run directory to the root group and then set the group permission on all files to be equivalent to the owner (group equals user) of the file. Essentially, any user in the root group has the same permissions as the owner for every file.
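Applied to the error in the question, where httpd cannot write its pid file, the same pattern might look like this (the /run/httpd path is taken from the error message):
RUN mkdir -p /run/httpd && \
    chgrp -R 0 /run/httpd && \
    chmod -R g=u /run/httpd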
You can run Docker as any user, including root (and not OpenShift's default built-in account UID, 1000030000), by issuing these two commands in sequence with the oc CLI tools:
oc login -u system:admin -n default
followed by
oc adm policy add-scc-to-user anyuid -z default -n projectname
where projectname is the name of the project your container is deployed in.