Run command on host - docker

I have the following job definition here:
run-phpunit:
  needs: [build-image, repository-name]
  runs-on: ubuntu-latest
  defaults:
    run:
      working-directory: /app
  container:
    image: ${{ needs.repository-name.outputs.slug }}/ci:${{ github.sha }}
    volumes:
      - ${{ github.WORKSPACE }}/.output/:/app/.output
    credentials:
      username: ${{ github.actor }}
      password: ${{ secrets.DOCKER_REGISTRY_GITHUB }}
The volume mounted inside the container is owned by the root user, but my container runs as an app user, so it can't write to the mounted directory. I want to change the directory's permissions to 766 before the container runs so that the app user can write to that volume.
How do I run a step on the host rather than inside the container? Basically, I want to run chmod 766 .output on the host machine, not inside the container.
steps:
  - run: (ls -al)
These are the permissions inside the container:
drwxr-xr-x 2 root root 4096 Nov 30 13:30 .output
-rw-r--r-- 1 app app 1255 Nov 30 13:30 Makefile
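One possible workaround (a sketch, not from the original post): when a job specifies container:, every step runs inside that container, so a preparatory chmod has to happen before the container is involved. Dropping the job-level container: and starting the image manually keeps the preparatory steps on the host runner. The phpunit command is an assumption, and the login step is only needed if the image is private.

run-phpunit:
  needs: [build-image, repository-name]
  runs-on: ubuntu-latest
  steps:
    - uses: docker/login-action@v2
      with:
        # add "registry:" here if the image is not hosted on Docker Hub
        username: ${{ github.actor }}
        password: ${{ secrets.DOCKER_REGISTRY_GITHUB }}
    # Runs on the host VM; directories need the execute bit to be traversable,
    # hence 777 rather than 766 for a user the host does not know about.
    - run: mkdir -p "$GITHUB_WORKSPACE/.output" && chmod 777 "$GITHUB_WORKSPACE/.output"
    # Start the CI image by hand; the host-side permissions now let the app user write.
    - run: |
        docker run --rm \
          -v "$GITHUB_WORKSPACE/.output:/app/.output" \
          -w /app \
          ${{ needs.repository-name.outputs.slug }}/ci:${{ github.sha }} \
          vendor/bin/phpunit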

Related

Creating file via ansible directly in container

I want to create a file directly in a container directory. I created the directory first:
- name: create private in container
  ansible.builtin.file:
    path: playcontainer:/etc/ssl/private/
    state: directory
    mode: 0755
But it doesn't let me create a file in that directory:
- name: openssl key
  openssl_privatekey:
    path: playcontainer:/etc/ssl/private/playkey.key
    size: "{{ key_size }}"
    type: "{{ key_type }}"
What am I missing?
Here is a from-scratch, full example of interacting with a container from Ansible.
Please note that this is not always what you want to do. In this specific case, unless you are testing an Ansible role for example, the key should be written into the image at build time by your Dockerfile, or bind mounted from the host when the container starts. You should not mess with a container's filesystem once it is running in production.
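As a minimal sketch of those two alternatives (the paths are only examples, and the openssl CLI is assumed to be available in the base image):

# Dockerfile: bake the key into the image at build time
FROM python:latest
RUN mkdir -p /etc/ssl/private \
 && openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:4096 \
      -out /etc/ssl/private/playkey.key \
 && chmod 600 /etc/ssl/private/playkey.key

Or generate the key on the host and bind mount it when starting the container:

docker run -d --rm --name so_example \
  -v "$PWD/playkey.key:/etc/ssl/private/playkey.key:ro" \
  python:latest sleep infinity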
First we create a container for our test:
docker run -d --rm --name so_example python:latest sleep infinity
Now we need an inventory to target that container (inventories/default/main.yml)
---
all:
  vars:
    ansible_connection: docker
  hosts:
    so_example:
Finally a test playbook.yml to achieve your goal:
---
- hosts: all
  gather_facts: false
  vars:
    key_path: /etc/ssl/private
    key_size: 4096
    key_type: RSA
  tasks:
    - name: Make sure package requirements are met
      apt:
        name: python3-pip
        state: present
    - name: Make sure python requirements are met
      pip:
        name: cryptography
        state: present
    - name: Create private directory
      file:
        path: "{{ key_path }}"
        state: directory
        owner: root
        group: root
        mode: 0750
    - name: Create a key
      openssl_privatekey:
        path: "{{ key_path }}/playkey.key"
        size: "{{ key_size }}"
        type: "{{ key_type }}"
Running the playbook gives:
$ ansible-playbook -i inventories/default/ playbook.yml
PLAY [all] *****************************************************************************************************************************************************************************************
TASK [Make sure package requirements are met] ******************************************************************************************************************************************************
changed: [so_example]
TASK [Make sure python requirements are met] *******************************************************************************************************************************************************
changed: [so_example]
TASK [Create private directory] ********************************************************************************************************************************************************************
changed: [so_example]
TASK [Create a key] ********************************************************************************************************************************************************************************
changed: [so_example]
PLAY RECAP *****************************************************************************************************************************************************************************************
so_example : ok=4 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
We can now check that the file is there
$ docker exec so_example ls -l /etc/ssl/private
total 5
-rw------- 1 root root 3243 Sep 15 13:28 playkey.key
$ docker exec so_example head -2 /etc/ssl/private/playkey.key
-----BEGIN RSA PRIVATE KEY-----
MIIJKQIBAAKCAgEA6xrz5kQuXbd59Bq0fqnwJ+dhkcHWCMh4sZO6UNCfodve7JP0
Clean-up:
docker stop so_example

Using Kaniko inside Kubernetes - Error: error resolving dockerfile path

I'm getting the same error message as this SO post. However after trying all the suggestions on that post, I'm still unable to resolve my issue, which is described in the following.
I'm using Kaniko to build and push images for later use. I've ensured the image pushing portion of the job works by building a test Dockerfile and --context from a publicly-accessible git repo.
So now I'm trying to build images by using a mounted hostPath directory /root/app/containerization-engine/docker-service as the build context, which you can see indeed exists, along with its Dockerfile, in the following shell output:
[root@ip-172-31-60-18 kaniko-jobs]# ll -d /root/app/containerization-engine/docker-service/
drwxr-xr-x. 8 root root 4096 May 24 17:52 /root/app/containerization-engine/docker-service/
[root@ip-172-31-60-18 kaniko-jobs]# ll -F /root/app/containerization-engine/docker-service/
total 52
drwxr-xr-x. 6 root root 104 May 9 01:50 app/
-rw-r--r--. 1 root root 20376 May 25 12:02 batch_metrics.py
-rw-r--r--. 1 root root 7647 May 25 12:02 batch_predict.py
-rw-r--r--. 1 root root 14 May 25 12:02 dev_requirements.txt
-rw-r--r--. 1 root root 432 May 25 12:02 Dockerfile
-rw-r--r--. 1 root root 136 May 25 12:02 gunicorn_config.py
drwxr-xr-x. 2 root root 19 May 9 01:50 hooks/
drwxr-xr-x. 2 root root 37 May 9 01:50 jenkins/
-rw-r--r--. 1 root root 158 May 25 12:02 manage.py
drwxr-xr-x. 2 root root 37 May 9 01:50 models/
-rw-r--r--. 1 root root 0 May 25 12:02 README.md
-rw-r--r--. 1 root root 247 May 25 12:02 requirements.txt
drwxr-xr-x. 2 root root 94 May 9 01:50 utils/
-rw-r--r--. 1 root root 195 May 25 12:02 wsgi.py
The job manifest containerization-test.yaml that I'm running with kubectl apply -f is defined below:
apiVersion: batch/v1
kind: Job
metadata:
  name: kaniko-containerization-test
spec:
  template:
    spec:
      containers:
      - name: kaniko
        image: gcr.io/kaniko-project/executor:latest
        args: ["--dockerfile=Dockerfile",
               "--context=dir:///docker-service",
               "--destination=jethrocao/containerization-test:v0",
               "--verbosity=trace"]
        volumeMounts:
        - name: docker-service-build-context
          mountPath: "/docker-service"
        volumeMounts:
        - name: kaniko-secret
          mountPath: "/kaniko/.docker"
          readOnly: true
      restartPolicy: Never
      volumes:
      - name: docker-service-build-context
        hostPath:
          path: "/root/app/containerization-engine/docker-service"
          type: Directory
      - name: kaniko-secret
        secret:
          secretName: regcred-ca5e
          items:
          - key: .dockerconfigjson
            path: config.json
          optional: false
The job is created successfully, but the pods that are created to run the job keep on erroring out, and inspecting the log from one of these failed attempts, I see:
[root@ip-172-31-60-18 kaniko-jobs]# kubectl logs kaniko-containerization-test-rp8lh | head
DEBU[0000] Getting source context from dir:///docker-service
DEBU[0000] Build context located at /docker-service
Error: error resolving dockerfile path: please provide a valid path to a Dockerfile within the build context with --dockerfile
Usage:
executor [flags]
executor [command]
Available Commands:
completion Generate the autocompletion script for the specified shell
help Help about any command
To triple-confirm that the hostPath directory and the Dockerfile it contains are both accessible when mounted as a volume into a container, I changed the batch job into a Deployment object (running a different image, not Kaniko), applied it, ran kubectl exec -it into the running pod, and inspected the mounted path /docker-service: it exists, along with the full contents of the directory. I then wrote to the Dockerfile inside it just to test write access, and it worked as expected; the written change also persisted outside the container on the cluster's node.
I'm really at a loss as to what the problem could be. Any ideas?
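One detail worth checking (an observation about the manifest above, not from the original post): the kaniko container spec declares volumeMounts: twice. Depending on the YAML parser, duplicate mapping keys are either rejected or resolved last-one-wins; with last-one-wins the build-context mount would be silently dropped, leaving /docker-service empty inside the kaniko container, which would match the missing Dockerfile error. A merged list would look like:

        volumeMounts:
        - name: docker-service-build-context
          mountPath: "/docker-service"
        - name: kaniko-secret
          mountPath: "/kaniko/.docker"
          readOnly: true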

Permission denied in mounted volume on Docker with SELinux

I've tried to mount a folder using the following docker compose file (partially reproduced; the rest isn't relevant):
version: '3'
services:
  web:
    build: .
    environment:
      - DEBUG=0
    volumes:
      - /usr/share/nginx/html/assets:/assets:Z
However, apart from being able to cd into the /assets folder in the Docker container, I get the following error for other operations in the folder (including chmod and chcon):
ls: cannot open directory '.': Permission denied
The folder UID and GID are 0 (i.e. root) and the UID of the bash in docker is also 0.
However, by removing the Z flag, the docker container is able to read content off the volume, but not write into it.
Here is the output of ls -laZ with the Z flag on:
drwx------. 2 root root system_u:object_r:container_var_run_t:s0 160 Jan 27 15:33 assets
and here is without Z flag:
drwxr-xr-x. 4 root root unconfined_u:object_r:httpd_sys_content_t:s0 72 Jan 23 09:01 assets
It seems that with the Z flag, the group and other permissions disappear, but that shouldn't matter because the UID is the same, right?
My question is, how can I get write access to the mounted directory in the docker container?

Permission issues in nexus3 docker container

When I start nexus3 in a docker container, I get the following error messages:
$ docker run --rm sonatype/nexus3:3.8.0
Warning: Cannot open log file: ../sonatype-work/nexus3/log/jvm.log
Warning: Forcing option -XX:LogFile=/tmp/jvm.log
Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file ../sonatype-work/nexus3/log/jvm.log due to Permission denied
Unable to update instance pid: Unable to create directory /nexus-data/instances
/nexus-data/log/karaf.log (Permission denied)
Unable to update instance pid: Unable to create directory /nexus-data/instances
It indicates that there is a file permission issue.
I am using Red Hat Enterprise Linux 7.5 as the host machine and the most recent Docker version.
On another machine (Ubuntu) it works fine.
The issue occurs in the persistent volume (/nexus-data). However, I do not mount a specific volume and let Docker use an anonymous one.
If I compare the volumes on both machines I can see the following permissions:
On Red Hat, where it is not working, it belongs to root.
$ docker run --rm sonatype/nexus3:3.8.0 ls -l /nexus-data
total 0
drwxr-xr-x. 2 root root 6 Mar 1 00:07 etc
drwxr-xr-x. 2 root root 6 Mar 1 00:07 log
drwxr-xr-x. 2 root root 6 Mar 1 00:07 tmp
On Ubuntu, where it is working, it belongs to nexus. The nexus user is also the default user in the container.
$ docker run --rm sonatype/nexus3:3.8.0 ls -l /nexus-data
total 12
drwxr-xr-x 2 nexus nexus 4096 Mar 1 00:07 etc
drwxr-xr-x 2 nexus nexus 4096 Mar 1 00:07 log
drwxr-xr-x 2 nexus nexus 4096 Mar 1 00:07 tmp
Changing the user with the -u option is not an option.
I could solve it by deleting all local Docker images: docker image prune -a
Afterwards the image was downloaded again and it worked.
This is strange, because I also compared the fingerprints of the images and they were identical.
An example docker-compose file for Nexus:
version: "3"
services:
#Nexus
nexus:
image: sonatype/nexus3:3.39.0
expose:
- "8081"
- "8082"
- "8083"
ports:
# UI
- "8081:8081"
# repositories http
- "8082:8082"
- "8083:8083"
# repositories https
#- "8182:8182"
#- "8183:8183"
environment:
- VIRTUAL_PORT=8081
volumes:
- "./nexus/data/nexus-data:/nexus-data"
Set up the volume:
mkdir -p ./nexus/data/nexus-data
sudo chown -R 200 nexus/ # 200 because it's the UID of the nexus user inside the container
Start Nexus
sudo docker-compose up -d
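If in doubt, the UID assumed in the chown above can be checked directly from the image (a quick sketch using the same tag as the compose file):

docker run --rm sonatype/nexus3:3.39.0 id -u nexus   # should print 200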
You should give the correct rights to the folder where the persistent volume is located:
chmod -R u+rwx <folder of the /nexus-data volume>
Be careful: as written, this recursively grants read, write and execute rights to the folder's owner only. If the container's nexus user needs access, either change the ownership as shown above (chown -R 200) or grant broader rights (for example a+rwx), keeping in mind that the latter opens the folder to all users.

Permission denied to access /var/run/docker.sock mounted in an OpenShift container

Objective
Know how to troubleshoot, and what knowledge is required to troubleshoot, permission issues when a Docker container accesses host files.
Problem
Access to /var/run/docker.sock mounted inside an OpenShift container via hostPath causes permission denied. The issue does not happen if the same container is deployed to K8s 1.9.x, hence it is an OpenShift-specific issue.
[ec2-user@ip-10-0-4-62 ~]$ ls -laZ /var/run/docker.sock
srw-rw----. root docker system_u:object_r:container_var_run_t:s0 /var/run/docker.sock
[ec2-user@ip-10-0-4-62 ~]$ docker exec 9d0c6763d855 ls -laZ /var/run/docker.sock
srw-rw----. 1 root 1002 system_u:object_r:container_var_run_t:s0 0 Jan 16 09:54 /var/run/docker.sock
https://bugzilla.redhat.com/show_bug.cgi?id=1244634 says the svirt_sandbox_file_t SELinux label is required for RHEL, so I changed the label.
$ chcon -Rt container_runtime_t docker.sock
[ec2-user@ip-10-0-4-62 ~]$ ls -aZ /var/run/docker.sock
srw-rw----. root docker system_u:object_r:svirt_sandbox_file_t:s0 /var/run/docker.sock
I redeployed the container, but still get permission denied.
$ docker exec -it 9d0c6763d855 curl -ivs --unix-socket /var/run/docker.sock http://localhost/version
* Trying /var/run/docker.sock...
* Immediate connect fail for /var/run/docker.sock: Permission denied
* Closing connection 0
OpenShift by default does not allow hostPath, so that was addressed:
oc adm policy add-scc-to-user privileged system:serviceaccount:{{ DATADOG_NAMESPACE }}:{{ DATADOG_SERVICE_ACCOUNT }}
I suppose SELinux, an OpenShift SCC, or some other container/Docker permission is causing this, but I need a clue on how to find the cause.
OpenShift requires special permissions in order to allow pods to use volumes on nodes.
Do the following:
Create a standard security context constraints YAML (scc-hostpath.yaml):
kind: SecurityContextConstraints
apiVersion: v1
metadata:
  name: scc-hostpath
allowPrivilegedContainer: true
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
fsGroup:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
users:
- my-admin-user
groups:
- my-admin-group
oc create -f scc-hostpath.yaml
Add the "allowHostDirVolumePlugin" privilege to this security-context:
oc patch scc scc-hostpath -p '{"allowHostDirVolumePlugin": true}'
Associate the pod's service account with the above security context:
oc adm policy add-scc-to-user scc-hostpath system:serviceaccount:<namespace>:<service_account_name>
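For completeness, a minimal pod sketch that uses such a service account together with the docker.sock hostPath mount (the name and image are placeholders, not from the original answer):

apiVersion: v1
kind: Pod
metadata:
  name: docker-sock-client
spec:
  serviceAccountName: <service_account_name>   # the account bound to scc-hostpath above
  containers:
  - name: agent
    image: registry.example.com/my-agent:latest   # hypothetical image
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock
      readOnly: true
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
      type: Socket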
