Running 'docker-compose up' throws permission denied when trying an official Docker sample - docker

I am using Docker 1.13 Community Edition on a CentOS 7 x64 machine. While following a Docker Compose sample from the official Docker tutorial, everything was fine until I added these lines to the docker-compose.yml file:
volumes:
  - .:/code
After adding them, I got the following error:
can't open file 'app.py': [Errno 13] Permission denied
It seems the problem is due to an SELinux restriction. Following this post, I ran the following command:
su -c "setenforce 0"
to solve the problem temporarily, but running this command:
chcon -Rt svirt_sandbox_file_t /path/to/volume
did not help.

Finally, I found the correct rule to add to SELinux:
# ausearch -c 'python' --raw | audit2allow -M my-python
# semodule -i my-python.pp
I found it by opening the SELinux Alert Browser and clicking the 'Details' button on the row related to this error. The detailed information from SELinux:
SELinux is preventing /usr/local/bin/python3.4 from read access on the
file app.py.
***** Plugin catchall (100. confidence) suggests **************************
If you believe that python3.4 should be allowed read access on the
app.py file by default. Then you should report this as a bug. You can
generate a local policy module to allow this access. Do allow this
access for now by executing:
ausearch -c 'python' --raw | audit2allow -M my-python
semodule -i my-python.pp
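As an alternative to loading a custom policy module, Docker can relabel the bind-mounted directory itself. The sketch below assumes a standard SELinux-enabled Docker setup: appending `:z` (shared) or `:Z` (private) to the volume entry asks Docker to apply an SELinux content label the container is allowed to read.

```yaml
# docker-compose.yml fragment (sketch): the :z suffix tells Docker to apply
# a shared SELinux content label to the host directory at mount time.
volumes:
  - .:/code:z   # use :Z instead for a private, single-container label
```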

Related

Limactl: How to fix chown problems?

I tried running this docker compose file:
https://docs.sorry-cypress.dev/guide/dashboard-and-api
I used the docker template
limactl start --name=docker template://docker
With the following mounts inside the docker template
mounts:
  - location: "~"
    writable: true
When I run
docker-compose -f ./docker-compose.minio.yml up
I get
Error response from daemon: error while creating mount source path '/Users/me/code/cypress/data/data-mongo-cypress':
chown /Users/me/code/cypress/data/data-mongo-cypress: permission denied
When I run it with sudo, the first issue is bypassed but then I get this error from the mongo container
cypress-mongo-1 | chown: changing ownership of '/data/db': Permission denied
I tried setting the permissions of /Users/me/code/cypress/data/ to 0777, but it didn't change anything.
How to overcome these chown problems?
System: macOS 13 Ventura
I found a solution in the GitHub issues.
I edited my lima instance like this
limactl edit docker
Then I changed the mountType and mounts properties like this:
mountType: "9p"
mounts:
  - location: "~"
    writable: true
    9p:
      securityModel: mapped-xattr
      cache: "mmap"

Pihole deployment restarting with helm

I'm trying to install Pi-hole on a Kubernetes cluster on Docker via Helm, following this guide. Everything seems to go smoothly, and the install completes:
NAME: pihole
LAST DEPLOYED: Wed Sep 30 22:22:15 2020
NAMESPACE: pihole
STATUS: deployed
REVISION: 1
TEST SUITE: None
But the Pi-hole pod never reaches the ready state; it just restarts after a couple of minutes. Upon inspecting the pod I see:
lastState:
  terminated:
    containerID: docker://16e2a318b460d4d5aebd502175fb688fc150993940181827a506c086e2cb326a
    exitCode: 0
    finishedAt: "2020-09-30T22:01:55Z"
    reason: Completed
    startedAt: "2020-09-30T21:59:17Z"
How do I prevent this from continually restarting once it's complete?
Here is the output of kubectl logs <POD_NAME>:
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] 01-resolver-resolv: applying...
[fix-attrs.d] 01-resolver-resolv: exited 0.
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 20-start.sh: executing...
::: Starting docker specific checks & setup for docker pihole/pihole
[✓] Update local cache of available packages
[i] Existing PHP installation detected : PHP version 7.0.33-0+deb9u8
[i] Installing configs from /etc/.pihole...
[i] Existing dnsmasq.conf found... it is not a Pi-hole file, leaving alone!
[✓] Copying 01-pihole.conf to /etc/dnsmasq.d/01-pihole.conf
chown: cannot access '': No such file or directory
chmod: cannot access '': No such file or directory
chown: cannot access '/etc/pihole/dhcp.leases': No such file or directory
::: Pre existing WEBPASSWORD found
Using default DNS servers: 8.8.8.8 & 8.8.4.4
DNSMasq binding to default interface: eth0
Added ENV to php:
"PHP_ERROR_LOG" => "/var/log/lighttpd/error.log",
"ServerIP" => "0.0.0.0",
"VIRTUAL_HOST" => "pi.hole",
Using IPv4 and IPv6
::: Preexisting ad list /etc/pihole/adlists.list detected (exiting setup_blocklists early)
https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts
https://mirror1.malwaredomains.com/files/justdomains
::: Testing pihole-FTL DNS: FTL started!
::: Testing lighttpd config: Syntax OK
::: All config checks passed, cleared for startup ...
::: Docker start setup complete
[✗] DNS resolution is currently unavailable
You are not alone with this issue. The resolution is here: chown: cannot access '/etc/pihole/dhcp.leases': No such file or directory
This happens for me as well. I used that same tutorial to set up my cluster. If you are using a persistent volume too, open an SSH connection to your drive and run these two commands from the /mnt/ssd directory described in the tutorial:
ls -l    # shows the owner and group of each file; they should all be www-data
If they are not, run:
sudo chown -R www-data:www-data pihole
This will allow you to add more whitelists/blacklists/adlists from the web portal.

Docker-in-Docker issues with connecting to internal container network (Anchore Engine)

I am having issues when trying to connect to a docker-compose network from inside a container. These are the files I am working with; the whole thing runs when I execute ./run.sh.
Dockerfile:
FROM docker/compose:latest
WORKDIR .
# EXPOSE 8228
RUN apk update
RUN apk add py-pip
RUN apk add jq
RUN pip install anchorecli
COPY dockertest.sh ./dockertest.sh
COPY docker-compose.yaml docker-compose.yaml
CMD ["./dockertest.sh"]
docker-compose.yaml
services:
  # The primary API endpoint service
  engine-api:
    image: anchore/anchore-engine:v0.6.0
    depends_on:
      - anchore-db
      - engine-catalog
    #volumes:
    #  - ./config-engine.yaml:/config/config.yaml:z
    ports:
      - "8228:8228"
  ..................
  ## A NUMBER OF OTHER CONTAINERS THAT ANCHORE-ENGINE USES ##
  ..................
networks:
  default:
    external:
      name: anchore-net
dockertest.sh
echo "------------- INSTALL ANCHORE CLI ---------------------"
engineid=`docker ps | grep engine-api | cut -f 1 -d ' '`
engine_ip=`docker inspect $engineid | jq -r '.[0].NetworkSettings.Networks."cws-anchore-net".IPAddress'`
export ANCHORE_CLI_URL=http://$engine_ip:8228/v1
export ANCHORE_CLI_USER='user'
export ANCHORE_CLI_PASS='pass'
echo "System status"
anchore-cli --debug system status #This line throws error (see below)
run.sh:
#!/bin/bash
docker build . -t anchore-runner
docker network create anchore-net
docker-compose up -d
docker run --network="anchore-net" -v //var/run/docker.sock:/var/run/docker.sock anchore-runner
#docker network rm anchore-net
Error Message:
System status
INFO:anchorecli.clients.apiexternal:As Account = None
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): 172.19.0.6:8228
Error: could not access anchore service (user=user url=http://172.19.0.6:8228/v1): HTTPConnectionPool(host='172.19.0.6', port=8228): Max retries exceeded with url: /v1
(Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))
Steps:
run.sh builds the container image and creates the network anchore-net
the container has an entrypoint script, which does multiple things:
first, it brings up the docker-compose network in detached mode FROM inside the container
second, it installs anchore-cli so I can run commands against the container network
lastly, it attempts to get a system status of the anchore-engine (docker-compose network), but that's where I run into HTTP connection issues.
I am dynamically getting the IP of the api endpoint container of anchore-engine and setting the URL of the request to do that. I have also tried passing those variables from command line such as:
anchore-cli --u user --p pass --url http://$engine_ip:8228/v1 system status, but that throws the same error.
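For what it's worth, the jq extraction step can be checked in isolation. Below is a jq-free sketch of the same IP lookup, run against a fabricated fragment of `docker inspect` output (the JSON and the network name are made-up sample data, not real daemon output):

```shell
# Fabricated sample of the relevant part of `docker inspect` output:
inspect='{"NetworkSettings":{"Networks":{"anchore-net":{"IPAddress":"172.19.0.6"}}}}'
# Pull out the IPAddress value with sed (same idea as the jq filter):
engine_ip=$(printf '%s' "$inspect" | sed -n 's/.*"IPAddress":"\([0-9.]*\)".*/\1/p')
printf '%s\n' "http://$engine_ip:8228/v1"
```

If `engine_ip` comes back empty, the network key used in the jq filter does not match the network the container is actually attached to, which is worth ruling out before debugging the connection itself.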
For those of you who took the time to read through this, I highly appreciate any input you can give me as to where the issue may be lying. Thank you very much.

Unable to build cloned Jekyll site - jekyll 3.8.5 | Error: Permission denied @ dir_s_mkdir - /srv/jekyll/_site

I am unable to build the jekyll site I cloned with git due to a permission error. I am using Ubuntu 18.04. I've looked at most SO posts regarding this error, but none of the solutions have worked for me.
The command to build the site is docker-compose up. Running this command with or without sudo does not change the error. I am in the docker group. I am able to build the site using bundle exec jekyll serve. This command successfully creates the _site folder.
I tried adding the _site manually, but this results in a different error.
$ docker-compose up
Starting site ... done
Attaching to site
site | Configuration file: /srv/jekyll/_config.yml
site | Source: /srv/jekyll
site | Destination: /srv/jekyll/_site
site | Incremental build: disabled. Enable with --incremental
site | Generating...
site | Jekyll Feed: Generating feed for posts
site | jekyll 3.8.5 | Error: Permission denied @ dir_s_mkdir - /srv/jekyll/_site
site exited with code 1
docker-compose.yml
version: '3'
services:
  doc:
    image: jekyll/jekyll:3.8
    volumes:
      - .:/srv/jekyll
    container_name: site
    ports:
      - "4000:4000"
    stdin_open: true
    tty: true
    command: bash -c "bundle install && bundle exec jekyll serve --host 0.0.0.0"
I am expecting to get no errors and have the _site directory successfully created.
I have the same issue. From what I understand, the jekyll/jekyll:3.8 image creates a group and user called jekyll with UID 1000, which is the first user on Ubuntu (in my case).
When you run your compose file as another user whose UID differs from 1000, the volume you mount is owned by that user, not by jekyll's UID, so when the service tries to write something it has no permission.
To solve that, I found these environment variables in this repository:
JEKYLL_UID = 1000
JEKYLL_GID = 1000
Add the variables to the compose file, changing the value 1000 to the UID of the current user:
environment:
  JEKYLL_UID: 1001
  JEKYLL_GID: 1001
This works fine for me.
To print your current UID:
$ echo $UID
1001
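Note that `$UID` is a bash convenience variable and is not set in every shell; the POSIX `id` command works everywhere and also gives the group ID, which is what `JEKYLL_GID` wants. A small sketch:

```shell
# Portable way to read the current user's numeric IDs for the compose file:
uid=$(id -u)
gid=$(id -g)
echo "JEKYLL_UID=$uid JEKYLL_GID=$gid"
```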

Live migration of a jboss/wildfly container with CRIU failed

I've tried to live-migrate a wildfly container to another host as described here. The example with the np container works well, but when I replace it with a simple jboss/wildfly container, I get this error when CRIU tries to restore the container on the other host:
Error response from daemon: Cannot restore container <CONTAINER-ID>: criu failed: type NOTIFY errno 0
Error: failed to restore one or more containers
Because I couldn't find a solution to this error, I compiled the Linux kernel as described on the CRIU website and here.
After that sudo criu check prints:
Warn (criu/libnetlink.c:54): ERROR -2 reported by netlink
Warn (criu/libnetlink.c:54): ERROR -2 reported by netlink
Warn (criu/sockets.c:711): The current kernel doesn't support packet_diag
Warn (criu/libnetlink.c:54): ERROR -2 reported by netlink
Warn (criu/sockets.c:721): The current kernel doesn't support netlink_diag
Info prctl: PR_SET_MM_MAP_SIZE is not supported
Looks good.
criu --version
Version: 2.11
docker --version
Docker version 1.6.2, build 7c8fca2
Checkpoint/restore of an example shell script worked very well. But when I want to checkpoint a container
docker run -d --name looper busybox /bin/sh -c 'i=0; while true; do echo $i; i=$(expr $i + 1); sleep 1; done'
with
criu dump -t $PID --images-dir /tmp/looper
I receive this output:
Error (criu/sockets.c:132): Diag module missing (-2)
Error (criu/sockets.c:132): Diag module missing (-2)
Error (criu/sockets.c:132): Diag module missing (-2)
Error (criu/mount.c:701): mnt: 87:./etc/hosts doesn't have a proper root mount
Error (criu/cr-dump.c:1641): Dumping FAILED.
I can't find any solutions for these errors. Is there a known way to live-migrate a wildfly container?
Thanks in advance
