Disable SELinux from a Docker container service - docker

In my Docker container, I notice that when running ll, most of my files have a "." at the end of the permission string.
Example:
[user#3a18ecb8ccd4 opt]$ ll
total 4
drwxr-xr-x. 1 root root 4096 Sep 5 14:52 application
Since this affects my volume mounting, I want to disable it in my container: drwxr-xr-x. -> drwxr-xr-x
Since I'm using docker-compose, I've tried the :z option in my volume mapping, but that didn't help.
How can I do it?
Docker Compose file:
myapp:
  image: myImage3
  networks:
    - default
  stdin_open: true
  volumes:
    - /opt/application/i99/current/logs:/opt/application/i99/current/logs:z
  tty: true
  ports:
    - target: 8300
      published: 8300
      protocol: tcp
      mode: host
  deploy:
    mode: global
Suggestions?
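For reference, one approach that is sometimes used for this (an assumption here, not something verified against this exact setup) is to turn off SELinux labelling for the container entirely with security_opt, rather than relabelling the volume with :z. A minimal sketch against the service above:

myapp:
  image: myImage3
  # Assumption: label:disable runs the container without SELinux
  # separation, so SELinux no longer blocks access to the bind mount.
  # It does not change the contexts already stored on the host files.
  security_opt:
    - label:disable
  volumes:
    - /opt/application/i99/current/logs:/opt/application/i99/current/logs

The docker run equivalent would be --security-opt label=disable.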

Related

How can I get docker running in Jenkins nodes which are containers?

I am trying to get Docker running on Jenkins, which itself is a container. Below is part of the Pod spec.
cyrilpanicker/jenkins is an image with Jenkins and the docker CLI installed.
For the Docker daemon, I am running another container with the docker:dind image (the nodes are running on a k8s cluster).
To share docker.sock between them, I am using volume mounts.
spec:
  containers:
  - name: jenkins
    image: cyrilpanicker/jenkins
    volumeMounts:
    - mountPath: /var/run/docker.sock
      name: docker-socket
  - name: docker
    image: docker:dind
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /var/run/docker.sock
      name: docker-socket
  volumes:
  - name: docker-socket
    hostPath:
      path: /docker.sock
      type: FileOrCreate
But this is not working. Below are the logs from the docker container.
time="2021-06-04T20:47:26.059792967Z" level=info msg="Starting up"
time="2021-06-04T20:47:26.061956820Z" level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"
failed to load listeners: can't create unix socket /var/run/docker.sock: device or resource busy
Can anyone suggest another way to get this working?
According to the Kubernetes docs, hostPath mounts a path from the node's filesystem, so if I understand correctly, this is not what you want to achieve.
I'm afraid it isn't possible to mount a single file as a volume this way, so even if you remove hostPath from volumes, docker.sock will be mounted as a directory:
jenkins#static-web:/$ ls -la /var/run/
total 20
drwxr-xr-x 1 root root 4096 Jun 5 14:44 .
drwxr-xr-x 1 root root 4096 Jun 5 14:44 ..
drwxrwxrwx 2 root root 4096 Jun 5 14:44 docker.sock
I would try running the Docker daemon in the dind container with a TCP listener instead of a socket file:
spec:
  containers:
  - name: jenkins
    image: cyrilpanicker/jenkins
  - name: docker
    image: docker:dind
    command: ["dockerd"]
    args: ["-H", "tcp://127.0.0.1:2376"]
    ports:
    - containerPort: 2376
    securityContext:
      privileged: true
jenkins#static-web:/$ docker -H tcp://127.0.0.1:2376 ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
And then configure Jenkins to use tcp://127.0.0.1:2376 as the remote Docker daemon.
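One way to wire that up (a sketch only; it assumes the docker CLI inside cyrilpanicker/jenkins honours the standard DOCKER_HOST variable) is to export DOCKER_HOST on the Jenkins container, so every docker command targets the dind sidecar over TCP:

spec:
  containers:
  - name: jenkins
    image: cyrilpanicker/jenkins
    env:
    # Assumption: the docker CLI reads DOCKER_HOST and will use this
    # TCP endpoint instead of the (missing) /var/run/docker.sock.
    - name: DOCKER_HOST
      value: tcp://127.0.0.1:2376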

Docker swarm : can't curl to a service container

I have a service running under a swarm stack:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
de74ba4d48c1 myregistry/myApi:1.0 "java -Dfile.encodin…" 3 minutes ago Up 3 minutes 8300/tcp myApiCtn
As you can see, my service is running on port 8300.
The problem is that when I run curl, it doesn't seem to reply:
[user#server home]$ curl http://localhost:8300/api/elk/batch
curl: (52) Empty reply from server
On the other hand, if I run my container manually (without a stack and without swarm services, i.e. docker run ...), curl works well.
My docker-compose file is the following:
---
version: '3.4'
services:
  api-batch:
    image: myRegistry/myImageApi
    networks:
      - net_common
      - default
    stdin_open: true
    volumes:
      - /opt/application/current/logs:/opt/application/current/logs
      - /var/opt/data/flat/flf/:/var/opt/data/flat/flf/
    tty: true
    ports:
      - target: 8300
        published: 8300
        protocol: tcp
    deploy:
      mode: global
      resources:
        limits:
          memory: 1024M
      placement:
        constraints:
          - node.labels.type == test
    healthcheck:
      disable: true
networks:
  net_common:
    external: true
My network list is the following:
NETWORK ID NAME DRIVER SCOPE
17795bfee9ca bridge bridge local
0faecb070730 docker_gwbridge bridge local
51c34d251495 host host local
j2nnf26asn3k ingress overlay swarm
3all3tmn3qn9 net_common overlay swarm
b7alw2yi5fk9 srcd-current_default overlay swarm
Any suggestions to make it work as a swarm service?
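One thing that may be worth trying (an assumption, not something confirmed by the question) is publishing the port in host mode, as the compose file in the first question above does; that bypasses the swarm routing mesh, so if curl starts answering you know the problem sits in the ingress/mesh path rather than in the container:

    ports:
      - target: 8300
        published: 8300
        protocol: tcp
        # Assumption: host mode binds the port directly on the node
        # running the task instead of going through the routing mesh.
        mode: host

With host-mode publishing you have to curl the node that actually runs the task (docker service ps <service> shows which one).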

Why is the volume folder empty after a redis save?

Code
I'm trying to run a redis service defined inside a docker-compose.yml as follows:
version: '3'
services:
  redis:
    image: "redis:5-alpine"
    volumes:
      - ./redis-vol:/home/data
  app:
    build: .
    ports:
      - 8080:8080
    volumes:
      - .:/home/app/
This is the Dockerfile:
FROM python:2.7-alpine3.8
WORKDIR /home/app
COPY ./requirements.txt .
RUN apk add python2-dev build-base linux-headers pcre-dev && \
pip install -r requirements.txt
# Source files
COPY ./api.py .
COPY ./conf.ini .
CMD ["uwsgi", "--ini", "conf.ini"]
The app consists of this snippet running a uwsgi interface on port 8080
import uwsgi
import redis

r = redis.Redis('redis')

def application(env, start_response):
    start_response('200 OK', [('Content-Type','text/html')])
    r.append('hello', 'world!')
    r.save()
    return [b"Hello World"]
And this is the conf.ini file:
[uwsgi]
http = :8080
wsgi-file = api.py
master = true
process = 2
enable-threads = true
uid = 1001
gid = 1001
The app service is supposed to save a key:value pair through redis every time it receives a request to http://localhost:8080.
Upon a successful request, the docker-compose process returns the following log:
redis_1_bdf757fbb2bf | 1:M 26 Nov 2018 15:38:20.399 * DB saved on disk
app_1_5f729e6bcd36 | [pid: 17|app: 0|req: 1/1] 172.21.0.1 () {38 vars in 690 bytes} [Mon Nov 26 15:38:20 2018] GET / => generated 11 bytes in 8 msecs (HTTP/1.1 200) 1 headers in 44 bytes (1 switches on core 0)
redis_1_bdf757fbb2bf | 1:M 26 Nov 2018 15:38:20.998 * DB saved on disk
app_1_5f729e6bcd36 | [pid: 17|app: 0|req: 2/2] 172.21.0.1 () {40 vars in 691 bytes} [Mon Nov 26 15:38:20 2018] GET /favicon.ico => generated 11 bytes in 4 msecs (HTTP/1.1 200) 1 headers in 44 bytes (1 switches on core 0)
Problem
Despite the DB saved on disk log, the redis-vol folder is empty and the dump.rdb file doesn't seem to be saved anywhere else.
What am I doing wrong? I've also tried to use redis:alpine as the image, but I get the following error at startup:
redis_1_bdf757fbb2bf | 1:M 26 Nov 14:57:27.003 # Can't handle RDB format version 9
redis_1_bdf757fbb2bf | 1:M 26 Nov 14:57:27.003 # Fatal error loading the DB: Invalid argument. Exiting.
And I've also tried to map the dump.rdb in the redis service as follows:
  redis:
    image: "redis:5-alpine"
    volumes:
      - ./redis-vol/dump.rdb:/home/data/dump.rdb
but Docker creates a folder named dump.rdb/ instead of a regular file.
If you still face the problem even after changing the volume mapping to
<your-volume>:/data
make sure to delete the previous container with
docker container prune
before starting your container again.
According to the redis documentation on the DockerHub page,
If persistence is enabled, data is stored in the VOLUME /data
so you are using the wrong volume path. You should use /data instead:
volumes:
  - ./redis-vol:/data
To be able to mount a single file for your container with docker-compose, use the absolute path of the file you want to mount from your filesystem:
redis:
  image: "redis:5-alpine"
  volumes:
    - /Users/username/dirname/redis-vol/dump.rdb:/home/data/dump.rdb
As codestation correctly pointed out, this is in the official documentation, but he suggests a bind mount instead of a named volume, which has some additional cons.
In both cases the documentation also states that persistence must be enabled:
$ docker run --name some-redis -d redis redis-server --appendonly yes
or in your docker-compose file:
redis:
  image: "redis:alpine"
  command: redis-server --appendonly yes
  volumes:
    - your-volume:/data
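Putting the answers together, a minimal sketch of a corrected service definition (keeping the redis:5-alpine image and the ./redis-vol directory from the question) could look like this:

services:
  redis:
    image: "redis:5-alpine"
    # Enable AOF persistence so data survives container restarts.
    command: redis-server --appendonly yes
    volumes:
      # The official image keeps its data under /data, not /home/data.
      - ./redis-vol:/data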

Docker: Permission denied when PHP upload file to mounted data volume

I'm using docker quickstart terminal on Win10.
Client:
 Version:      17.06.0-ce
 API version:  1.30
 Go version:   go1.8.3
 Git commit:   02c1d87
 Built:        Fri Jun 23 21:30:30 2017
 OS/Arch:      windows/amd64
I have a simple document upload PHP script that saves an uploaded document to a file location called '/uploads'.
I want to make the '/uploads' folder a volume attached to the php:apache container, so I can easily share its contents with the Python back-end.
I'm using the following docker-compose.yml file to build a web service with a Python back-end.
Note: the PHP works in the php:apache environment without the volume.
$ docker volume create fileul --opt o=uid=197609,gid=0
version: '3.2'
services:
  Python_app:
    build: ./product
    volumes:
      - ./product:/usr/src/app
    ports:
      - 5001:80
  website:
    image: php:apache
    volumes:
      - ./website:/var/www/html
      - type: volume
        source: fileuld
        target: /var/www/html/uploads
        read_only: false
    ports:
      - 5000:80
    depends_on:
      - Python_app
volumes:
  fileuld:
I get a permission error on the web service when I try to upload a document to the attached volume fileuld: failed to open stream: Permission denied in /var/www/html/upload.php
I have read some other Stack Overflow posts on this, and they talk about uid and gid, so I have tried using:
$ ls -ls
which gives the following:
4 -rw-r--r-- 1 ASUSNJHOME 197609 343 Sep 23 14:49 docker-compose.yml
32 drwxr-xr-x 1 ASUSNJHOME 197609 0 Sep 22 07:10 product/
0 drwxr-xr-x 1 ASUSNJHOME 197609 0 Sep 23 15:38 volume-example/
0 drwxr-xr-x 1 ASUSNJHOME 197609 0 Sep 22 21:32 website/
But I can't work out how to make the volume writable, or how to change its permissions when it is created from the docker-compose file.
Can anyone point me in the right direction?
Thanks
Michael
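For what it's worth, a direction that often helps here (an assumption about the setup, not something taken from the question) is to make sure the uploads directory inside the image is owned by the Apache user, www-data, before the named volume is first mounted; when an empty named volume is mounted over a path that exists in the image, Docker copies that path's contents and ownership into the volume. A sketch of a small derived image:

FROM php:apache
# Assumption: Apache/PHP in php:apache runs as www-data, so giving it
# ownership of the uploads mount point lets the upload script write there.
RUN mkdir -p /var/www/html/uploads \
    && chown -R www-data:www-data /var/www/html/uploads

The website service would then use build: pointing at this Dockerfile instead of image: php:apache.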

Call a docker container from another container

I have deployed two docker containers which host two REST services deployed in Jetty.
Container 1 hosts service 1 and listens on 7070.
Container 2 hosts service 2 and listens on 9090.
Endpoints:-
service1:
  /ping
  /service1/{param}
service2:
  /ping
  /service2/callService1
curl -X GET http://localhost:7070/ping [Works]
curl -X GET http://localhost:7070/service1/hello [Works]
curl -X GET http://localhost:9090/ping [Works]
I have configured the containers in such a way that:
http://localhost:9090/service2/callService1
calls
http://localhost:7070/service1/hello
This throws a connection refused exception. Here's the configuration I have.
docker-compose.yml
------------------
service1:
  build: microservice/
  ports:
    - "7070:7070"
  expose:
    - "7070"
service2:
  build: microservice_link/
  ports:
    - "9090:9090"
  expose:
    - "9090"
  links:
    - service1
service1 Dockerfile
-------------------
FROM localhost:5000/java:7
COPY ./target/service1.jar /opt
WORKDIR /opt
ENTRYPOINT ["java", "-jar", "service1.jar","7070"]
CMD [""]
service2 Dockerfile
-------------------
FROM localhost:5000/java:7
COPY ./target/service2.jar /opt
WORKDIR /opt
ENTRYPOINT ["java", "-jar", "service2.jar","9090"]
CMD [""]
docker info
-----------
root#LT-NB-108U:~# docker info
Containers: 3
Running: 2
Paused: 0
Stopped: 1
Images: 12
Server Version: 1.10.1
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 28
Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
Volume: local
Network: null host bridge
Kernel Version: 3.13.0-48-generic
Operating System: Ubuntu precise (12.04.5 LTS)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 3.47 GiB
Name: LT-NB-108U
ID: BS52:XURM:3SD7:TC3R:7YVA:ZBZK:CCL2:7AVC:RNZV:RBGW:2X2T:7C46
WARNING: No swap limit support
root#LT-NB-108U:~#
Question:-
I am trying to access the endpoint deployed in Container 1 from Container 2. However, I get a connection refused exception.
I tried exposing port 7070 in container 2. That didn't work.
curl http://service1:7070/
Use host1_name:inner_port_of_host1.
That host is called "service1" inside container 2 (because of the links entry). Use that as the hostname, and the port is the port service1 listens on inside its own container.
If you have an Express server on service1, have it listen on port 7070.
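As a quick check (a sketch; the container name below is assumed from docker-compose's default naming and may differ on your machine), you can exec into the service2 container and curl service1 by its link name:

# Assumption: the second service's container is named something like
# <project>_service2_1; check "docker ps" for the real name.
$ docker exec -it <project>_service2_1 curl http://service1:7070/ping

In service2's own code, the call should therefore target http://service1:7070/service1/hello rather than http://localhost:7070/..., because localhost inside container 2 refers to container 2 itself.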
