How to deploy a GitLab container on a mount point? (docker)

I'm trying to run a GitLab container on my CIFS mount point, but it returns errors.
I have my NAS CIFS mount point at /mnt/serveurwiki/, and I have two folders in it, "gitlab" and "wiki".
I have all rights on those folders.
And this is my docker-compose.yml:
version: '3.6'
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    user: root
    ports:
      - '42007:80'
      - '42008:443'
      - '42009:22'
    volumes:
      - /mnt/serveurwiki/gitlab/config:/etc/gitlab
      - /mnt/serveurwiki/gitlab/logs:/var/log/gitlab
      - /mnt/serveurwiki/gitlab/data:/var/opt/gitlab
    networks:
      - network
networks:
  network:
To be precise, I tested the same compose file in other locations and everything works, but when I try to deploy the container on my mount point (/mnt/serveurwiki), I get this error in the container's logs:
[2022-07-11T08:56:12+00:00] FATAL: Stacktrace dumped to /opt/gitlab/embedded/cookbooks/cache/chef-stacktrace.out
[2022-07-11T08:56:12+00:00] FATAL: ---------------------------------------------------------------------------------------
[2022-07-11T08:56:12+00:00] FATAL: PLEASE PROVIDE THE CONTENTS OF THE stacktrace.out FILE (above) IF YOU FILE A BUG REPORT
[2022-07-11T08:56:12+00:00] FATAL: ---------------------------------------------------------------------------------------
[2022-07-11T08:56:12+00:00] FATAL: Mixlib::ShellOut::ShellCommandFailed: storage_directory[/var/opt/gitlab/.ssh] (gitlab::gitlab-shell line 34) had an error: Mixlib::ShellOut::ShellCommandFailed: ruby_block[directory resource: /var/opt/gitlab/.ssh] (gitlab::gitlab-shell line 36) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '1'
---- Begin output of chgrp git /var/opt/gitlab/.ssh ----
STDOUT:
STDERR: chgrp: changing group of '/var/opt/gitlab/.ssh': Operation not permitted
---- End output of chgrp git /var/opt/gitlab/.ssh ----
Ran chgrp git /var/opt/gitlab/.ssh returned 1
Does anyone have an idea why I get this error and what I can do about it?

For testing, you could:
- build your own image based on gitlab/gitlab-ce:latest
- change the ENTRYPOINT to call your own script before calling the one from GitLab
- check/change the permissions of /var/opt/gitlab in your custom ENTRYPOINT script (see the sketch below)
That way, you have better control of the environment used by the GitLab image.
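A minimal sketch of that approach (fix-perms.sh is a hypothetical name; the official gitlab-ce image starts via /assets/wrapper). The Dockerfile:

FROM gitlab/gitlab-ce:latest
COPY fix-perms.sh /fix-perms.sh
RUN chmod +x /fix-perms.sh
ENTRYPOINT ["/fix-perms.sh"]

And fix-perms.sh:

#!/bin/bash
# Log current ownership, try the chgrp the Chef recipe will run,
# then hand off to GitLab's normal startup.
ls -ld /var/opt/gitlab
chgrp git /var/opt/gitlab 2>/dev/null || echo "chgrp failed: this filesystem does not allow ownership changes"
exec /assets/wrapper "$@"

Note that on a CIFS mount, chown/chgrp generally fail regardless, unless the share is mounted with uid=/gid= options, so if the chgrp still fails here the fix belongs at the mount itself rather than in the image.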

Related

Can't find project repository in git-server for custom-hooks

I used docker-compose to run a local GitLab server:
git:
  container_name: git-server
  image: gitlab/gitlab-ce:latest
  hostname: 'gitlab.example.com'
  ports:
    - '8090:80'
    - '22:22'
  volumes:
    - "$PWD/srv/gitlab/config:/etc/gitlab"
    - "$PWD/srv/gitlab/logs:/var/log/gitlab"
    - "$PWD/srv/gitlab/data:/var/opt/gitlab"
  networks:
    - net
I want to set up custom hooks for a project repo I created in the GitLab web UI, so that they trigger a Jenkins job. As per the GitLab documentation, this is the path for repos in Omnibus installations, where I will have to create the custom-hooks directory:
/var/opt/gitlab/git-data/repositories/<group>/<project>.git
But inside /var/opt/gitlab/git-data/repositories, I don't see a group directory or project directory at all:
root@gitlab:~# ls -lt /var/opt/gitlab/git-data/repositories
total 0
drwxr-s---. 3 git root 16 Apr 18 04:05 @hashed
drwxr-sr-x. 3 git root 17 Apr 18 04:00 +gitaly
root@gitlab:~#
I tried searching using find, but it returned nothing. I tried searching by the names of files in my project repo, but that didn't return anything either.
In the GitLab web UI, I am able to see it all. But on the server, none of the files and directories exist.
How is it that I am not able to find any of the files in my repos when I ssh to the gitlab-server?
Since I was not able to go this way, I tried creating a post-receive.d directory under the global hooks directory /opt/gitlab/embedded/service/gitlab-shell/hooks, and then adding my post-receive file as below:
#!/bin/bash
# Read "oldrev newrev refname" from stdin (the post-receive protocol)
if ! [ -t 0 ]; then
  read -a ref
fi
# Get branch name from ref head (refs/heads/<branch>)
IFS='/' read -ra REF <<< "${ref[2]}"
branch="${REF[2]}"
if [ "$branch" == "master" ]; then
  crumb=$(curl -u "jenkins:1234" -s 'http://jenkins:8080/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,":",//crumb)')
  curl -u "jenkins:1234" -H "$crumb" -X POST http://jenkins:8080/job/maven/build?delay=0sec
  if [ $? -eq 0 ] ; then
    echo "*** Ok"
  else
    echo "*** Error"
  fi
fi
jenkins is the name of the container, which is on the same network as the gitlab server.
The GitLab docs say I then have to change the file's ownership to git and make it executable (see the commands below). I did so, but it didn't work either. Also, I find that all of the git directories are owned by root in my container.
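In other words, those two steps amount to something like this (a sketch, using the global-hooks path from above):

chown git:git /opt/gitlab/embedded/service/gitlab-shell/hooks/post-receive.d/post-receive
chmod +x /opt/gitlab/embedded/service/gitlab-shell/hooks/post-receive.d/post-receive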
After pushing code, I figured out that the hook I put in the /opt/gitlab/embedded/service/gitlab-shell/hooks/post-receive.d directory is not running, and in the logs I see the error below right after I push code changes to my maven repo:
==> /var/log/gitlab/nginx/gitlab_error.log <==
2020/04/18 04:57:31 [crit] 832#0: *256 connect() to unix:/var/opt/gitlab/gitlab-workhorse/socket failed (13: Permission denied) while connecting to upstream, client: <my_public_ip>, server: gitlab.example.com, request: "GET /jenkins/maven.git/info/refs?service=git-receive-pack HTTP/1.1", upstream: "http://unix:/var/opt/gitlab/gitlab-workhorse/socket:/jenkins/maven.git/info/refs?service=git-receive-pack", host: "gitlab.example.com:8090"
Here, gitlab.example.com is mapped to my public IP in the /etc/hosts file of the host on which I am running Docker.
If you run the following command inside the container, you should see your group repos:
gitlab-rake gitlab:storage:rollback_to_legacy
"inside of /var/opt/gitlab/git-data/repositories, I don't see a group directory or project directory at all"
The documentation "Install GitLab using docker-compose" includes the following volumes:
volumes:
  - '$GITLAB_HOME/gitlab/config:/etc/gitlab'
  - '$GITLAB_HOME/gitlab/logs:/var/log/gitlab'
  - '$GITLAB_HOME/gitlab/data:/var/opt/gitlab'
That means that if you see some repos locally in $GITLAB_HOME/gitlab/data/git-data/repositories, you should see the same in /var/opt/gitlab/git-data/repositories/.
Assuming, of course, that you have created at least one project/repo in your GitLab instance.
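A side note on the "@hashed" directory in the question's listing: recent GitLab versions use hashed storage by default, so each repository is stored under a path derived from the SHA-256 of its numeric project ID instead of under <group>/<project>. A quick sketch (project ID 7 is a hypothetical example):

printf '7' | sha256sum
# 7902699be42c8a8e46fbbb4501726517e86b22c56a189f7625a6da49081b2451
# => that project's repo would live at:
# /var/opt/gitlab/git-data/repositories/@hashed/79/02/7902699be42c8a8e46fbbb4501726517e86b22c56a189f7625a6da49081b2451.git

This is why searching by group or project name finds nothing, and why the rollback_to_legacy rake task above restores the legacy <group>/<project> layout.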

Molecule : Testing roles : Failed to get Dbus Connection Operation not permitted

I'm facing an issue with my Molecule tests. For context, I began studying this tool two days ago.
On an Ubuntu VM run by Vagrant, I created a role, initialized Molecule's folder, and created a Testinfra test file (with the Docker provider).
The error comes while my role's tasks are running: at the step that checks the service is running, it fails.
fatal: [instance]: FAILED! => {"changed": false, "msg": "Could not find the requested service httpd: "}
The role is designed to simply install two packages, including httpd, on a CentOS image.
When I log directly into the Molecule VM (so through Docker) and simply type systemctl, the error message is
Failed to get D-Bus connection: Operation not permitted
As advised by geerlingguy, I specified a volume mapped onto the cgroup folder:
platforms:
  - name: instance
    #image: docker.io/pycontribs/centos:7
    image: geerlingguy/docker-${MOLECULE_DISTRO:-centos7}-ansible:latest
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
The error is not related to Testinfra, only to the Docker-built image.
Could someone help me understand this error message?
Is it because I'm on a VirtualBox VM run by Vagrant?
Thanks all for reading :-)
I added this to my molecule.yml config, following the Molecule documentation (https://molecule.readthedocs.io/en/latest/examples.html#docker):
platforms:
  - name: instance
    #image: docker.io/pycontribs/centos:7
    image: geerlingguy/docker-centos7-ansible:latest
    capabilities:
      - SYS_ADMIN
    command: /sbin/init
systemctl is working fine now.
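For reference, a combined platforms block that merges both changes (the cgroup volume from the question plus the SYS_ADMIN capability and /sbin/init; an untested sketch):

platforms:
  - name: instance
    image: geerlingguy/docker-centos7-ansible:latest
    command: /sbin/init
    capabilities:
      - SYS_ADMIN
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro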

I cannot use the --packages option on the bitnami/spark docker container

I pulled the Docker image and executed the commands below to run it:
docker run -it bitnami/spark:latest /bin/bash
spark-shell --packages="org.elasticsearch:elasticsearch-spark-20_2.11:7.5.0"
and I got a message like the one below:
Ivy Default Cache set to: /opt/bitnami/spark/.ivy2/cache
The jars for the packages stored in: /opt/bitnami/spark/.ivy2/jars
:: loading settings :: url = jar:file:/opt/bitnami/spark/jars/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
org.elasticsearch#elasticsearch-spark-20_2.11 added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent-c785f3e6-7c78-469f-ab46-451f8be61a4c;1.0
confs: [default]
Exception in thread "main" java.io.FileNotFoundException: /opt/bitnami/spark/.ivy2/cache/resolved-org.apache.spark-spark-submit-parent-c785f3e6-7c78-469f-ab46-451f8be61a4c-1.0.xml (No such file or directory)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
at org.apache.ivy.plugins.parser.xml.XmlModuleDescriptorWriter.write(XmlModuleDescriptorWriter.java:70)
at org.apache.ivy.plugins.parser.xml.XmlModuleDescriptorWriter.write(XmlModuleDescriptorWriter.java:62)
at org.apache.ivy.core.module.descriptor.DefaultModuleDescriptor.toIvyFile(DefaultModuleDescriptor.java:563)
at org.apache.ivy.core.cache.DefaultResolutionCacheManager.saveResolvedModuleDescriptor(DefaultResolutionCacheManager.java:176)
at org.apache.ivy.core.resolve.ResolveEngine.resolve(ResolveEngine.java:245)
at org.apache.ivy.Ivy.resolve(Ivy.java:523)
at org.apache.spark.deploy.SparkSubmitUtils$.resolveMavenCoordinates(SparkSubmit.scala:1300)
at org.apache.spark.deploy.DependencyUtils$.resolveMavenDependencies(DependencyUtils.scala:54)
at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:304)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:774)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
I tried other packages, but they all fail with the same error message.
Can you give me some advice on how to avoid this error?
Found the solution, as given in https://github.com/bitnami/bitnami-docker-spark/issues/7.
What we have to do is create a volume on the host mapped to the container path:
volumes:
  - ./jars_dir:/opt/bitnami/spark/ivy:z
and then pass this path as the Ivy cache path, like this:
spark-shell --conf spark.jars.ivy=/opt/bitnami/spark/ivy \
  --conf spark.cassandra.connection.host=127.0.0.1 \
  --packages com.datastax.spark:spark-cassandra-connector_2.12:3.0.0-beta \
  --conf spark.sql.extensions=com.datastax.spark.connector.CassandraSparkExtensions
This all happens because /opt/bitnami/spark is not writable, and we have to mount a volume to bypass that.
The error "java.io.FileNotFoundException: /opt/bitnami/spark/.ivy2/" occured because the location /opt/bitnami/spark/ is not writable. so in order to resolve this issue do modify the master spark service like this.
Added user as root and add mounted volume path for required jars.
see the working block of spark service written in docker compose:
spark:
  image: docker.io/bitnami/spark:3
  container_name: spark
  environment:
    - SPARK_MODE=master
    - SPARK_RPC_AUTHENTICATION_ENABLED=no
    - SPARK_RPC_ENCRYPTION_ENABLED=no
    - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
    - SPARK_SSL_ENABLED=no
  user: root
  ports:
    - '8880:8080'
  volumes:
    - ./spark-defaults.conf:/opt/bitnami/spark/conf/spark-defaults.conf
    - ./jars_dir:/opt/bitnami/spark/ivy:z
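With that service up, the spark-shell from the question can then resolve packages into the mounted, writable Ivy directory, for example (a sketch reusing the elasticsearch coordinate from the question):

docker-compose up -d spark
docker exec -it spark spark-shell \
  --conf spark.jars.ivy=/opt/bitnami/spark/ivy \
  --packages org.elasticsearch:elasticsearch-spark-20_2.11:7.5.0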

Filebeat not running using docker-compose: setting 'filebeat.prospectors' has been removed

I'm trying to launch filebeat using docker-compose (I intend to add other services later on) but every time I execute the docker-compose.yml file, the filebeat service always ends up with the following error:
filebeat_1 | 2019-08-01T14:01:02.750Z ERROR instance/beat.go:877 Exiting: 1 error: setting 'filebeat.prospectors' has been removed
filebeat_1 | Exiting: 1 error: setting 'filebeat.prospectors' has been removed
I discovered the error by accessing the docker-compose logs.
My docker-compose file is as simple as it can be at the moment. It simply calls a filebeat Dockerfile and launches the service immediately after.
Next to my Dockerfile for filebeat I have a simple config file (filebeat.yml), which is copied to the container, replacing the default filebeat.yml.
If I execute the Dockerfile using the docker command, the filebeat instance works just fine: it uses my config file and identifies the "output.json" file as well.
I'm currently using version 7.2 of Filebeat, and I know that "filebeat.prospectors" is no longer used. I also know for sure that this specific setting isn't coming from my filebeat.yml file (you'll find it below).
It seems that, when using docker-compose, the container is reading another configuration file instead of the one copied into the container by the Dockerfile, but so far I haven't been able to figure out how, why, or how to fix it...
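(As an aside, one way to see the configuration the container actually loads is Filebeat's export subcommand; a debugging sketch, assuming the image's entrypoint forwards the command to the filebeat binary as it does for the command below:

docker-compose run --rm filebeat filebeat export config

)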
Here's my docker-compose.yml file:
version: "3.7"
services:
filebeat:
build: "./filebeat"
command: filebeat -e -strict.perms=false
The filebeat.yml file:
filebeat.inputs:
  - paths:
      - '/usr/share/filebeat/*.json'
    fields_under_root: true
    fields:
      tags: ['json']
output:
  logstash:
    hosts: ['localhost:5044']
The Dockerfile:
FROM docker.elastic.co/beats/filebeat:7.2.0
COPY filebeat.yml /usr/share/filebeat/filebeat.yml
COPY output.json /usr/share/filebeat/output.json
USER root
RUN chown root:filebeat /usr/share/filebeat/filebeat.yml
RUN mkdir /usr/share/filebeat/dockerlogs
USER filebeat
The output I'm expecting should be similar to the following, which comes from the successful executions I get when running it as a single container.
The ERROR is expected because I don't have Logstash configured at the moment.
INFO crawler/crawler.go:72 Loading Inputs: 1
INFO log/input.go:148 Configured paths: [/usr/share/filebeat/*.json]
INFO input/input.go:114 Starting input of type: log; ID: 2772412032856660548
INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 1
INFO log/harvester.go:253 Harvester started for file: /usr/share/filebeat/output.json
INFO pipeline/output.go:95 Connecting to backoff(async(tcp://localhost:5044))
ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://localhost:5044)): dial tcp [::1]:5044: connect: cannot assign requested address
INFO pipeline/output.go:93 Attempting to reconnect to backoff(async(tcp://localhost:5044)) with 1 reconnect attempt(s)
ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://localhost:5044)): dial tcp [::1]:5044: connect: cannot assign requested address
INFO pipeline/output.go:93 Attempting to reconnect to backoff(async(tcp://localhost:5044)) with 2 reconnect attempt(s)
I managed to figure out what the problem was.
I needed to map the location of the config file and logs directory in the docker-compose file, using the volumes tag:
version: "3.7"
services:
filebeat:
build: "./filebeat"
command: filebeat -e -strict.perms=false
volumes:
- ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml
- ./filebeat/logs:/usr/share/filebeat/dockerlogs
Finally, I just had to execute the docker-compose command and everything started working properly:
docker-compose -f docker-compose.yml up -d
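To confirm the fix took effect, the service logs should now show the inputs being loaded instead of the prospectors error:

docker-compose logs -f filebeat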

docker-compose issue: Permission denied when attempting to create/mount volume

I have the following docker-compose.yml file:
version: "3"
services:
dbs-poa-loc001d:
image: percona
volumes:
- ./mysql_backup:/var/lib/mysql
- ./create_databases:/docker-entrypoint-initdb.d
hostname: "dbs-poa-loc001d"
container_name: dbs-poa-loc001d
environment:
MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
ports:
- "3306:3306"
networks:
- azion-network
...
When I try to create the dbs-poa-loc001d service (database for the project), I get the following error:
Starting dbs-poa-loc001d ... done
Attaching to dbs-poa-loc001d
dbs-poa-loc001d | Initializing database
dbs-poa-loc001d | mysqld: Can't create/write to file '/var/lib/mysql/is_writable' (Errcode: 13 - Permission denied)
dbs-poa-loc001d | 2019-01-11T01:17:52.060984Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
dbs-poa-loc001d | 2019-01-11T01:17:52.062286Z 0 [ERROR] --initialize specified but the data directory exists and is not writable. Aborting.
dbs-poa-loc001d | 2019-01-11T01:17:52.062299Z 0 [ERROR] Aborting
dbs-poa-loc001d |
dbs-poa-loc001d exited with code 1
This error doesn't happen on my macOS computer at my job, but on my home computer (running Ubuntu 16.04) it does. I did notice that the mysql_backup folder created on the host to hold the volume data is owned by user AND group root. Can anybody tell me what is going on, and how do I fix this? Already tried without success:
- Running docker-compose commands using sudo
- Manually changing the owner and group of the folder to my actual (low-privileged) user
My current setup and installed versions are:
- Ubuntu 16.04
- Docker version 18.09.0, build 4d60db4
- docker-compose version 1.23.2, build 1110ad0
- docker-compose was installed using sudo pip install docker-compose
Can you try setting the permissions of mysql_backup to 1001:0?
Something like:
sudo chown -R 1001:0 ./mysql_backup
Or, as an alternative (but only if the folder is empty):
sudo chmod 777 ./mysql_backup
According to the percona Dockerfile, the mysql user id is 1001:
https://github.com/percona/percona-docker/blob/master/percona-server.80/Dockerfile
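As a quick sanity check before chowning, one could confirm the uid the image runs as by overriding the entrypoint (a sketch; the exact output may vary by image version):

docker run --rm --entrypoint id percona
# expect something like: uid=1001(mysql) gid=0(root), per the Dockerfile linked above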
