Can't find project repository in git-server for custom hooks - Jenkins

I used docker-compose to run a local GitLab server:
git:
  container_name: git-server
  image: gitlab/gitlab-ce:latest
  hostname: 'gitlab.example.com'
  ports:
    - '8090:80'
    - '22:22'
  volumes:
    - "$PWD/srv/gitlab/config:/etc/gitlab"
    - "$PWD/srv/gitlab/logs:/var/log/gitlab"
    - "$PWD/srv/gitlab/data:/var/opt/gitlab"
  networks:
    - net
I want to set up custom hooks for a project repo I created in the GitLab web UI, so that a push triggers a Jenkins job. As per the GitLab documentation, this is the path for repos in omnibus installations, where I will have to create the custom-hooks directory:
/var/opt/gitlab/git-data/repositories/<group>/<project>.git
But inside /var/opt/gitlab/git-data/repositories, I don't see a group directory or project directory at all:
root@gitlab:~# ls -lt /var/opt/gitlab/git-data/repositories
total 0
drwxr-s---. 3 git root 16 Apr 18 04:05 @hashed
drwxr-sr-x. 3 git root 17 Apr 18 04:00 +gitaly
root@gitlab:~#
I tried searching using find, but it returned nothing. I also tried searching by the names of files in my project repo, but that didn't return anything either.
In the GitLab web UI I can see everything, but on the server none of the files and directories exist.
How is it that I am not able to find any of the files in my repos when I ssh into the gitlab-server?
Since I could not go that way, I tried creating a post-receive.d directory under the global hooks directory /opt/gitlab/embedded/service/gitlab-shell/hooks and then adding my post-receive file as below:
#!/bin/bash
# Get the branch name from the ref line that post-receive hooks read on
# stdin ("<old-rev> <new-rev> <ref-name>")
if ! [ -t 0 ]; then
    read -a ref
fi
IFS='/' read -ra REF <<< "${ref[2]}"
branch="${REF[2]}"
if [ "$branch" == "master" ]; then
    crumb=$(curl -u "jenkins:1234" -s 'http://jenkins:8080/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,":",//crumb)')
    curl -u "jenkins:1234" -H "$crumb" -X POST http://jenkins:8080/job/maven/build?delay=0sec
    if [ $? -eq 0 ] ; then
        echo "*** Ok"
    else
        echo "*** Error"
    fi
fi
jenkins is the name of the Jenkins container, which is on the same network as the GitLab server.
The GitLab docs say I then have to change the file's owner to git and make it executable. I did so, but it didn't work either. Also, I find that all of the git directories are owned by root in my container.
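Concretely, the ownership and permission change I applied was along these lines (a sketch; the path follows the post-receive.d layout above):

# make the hook owned by the git user and executable
chown git:git /opt/gitlab/embedded/service/gitlab-shell/hooks/post-receive.d/post-receive
chmod +x /opt/gitlab/embedded/service/gitlab-shell/hooks/post-receive.d/post-receive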
After pushing code, I figured out that the hook I put in the /opt/gitlab/embedded/service/gitlab-shell/hooks/post-receive.d directory is not being executed, and in the logs I see the below error right after I push code changes to my maven repo:
==> /var/log/gitlab/nginx/gitlab_error.log <==
2020/04/18 04:57:31 [crit] 832#0: *256 connect() to unix:/var/opt/gitlab/gitlab-workhorse/socket failed (13: Permission denied) while connecting to upstream, client: <my_public_ip>, server: gitlab.example.com, request: "GET /jenkins/maven.git/info/refs?service=git-receive-pack HTTP/1.1", upstream: "http://unix:/var/opt/gitlab/gitlab-workhorse/socket:/jenkins/maven.git/info/refs?service=git-receive-pack", host: "gitlab.example.com:8090"
Here, gitlab.example.com is mapped to my public IP in the /etc/hosts file of the host on which I am running Docker.

The @hashed directory you see means your instance is using hashed storage, so repositories live under a hash of the project ID rather than under <group>/<project>.git. If you run the following command inside the container, GitLab will move them back to legacy (path-based) storage and you should see your group repos:
gitlab-rake gitlab:storage:rollback_to_legacy
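Alternatively, if you'd rather leave hashed storage enabled, you can look up where a given project lives on disk from the rails console (a sketch; 'mygroup/myproject' is a placeholder for your project's path):

# find the on-disk location of a project under hashed storage
gitlab-rails console
> Project.find_by_full_path('mygroup/myproject').disk_path
# prints something like @hashed/6b/86/6b86b2..., relative to /var/opt/gitlab/git-data/repositories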

inside of /var/opt/gitlab/git-data/repositories, I don't see a group directory or project directory at all
The documentation "Install GitLab using docker-compose" includes the following volumes:
volumes:
  - '$GITLAB_HOME/gitlab/config:/etc/gitlab'
  - '$GITLAB_HOME/gitlab/logs:/var/log/gitlab'
  - '$GITLAB_HOME/gitlab/data:/var/opt/gitlab'
That means that if you see some repos locally in $GITLAB_HOME/gitlab/data/git-data/repositories, you should see the same in /var/opt/gitlab/git-data/repositories/.
Assuming, of course, that you have created at least one project/repo in your GitLab instance.
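A quick way to check both sides at once (a sketch; git-server is the container name from the compose file above, and the host path follows the questioner's $PWD/srv/gitlab mapping rather than $GITLAB_HOME):

# compare the host-side volume with the same path inside the container
ls $PWD/srv/gitlab/data/git-data/repositories
docker exec git-server ls /var/opt/gitlab/git-data/repositories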

Related

Unable to deploy to remote ssh server in CircleCI

Part of my CircleCI config deploys to a remote server using scp. I added an SSH private key (https://circleci.com/docs/add-ssh-key), with the values masked intentionally.
And here is a snapshot of my config:
deploy-web:
  working_directory: ~/subdir/web
  docker:
    - image: cimg/node:16.16
  steps:
    - add_ssh_keys:
        fingerprints:
          - "d7:*****fa"
    - checkout:
        path: ~/subdir
    - node/install-packages:
        pkg-manager: yarn
    - run:
        name: Build
        command: yarn build
    - run:
        name: Deploy
        command: |
          SSH_DEPLOY_PATH=/apps/my-app
          scp -r dist/* "$SSH_USER@$SSH_HOST:$SSH_DEPLOY_PATH"
Everything runs fine but the ssh part outputs:
The authenticity of host '************** (**************)' can't be established.
ECDSA key fingerprint is SHA256:6pix3P******M.
Are you sure you want to continue connecting (yes/no/[fingerprint])?
Please note that I copied the fingerprint in the config from the web UI (in the screenshot). Is there anything I am doing wrong, or how do I go about this? So far Google has not been helpful.
I managed to resolve this, and this is the hack (I can't believe I didn't think of it sooner): I added this step just before the scp step:
- run:
    name: Add SSH host to known
    command: ssh-keyscan -H $SSH_HOST >> ~/.ssh/known_hosts
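Note that ssh-keyscan trusts whatever key the host presents at scan time, so this silences the prompt rather than authenticating the host. If you want to pin the key instead, one option is to store a known_hosts entry for the server in a project environment variable and append that (a sketch; SSH_HOST_KEY is a hypothetical variable you would create yourself):

- run:
    name: Add pinned SSH host key
    # SSH_HOST_KEY holds a full known_hosts line for the server (hypothetical env var)
    command: echo "$SSH_HOST_KEY" >> ~/.ssh/known_hosts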

Apache Nifi (on docker): only one of the HTTP and HTTPS connectors can be configured at one time error

I have a problem adding authentication, due to new requirements, while using Apache NiFi (NiFi) without SSL, running in a container.
The image version is apache/nifi:1.13.0
SSL is said to be unconditionally required to add authentication, and it's recommended to use the tls-toolkit in the NiFi image to add SSL. I worked through the following process:
I removed the environment variable for nifi.web.http.port (HTTP communication) and brought the container up in standalone mode with nifi.web.https.port=9443:
docker-compose up
I attached to the container and ran the tls-toolkit script from the nifi-toolkit:
cd /opt/nifi/nifi-toolkit-1.13.0/bin &&\
sh tls-toolkit.sh standalone \
-n 'localhost' \
-C 'CN=yangeok,OU=nifi' \
-O -o $NIFI_HOME/conf
Attempt 1
I organized the files in the directory $NIFI_HOME/conf. Three files, keystore.jks, truststore.jks, and nifi.properties, were created in a localhost folder, named after the value of the -n option of the tls-toolkit script.
cd $NIFI_HOME/conf &&
cp localhost/*.jks .
The generated $NIFI_HOME/conf/localhost/nifi.properties was not copied over wholesale; only the following properties were imported into $NIFI_HOME/conf/nifi.properties:
nifi.web.http.host=
nifi.web.http.port=
nifi.web.https.host=localhost
nifi.web.https.port=9443
Restarted container
docker-compose restart
The container died with the below error log:
Only one of the HTTP and HTTPS connectors can be configured at one time
Attempt 2
After executing the tls-toolkit script, all files were overwritten, including nifi.properties:
cd $NIFI_HOME/conf &&
cp localhost/* .
Restarted container
docker-compose restart
The container died with the same error log.
Hint
The dead container's volume was still accessible, so I copied out nifi.properties and checked it; whenever I did docker-compose up or restart, it changed as follows.
The part I overwrote or modified:
nifi.web.http.host=
nifi.web.http.port=
nifi.web.http.network.interface.default=
#############################################
nifi.web.https.host=localhost
nifi.web.https.port=9443
The changed part after re-executing the container:
nifi.web.http.host=a8e283ab9421
nifi.web.http.port=9443
nifi.web.http.network.interface.default=
#############################################
nifi.web.https.host=a8e283ab9421
nifi.web.https.port=9443
I'd like to know how to run the container with http.host and http.port empty. The docker-compose.yml file is as follows:
version: '3'
services:
  nifi:
    build:
      context: .
      args:
        NIFI_VERSION: ${NIFI_VERSION}
    container_name: nifi
    user: root
    restart: unless-stopped
    network_mode: bridge
    ports:
      - ${NIFI_HTTP_PORT}:8080/tcp
      - ${NIFI_HTTPS_PORT}:9443/tcp
    volumes:
      - ./drivers:/opt/nifi/nifi-current/drivers
      - ./templates:/opt/nifi/nifi-current/templates
      - ./data:/opt/nifi/nifi-current/data
    environment:
      TZ: 'Asia/Seoul'
      ########## JVM ##########
      NIFI_JVM_HEAP_INIT: ${NIFI_HEAP_INIT} # The initial JVM heap size.
      NIFI_JVM_HEAP_MAX: ${NIFI_HEAP_MAX} # The maximum JVM heap size.
      ########## Web ##########
      # NIFI_WEB_HTTP_HOST: ${NIFI_HTTP_HOST} # nifi.web.http.host
      # NIFI_WEB_HTTP_PORT: ${NIFI_HTTP_PORT} # nifi.web.http.port
      NIFI_WEB_HTTPS_HOST: ${NIFI_HTTPS_HOST} # nifi.web.https.host
      NIFI_WEB_HTTP_PORT: ${NIFI_HTTPS_PORT} # nifi.web.https.port
Thank you
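One thing stands out in the compose file above: the last environment line sets NIFI_WEB_HTTP_PORT to ${NIFI_HTTPS_PORT}, although the comment next to it says nifi.web.https.port. That would explain why nifi.web.http.port=9443 keeps reappearing after every restart, so that both connectors end up configured. If the intent was HTTPS only, the web section would presumably need to look like this (a sketch, assuming the image's start script maps NIFI_WEB_HTTPS_PORT to nifi.web.https.port):

environment:
  ########## Web ##########
  # leave the HTTP variables unset so nifi.web.http.* stay empty
  NIFI_WEB_HTTPS_HOST: ${NIFI_HTTPS_HOST} # nifi.web.https.host
  NIFI_WEB_HTTPS_PORT: ${NIFI_HTTPS_PORT} # nifi.web.https.port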

Running 'docker-compose up' throws permission denied when trying official sample of Docker

I am using Docker 1.13 Community Edition on a CentOS 7 x64 machine. When I was following a Docker Compose sample from the official Docker tutorial, everything was OK until I added these lines to the docker-compose.yml file:
volumes:
  - .:/code
After adding it, I faced the following error:
can't open file 'app.py': [Errno 13] Permission denied
It seems that the problem is due to an SELinux restriction. Following this post, I ran the following command:
su -c "setenforce 0"
to solve the problem temporarily, but running this command:
chcon -Rt svirt_sandbox_file_t /path/to/volume
couldn't help me.
Finally I found the correct rule to add to SELinux:
# ausearch -c 'python' --raw | audit2allow -M my-python
# semodule -i my-python.pp
I found it when I opened the SELinux Alert Browser and clicked the 'Details' button on the row related to this error. The more detailed information from SELinux:
SELinux is preventing /usr/local/bin/python3.4 from read access on the
file app.py.
***** Plugin catchall (100. confidence) suggests **************************
If you believe that python3.4 should be allowed read access on the
app.py file by default. Then you should report this as a bug. You can
generate a local policy module to allow this access. Do allow this
access for now by executing:
ausearch -c 'python' --raw | audit2allow -M my-python
semodule -i my-python.pp
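For bind-mounted volumes like this one, Docker can also apply the SELinux label itself via the z/Z volume options, which avoids loosening the policy (a sketch of the compose change; z shares the label between containers, Z makes it private to one):

volumes:
  # ask Docker to relabel the host directory with an SELinux container file type
  - .:/code:z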

How to make Ansible get access to an sshd container?

I use an Ansible script to load and start the https://hub.docker.com/r/rastasheep/ubuntu-sshd/ container.
It starts well, of course:
bash-4.4$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8bedbd3b7d88 rastasheep/ubuntu-sshd "/usr/sbin/sshd -D" 37 minutes ago Up 36 minutes 0.0.0.0:49154->22/tcp test
bash-4.4$
So after Ansible failed on ssh access to it, I tested manually from a shell; this also works fine:
bash-4.4$ ssh root@172.17.0.2
The authenticity of host '172.17.0.2 (172.17.0.2)' can't be established.
ECDSA key fingerprint is SHA256:YtTfuoRRR5qStSVA5UuznGamA/dvf+djbIT6Y48IYD0.
ECDSA key fingerprint is MD5:43:3f:41:e9:89:45:06:6f:f6:42:c4:6a:70:37:f8:1d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.17.0.2' (ECDSA) to the list of known hosts.
root@172.17.0.2's password:
root@8bedbd3b7d88:~# logout
Connection to 172.17.0.2 closed.
bash-4.4$
So the step that fails is getting to it from the Ansible script and running ssh-copy-id.
The Ansible error message is:
Fatal: [172.17.0.2]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '172.17.0.2' (ECDSA) to the list of known hosts.\r\nPermission denied (publickey,password).\r\n", "unreachable": true}
---
- hosts: 127.0.0.1
  tasks:
    - name: start docker service
      service:
        name: docker
        state: started
    - name: load and start the container we wanna use
      docker_container:
        name: test
        image: rastasheep/ubuntu-sshd
        state: started
        ports:
          - "49154:22"
    - name: Wait maximum of 300 seconds for ports to be available
      wait_for:
        host: 0.0.0.0
        port: 49154
        state: started
- hosts: 172.17.0.2
  vars:
    passwordadmin: $6$pbE6yznA$AeFIdI.....K0
    passwordroot: $6$TMrxQUxT$I8.JIzR.....TV1
    ansible_ssh_extra_args: "-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
  tasks:
    - name: Build test container root user rsa ssh-key
      shell: docker exec test ssh-keygen -b 2048 -t rsa -f /root/.ssh/id_rsa -q -N ""
So I cannot even run the step needed to build the ssh key.
How to do it, then?
1st step (ansible task): load the docker container.
2nd step (ansible task on only 172.17.0.2): connect to it and set it up.
There will be a 3rd step to run the application on it after that.
The problem occurs only when starting the 2nd step.
OK, after many tries on a second container, the conclusion is that my procedure was bad.
What I did to solve it:
build a directory tree separating ./ ./inventory ./includes
build 1 yaml file per host (local, docker, labo)
build 1 main yaml file in ./
build 1 new host file in ./inventory
connect, forced via sshpass, to docker with the default password
change it
add the host key to the authorized keys of a dedicated login user
install python (needed for ansible to talk to the host, otherwise it
randomly throws module errors or refuses connections depending on the
current action)
set up the ssh login user in sudoers
Then I can run the docker.yaml actions,
and only after that the labo.yaml actions.
Thanks for the help;
now I'm able to build the missing tools.
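For the initial password-based bootstrap described above, something along these lines should work (a sketch, assuming the image's documented default root password 'root', and that sshpass is installed on the control host):

# seed our public key into the container using the default password,
# so later ansible runs can authenticate with the key instead
sshpass -p root ssh-copy-id -o StrictHostKeyChecking=no root@172.17.0.2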

Filebeat not pushing logs to Elasticsearch

I am new to Docker and all this logging stuff, so maybe I'm making a stupid mistake; thanks for helping in advance. I have ELK running in a docker container (6.2.2) via the Dockerfile line:
FROM sebp/elk:latest
In a separate container I am installing and running Filebeat via the following Dockerfile lines:
RUN curl -L -O -k https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.2-amd64.deb
RUN dpkg -i filebeat-6.2.2-amd64.deb
COPY resources/filebeat/filebeat.yml /etc/filebeat/filebeat.yml
RUN chmod go-w /etc/filebeat/filebeat.yml
RUN /usr/share/filebeat/bin/filebeat -e -d "publish" &
My Filebeat configuration is:
filebeat.prospectors:
- type: log
enabled: true
paths:
- /jetty/jetty-distribution-9.3.8.v20160314/logs/*.log
output.logstash:
enabled: false
hosts: ["elk-stack:9002"]
#index: 'audit'
output.elasticsearch:
enabled: true
hosts: ["elk-stack:9200"]
#index: "audit-%{+yyyy.MM.dd}"
path.config: "/etc/filebeat"
#setup.template.name: "audit"
#setup.template.pattern: "audit-*"
#setup.template.fields: "${path.config}/fields.yml"
As you can see, I was trying to use a custom index in Elasticsearch, but for now I'm just trying to get the default working. The jetty logs all have global read permissions.
The docker container logs show no errors, and after running I make sure the config and output are OK:
# filebeat test config
Config OK
# filebeat test output
elasticsearch: http://elk-stack:9200...
parse url... OK
connection...
parse host... OK
dns lookup... OK
addresses: 172.17.0.3
dial up... OK
TLS... WARN secure connection disabled
talk to server... OK
version: 6.2.2
/var/log/filebeat/filebeat shows:
2018-03-15T13:23:38.859Z INFO instance/beat.go:468 Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2018-03-15T13:23:38.860Z INFO instance/beat.go:475 Beat UUID: ed5cecaf-cbf5-438d-bbb9-30bab80c4cb9
2018-03-15T13:23:38.860Z INFO elasticsearch/client.go:145 Elasticsearch url: http://elk-stack:9200
2018-03-15T13:23:38.891Z INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.2.2
However, when I hit localhost:9200/_cat/indices?v it doesn't return any indices:
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
How do I get this working? I am out of ideas. Thanks again for any help.
To answer my own question: you can't start filebeat with
RUN /usr/share/filebeat/bin/filebeat -e -d "publish" &
and have it keep running once the container starts, because RUN executes at image build time, not at container start. You need to start it manually, or have it run in its own container with an ENTRYPOINT instruction.
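A minimal sketch of the ENTRYPOINT approach, reusing the install steps from the question (the ubuntu base image and the curl install step are assumptions):

# Dockerfile: run Filebeat as the container's main process
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y curl
RUN curl -L -O -k https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.2-amd64.deb
RUN dpkg -i filebeat-6.2.2-amd64.deb
COPY resources/filebeat/filebeat.yml /etc/filebeat/filebeat.yml
RUN chmod go-w /etc/filebeat/filebeat.yml
# ENTRYPOINT (unlike RUN) executes when the container starts, keeping Filebeat in the foreground
ENTRYPOINT ["/usr/share/filebeat/bin/filebeat", "-e", "-d", "publish"]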
