Traefik doesn't keep the real IP - Docker

I have a problem with Traefik: I want to collect logs on a server with syslog-ng (in Docker).
I get logs, but they show the reverse proxy's name; I want the source IP, not the name of the Traefik container. I wish to keep the original source IP from the host.
traefik.yml:
global:
  sendAnonymousUsage: false
api:
  dashboard: true
  insecure: true
providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    watch: true
    useBindPortIP: true
    exposedByDefault: false
  file:
    filename: /etc/traefik/config.yml
    watch: true
log:
  level: INFO
  format: common
entryPoints:
  http:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: https
          scheme: https
  https:
    address: ":443"
  udp:
    address: ":514/udp"
  tcp:
    address: ":514"
udp:
  services:
    syslog-ng:
      loadBalancer:
        servers:
          - address: ":514/udp"
tcp:
  services:
    syslog-ng:
      loadBalancer:
        servers:
          - address: ":514"
forwardedHeaders: true
certificatesResolvers:
  le:
    acme:
      email: responsable.informatique@exemple.com
      storage: acme.json
      httpChallenge:
        # used during the challenge
        entryPoint: http
syslog-ng.conf:
# rsyslog configuration file
# For more information see /usr/share/doc/rsyslog-*/rsyslog_conf.html
# If you experience problems, see http://www.rsyslog.com/doc/troubleshoot.html
#### MODULES ####
# The imjournal module below is now used as a message source instead of imuxsock.
$ModLoad imuxsock # provides support for local system logging (e.g. via logger command)
$ModLoad imjournal # provides access to the systemd journal
#$ModLoad imklog # reads kernel messages (the same are read from journald)
#$ModLoad immark # provides --MARK-- message capability
# Provides UDP syslog reception
#$ModLoad imudp
#$InputUDPServerRun 514
#$UDPServerRun 514
# Provides TCP syslog reception
#$ModLoad imtcp
#$InputTCPServerRun 514
#### GLOBAL DIRECTIVES ####
# Where to place auxiliary files
$WorkDirectory /var/lib/rsyslog
# Use default timestamp format
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
# File syncing capability is disabled by default. This feature is usually not required,
# not useful and an extreme performance hit
$ActionFileEnableSync on
# Include all config files in /etc/rsyslog.d/
$IncludeConfig /etc/rsyslog.d/*.conf
#*.* @lysca-app0037.ds-001.net
*.* @@10.84.50.186
#*.* action(type="omfwd" target="10.84.50.186" port="601" protocol="tcp"
#          action.resumeRetryCount="100"
#          queue.type="linkedList" queue.size="10000")
# Turn off message reception via local log socket;
# local messages are retrieved through imjournal now.
$OmitLocalLogging on
# File to store the position in the journal
$IMJournalStateFile imjournal.state
#### RULES ####
# Log all kernel messages to the console.
# Logging much else clutters up the screen.
#kern.* /dev/console
# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none /var/log/messages
# The authpriv file has restricted access.
authpriv.* /var/log/secure
# Log all the mail messages in one place.
mail.* -/var/log/maillog
# Log cron stuff
cron.* /var/log/cron
# Everybody gets emergency messages
*.emerg :omusrmsg:*
# Save news errors of level crit and higher in a special file.
uucp,news.crit /var/log/spooler
# Save boot messages also to boot.log
local7.* /var/log/boot.log
# ### begin forwarding rule ###
# The statement between the begin ... end define a SINGLE forwarding
# rule. They belong together, do NOT split them. If you create multiple
# forwarding rules, duplicate the whole block!
# Remote Logging (we use TCP for reliable delivery)
#
# An on-disk queue is created for this action. If the remote host is
# down, messages are spooled to disk and sent when it is up again.
#$ActionQueueFileName fwdRule1 # unique name prefix for spool files
#$ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible)
#$ActionQueueSaveOnShutdown on # save messages to disk on shutdown
#$ActionQueueType LinkedList # run asynchronously
#$ActionResumeRetryCount -1 # infinite retries if host is down
# remote host is: name/ip:port, e.g. 192.168.0.1:514, port optional
#*.* @@remote-host:514
# ### end of the forwarding rule ###
It's like reverse NAT, but I don't know where to find this configuration in Traefik.
Thanks for reading.
Regards
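One hedged approach (an editor's sketch, not from the original thread): for the TCP entry point, Traefik v2 can prepend a PROXY protocol header so the backend sees the original client address, and syslog-ng can accept that header with transport("proxied-tcp") from version 3.30 onward. Plain UDP has no equivalent, so keeping the source IP there generally requires host networking or transparent proxying. The service name and server address below mirror the question's config and are assumptions:

# dynamic configuration sketch - assumes Traefik v2 and syslog-ng >= 3.30
tcp:
  services:
    syslog-ng:
      loadBalancer:
        proxyProtocol:
          version: 2            # send a PROXY protocol v2 header carrying the real client IP
        servers:
          - address: "syslog-ng:514"   # hypothetical container address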

Related

Core log Lua in Haproxy does not log to the default haproxy log file

I have set up a Lua script to process the request in HAProxy. I am using the Core class to log information to the log file.
Here is my config file
sudo nano /etc/haproxy/haproxy.cfg
global
    lua-load /etc/haproxy/route_req.lua
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

#HAProxy for web servers
frontend web-frontend
    bind 10.122.0.2:80
    bind 139.59.75.106:80
    mode http
    use_backend %[lua.routeIP]
Here is my route_req.lua file
local function getIP(txn)
    local clientip = txn.f:src()
    backend = ""
    -- MY CODE GOES HERE
    core.log(core.info, "This is an example\n")
    return backend
end

core.register_fetches('routeIP', getIP)
I don't see any logging in my log file, /var/log/haproxy.log. There was also no related logging in the /var/log/syslog file.
Make sure to include log global in your frontend stanza.
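A minimal sketch of that change, applied to the frontend from the question (without log global or an explicit log <address> <facility> line, a frontend emits nothing, even though the Lua core.log() call runs):

frontend web-frontend
    bind 10.122.0.2:80
    bind 139.59.75.106:80
    mode http
    log global          # inherit the log targets declared in the global section
    use_backend %[lua.routeIP]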

Jenkins seemingly stops listening to the port 8080 on Ubuntu 16.04.4

I have installed and started Jenkins successfully. The problem is that after working for a while, the web interface returns "refused to connect".
After running lsof -i -P -n | grep LISTEN, I don't see anything listening on port 8080.
The weird thing is that cat /var/log/jenkins/jenkins.log returns nothing.
When I run service jenkins start, nothing happens; service jenkins restart, on the other hand, gets things back to normal.
Here is the contents of /etc/default/jenkins:
# defaults for Jenkins automation server
# pulled in from the init script; makes things easier.
NAME=jenkins
# arguments to pass to java
# Allow graphs etc. to work even when an X server is present
JAVA_ARGS="-Djava.awt.headless=true"
#JAVA_ARGS="-Xmx1024m"
# make jenkins listen on IPv4 address
#JAVA_ARGS="-Djava.net.preferIPv4Stack=true"
PIDFILE=/var/run/$NAME/$NAME.pid
# user and group to be invoked as (default to jenkins)
JENKINS_USER=$NAME
JENKINS_GROUP=$NAME
# location of the jenkins war file
JENKINS_WAR=/usr/share/$NAME/$NAME.war
# jenkins home location
JENKINS_HOME=/var/lib/$NAME
# set this to false if you don't want Jenkins to run by itself
# in this set up, you are expected to provide a servlet container
# to host jenkins.
RUN_STANDALONE=true
# log location. this may be a syslog facility.priority
JENKINS_LOG=/var/log/$NAME/$NAME.log
#JENKINS_LOG=daemon.info
# Whether to enable web access logging or not.
# Set to "yes" to enable logging to /var/log/$NAME/access_log
JENKINS_ENABLE_ACCESS_LOG="no"
# OS LIMITS SETUP
# comment this out to observe /etc/security/limits.conf
# this is on by default because http://github.com/jenkinsci/jenkins/commit/2fb288474e980d0e7ff9c4a3b768874835a3e92e
# reported that Ubuntu's PAM configuration doesn't include pam_limits.so, and as a result the # of file
# descriptors are forced to 1024 regardless of /etc/security/limits.conf
MAXOPENFILES=8192
# set the umask to control permission bits of files that Jenkins creates.
# 027 makes files read-only for group and inaccessible for others, which some security sensitive users
# might consider beneficial, especially if Jenkins runs in a box that's used for multiple purposes.
# Beware that 027 permission would interfere with sudo scripts that run on the master (JENKINS-25065.)
#
# Note also that the particularly sensitive part of $JENKINS_HOME (such as credentials) are always
# written without 'others' access. So the umask values only affect job configuration, build records,
# that sort of things.
#
# If commented out, the value from the OS is inherited, which is normally 022 (as of Ubuntu 12.04,
# by default umask comes from pam_umask(8) and /etc/login.defs
# UMASK=027
# port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
# servlet context, important if you want to use apache proxying
PREFIX=/$NAME
# arguments to pass to jenkins.
# --javahome=$JAVA_HOME
# --httpListenAddress=$HTTP_HOST (default 0.0.0.0)
# --httpPort=$HTTP_PORT (default 8080; disable with -1)
# --httpsPort=$HTTP_PORT
# --argumentsRealm.passwd.$ADMIN_USER=[password]
# --argumentsRealm.roles.$ADMIN_USER=admin
# --webroot=~/.jenkins/war
# --prefix=$PREFIX
JENKINS_ARGS="--webroot=/var/cache/$NAME/war --httpPort=$HTTP_PORT"
I had the same issue. I'm deploying Jenkins on a Vultr VPS with limited RAM, and I believe the limited memory is the cause. I fixed it by adding swap, following this link: Jenkins build throwing an out of memory error
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
Check again:
free -m
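If that fixes it, the swap file can be made permanent so it survives reboots (an addition beyond the original answer; this is the standard fstab entry for a swap file):

# persist the swap file across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab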

How to create proper .lando.yml custom file?

Is there any way to create a proper, really custom .lando.yml file so it will not use any recipe? How do I specify "just give me Apache, MariaDB, PHP" in Lando?
I tried this
# The name of the app
name: mariadb

# Give me http://mariadb.lndo.site and https://mariadb.lndo.site
proxy:
  html:
    - mariadb.lndo.site

# Set up my services
services:
  # Set up a basic webserver running the latest nginx with ssl turned on
  html:
    type: nginx
    ssl: true
    webroot: www
  # Spin up a mariadb container called "database"
  # NOTE: "database" is arbitrary, you could just as well call this "db" or "kanye"
  database:
    # Use mariadb version 10.1
    type: mariadb:10.1
    # Optionally allow access to the database at localhost:3307
    # You will need to make sure port 3307 is open on your machine
    #
    # You can also set `portforward: true` to have Lando dynamically assign
    # a port. Unlike specifying an actual port setting this to true will give you
    # a different port every time you restart your app
    portforward: 3307
    # Optionally set the default db credentials
    #
    # Note: You will need to `lando destroy && lando start` to change these if you've
    # already started your app
    # See: https://docs.devwithlando.io/tutorials/lando-info.html
    creds:
      user: mariadb
      password: mariadb
      database: mariadb
    # Optionally load in all the mariadb config files in the config directory
    # This is relative to the app root
    # NOTE: these files need to end in .cnf
    config:
      confd: config
but after lando start I get an ERROR: No such service: appserver error, and the documentation for this is extremely confusing.
Thanks.
You'll want to look at the Building a Custom Stack section of the lando custom project page.
I won't do your entire project, but the basics are as follows:
# LAMP stack example
name: lamp
proxy:
  appserver:
    - lamp.lndo.site # Allows you to access the site at http[s]://lamp.lndo.site
    # This may actually get done automatically
services: # Define your services
  appserver: # Create a web server container
    type: php:5.3 # Specify what version of php to use
    via: apache # This could be nginx, should you choose so
    webroot: www # Specify webroot
    config: # If you want to add/edit
      server: config/apache/lamp.conf # Use an alternate apache config file
      conf: path/from/app/root/php.ini # Alter php configuration with a custom file
  database: # Create a database server container
    type: mysql
    portforward: 3308
    creds: # Specify what creds/db to use
      user: lamp
      password: lamp
      database: lamp
tooling: # These tooling entries allow you to connect lando <command> to the appropriate containers
  composer: # Call with "lando composer..."
    service: appserver
    description: Run composer commands
    cmd: composer --ansi
  php: # Call with "lando php..."
    service: appserver
  mysql: # Call with "lando mysql..."
    user: root
    service: database
    description: Drop into a MySQL shell
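After editing .lando.yml, rebuild so the new service definitions take effect (standard Lando commands):

lando rebuild -y   # recreate the containers from the updated .lando.yml
lando info         # list the services and their connection details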

IBM Cloud Private 2.1.0.3 The conditional check failed

I am trying to install the IBM Cloud Private Community Edition but am struggling with the execution of the sudo docker run command from the installation instructions:
> sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception:2.1.0.3 install
When I execute it, it returns the following output with an error message (below):
user@kim:/opt/ibm-cloud-private-ce-2.1.0.3/cluster$ sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception:2.1.0.3 install
PLAY [Checking Python interpreter] *********************************************
TASK [Checking Python interpreter] *********************************************
changed: [127.0.0.1]
PLAY [Checking prerequisites] **************************************************
TASK [Gathering Facts] *********************************************************
ok: [127.0.0.1]
TASK [docker-engine-check : Getting Docker engine version] *********************
changed: [127.0.0.1]
TASK [docker-engine-check : Checking docker engine if installed] ***************
changed: [127.0.0.1]
TASK [docker-engine : include] *************************************************
TASK [docker-engine : include] *************************************************
TASK [containerd-engine-check : Getting containerd version] ********************
TASK [containerd-engine-check : Checking cri-containerd if installed] **********
TASK [containerd-engine : include] *********************************************
TASK [containerd-engine : include] *********************************************
TASK [network-check : Checking for the network pre-check file] *****************
ok: [127.0.0.1 -> localhost]
TASK [network-check : include_tasks] *******************************************
included: /installer/playbook/roles/network-check/tasks/calico.yaml for 127.0.0.1
TASK [network-check : Calico Validation - Verifying hostname for lowercase] ****
TASK [network-check : Calico Validation - Initializing interface list to be verified] ***
ok: [127.0.0.1]
TASK [network-check : Calico Validation - Finding Interface when autodetection_method is first-found] ***
TASK [network-check : Calico Validation - Updating regex string to match interfaces to be excluded] ***
TASK [network-check : Calico Validation - Getting list of interfaces to be considered] ***
TASK [network-check : Calico Validation - Excluding default interface if defined] ***
TASK [network-check : Calico Validation - Finding Interface reg-ex when autodetection_method is interface(reg-ex)] ***
TASK [network-check : Calico Validation - Finding Interface when autodetection_method is interface(reg-ex)] ***
TASK [network-check : Calico Validation - Finding Domain/IP when autodetection_method is can-reach] ***
ok: [127.0.0.1]
TASK [network-check : Calico Validation - Finding IP for the Domain when autodetection_method is can-reach] ***
changed: [127.0.0.1]
TASK [network-check : Calico Validation - Finding Interface when autodetection_method is can-reach] ***
changed: [127.0.0.1]
TASK [network-check : Calico Validation - Finding Interface when lo is found] ***
changed: [127.0.0.1]
TASK [network-check : Calico Validation - Finding Interface when autodetection_method is can-reach] ***
ok: [127.0.0.1]
TASK [network-check : Calico Validation - Finding MTU for the detected Interface(s)] ***
fatal: [127.0.0.1]: FAILED! => {"msg": "The conditional check 'hostvars[inventory_hostname]['ansible_'~item]['mtu'] is defined' failed. The error was: error while evaluating conditional (hostvars[inventory_hostname]['ansible_'~item]['mtu'] is defined): 'dict object' has no attribute u'ansible_'\n\nThe error appears to have been in '/installer/playbook/roles/network-check/tasks/calico.yaml': line 86, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Calico Validation - Finding MTU for the detected Interface(s)\n ^ here\n"}
NO MORE HOSTS LEFT *************************************************************
NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
127.0.0.1 : ok=12 changed=6 unreachable=0 failed=1
Playbook run took 0 days, 0 hours, 0 minutes, 13 seconds
user@kim:/opt/ibm-cloud-private-ce-2.1.0.3/cluster$
I am working on Ubuntu 14.04 with Docker version 17.12.1-ce, build 7390fc6.
My hosts file looks like this:
[master]
127.0.0.1 ansible_user="user" ansible_ssh_pass="6CEd29CN" ansible_become=true ansible_become_pass="6CEd29CN" ansible_port="22" ansible_ssh_common_args="-oPubkeyAuthentication=no"
[worker]
127.0.0.1 ansible_user="user" ansible_ssh_pass="6CEd29CN" ansible_become=true ansible_become_pass="6CEd29CN" ansible_port="22" ansible_ssh_common_args="-oPubkeyAuthentication=no"
[proxy]
127.0.0.1 ansible_user="user" ansible_ssh_pass="6CEd29CN" ansible_become=true ansible_become_pass="6CEd29CN" ansible_port="22" ansible_ssh_common_args="-oPubkeyAuthentication=no"
#[management]
#4.4.4.4
#[va]
#5.5.5.5
The config.yaml file looks like this:
# Licensed Materials - Property of IBM
# IBM Cloud private
# # Copyright IBM Corp. 2017 All Rights Reserved
# US Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
---
###### docker0: 172.17.0.1
###### eth0: 192.168.240.14
## Network Settings
network_type: calico
# network_helm_chart_path: < helm chart path >
## Network in IPv4 CIDR format
network_cidr: 127.0.0.1/8
## Kubernetes Settings
service_cluster_ip_range: 127.0.0.1/24
## Makes the Kubelet start if swap is enabled on the node. Remove
## this if your production env wants to disable swap.
kubelet_extra_args: ["--fail-swap-on=false"]
# cluster_domain: cluster.local
# cluster_name: mycluster
# cluster_CA_domain: "{{ cluster_name }}.icp"
# cluster_zone: "myzone"
# cluster_region: "myregion"
## Etcd Settings
etcd_extra_args: ["--grpc-keepalive-timeout=0", "--grpc-keepalive-interval=0", "--snapshot-count=10000"]
## General Settings
# wait_for_timeout: 600
# docker_api_timeout: 100
## Advanced Settings
default_admin_user: user
default_admin_password: 6CEd29CN
# ansible_user: <username>
# ansible_become: true
# ansible_become_password: <password>
## Kubernetes Settings
# kube_apiserver_extra_args: []
# kube_controller_manager_extra_args: []
# kube_proxy_extra_args: []
# kube_scheduler_extra_args: []
## Enable Kubernetes Audit Log
# auditlog_enabled: false
## GlusterFS Settings
# glusterfs: false
## GlusterFS Storage Settings
# storage:
# - kind: glusterfs
# nodes:
# - ip: <worker_node_m_IP_address>
# device: <link path>/<symlink of device aaa>,<link path>/<symlink of device bbb>
# - ip: <worker_node_n_IP_address>
# device: <link path>/<symlink of device ccc>
# - ip: <worker_node_o_IP_address>
# device: <link path>/<symlink of device ddd>
# storage_class:
# name:
# default: false
# volumetype: replicate:3
## Network Settings
## Calico Network Settings
# calico_ipip_enabled: true
# calico_tunnel_mtu: 1430
calico_ip_autodetection_method: can-reach=127.0.0.1
## IPSec mesh Settings
## If user wants to configure IPSec mesh, the following parameters
## should be configured through config.yaml
# ipsec_mesh:
# enable: true
# interface: <interface name on which IPsec will be enabled>
# subnets: []
# exclude_ips: "<list of IP addresses separated by a comma>"
# kube_apiserver_secure_port: 8001
## External loadbalancer IP or domain
## Or floating IP in OpenStack environment
# cluster_lb_address: none
## External loadbalancer IP or domain
## Or floating IP in OpenStack environment
# proxy_lb_address: none
## Install in firewall enabled mode
# firewall_enabled: false
## Allow loopback dns server in cluster nodes
# loopback_dns: false
## High Availability Settings
# vip_manager: etcd
## High Availability Settings for master nodes
# vip_iface: eth0
# cluster_vip: 127.0.1.1
## High Availability Settings for Proxy nodes
# proxy_vip_iface: eth0
# proxy_vip: 127.0.1.1
## Federation cluster Settings
# federation_enabled: false
# federation_cluster: federation-cluster
# federation_domain: cluster.federation
# federation_apiserver_extra_args: []
# federation_controllermanager_extra_args: []
# federation_external_policy_engine_enabled: false
## vSphere cloud provider Settings
## If user wants to configure vSphere as cloud provider, vsphere_conf
## parameters should be configured through config.yaml
# kubelet_nodename: hostname
# cloud_provider: vsphere
# vsphere_conf:
# user: <vCenter username for vSphere cloud provider>
# password: <password for vCenter user>
# server: <vCenter server IP or FQDN>
# port: [vCenter Server Port; default: 443]
# insecure_flag: [set to 1 if vCenter uses a self-signed certificate]
# datacenter: <datacenter name on which Node VMs are deployed>
# datastore: <default datastore to be used for provisioning volumes>
# working_dir: <vCenter VM folder path in which node VMs are located>
## Disabled Management Services Settings
## You can disable the following management services: ["service-catalog", "metering", "monitoring", "istio", "vulnerability-advisor", "custom-metrics-adapter"]
disabled_management_services: ["istio", "vulnerability-advisor", "custom-metrics-adapter"]
## Docker Settings
# docker_env: []
# docker_extra_args: []
## The maximum size of the log before it is rolled
# docker_log_max_size: 50m
## The maximum number of log files that can be present
# docker_log_max_file: 10
## Install/upgrade docker version
# docker_version: 17.12.1
## ICP install docker automatically
# install_docker: true
## Ingress Controller Settings
## You can add your ingress controller configuration, and the allowed configuration can refer to
## https://github.com/kubernetes/ingress-nginx/blob/nginx-0.9.0/docs/user-guide/configmap.md#configuration-options
# ingress_controller:
# disable-access-log: 'true'
## Clean metrics indices in Elasticsearch older than this number of days
# metrics_max_age: 1
## Clean application log indices in Elasticsearch older than this number of days
# logs_maxage: 1
## Uncomment the line below to install Kibana as a managed service.
# kibana_install: true
# STARTING_CLOUDANT
# cloudant:
# namespace: kube-system
# pullPolicy: IfNotPresent
# pvPath: /opt/ibm/cfc/cloudant
# database:
# password: fdrreedfddfreeedffde
# federatorCommand: hostname
# federationIdentifier: "-0"
# readinessProbePeriodSeconds: 2
# readinessProbeInitialDelaySeconds: 90
# END_CLOUDANT
My goal is to set up ICP on my local machine (single node) and I'm very thankful for any help regarding this issue.
So I resolved this error by uncommenting # calico_ipip_enabled: true and setting it to false.
After that though I got another error because of my loopback ip:
fatal: [127.0.0.1] => A loopback IP is used in your DNS server configuration. For more details, see https://ibm.biz/dns-fails.
But there is a fix/workaround: set loopback_dns: true, as mentioned in the link.
I can't close this question here but this is how I resolved it.
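For reference, a sketch of the relevant config.yaml lines as changed by this answer (both values come from the text above):

## Calico Network Settings
calico_ipip_enabled: false
## Allow loopback dns server in cluster nodes
loopback_dns: true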
IBM Cloud Private's supported OS is Ubuntu 16.04. Please check the URL below:
https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0.3/supported_system_config/supported_os.html
Please check the system requirements.
I am also installing the setup and didn't face this issue.
Normally, we cannot specify 127.0.0.1 as an ICP node in the ICP hosts file. Thanks.
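A hedged sketch of a hosts file that satisfies this, using a routable address instead of the loopback (192.168.240.14 is the eth0 address noted in the config.yaml comments above; substitute your own interface IP and SSH credentials):

# assumption: 192.168.240.14 is this machine's eth0 address
[master]
192.168.240.14 ansible_user="user" ansible_become=true

[worker]
192.168.240.14

[proxy]
192.168.240.14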

GitLab 7-0 stable not able to push or clone

Hi, I tried various links and configuration settings to resolve my issue, but I'm not able to do so.
If you still feel the question is a duplicate or not useful, please add a comment and I will delete the question without bothering anyone.
I set up GitLab 7-0 stable on a fresh Ubuntu 12.04 64-bit machine on a local network.
My server domain name is 192.168.1.1 (some static IP within my LAN).
After this I am able to log in as both admin and user. I can create projects, groups, and do all basic work through the web UI.
But I'm not able to clone or push code to the server using SSH from another system on my LAN.
The result of sudo ./bin/check is:
Check GitLab API access: OK
Check directories and files:
/home/git/repositories/: OK
/home/git/.ssh/authorized_keys: OK
Test redis-cli executable: redis-cli 2.2.12
Send ping to redis server: PONG
and all my system statuses are GREEN.
I suspect it might be a problem with the gitlab.yml, nginx, or unicorn configuration.
Can anyone please help me?
Update:
My config/gitlab.yml:
production: &base
  ## GitLab settings
  gitlab:
    ## Web server settings (note: host is the FQDN, do not include http://)
    host: 192.168.1.37
    port: 80
    https: false
    email_from: example@example.com
    default_projects_limit: 10
    ## Default project features settings
    default_projects_features:
      issues: true
      merge_requests: true
      wiki: true
      snippets: false
      visibility_level: "private" # can be "private" | "internal" | "public"
    issues_tracker:
  ## Gravatar
  gravatar:
    enabled: true # Use user avatar image from Gravatar.com (default: true)
  ldap:
    enabled: false
    host: '_your_ldap_server'
    port: 636
    uid: 'sAMAccountName'
    method: 'ssl' # "tls" or "ssl" or "plain"
    bind_dn: '_the_full_dn_of_the_user_you_will_bind_with'
    password: '_the_password_of_the_bind_user'
    allow_username_or_email_login: true
  satellites:
    # Relative paths are relative to Rails.root (default: tmp/repo_satellites/)
    path: /home/git/gitlab-satellites/
  ## Backup settings
  backup:
    path: "tmp/backups" # Relative paths are relative to Rails.root (default: tmp/backups/)
    # keep_time: 604800 # default: 0 (forever) (in seconds)
  ## GitLab Shell settings
  gitlab_shell:
    path: /home/git/gitlab-shell/
    # REPOS_PATH MUST NOT BE A SYMLINK!!!
    repos_path: /home/git/repositories/
    hooks_path: /home/git/gitlab-shell/hooks/
    # Git over HTTP
    upload_pack: true
    receive_pack: true
    ssh_port: 22
  git:
    bin_path: /usr/local/bin/git
    max_size: 5242880 # 5.megabytes
    # Git timeout to read a commit, in seconds
    timeout: 10
  extra:

development:
  <<: *base

test:
  <<: *base
  gravatar:
    enabled: true
  gitlab:
    host: localhost
And /home/git/gitlab-shell/config.yml
user: git
gitlab_url: http://192.168.1.37
http_settings:
  self_signed_cert: false
repos_path: /home/git/repositories/
auth_file: /home/git/.ssh/authorized_keys
redis:
  bin: /usr/bin/redis-cli
  host: localhost
  port: 6379
  namespace: resque:gitlab
log_level: INFO
audit_usernames: false
I had a similar problem.
When following the installation instructions, one creates the git user with the nologin option, but on my Linux box (openSUSE) this blocks pull and push via SSH.
I edited /etc/passwd, found the line starting with git, and replaced the shell at the end of the line with /bin/bash (it was /bin/false before).
No need to restart anything. You should then be able to pull and push.
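A safer equivalent (an addition, not part of the original answer) is to change the shell with usermod instead of editing /etc/passwd by hand:

# give the git user a login shell; equivalent to the /etc/passwd edit above
sudo usermod -s /bin/bash git
# verify the change
getent passwd git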
