Hi, I have tried various links and configuration settings to resolve my issue, but I'm not able to do so.
If you feel the question is a duplicate or not useful, please add a comment and I will delete it without bothering anyone.
I set up GitLab 7.0 stable on a fresh Ubuntu 12.04 64-bit machine on a local network.
My server's address is 192.168.1.1 (a static IP within my LAN).
After this I can log in as both admin and user, and I can create projects and groups; all the basic work through the web UI is fine.
But I'm not able to clone or push code to the server over SSH from other systems in my LAN.
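For example, this is what I run from another machine on the LAN (the address matches the host in my gitlab.yml below; the repository path is just a placeholder):

# first check that SSH reaches gitlab-shell at all
ssh -T git@192.168.1.37
# then try to clone (mygroup/myproject is illustrative)
git clone git@192.168.1.37:mygroup/myproject.git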
The result of sudo ./bin/check is:
Check GitLab API access: OK
Check directories and files:
/home/git/repositories/: OK
/home/git/.ssh/authorized_keys: OK
Test redis-cli executable: redis-cli 2.2.12
Send ping to redis server: PONG
All my system statuses are GREEN.
I suspect it might be a problem with the gitlab.yml, nginx, or Unicorn configuration.
Can anyone please help me?
Update:
My config/gitlab.yml:
production: &base
  ## GitLab settings
  gitlab:
    ## Web server settings (note: host is the FQDN, do not include http://)
    host: 192.168.1.37
    port: 80
    https: false
    email_from: example@example.com
    default_projects_limit: 10
    ## Default project features settings
    default_projects_features:
      issues: true
      merge_requests: true
      wiki: true
      snippets: false
      visibility_level: "private" # can be "private" | "internal" | "public"
    issues_tracker:
  ## Gravatar
  gravatar:
    enabled: true # Use user avatar image from Gravatar.com (default: true)
  ldap:
    enabled: false
    host: '_your_ldap_server'
    port: 636
    uid: 'sAMAccountName'
    method: 'ssl' # "tls" or "ssl" or "plain"
    bind_dn: '_the_full_dn_of_the_user_you_will_bind_with'
    password: '_the_password_of_the_bind_user'
    allow_username_or_email_login: true
  satellites:
    # Relative paths are relative to Rails.root (default: tmp/repo_satellites/)
    path: /home/git/gitlab-satellites/
  ## Backup settings
  backup:
    path: "tmp/backups" # Relative paths are relative to Rails.root (default: tmp/backups/)
    # keep_time: 604800 # default: 0 (forever) (in seconds)
  ## GitLab Shell settings
  gitlab_shell:
    path: /home/git/gitlab-shell/
    # REPOS_PATH MUST NOT BE A SYMLINK!!!
    repos_path: /home/git/repositories/
    hooks_path: /home/git/gitlab-shell/hooks/
    # Git over HTTP
    upload_pack: true
    receive_pack: true
    ssh_port: 22
  git:
    bin_path: /usr/local/bin/git
    max_size: 5242880 # 5.megabytes
    # Git timeout to read a commit, in seconds
    timeout: 10
  extra:

development:
  <<: *base

test:
  <<: *base
  gravatar:
    enabled: true
  gitlab:
    host: localhost
And /home/git/gitlab-shell/config.yml:
user: git
gitlab_url: http://192.168.1.37
http_settings:
  self_signed_cert: false
repos_path: /home/git/repositories/
auth_file: /home/git/.ssh/authorized_keys
redis:
  bin: /usr/bin/redis-cli
  host: localhost
  port: 6379
  namespace: resque:gitlab
log_level: INFO
audit_usernames: false
I had a similar problem.
When following the installation instructions, one creates the git user with the nologin option, but on my Linux box (openSUSE) this blocks pull and push via SSH.
I edited /etc/passwd, looked for the line starting with git, and replaced the shell at the end of the line with /bin/bash (it was /bin/false before).
No need to restart anything. You should then be able to pull and push.
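For example (a sketch; the exact tools vary by distribution), the same change without editing /etc/passwd by hand:

# show the git user's current entry; the last field is the login shell
getent passwd git
# give the git user a real shell so the SSH commands can run
sudo chsh -s /bin/bash git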
Related
I have a problem with Traefik: I want to collect logs from a server with syslog-ng (running in Docker).
I do get logs, but the source shows the reverse proxy's name; I want the original source IP, not the name of the Traefik container. I wish to preserve the source IP from the host.
traefik.yml:
global:
  sendAnonymousUsage: false

api:
  dashboard: true
  insecure: true

providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    watch: true
    useBindPortIP: true
    exposedByDefault: false
  file:
    filename: /etc/traefik/config.yml
    watch: true

log:
  level: INFO
  format: common

entryPoints:
  http:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: https
          scheme: https
  https:
    address: ":443"
  udp:
    address: ":514/udp"
  tcp:
    address: ":514"

udp:
  services:
    syslog-ng:
      loadBalancer:
        servers:
          - address: ":514/udp"

tcp:
  services:
    syslog-ng:
      loadBalancer:
        servers:
          - address: ":514"

forwardedHeaders: true

certificatesResolvers:
  le:
    acme:
      email: responsable.informatique@exemple.com
      storage: acme.json
      httpChallenge:
        # used during the challenge
        entryPoint: http
syslog-ng.conf:
# rsyslog configuration file
# For more information see /usr/share/doc/rsyslog-*/rsyslog_conf.html
# If you experience problems, see http://www.rsyslog.com/doc/troubleshoot.html
#### MODULES ####
# The imjournal module below is now used as a message source instead of imuxsock.
$ModLoad imuxsock # provides support for local system logging (e.g. via logger command)
$ModLoad imjournal # provides access to the systemd journal
#$ModLoad imklog # reads kernel messages (the same are read from journald)
#$ModLoad immark # provides --MARK-- message capability
# Provides UDP syslog reception
#$ModLoad imudp
#$InputUDPServerRun 514
#$UDPServerRun 514
# Provides TCP syslog reception
#$ModLoad imtcp
#$InputTCPServerRun 514
#### GLOBAL DIRECTIVES ####
# Where to place auxiliary files
$WorkDirectory /var/lib/rsyslog
# Use default timestamp format
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
# File syncing capability is disabled by default. This feature is usually not required,
# not useful and an extreme performance hit
$ActionFileEnableSync on
# Include all config files in /etc/rsyslog.d/
$IncludeConfig /etc/rsyslog.d/*.conf
#*.* @lysca-app0037.ds-001.net
*.* @@10.84.50.186
#*.* action(type="omfwd" target="10.84.50.186" port="601" protocol="tcp"
# action.resumeRetryCount ="100"
# queue.type="linkedList" queue.size="10000")
# Turn off message reception via local log socket;
# local messages are retrieved through imjournal now.
$OmitLocalLogging on
# File to store the position in the journal
$IMJournalStateFile imjournal.state
#### RULES ####
# Log all kernel messages to the console.
# Logging much else clutters up the screen.
#kern.* /dev/console
# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none /var/log/messages
# The authpriv file has restricted access.
authpriv.* /var/log/secure
# Log all the mail messages in one place.
mail.* -/var/log/maillog
# Log cron stuff
cron.* /var/log/cron
# Everybody gets emergency messages
*.emerg :omusrmsg:*
# Save news errors of level crit and higher in a special file.
uucp,news.crit /var/log/spooler
# Save boot messages also to boot.log
local7.* /var/log/boot.log
# ### begin forwarding rule ###
# The statement between the begin ... end define a SINGLE forwarding
# rule. They belong together, do NOT split them. If you create multiple
# forwarding rules, duplicate the whole block!
# Remote Logging (we use TCP for reliable delivery)
#
# An on-disk queue is created for this action. If the remote host is
# down, messages are spooled to disk and sent when it is up again.
#$ActionQueueFileName fwdRule1 # unique name prefix for spool files
#$ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible)
#$ActionQueueSaveOnShutdown on # save messages to disk on shutdown
#$ActionQueueType LinkedList # run asynchronously
#$ActionResumeRetryCount -1 # infinite retries if host is down
# remote host is: name/ip:port, e.g. 192.168.0.1:514, port optional
#*.* @@remote-host:514
# ### end of the forwarding rule ###
It's like reverse NAT, but I don't know where I can find this configuration in Traefik.
Thanks for reading.
Regards
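Not a confirmed fix, but one place to look: for the TCP entry point, Traefik's TCP load balancer can speak the PROXY protocol to the backend, which carries the original client IP; syslog-ng would then need to be configured to accept proxied connections (recent syslog-ng releases support this via transport("proxied-tcp") on the network() source). A sketch of the dynamic configuration (the server address assumes the syslog-ng container name):

tcp:
  services:
    syslog-ng:
      loadBalancer:
        proxyProtocol:
          version: 2                   # send PROXY protocol v2 headers to the backend
        servers:
          - address: "syslog-ng:514"   # assumed container name and port

For plain UDP there is no equivalent mechanism, so the source address will remain Traefik's.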
I'm trying to set up a local Kibana instance with ActiveMQ for testing purposes. I've created a Docker network called elastic-network. I have three containers in my network: elasticsearch, kibana, and finally activemq. In my kibana container, I downloaded Metricbeat using the following shell command:
curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-7.11.2-linux-x86_64.tar.gz
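and unpacked it with a standard tar invocation (the directory name follows from the archive):

tar xzvf metricbeat-7.11.2-linux-x86_64.tar.gz
cd metricbeat-7.11.2-linux-x86_64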
In the configuration file metricbeat.reference.yml, I changed the hosts entry to point at my ActiveMQ instance running in the activemq container:
- module: activemq
  metricsets: ['broker', 'queue', 'topic']
  period: 10s
  hosts: ['activemq:8161']
  path: '/api/jolokia/?ignoreErrors=true&canonicalNaming=false'
  username: admin # default username
  password: admin # default password
When I run Metricbeat with the verbose flag (./metricbeat -e), I get an error saying the ActiveMQ API is unreachable. My problem is that Metricbeat ignores my ActiveMQ broker configuration and tries to connect to localhost instead.
Is there a reason why my configuration would be ignored?
After looking through the documentation, I saw that on Linux, unlike the other OSes, you also have to change the configuration in the modules directory (modules.d/activemq.yml), not just metricbeat.reference.yml:
# Module: activemq
# Docs: https://www.elastic.co/guide/en/beats/metricbeat/7.11/metricbeat-module-activemq.html
- module: activemq
  metricsets: ['broker', 'queue', 'topic']
  period: 10s
  hosts: ['activemq:8161']
  path: '/api/jolokia/?ignoreErrors=true&canonicalNaming=false'
  username: admin # default username
  password: admin # default password
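To rule out the module simply not being enabled, the Metricbeat CLI can confirm which module files are active (run from the extracted directory):

# enable the ActiveMQ module (renames modules.d/activemq.yml.disabled to modules.d/activemq.yml)
./metricbeat modules enable activemq
# confirm it shows up under "Enabled"
./metricbeat modules list
# run in the foreground with logging to stderr
./metricbeat -e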
I'm following this tutorial to get logs from my Docker containers stored in Elasticsearch via Filebeat and Logstash: Link to tutorial
However, nothing shows up in Kibana, and when I run docker logs on my Filebeat container, I get the following error:
2019-03-30T22:22:40.353Z ERROR log/harvester.go:281 Read line error: parsing CRI timestamp: parsing time "-03-30T21:59:16,113][INFO" as "2006-01-02T15:04:05Z07:00": cannot parse "-03-30T21:59:16,113][INFO" as "2006"; File: /usr/share/dockerlogs/data/2f3164397450efdd5851c3fad67fe405ab3dd822bbea1d807a993844e9143d5e/2f3164397450efdd5851c3fad67fe405ab3dd822bbea1d807a993844e9143d5e-json.log
My containers are hosted on a Linux virtual machine, and the virtual machine runs on a Windows machine (I'm not sure whether this could be causing the error because of the paths involved).
Below I describe what's running, along with some of the files, in case the article is deleted in the future.
One container is running that simply executes the following command, printing lines that Filebeat should be able to read:
CMD while true; do sleep 2 ; echo "{\"app\": \"dummy\", \"foo\": \"bar\"}"; done
My filebeat.yml file is as follows:
filebeat.inputs:
  - type: docker
    combine_partial: true
    containers:
      path: "/usr/share/dockerlogs/data"
      stream: "stdout"
      ids:
        - "*"
    exclude_files: ['\.gz$']
    ignore_older: 10m

processors:
  # decode the log field (sub JSON document) if JSON encoded, then map its fields to Elasticsearch fields
  - decode_json_fields:
      fields: ["log", "message"]
      target: ""
      # overwrite existing target Elasticsearch fields while decoding JSON fields
      overwrite_keys: true
  - add_docker_metadata:
      host: "unix:///var/run/docker.sock"

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

# set up Filebeat to send output to Logstash
output.logstash:
  hosts: ["logstash"]

# Write Filebeat's own logs only to file, to avoid catching them with itself in the Docker log files
logging.level: error
logging.to_files: false
logging.to_syslog: false
logging.metrics.enabled: false
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644

ssl.verification_mode: none
Any suggestions on why Filebeat is failing to forward my logs, and how to fix it, would be appreciated. Thanks.
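In case it helps anyone hitting the same CRI-timestamp error: newer Filebeat releases ship a container input that detects the Docker JSON and CRI log formats automatically, which avoids this kind of mis-parse. A sketch (assuming a Filebeat version that includes that input; the path mirrors the one above):

filebeat.inputs:
  # the container input detects the log format instead of assuming CRI
  - type: container
    paths:
      - /usr/share/dockerlogs/data/*/*-json.log
    stream: stdout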
Is there any way to create a proper, really custom .lando.yml file so that it does not use any recipe? How do I specify "just give me Apache, MariaDB, PHP" in Lando?
I tried this:
# The name of the app
name: mariadb

# Give me http://mariadb.lndo.site and https://mariadb.lndo.site
proxy:
  html:
    - mariadb.lndo.site

# Set up my services
services:
  # Set up a basic webserver running the latest nginx with ssl turned on
  html:
    type: nginx
    ssl: true
    webroot: www
  # Spin up a mariadb container called "database"
  # NOTE: "database" is arbitrary, you could just as well call this "db" or "kanye"
  database:
    # Use mariadb version 10.1
    type: mariadb:10.1
    # Optionally allow access to the database at localhost:3307
    # You will need to make sure port 3307 is open on your machine
    #
    # You can also set `portforward: true` to have Lando dynamically assign
    # a port. Unlike specifying an actual port, setting this to true will give you
    # a different port every time you restart your app
    portforward: 3307
    # Optionally set the default db credentials
    #
    # Note: You will need to `lando destroy && lando start` to change these if you've
    # already started your app
    # See: https://docs.devwithlando.io/tutorials/lando-info.html
    creds:
      user: mariadb
      password: mariadb
      database: mariadb
    # Optionally load in all the mariadb config files in the config directory
    # This is relative to the app root
    # NOTE: these files need to end in .cnf
    config:
      confd: config
but after lando start I get an ERROR: No such service: appserver error, and the documentation for this is extremely confusing.
Thanks.
You'll want to look at the Building a Custom Stack section of the lando custom project page.
I won't do your entire project, but the basics are as follows:
# LAMP stack example
name: lamp
proxy:
  appserver:
    - lamp.lndo.site # Allows you to access the site at http[s]://lamp.lndo.site
                     # This may actually get done automatically
services: # Define your services
  appserver: # Create a web server container
    type: php:5.3 # Specify what version of php to use
    via: apache # This could be nginx, should you choose so
    webroot: www # Specify webroot
    config: # If you want to add/edit config
      server: config/apache/lamp.conf # Use an alternate apache config file
      conf: path/from/app/root/php.ini # Alter php configuration with a custom file
  database: # Create a database server container
    type: mysql
    portforward: 3308
    creds: # Specify what creds/db to use
      user: lamp
      password: lamp
      database: lamp
tooling: # These tooling entries connect "lando <command>" to the appropriate containers
  composer: # Call with "lando composer ..."
    service: appserver
    description: Run composer commands
    cmd: composer --ansi
  php: # Call with "lando php ..."
    service: appserver
  mysql: # Call with "lando mysql ..."
    user: root
    service: database
    description: Drop into a MySQL shell
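Once that file is in place, the usual Lando workflow applies:

lando start    # build and start the containers
lando info     # show service URLs and database connection details
lando mysql    # drop into the MySQL shell defined under tooling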
I am new to Docker and all this logging stuff, so maybe I'm making a stupid mistake; thanks for helping in advance. I have ELK running in a Docker container (6.2.2) via the Dockerfile line:
FROM sebp/elk:latest
In a separate container I am installing and running Filebeat via the following Dockerfile lines:
RUN curl -L -O -k https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.2-amd64.deb
RUN dpkg -i filebeat-6.2.2-amd64.deb
COPY resources/filebeat/filebeat.yml /etc/filebeat/filebeat.yml
RUN chmod go-w /etc/filebeat/filebeat.yml
RUN /usr/share/filebeat/bin/filebeat -e -d "publish" &
My Filebeat configuration is:
filebeat.prospectors:
  - type: log
    enabled: true
    paths:
      - /jetty/jetty-distribution-9.3.8.v20160314/logs/*.log

output.logstash:
  enabled: false
  hosts: ["elk-stack:9002"]
  #index: 'audit'

output.elasticsearch:
  enabled: true
  hosts: ["elk-stack:9200"]
  #index: "audit-%{+yyyy.MM.dd}"

path.config: "/etc/filebeat"
#setup.template.name: "audit"
#setup.template.pattern: "audit-*"
#setup.template.fields: "${path.config}/fields.yml"
As you can see, I was trying to send documents to a custom index in Elasticsearch, but now I'm just trying to get the default working first. The Jetty logs all have global read permissions.
The Docker container logs show no errors, and after it runs I make sure the config and output are OK:
# filebeat test config
Config OK
# filebeat test output
elasticsearch: http://elk-stack:9200...
parse url... OK
connection...
parse host... OK
dns lookup... OK
addresses: 172.17.0.3
dial up... OK
TLS... WARN secure connection disabled
talk to server... OK
version: 6.2.2
/var/log/filebeat/filebeat shows:
2018-03-15T13:23:38.859Z INFO instance/beat.go:468 Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2018-03-15T13:23:38.860Z INFO instance/beat.go:475 Beat UUID: ed5cecaf-cbf5-438d-bbb9-30bab80c4cb9
2018-03-15T13:23:38.860Z INFO elasticsearch/client.go:145 Elasticsearch url: http://elk-stack:9200
2018-03-15T13:23:38.891Z INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.2.2
However, when I hit localhost:9200/_cat/indices?v, it doesn't return any indices:
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
How do I get this working? I am out of ideas. Thanks again for any help.
To answer my own question: you can't start Filebeat with:
RUN /usr/share/filebeat/bin/filebeat -e -d "publish" &
and have it keep running once the container starts, because RUN only executes while the image is being built. You need to start it manually once the container is up, or run it in its own container with an ENTRYPOINT instruction.
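For example, a minimal sketch of a dedicated Filebeat container, reusing the commands from my question (the base image choice is an assumption):

FROM ubuntu:16.04
RUN apt-get update && apt-get install -y curl
RUN curl -L -O -k https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.2-amd64.deb && \
    dpkg -i filebeat-6.2.2-amd64.deb
COPY resources/filebeat/filebeat.yml /etc/filebeat/filebeat.yml
RUN chmod go-w /etc/filebeat/filebeat.yml
# unlike RUN, ENTRYPOINT executes when the container starts, keeping Filebeat in the foreground
ENTRYPOINT ["/usr/share/filebeat/bin/filebeat", "-e", "-d", "publish", "-c", "/etc/filebeat/filebeat.yml"]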