How to properly set up docker-sync to exclude folders - docker

I am trying to set up docker-sync to exclude my app/cache and app/logs folders, but it is not working.
Things I've tried:
Using sync_excludes: ['.idea', 'app/cache/', 'app/logs/'], but it gets translated into something like this:
command unison -ignore='Name .idea' -ignore='Name app/cache/*' -ignore='Name app/logs/*'
So then I tried using sync_args, set like this:
sync_args:
  - "-debug verbose"
  - "-ignore='Path app/cache'"
  - "-ignore='Path app/logs'"
command unison -ignore='Name .idea' -ignore='Name systems' \
  -debug verbose -ignore='Path app/cache/*' -ignore='Path app/logs/*'
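For reference, Unison distinguishes Name patterns from Path patterns; a short summary of the semantics in Unison profile syntax (general Unison behaviour, not anything docker-sync-specific):
ignore = Name .idea        # any file or directory named .idea, at any depth
ignore = Path app/cache    # the path app/cache relative to the sync root
                           # (ignoring a directory also ignores everything inside it)
ignore = Path app/cache/*  # the children of app/cache, but not app/cache itself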
Looking at the logs, I can see this:
[unox][DEBUG]: recvCmd: DIR
[unox][DEBUG]: sendCmd: OK
[fswatch+] >> OK
[pred] immutable 'app/cache' = false
[pred] ignore 'app/cache/prod' = true
[pred] ignorenot 'app/cache/prod' = false
[ignore] buildUpdateChildren: ignoring path app/cache/prod
But I still see events being triggered, and it is still syncing to my host machine:
[pred] ignore 'app/cache/prod/annotations/5702f47f407ddb07532bfd60d8ea2919489ef4bc#__construct.cache.php' = false
[pred] ignore 'app/cache/prod/annotations/f21b469a2214195ff16e2af43f249bdbfa245c25#findPublishedOr404.cache.php' = false
Anyone know what I am missing?
My latest version looks like this:
version: "2"
options:
# optional, activate this if you need to debug something, default is false
# IMPORTANT: do not run stable with this, it creates a memory leak, turn off verbose when you are done testing
verbose: true
syncs:
#IMPORTANT: ensure this name is unique
dt-akeneo-unison-sync:
notify_terminal: true
# which folder to watch / sync from - you can use tilde (~), it will get expanded. Be aware that the trailing slash makes a difference
# if you add them, only the inner parts of the folder gets synced, otherwise the parent folder will be synced as top-level folder
src: './'
# the files should be own by root in the target cointainer
sync_userid: 1000
sync_strategy: 'unison'
# optional, a list of regular expressions to exclude from the fswatch - see fswatch docs for details
watch_excludes: ['\.git', '\.gitignore', '.*\.md']
sync_args:
- "-debug verbose" #force Unison to choose the file with the later (earlier) modtime
- "-ignore='Path app/cache'"
- "-ignore='Path app/logs'"
- "-ignore='Path .git'"
- "-ignore='Path .git'"
- "-ignore='Path vendor'"
- "-ignore='Path upgrades'"
- "-ignore='Path systems'"
# optional: use this to switch to fswatch verbose mode
watch_args: '-v'
My env:
{11:41}~ ➭ docker -v
Docker version 17.09.0-ce, build afdb6d4
{11:42}~ ➭ docker-sync -v
0.4.6
OS: OS X 10.11.6
Running my projects on a mounted case-sensitive filesystem created using https://gist.github.com/scottsb/479bebe8b4b86bf17e2d
/dev/disk2s2 on /Users/neisantos/src (hfs, local, nodev, nosuid, journaled, noowners, nobrowse)

Don't know if it's too late.
I have almost the same setup:
OS X 10.12.6
Docker version 18.06.0-ce, build 0ffa825
docker-sync v 0.5.7
My project structure looks like this:
./
build
changes
docker
etc
var
some-yml-files.yml
My docker-sync.yml looks like this:
version: "2"
options:
  verbose: true
syncs:
  my-appcode-sync: # tip: add -sync and you keep consistent names as a convention
    src: '.'
    # sync_strategy: 'native_osx' # not needed, this is the default now
    sync_excludes: ['docker/var', 'changes', '.git', '.idea']
I took this from https://github.com/EugenMayer/docker-sync/issues/421#issuecomment-309244156
Note the src: '.'. Initially I used src: './' like you did, and ignoring folders didn't work either. After removing the /, it worked for me.
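Putting the two together, a minimal docker-sync.yml for the original question might look like this (a sketch only, reusing the sync name, strategy, and paths from the question; untested):
version: "2"
syncs:
  dt-akeneo-unison-sync:
    src: '.'   # note: no trailing slash
    sync_strategy: 'unison'
    sync_userid: 1000
    sync_excludes: ['.idea', 'app/cache', 'app/logs', 'vendor', 'upgrades', 'systems', '.git']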

Related

Mutagen.io does not sync beta -> alpha on OSX

I'm having a problem when trying to sync beta => alpha using mutagen on OSX.
When a file is created on beta (the Docker container), it won't get synchronized to alpha (the host), resulting in this error:
public/asd.txt: unable to create file: unable to set staged file permissions: unable to set ownership information: chown /Users/xxx/.mutagen/staging/sync_ORDrmPtp9gAdxdF0NC6EkEp3Eqaf9MMUgaXGn2CJx8H-alpha/da/3a404e69fec4666cf122d6588790e042a663d9d5_da39a3ee5e6b4b0d3255bfef95601890afd80709: operation not permitted
My mutagen.yml file says:
defaults:
  permissions:
    defaultOwner: "id:10"
    defaultGroup: "id:3"
  ignore:
    paths:
      - .DS_Store
code:
  alpha: "./"
  beta: "docker://php/app"
  mode: "two-way-resolved"
  permissions:
    defaultFileMode: 666
    defaultDirectoryMode: 777
  ignore:
    vcs: true
    paths:
      - "/vendor/"
The directory /Users/xxx/.mutagen/staging is owned by the same user that runs mutagen project start.
I also tried starting the mutagen sync as superuser, with the same result.
Any answers appreciated!
The problem is that mutagen tries to apply the defaultOwner on the host system as well, but chown is not permitted on Mac (see this question for further information).
The solution is to set the defaultOwner and defaultGroup for the beta system only like this:
code:
  alpha: "./"
  beta: "docker://php/app"
  mode: "two-way-resolved"
  ignore:
    vcs: true
    paths:
      - "/vendor/"
  configurationBeta:
    permissions:
      defaultOwner: "id:10"
      defaultGroup: "id:3"
This way the owner and group settings remain at their defaults for the host system, i.e. the user running mutagen.
I found it really helpful to confirm my settings for a running sync with mutagen sync monitor -l. This helped me resolve the issue.
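As a rough workflow sketch after editing mutagen.yml (the project subcommands below exist in recent mutagen releases; verify against your version):
# Recreate the project sessions so the configurationBeta override takes effect
mutagen project terminate
mutagen project start
# Confirm the effective settings for the running sync, as mentioned above
mutagen sync monitor -l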

Starting Zabbix Server within Docker replaces strings with nothing in the config file, or totally ignores strings like the name of a new DB for testing purposes

First I tried to add roughly another 250 hosts to the ~250 already added, and the Zabbix server shut down. I restarted it, and in the docker logs I saw this:
6:20191014:091840.201 using configuration file: /etc/zabbix/zabbix_server.conf
6:20191014:091840.223 current database version (mandatory/optional): 04020000/04020001
6:20191014:091840.223 required mandatory version: 04020000
6:20191014:091840.484 __mem_malloc: skipped 7 asked 108424 skip_min 304 skip_max 12192
6:20191014:091840.484 [file:dbconfig.c,line:94] __zbx_mem_realloc(): out of memory (requested 108424 bytes)
6:20191014:091840.484 [file:dbconfig.c,line:94] __zbx_mem_realloc(): please increase CacheSize configuration parameter
6:20191014:091840.484 === memory statistics for configuration cache ===
The solution to that problem was to increase CacheSize in zabbix_server.conf. Okay, that's not a problem, so I pushed a new config to the Zabbix server and restarted it... and the server stopped right after starting again, with the logs reporting the same problem. Reading the config inside the container, I saw that the lines I had edited to match my wishes were missing. The lines had been deleted.
My config:
LogType=console
DBHost=postgres-server
DBName=zabbix_pwd
DBSchema=public
DBUser=zabbix
DBPassword=zabbix
DBPort=5432
StartPollers=5
StartIPMIPollers=5
StartPollersUnreachable=5
SNMPTrapperFile=/var/lib/zabbix/snmptraps/snmptraps.log
StartSNMPTrapper=1
CacheSize=512M
HistoryCacheSize=512M
HistoryIndexCacheSize=512M
TrendCacheSize=512m
ValueCacheSize=256M
AlertScriptsPath=/usr/lib/zabbix/alertscripts
ExternalScripts=/usr/lib/zabbix/externalscripts
FpingLocation=/usr/sbin/fping
Fping6Location=/usr/sbin/fping6
SSHKeyLocation=/var/lib/zabbix/ssh_keys
SSLCertLocation=/var/lib/zabbix/ssl/certs/
SSLKeyLocation=/var/lib/zabbix/ssl/keys/
SSLCALocation=/var/lib/zabbix/ssl/ssl_ca/
LoadModulePath=/var/lib/zabbix/modules/
And this is what I get after starting the Zabbix server:
LogType=console
DBHost=postgres-server
DBName=zabbix_pwd
DBSchema=public
DBUser=zabbix
DBPassword=zabbix
DBPort=5432
SNMPTrapperFile=/var/lib/zabbix/snmptraps/snmptraps.log
StartSNMPTrapper=1
AlertScriptsPath=/usr/lib/zabbix/alertscripts
ExternalScripts=/usr/lib/zabbix/externalscripts
FpingLocation=/usr/sbin/fping
Fping6Location=/usr/sbin/fping6
SSHKeyLocation=/var/lib/zabbix/ssh_keys
SSLCertLocation=/var/lib/zabbix/ssl/certs/
SSLKeyLocation=/var/lib/zabbix/ssl/keys/
SSLCALocation=/var/lib/zabbix/ssl/ssl_ca/
LoadModulePath=/var/lib/zabbix/modules/
Any suggestions on how to rule the world and not get captured by doctors?
With Docker you need to pass config parameters via the docker-compose.yml file, or in your docker run command using -e:
For example from my docker yml file:
zabbix-server:
  image: zabbix/zabbix-server-pgsql:ubuntu-4.2.6
  environment:
    ZBX_MAXHOUSEKEEPERDELETE: 5000
    ZBX_STARTPOLLERS: 15
    ZBX_CACHESIZE: 8M
    ZBX_STARTDBSYNCERS: 4
    ZBX_HISTORYCACHESIZE: 16M
    ZBX_TRENDCACHESIZE: 4M
    ZBX_VALUECACHESIZE: 8M
    ZBX_LOGSLOWQUERIES: 3000
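For the docker run variant mentioned above, a rough equivalent would be the following (image tag taken from the compose example, cache sizes from the question's config; a sketch, not verified):
docker run -d --name zabbix-server \
  -e ZBX_CACHESIZE=512M \
  -e ZBX_HISTORYCACHESIZE=512M \
  -e ZBX_VALUECACHESIZE=256M \
  zabbix/zabbix-server-pgsql:ubuntu-4.2.6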
Another way to work with Zabbix:
https://hub.docker.com/r/monitoringartist/zabbix-3.0-xxl/

How can I change the UUID of my nixos OS partition, and update the bootloader?

So essentially I've got an exact clone of my partition (I've changed the UUID though), and I'd now like to change over the bootloader to load the new partition.
What I tried:
I naively (while booted / running on the original partition) tried to modify the hardware-configuration.nix (on the original partition) with the new UUID and then tried to:
sudo nixos-rebuild switch
sudo nixos-rebuild boot
Both of which fail** at the point of mounting the drives (I think).
updating GRUB 2 menu...
lsblk: /dev/mapper/no*[0-9]: not a block device
lsblk: /dev/mapper/raid*[0-9]: not a block device
lsblk: /dev/mapper/disks*[0-9]: not a block device
Found Arch Linux on /dev/sdb3
Also, I assume I'd possibly need to mount this new partition somewhere (unless that isn't required to actually boot into it after a reboot?).
** Actually, although it appears to 'fail', when I reboot and select the usual NixOS GRUB entry, I see the following (the UUID mentioned is the UUID that does exist, and it's the new partition):
Worst case, it seems I'd be able to use a NixOS live USB to mount the new partition to /mnt and then just follow the usual nixos-install (which has worked in the past, with only the /etc/nixos directory present though)?
Firstly, get the system in working order again by changing the UUID back in hardware-configuration.nix and making sure it boots OK.
Next, change the UUID in hardware-configuration.nix, like you have done before, but this time run sudo nixos-rebuild boot.
When you reboot you'll have a new entry in your systemd-boot or GRUB2 menu. The new entry will boot NixOS from the new partition.
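For orientation, the UUID being edited lives in the fileSystems block of hardware-configuration.nix; a minimal sketch with placeholder values:
# Point the root filesystem at the cloned partition (UUID and fsType are placeholders):
fileSystems."/" =
  { device = "/dev/disk/by-uuid/0000-NEW-UUID-0000";
    fsType = "ext4";
  };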
I tried the nixos-install route.
It seems I had issues with my existing hardware-configuration.nix, as I ran into the exact same waiting for device... issue.
Finally I ran nixos-generate-config --root /mnt, which generated a new config with the following differences:
diff -u nixos.backup/hardware-configuration.nix /etc/nixos/hardware-configuration.nix
--- nixos.backup/hardware-configuration.nix 2018-11-22 20:18:01.361647120 +0000
+++ /etc/nixos/hardware-configuration.nix 2018-11-22 20:18:41.818644420 +0000
@@ -8,8 +8,8 @@
[ <nixpkgs/nixos/modules/installer/scan/not-detected.nix>
];
- boot.initrd.availableKernelModules = [ "xhci_pci" "ehci_pci" "ahci" "usb_storage" "sd_mod" "rtsx_pci_sdmmc" ];
- boot.kernelModules = [ "kvm-intel" ];
+ boot.initrd.availableKernelModules = [ "nvme" "xhci_pci" "ahci" "usb_storage" "usbhid" "sd_mod" ];
+ boot.kernelModules = [ "kvm-amd" ];
boot.extraModulePackages = [ ];
fileSystems."/" =
@@ -20,6 +20,4 @@
swapDevices = [ ];
nix.maxJobs = lib.mkDefault 4;
- powerManagement.cpuFreqGovernor = "powersave";
}
-
So it was probably the nvme bit. I should also add that I had kvm-intel in the old config even though my CPU stayed the same (it's an AMD).

Not able to find curator.yml (elasticsearch-curator) in Linux

The official Elasticsearch site says the default config file lives at /home/username/.curator/curator.yml:
https://www.elastic.co/guide/en/elasticsearch/client/curator/current/command-line.html
But there is no such folder.
I also tried creating curator.yml and giving its path using the --config option, but it throws an error:
curator --config ./curator.yml
Error: no such option: --config
Installation was done using apt:
sudo apt-get update && sudo apt-get install elasticsearch-curator
Help me create a config file, as I want to delete my log indexes.
Please note that the documentation does not say that this file already exists; it says:
If --config and CONFIG.YML are not provided, Curator will look in ~/.curator/curator.yml for the configuration file.
The file must be created by the end user.
Also, if you installed via:
sudo apt-get update && sudo apt-get install elasticsearch-curator
but did not add the official Elastic repository for Curator, then you installed an older version. Please check which version you are running with:
$ curator --version
curator, version 5.4.1
If you do not see the current version (5.4.1 at the time this answer was added), then you do not have the appropriate repository installed.
The official documentation provides an example client configuration file here.
There are also many examples of action files in the examples section.
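Since the goal here is deleting log indexes, a minimal delete_indices action file might look like this (the logstash- prefix, date format, and 30-day cutoff are assumptions; adjust them to your indices):
---
actions:
  1:
    action: delete_indices
    description: Delete log indices older than 30 days (prefix and age are assumptions)
    options:
      ignore_empty_list: True
      disable_action: False
    filters:
      - filtertype: pattern
        kind: prefix
        value: logstash-       # assumed index prefix
      - filtertype: age
        source: name
        direction: older
        timestring: '%Y.%m.%d' # assumed date format in the index name
        unit: days
        unit_count: 30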
Yes, one needs to create both the curator.yml and the action.yml files.
Since I am on CentOS 7, I happened to install Curator from the RPM, which defaults to /opt/elastic-curator. There I could follow this good (but badly formatted!) blog: https://anchormen.nl/blog/big-data-services/monitoring-aws-redshift-with-elasticsearch/ to get the files as follows (modify according to your needs):
curator.yml
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
client:
  hosts:
    - <host1>
    - <host2, likewise up to hostN>
  port: 9200
  url_prefix:
  use_ssl: False
  certificate:
  client_cert:
  client_key:
  ssl_no_validate: False
  http_auth:
  timeout: 30
  master_only: False
logging:
  loglevel: INFO
  logfile: /var/log/curator.log
  logformat: default
  blacklist: []
and an action.yml as follows:
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
  1:
    action: rollover
    description: Rollover the index associated with index 'name', which should be in the form of prefix-000001 (or similar), or prefix-YYYY.MM.DD-1.
    options:
      disable_action: False
      name: redshift_metrics_daily
      conditions:
        max_age: 1d
      extra_settings:
        index.number_of_shards: 2
        index.number_of_replicas: 1
  2:
    action: rollover
    description: Rollover the index associated with index 'name', which should be in the form of prefix-000001 (or similar), or prefix-YYYY.MM.DD-1.
    options:
      disable_action: False
      name: redshift_query_metadata_daily
      conditions:
        max_age: 1d
      extra_settings:
        index.number_of_shards: 2
        index.number_of_replicas: 1
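With both files in place, the job can then be run by pointing Curator at them (paths assumed from the RPM default mentioned above):
curator --config /opt/elastic-curator/curator.yml /opt/elastic-curator/action.yml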

SaltStack: dockerng is not available

I'm quite new to SaltStack. I've set up a salt-master and a salt-minion (via salt-cloud on my ESXi). It works fine so far. However, I cannot get dockerng to run any function on my minion. It always returns 'dockerng.xxxx' is not available:
# salt '*' test.ping
minion1:
True
$ salt '*' dockerng.version
minion1:
'dockerng.version' is not available.
However, when I call the same with salt-call directly on the minion:
$ salt-call dockerng.version
[INFO ] Determining pillar cache
local:
----------
ApiVersion:
1.23
Any hints/ideas?
Have you installed the python docker module on the minion itself? That's a requirement.
I just encountered exactly the same situation. Installing 'docker-py' on the salt-master worked for me. Of course, as suggested by Utah_Dave, docker-py would also be needed on any minion that would be targeted by dockerng.
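As a quick sketch of that fix (docker-py as named in the answers; saltutil.refresh_modules is a standard Salt function):
# On the minion (and the master, per the answer above):
pip install docker-py
# Then have the minion re-detect its available modules:
salt '*' saltutil.refresh_modules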
I encountered this problem while using an image with docker pre-installed. The solution that works for me is to restart the salt-minion daemon:
salt-minion:
  pkg:
    - installed
    - name: salt-minion
  service.running:
    - enable: True
    - require:
      - pkg: salt-minion
      - service: docker
      - pip: docker-py
    - watch:
      - pip: docker-py
taken from http://humankeyboard.com/saltstack/2013/how-to-restart-salt-minion.html
Unfortunately, the dockerng module doesn't work until the second run from the master. I'm still playing with watch and reload_modules trying to get this to work.
https://docs.saltstack.com/en/latest/ref/states/index.html#reloading-modules
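A hypothetical sketch of that approach, using the global reload_modules argument from the linked docs so dockerng is picked up in the same run:
docker-py:
  pip.installed:
    - reload_modules: True    # re-scan available modules once docker-py is installed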
