SaltStack: dockerng is not available - docker

I'm quite new to SaltStack. I've set up a salt-master and a salt-minion (via salt-cloud on my ESXi). It works fine so far. However, I cannot get dockerng to run any function on my minion; it always returns 'dockerng.xxxx' is not available:
# salt '*' test.ping
minion1:
    True
$ salt '*' dockerng.version
minion1:
    'dockerng.version' is not available.
However, when I call the same function with salt-call directly on the minion, it works:
$ salt-call dockerng.version
[INFO    ] Determining pillar cache
local:
    ----------
    ApiVersion:
        1.23
Any hints/ideas?

Have you installed the python docker module on the minion itself? That's a requirement.

I just encountered exactly the same situation. Installing 'docker-py' on the salt-master worked for me. Of course, as Utah_Dave suggested, docker-py is also needed on any minion that will be targeted by dockerng.
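For reference, a minimal sketch of installing the bindings through Salt's own pip module (assuming pip is already present on the master and minions), followed by refreshing the minion's modules so dockerng gets picked up:
$ salt '*' pip.install docker-py
$ salt '*' saltutil.refresh_modules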

I encountered this problem while using an image with docker pre-installed. The solution that works for me is to restart the salt-minion daemon:
salt-minion:
  pkg:
    - installed
    - name: salt-minion
  service.running:
    - enable: True
    - require:
      - pkg: salt-minion
      - service: docker
      - pip: docker-py
    - watch:
      - pip: docker-py
taken from http://humankeyboard.com/saltstack/2013/how-to-restart-salt-minion.html
Unfortunately, the dockerng module doesn't work until the second run from the master. I'm still playing with watch and reload_modules trying to get this to work.
https://docs.saltstack.com/en/latest/ref/states/index.html#reloading-modules
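For what it's worth, a minimal sketch of the reload_modules idea (untested here, assuming docker-py is installed via the pip state referenced above) is to set it on that state, so the minion reloads its execution modules within the same run:
docker-py:
  pip.installed:
    - reload_modules: True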

Related

How to Test Cookbook in CHEF using DOCKER

I am trying to test a simple cookbook recipe using the docker driver, but I get an error.
Can someone please help? I really gave it my all, trying several things, but none of them worked. I also installed the docker kitchen driver using the command chef gem install kitchen-docker. Below are my configuration files, Chef versions, kitchen.yml file, etc.
Chef version
Chef Workstation version: 22.6.973
Chef InSpec version: 4.56.20
Chef CLI version: 5.6.1
Chef Habitat version: 1.6.420
Test Kitchen version: 3.2.2
Cookstyle version: 7.32.1
Chef Infra Client version: 17.10.0
chef gem list kitchen-docker gives output as: kitchen-docker (2.13.0)
Below is my kitchen.yml file
---
driver:
  name: docker
  provision_command: curl -L https://www.chef.io/chef/install.sh | bash
provisioner:
  name: chef_zero
  ## product_name and product_version specifies a specific Chef product and version to install.
  ## see the Chef documentation for more details: https://docs.chef.io/workstation/config_yml_kitchen/
  # product_name: chef
  # product_version: 17
verifier:
  name: inspec
platforms:
  - name: ubuntu
  - name: centos-7
    driver_config:
      image: 'centos:7'
      platform: centos
transport:
  name: docker
suites:
  - name: default
    run_list:
      - recipe[docker-cookbook::default]
    verifier:
      inspec_tests:
        - test/integration/default
    attributes:
My recipe: default.rb
#
# Cookbook:: docker-cookbook
# Recipe:: default
#
# Copyright:: 2022, The Authors, All Rights Reserved.
file '/tmp/test.txt' do
  content 'This is managed by Rapidops'
  action :create
end
But when I do kitchen list, I get the error below:
>>>>>> ------Exception-------
>>>>>> Class: Kitchen::UserError
>>>>>> Message: Kitchen YAML file /root/chef-repo/cookbooks/docker-cookbook/recipes/kitchen.yml does not exist.
>>>>>> ----------------------
>>>>>> Please see .kitchen/logs/kitchen.log for more details
>>>>>> Also try running `kitchen diagnose --all` for configuration
As per the error message, you are trying to run kitchen commands from the recipe directory:
/root/chef-repo/cookbooks/docker-cookbook/recipes/kitchen.yml does not exist
You should run kitchen commands from the root level of the cookbook.
In your case, that is /root/chef-repo/cookbooks/docker-cookbook
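For example (a sketch assuming kitchen.yml was accidentally created inside recipes/ and belongs at the cookbook root):
$ cd /root/chef-repo/cookbooks/docker-cookbook
$ mv recipes/kitchen.yml .   # only needed if the file really lives under recipes/
$ kitchen list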

%PARSER_ERROR while turning up the server

I added two Gradle dependencies for parsing YAML content and generating JSON out of it, but I get this kind of log output when I try to start up my server.
%PARSER_ERROR[cyan] %PARSER_ERROR[gray] %PARSER_ERROR[highlight] %PARSER_ERROR[magenta] - Kafka version: 2.3.0
%PARSER_ERROR[cyan] %PARSER_ERROR[gray] %PARSER_ERROR[highlight] %PARSER_ERROR[magenta] - Kafka commitId: fc1aaa116b661c8a
%PARSER_ERROR[cyan] %PARSER_ERROR[gray] %PARSER_ERROR[highlight] %PARSER_ERROR[magenta] - Kafka startTimeMs: 1604911935830
Excluding the 'logback-core' module of the 'ch.qos.logback' group from the Gradle dependency solved the issue.
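A rough sketch of what that exclusion can look like in build.gradle; the coordinates below are placeholders, substitute the YAML/JSON dependency you actually added:
implementation('com.example:yaml-json-parser:1.0') {   // hypothetical coordinates
    exclude group: 'ch.qos.logback', module: 'logback-core'
}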

How can I build and run Druid locally

My environment is below.
MacBook Pro (13-inch, 2019, Four Thunderbolt 3 ports)
2.8 GHz Quad-Core Intel Core i7
16 GB 2133 MHz LPDDR3
Intel Iris Plus Graphics 655 1536 MB
Docker: 19.03.12
Druid: 0.19.0
Although I followed the official instructions, I failed to build or run Druid locally.
About this: https://github.com/apache/druid/tree/master/distribution/docker
I typed the following commands.
git clone https://github.com/apache/druid.git
docker build -t apache/druid:tag -f distribution/docker/Dockerfile .
However, the build never proceeds past this step.
Sending build context to Docker daemon 78.19MB
Step 1/18 : FROM maven:3-jdk-8-slim as builder
---> addee4586ff4
Step 2/18 : RUN export DEBIAN_FRONTEND=noninteractive && apt-get -qq update && apt-get -qq -y install --no-install-recommends python3 python3-yaml
---> Using cache
---> cdb74d0f6b3d
Step 3/18 : COPY . /src
---> 60d35cb6c0ce
Step 4/18 : WORKDIR /src
---> Running in 73dfa666a186
Removing intermediate container 73dfa666a186
---> 4839bf923b21
Step 5/18 : RUN mvn -B -ff -q dependency:go-offline install -Pdist,bundle-contrib-exts -Pskip-static-checks,skip-tests -Dmaven.javadoc.skip=true
---> Running in 1c9d4aa3d4e8
Update: I also followed this instruction and ran docker-compose -f distribution/docker/docker-compose.yml up, but it failed with the error below.
coordinator | 2020-08-06T08:41:24,295 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorRuleRunner - Uh... I have no servers. Not assigning anything...
About this: https://hub.docker.com/r/apache/druid/tags
I typed the following commands.
docker pull apache/druid:0.19.0
docker run apache/druid:0.19.0
This program seems to work like this.
2020-08-06T07:50:22+0000 startup service
Setting 172.17.0.2= in /runtime.properties
cat: can't open '/jvm.config': No such file or directory
2020-08-06T07:50:24,024 INFO [main] org.hibernate.validator.internal.util.Version - HV000001: Hibernate Validator 5.2.5.Final
2020-08-06T07:50:24,988 INFO [main] org.apache.druid.initialization.Initialization - Loading extension [druid-hdfs-storage], jars: jackson-annotations-2.10.2.jar, hadoop-mapreduce-client-common-2.8.5.jar, httpclient-4.5.10.jar, htrace-core4-4.0.1-incubating.jar, apacheds-kerberos-codec-2.0.0-M15.jar, jackson-mapper-asl-1.9.13.jar, commons-digester-1.8.jar, jetty-sslengine-6.1.26.jar, jackson-databind-2.10.2.jar, api-asn1-api-1.0.0-M20.jar, ion-java-1.0.2.jar, hadoop-mapreduce-client-shuffle-2.8.5.jar, asm-7.1.jar, jsp-api-2.1.jar, druid-hdfs-storage-0.19.0.jar, api-util-1.0.3.jar, json-smart-2.3.jar, jackson-core-2.10.2.jar, hadoop-client-2.8.5.jar, httpcore-4.4.11.jar, commons-collections-3.2.2.jar, hadoop-hdfs-client-2.8.5.jar, hadoop-annotations-2.8.5.jar, hadoop-auth-2.8.5.jar, xmlenc-0.52.jar, aws-java-sdk-s3-1.11.199.jar, commons-net-3.6.jar, nimbus-jose-jwt-4.41.1.jar, hadoop-common-2.8.5.jar, jackson-dataformat-cbor-2.10.2.jar, hadoop-yarn-server-common-2.8.5.jar, accessors-smart-1.2.jar, gson-2.2.4.jar, commons-configuration-1.6.jar, joda-time-2.10.5.jar, hadoop-aws-2.8.5.jar, aws-java-sdk-core-1.11.199.jar, commons-codec-1.13.jar, hadoop-mapreduce-client-app-2.8.5.jar, hadoop-yarn-api-2.8.5.jar, aws-java-sdk-kms-1.11.199.jar, jackson-core-asl-1.9.13.jar, curator-recipes-4.3.0.jar, hadoop-mapreduce-client-jobclient-2.8.5.jar, jcip-annotations-1.0-1.jar, jmespath-java-1.11.199.jar, hadoop-mapreduce-client-core-2.8.5.jar, commons-logging-1.1.1.jar, leveldbjni-all-1.8.jar, curator-framework-4.3.0.jar, hadoop-yarn-client-2.8.5.jar, apacheds-i18n-2.0.0-M15.jar
2020-08-06T07:50:25,004 INFO [main] org.apache.druid.initialization.Initialization - Loading extension [druid-kafka-indexing-service], jars: lz4-java-1.7.1.jar, kafka-clients-2.5.0.jar, druid-kafka-indexing-service-0.19.0.jar, zstd-jni-1.3.3-1.jar, snappy-java-1.1.7.3.jar
2020-08-06T07:50:25,006 INFO [main] org.apache.druid.initialization.Initialization - Loading extension [druid-datasketches], jars: druid-datasketches-0.19.0.jar, commons-math3-3.6.1.jar
usage: druid <command> [<args>]
The most commonly used druid commands are:
help      Display help information
index     Run indexing for druid
internal  Processes that Druid runs "internally", you should rarely use these directly
server    Run one of the Druid server types.
tools     Various tools for working with Druid
version   Returns Druid version information
See 'druid help <command>' for more information on a specific command.
However, even if I add an argument like version, it does not work:
❯ docker run apache/druid:0.19.0 version
2020-08-06T07:51:30+0000 startup service version
Setting druid.host=172.17.0.2 in /runtime.properties
cat: can't open '/jvm.config': No such file or directory
2020-08-06T07:51:32,517 INFO [main] org.hibernate.validator.internal.util.Version - HV000001: Hibernate Validator 5.2.5.Final
2020-08-06T07:51:33,503 INFO [main] org.apache.druid.initialization.Initialization - Loading extension [druid-hdfs-storage], jars: jackson-annotations-2.10.2.jar, hadoop-mapreduce-client-common-2.8.5.jar, httpclient-4.5.10.jar, htrace-core4-4.0.1-incubating.jar, apacheds-kerberos-codec-2.0.0-M15.jar, jackson-mapper-asl-1.9.13.jar, commons-digester-1.8.jar, jetty-sslengine-6.1.26.jar, jackson-databind-2.10.2.jar, api-asn1-api-1.0.0-M20.jar, ion-java-1.0.2.jar, hadoop-mapreduce-client-shuffle-2.8.5.jar, asm-7.1.jar, jsp-api-2.1.jar, druid-hdfs-storage-0.19.0.jar, api-util-1.0.3.jar, json-smart-2.3.jar, jackson-core-2.10.2.jar, hadoop-client-2.8.5.jar, httpcore-4.4.11.jar, commons-collections-3.2.2.jar, hadoop-hdfs-client-2.8.5.jar, hadoop-annotations-2.8.5.jar, hadoop-auth-2.8.5.jar, xmlenc-0.52.jar, aws-java-sdk-s3-1.11.199.jar, commons-net-3.6.jar, nimbus-jose-jwt-4.41.1.jar, hadoop-common-2.8.5.jar, jackson-dataformat-cbor-2.10.2.jar, hadoop-yarn-server-common-2.8.5.jar, accessors-smart-1.2.jar, gson-2.2.4.jar, commons-configuration-1.6.jar, joda-time-2.10.5.jar, hadoop-aws-2.8.5.jar, aws-java-sdk-core-1.11.199.jar, commons-codec-1.13.jar, hadoop-mapreduce-client-app-2.8.5.jar, hadoop-yarn-api-2.8.5.jar, aws-java-sdk-kms-1.11.199.jar, jackson-core-asl-1.9.13.jar, curator-recipes-4.3.0.jar, hadoop-mapreduce-client-jobclient-2.8.5.jar, jcip-annotations-1.0-1.jar, jmespath-java-1.11.199.jar, hadoop-mapreduce-client-core-2.8.5.jar, commons-logging-1.1.1.jar, leveldbjni-all-1.8.jar, curator-framework-4.3.0.jar, hadoop-yarn-client-2.8.5.jar, apacheds-i18n-2.0.0-M15.jar
2020-08-06T07:51:33,524 INFO [main] org.apache.druid.initialization.Initialization - Loading extension [druid-kafka-indexing-service], jars: lz4-java-1.7.1.jar, kafka-clients-2.5.0.jar, druid-kafka-indexing-service-0.19.0.jar, zstd-jni-1.3.3-1.jar, snappy-java-1.1.7.3.jar
2020-08-06T07:51:33,526 INFO [main] org.apache.druid.initialization.Initialization - Loading extension [druid-datasketches], jars: druid-datasketches-0.19.0.jar, commons-math3-3.6.1.jar
ERROR!!!!
Found unexpected parameters: [version]
===
usage: druid <command> [<args>]
The most commonly used druid commands are:
help      Display help information
index     Run indexing for druid
internal  Processes that Druid runs "internally", you should rarely use these directly
server    Run one of the Druid server types.
tools     Various tools for working with Druid
version   Returns Druid version information
See 'druid help <command>' for more information on a specific command
So I see a few things here:
docker run apache/druid:0.19.0 means "fire and forget": if there is no endlessly running service inside, your docker container will shut down shortly after start.
To interact with the container, start it with the -it flags.
To let it run without interaction, run it with the -d flag for detached mode.
You can find information about this here: https://docs.docker.com/engine/reference/run/
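For illustration, a small sketch of the two modes using standard Docker commands (the container name here is chosen arbitrarily):
$ docker run -it apache/druid:0.19.0              # foreground, interactive TTY attached
$ docker run -d --name druid apache/druid:0.19.0  # background (detached)
$ docker logs -f druid                            # follow the detached container's output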
You have to check the start command.
The thing you wrote after the image name is the start command (in your case "version"); it is handed to the container much as if you had typed it there yourself.
In addition, if you DON'T add a startup command, there can still be a default one defined in the Druid Dockerfile.
You can see the dockerfile of your selected image at docker.hub, like here:
https://hub.docker.com/layers/apache/druid/0.19.0/images/sha256-eb2a4852b4ad1d3ca86cbf4c9dc7ed9b73c767815f187eb238d2b80ca26dfd9a?context=explore
There you see that the start command, which within a Dockerfile is called ENTRYPOINT, is a shell script:
ENTRYPOINT ["/druid.sh"]
So the "version" you wrote after the run command is passed as an argument to that shell script, which does not expect it - hence the "Found unexpected parameters: [version]" error. We should not do that :)
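If you want to double-check what an image will execute, you can read its configured entrypoint without starting a container at all (a sketch using standard Docker commands; the output shown is what the Dockerfile above implies):
$ docker inspect --format '{{.Config.Entrypoint}}' apache/druid:0.19.0
[/druid.sh]
Anything appended after the image name in docker run is then passed as an argument to that script.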

How to properly setup docker-sync to exclude folders

I am trying to set up docker-sync to exclude my app/cache and app/logs folders, but it is not working.
Things I've tried:
Using sync_excludes: ['.idea', 'app/cache/', 'app/logs/'] but it will be translated to something like this
command unison -ignore='Name .idea' -ignore='Name app/cache/*'
-ignore='Name app/logs/*'
So then I tried using sync_args and set it like this:
sync_args:
  - "-debug verbose"
  - "-ignore='Path app/cache'"
  - "-ignore='Path app/logs'"
command unison -ignore='Name .idea' -ignore='Name systems' \
-debug verbose -ignore='Path app/cache/*' -ignore='Path app/logs/*'
Looking at the logs, I can see this:
[unox][DEBUG]: recvCmd: DIR
[unox][DEBUG]: sendCmd: OK
[fswatch+] >> OK
[pred] immutable 'app/cache' = false
[pred] ignore 'app/cache/prod' = true
[pred] ignorenot 'app/cache/prod' = false
[ignore] buildUpdateChildren: ignoring path app/cache/prod
But I still see the events being triggered, and it is still syncing to my host machine.
[pred] ignore 'app/cache/prod/annotations/5702f47f407ddb07532bfd60d8ea2919489ef4bc#__construct.cache.php' = false
[pred] ignore 'app/cache/prod/annotations/f21b469a2214195ff16e2af43f249bdbfa245c25#findPublishedOr404.cache.php' = false
Anyone know what I am missing?
My latest version looks like this:
version: "2"
options:
# optional, activate this if you need to debug something, default is false
# IMPORTANT: do not run stable with this, it creates a memory leak, turn off verbose when you are done testing
verbose: true
syncs:
#IMPORTANT: ensure this name is unique
dt-akeneo-unison-sync:
notify_terminal: true
# which folder to watch / sync from - you can use tilde (~), it will get expanded. Be aware that the trailing slash makes a difference
# if you add them, only the inner parts of the folder gets synced, otherwise the parent folder will be synced as top-level folder
src: './'
# the files should be own by root in the target cointainer
sync_userid: 1000
sync_strategy: 'unison'
# optional, a list of regular expressions to exclude from the fswatch - see fswatch docs for details
watch_excludes: ['\.git', '\.gitignore', '.*\.md']
sync_args:
- "-debug verbose" #force Unison to choose the file with the later (earlier) modtime
- "-ignore='Path app/cache'"
- "-ignore='Path app/logs'"
- "-ignore='Path .git'"
- "-ignore='Path .git'"
- "-ignore='Path vendor'"
- "-ignore='Path upgrades'"
- "-ignore='Path systems'"
# optional: use this to switch to fswatch verbose mode
watch_args: '-v'
My Env:
{11:41}~ ➭ docker -v
Docker version 17.09.0-ce, build afdb6d4
{11:42}~ ➭ docker-sync -v
0.4.6
OS: OS X 10.11.6
Running my projects on a case-sensitive mounted filesystem created using https://gist.github.com/scottsb/479bebe8b4b86bf17e2d
/dev/disk2s2 on /Users/neisantos/src (hfs, local, nodev, nosuid, journaled, noowners, nobrowse)
Don't know if it's too late.
I have almost the same setup:
OSx 10.12.6
Docker version 18.06.0-ce, build 0ffa825
docker-sync v 0.5.7
My project structure looks like this
./
build
changes
docker
etc
var
some-yml-files.yml
My docker-sync.yml looks like this
version: "2"
options:
verbose: true
syncs:
my-appcode-sync: # tip: add -sync and you keep consistent names as a convention
src: '.'
# sync_strategy: 'native_osx' # not needed, this is the default now
sync_excludes: ['docker/var', 'changes', '.git', '.idea']
I took this from https://github.com/EugenMayer/docker-sync/issues/421#issuecomment-309244156
Note the src: '.'. Initially I used src: './' like you did and ignoring folders didn't work either. After removing the / it worked for me.
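Applied to the configuration from the question, a sketch (untested, keeping the asker's folder names and options) might look like this:
version: "2"
options:
  verbose: true
syncs:
  dt-akeneo-unison-sync:
    src: '.'
    sync_userid: 1000
    sync_strategy: 'unison'
    sync_excludes: ['app/cache', 'app/logs', '.git', 'vendor', 'upgrades', 'systems']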

Homebrew: Can't start elastic search

I'm in big trouble: I can't start Elasticsearch and I need it to run my Rails app locally. Please tell me what's going on. I installed Elasticsearch in the normal fashion, then I did the following:
elasticsearch --config=/usr/local/opt/elasticsearch/config/elasticsearch.yml
But it shows the following error: [2015-11-01 20:36:50,574][INFO ][bootstrap] es.config is no longer supported. elasticsearch.yml must be placed in the config directory and cannot be renamed.
I tried several alternative ways of running it, like:
elasticsearch -f -D
But then I get the following error, and I can't find anything useful for solving it; it seems to be related to file permissions, but I'm not sure:
java.io.IOException: Resource not found: "org/joda/time/tz/data/ZoneInfoMap" ClassLoader: sun.misc.Launcher$AppClassLoader#33909752
at org.joda.time.tz.ZoneInfoProvider.openResource(ZoneInfoProvider.java:210)
at org.joda.time.tz.ZoneInfoProvider.<init>(ZoneInfoProvider.java:127)
at org.joda.time.tz.ZoneInfoProvider.<init>(ZoneInfoProvider.java:86)
at org.joda.time.DateTimeZone.getDefaultProvider(DateTimeZone.java:514)
at org.joda.time.DateTimeZone.getProvider(DateTimeZone.java:413)
at org.joda.time.DateTimeZone.forID(DateTimeZone.java:216)
at org.joda.time.DateTimeZone.getDefault(DateTimeZone.java:151)
at org.joda.time.chrono.ISOChronology.getInstance(ISOChronology.java:79)
at org.joda.time.DateTimeUtils.getChronology(DateTimeUtils.java:266)
at org.joda.time.format.DateTimeFormatter.selectChronology(DateTimeFormatter.java:968)
at org.joda.time.format.DateTimeFormatter.printTo(DateTimeFormatter.java:672)
at org.joda.time.format.DateTimeFormatter.printTo(DateTimeFormatter.java:560)
at org.joda.time.format.DateTimeFormatter.print(DateTimeFormatter.java:644)
at org.elasticsearch.Build.<clinit>(Build.java:51)
at org.elasticsearch.node.Node.<init>(Node.java:135)
at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:145)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:170)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:270)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
[2015-11-01 20:40:57,602][INFO ][node ] [Centurius] version[2.0.0], pid[22063], build[de54438/2015-10-22T08:09:48Z]
[2015-11-01 20:40:57,605][INFO ][node ] [Centurius] initializing ...
Exception in thread "main" java.lang.IllegalStateException: failed to load bundle [] due to jar hell
Likely root cause: java.security.AccessControlException: access denied ("java.io.FilePermission" "/usr/local/Cellar/elasticsearch/2.0.0/libexec/antlr-runtime-3.5.jar" "read")
at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472)
at java.security.AccessController.checkPermission(AccessController.java:884)
at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
at java.lang.SecurityManager.checkRead(SecurityManager.java:888)
at java.util.zip.ZipFile.<init>(ZipFile.java:210)
at java.util.zip.ZipFile.<init>(ZipFile.java:149)
at java.util.jar.JarFile.<init>(JarFile.java:166)
at java.util.jar.JarFile.<init>(JarFile.java:103)
at org.elasticsearch.bootstrap.JarHell.checkJarHell(JarHell.java:173)
at org.elasticsearch.plugins.PluginsService.loadBundles(PluginsService.java:340)
at org.elasticsearch.plugins.PluginsService.<init>(PluginsService.java:113)
at org.elasticsearch.node.Node.<init>(Node.java:144)
at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:145)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:170)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:270)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
Refer to the log for complete error details.
Thanks for your help.
There are some changes to libexec in the Homebrew Elasticsearch installation, and that is why it is failing to start. There is a PR #45644 currently being worked on. Until the PR gets accepted, you can use that same formula to fix the installation of Elasticsearch.
First uninstall the earlier/older version. Then edit the formula of Elasticsearch:
$ brew edit elasticsearch
And use the formula from the PR.
Then do brew install elasticsearch; it should work fine.
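Put together, the steps look roughly like this (a sketch; the formula itself has to be pasted in from the PR by hand):
$ brew uninstall elasticsearch
$ brew edit elasticsearch    # replace the contents with the formula from the PR
$ brew install elasticsearch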
To start Elasticsearch, just do:
$ elasticsearch
The --config option is no longer valid. For a custom config directory, use --path.conf:
$ elasticsearch --path.conf=/usr/local/opt/elasticsearch/config
