I am testing version compatibility with molecule, and for the combination
python 3.8 x ansible latest x debian
molecule breaks in the instance creation step with:
TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
failed: [localhost] (item=None) => {"attempts": 2, "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
fatal: [localhost]: FAILED! => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
PLAY RECAP *********************************************************************
localhost : ok=6 changed=3 unreachable=0 failed=1 skipped=3 rescued=0 ignored=0
https://travis-ci.com/ckaserer/ansible-role-example/jobs/256557752
In order to debug further, I need to set no_log: false.
Any ideas on how to set no_log to false for molecule's own internal playbooks?
I tried with MOLECULE_DEBUG, but that did not do the trick.
Searching molecule's docs did not give any results either.
Running molecule with
molecule --debug test
also does not set no_log to false in molecule's own playbooks.
You can set the environment variable
MOLECULE_NO_LOG="false"
and then run your normal molecule command, e.g.
molecule test
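For example, as a one-liner that sets the variable just for that run:
MOLECULE_NO_LOG=false molecule test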
That wasn't easy to find. I had to take a look at molecule's source code and found that
molecule/test/resources/playbooks/docker/create.yml
(the playbook used to create the docker images defined by Dockerfile.j2) uses the variable molecule_no_log to set the no_log value in the playbook.
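Inside that playbook the wiring looks roughly like this (paraphrased from the molecule source; the surrounding task is omitted and the exact layout may differ between versions):
no_log: "{{ molecule_no_log }}"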
Additionally, in
molecule/test/unit/provisioner/test_ansible.py
the variable molecule_no_log is derived from the environment variable MOLECULE_NO_LOG.
So, in the end, I just needed to set that environment variable to false.
Molecule source code
https://github.com/ansible/molecule
Related
I'm getting this error while trying to run Cube.js with the default command from the getting-started docs. I started it in a folder and I'm running it in Docker.
Warning. There is no cube.js file. Continue with environment variables
Cube Store (0.28.31) is assigned to 3030 port.
Warning. Option apiSecret is required in dev mode. Cube.js has generated it as e3b8c5a35fe378f4d481ada777e5f3c4
Authentication checks are disabled in developer mode. Please use NODE_ENV=production to enable it.
Dev environment available at http://localhost:4000
Cube.js server (0.28.31) is listening on 4000
2021-09-03 15:06:01,512 INFO [cubestore::http::status] <pid:17> Serving status probes at 0.0.0.0:3031
2021-09-03 15:06:01,515 INFO [cubestore::metastore] <pid:17> Using existing metastore in /cube/conf/.cubestore/data/metastore
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Error { message: "IO error: While fsync: a directory: Invalid argument" }', /project/cubestore/src/metastore/mod.rs:1542:40
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Cube Store Start Error: undefined
I guess it's a corrupted metastore, because it was shut down incorrectly on your machine. Could you please try dropping the .cubestore directory?
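For example, assuming the path from the log above (/cube/conf/.cubestore inside the container; the container name is a placeholder):
docker exec <cube-container> rm -rf /cube/conf/.cubestore
Cube Store should then create a fresh metastore on the next start.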
While trying to build an AWX image for ppc64le (Ansible itself works), the following comes up:
TASK [image_build : Build AWX distribution using container] ***************************************************************************************************************************************************
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "msg": "Error creating container: 400 Client Error: Bad Request (\"invalid reference format\")"}
to retry, use: --limit @/root/awx/installer/install.retry
PLAY RECAP ****************************************************************************************************************************************************************************************************
localhost : ok=10 changed=3 unreachable=0 failed=1
How can I see what really happens in the background? Are there any verbose Docker logs I can look at? The message itself is somewhat useless to me. I already set Ansible to verbose, but that was of no help either.
Docker image names must be lowercase; an uppercase character makes the reference invalid ("invalid reference format").
Either you are giving an unsupported image name, or a variable (or path) passed to the build (or the container) cannot be resolved.
To enable debug logs, add "--debug" to the docker daemon invocation (/etc/systemd/system/multi-user.target.wants/docker.service on a systemd-based Linux environment).
For reference: https://docs.docker.com/config/daemon/#configure-the-docker-daemon
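The linked docs also describe enabling debug via the daemon configuration file instead of the unit file; a minimal sketch of /etc/docker/daemon.json:
{
  "debug": true
}
followed by restarting the daemon, e.g. sudo systemctl restart docker.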
UNREACHABLE! => {"changed": false, "msg": "SSH error: data could not be sent to the remote host. Make sure this host can be reached over ssh", "unreachable": true}
Hosts:
[test]
xxxxxx.local ansible_ssh_user=myname
ansible.cfg
[defaults]
host_key_checking = False
[ssh_connection]
pipelining=true
I am running the playbook from Jenkins using an "execute shell" build step.
Your question needs a little more information about your environment and the command used to run the playbook. That said, you can try adding these lines to your ~/.ansible.cfg file, which may solve your issue:
[ssh_connection]
control_path = %(directory)s/%%h-%%r
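To narrow down whether this is an SSH-level problem, a quick verbose connectivity check can help (assuming your inventory file is named hosts, as in the question):
ansible test -i hosts -m ping -vvvv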
I am using Debian and tomcat6.
I export the CATALINA_OPTS="-Xms1024m -Xmx2048m" environment variable and create a puppet service:
class tomcat6::service {
  service { 'tomcat6':
    ensure     => running,
    hasstatus  => true,
    hasrestart => true,
    enable     => true,
  }
}
Since /usr/share/tomcat6/bin/catalina.sh reads the CATALINA_OPTS variable when starting the tomcat6 service, the process should receive CATALINA_OPTS, but it does not show up in the process command line. I ran ps aux | grep catalina to show the command detail:
tomcat6 10658 1.0 2.0 2050044 189572 ? Sl 18:04 0:16 /usr/lib/jvm/default-java/bin/java -Djava.util.logging.config.file=/var/lib/tomcat6/conf/logging.properties -Djava.awt.headless=true -Xmx128m -XX:+UseConcMarkSweepGC -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djava.endorsed.dirs=/usr/share/tomcat6/endorsed -classpath /usr/share/tomcat6/bin/bootstrap.jar -Dcatalina.base=/var/lib/tomcat6 -Dcatalina.home=/usr/share/tomcat6 -Djava.io.tmpdir=/tmp/tomcat6-tomcat6-tmp org.apache.catalina.startup.Bootstrap start
The service started by puppet does not receive CATALINA_OPTS properly.
My question is: how can I make the tomcat6 service see CATALINA_OPTS when puppet starts it?
Thank you.
instead of
hasstatus => true,
put
hasstatus => false,
By doing this, you force puppet to look at the process table to find the daemon; in other words, puppet will run ps auxw | grep tomcat6 before doing anything else.
hasstatus => true means that if puppet receives a status != running it will act as directed, but some daemons don't report their status correctly (probably due to multiple threads being involved).
I fixed the issue by setting CATALINA_OPTS in setenv.sh for tomcat6. It works properly.
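For reference, a minimal setenv.sh sketch (catalina.sh sources a setenv.sh placed next to it, e.g. /usr/share/tomcat6/bin/setenv.sh; the memory values mirror the question):
CATALINA_OPTS="-Xms1024m -Xmx2048m"
export CATALINA_OPTS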
I have written my own Hadoop program, and I can run it in pseudo-distributed mode on my laptop. However, when I put the program on the cluster (which can run Hadoop's example jars), it launches a local job by default even though I pass HDFS file paths. Below is the output; any suggestions?
./hadoop -jar MyRandomForest_oob_distance.jar hdfs://montana-01:8020/user/randomforest/input/genotype1.txt hdfs://montana-01:8020/user/randomforest/input/phenotype1.txt hdfs://montana-01:8020/user/randomforest/output1_distance/ hdfs://montana-01:8020/user/randomforest/input/genotype101.txt hdfs://montana-01:8020/user/randomforest/input/phenotype101.txt 33 500 1
12/03/16 16:21:25 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
12/03/16 16:21:25 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/03/16 16:21:25 INFO mapred.JobClient: Running job: job_local_0001
12/03/16 16:21:25 INFO mapred.MapTask: io.sort.mb = 100
12/03/16 16:21:25 INFO mapred.MapTask: data buffer = 79691776/99614720
12/03/16 16:21:25 INFO mapred.MapTask: record buffer = 262144/327680
12/03/16 16:21:25 WARN mapred.LocalJobRunner: job_local_0001
java.io.FileNotFoundException: File /user/randomforest/input/genotype1.txt does not exist.
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:361)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:125)
at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:283)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:356)
at Data.Data.loadData(Data.java:103)
at MapReduce.DearMapper.loadData(DearMapper.java:261)
at MapReduce.DearMapper.setup(DearMapper.java:332)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:142)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:621)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
12/03/16 16:21:26 INFO mapred.JobClient: map 0% reduce 0%
12/03/16 16:21:26 INFO mapred.JobClient: Job complete: job_local_0001
12/03/16 16:21:26 INFO mapred.JobClient: Counters: 0
Total Running time is: 1 secs
LocalJobRunner has been chosen because your configuration most probably has the mapred.job.tracker property set to local, or not set at all (in which case the default is local). To check, go to <wherever you extracted/installed hadoop>/etc/hadoop/ and see if the file mapred-site.xml exists (for me it did not; there was a file called mapred-site.xml.template). In that file (create it if it doesn't exist) make sure it has the following property:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
See the source for org.apache.hadoop.mapred.JobClient.init(JobConf)
What is the value of this configuration property in the hadoop configuration on the machine you are submitting this from? Also confirm that the hadoop executable you are running references this configuration (and that you don't have 2+ installations configured differently) - type which hadoop and trace any symlinks you come across.
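For example (readlink is one common way to trace the links; availability assumed):
which hadoop
readlink -f "$(which hadoop)"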
Alternatively, you can override this when you submit your job, if you know the JobTracker host and port number, using the -jt option:
hadoop jar MyRandomForest_oob_distance.jar -jt hostname:port hdfs://montana-01:8020/user/randomforest/input/genotype1.txt hdfs://montana-01:8020/user/randomforest/input/phenotype1.txt hdfs://montana-01:8020/user/randomforest/output1_distance/ hdfs://montana-01:8020/user/randomforest/input/genotype101.txt hdfs://montana-01:8020/user/randomforest/input/phenotype101.txt 33 500 1
If you're using Hadoop 2 and your job is running locally instead of on the cluster, ensure that you have set up mapred-site.xml to contain the mapreduce.framework.name property with a value of yarn. You also need to set up an aux-service in yarn-site.xml; see the sketch below.
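A typical yarn-site.xml entry for that aux-service looks like this (standard Hadoop 2 property names; the exact value may differ between distributions and versions):
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>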
Check out the Cloudera Hadoop 2 operator migration blog for more information.
I had the same problem: every mapreduce v2 (mrv2) / yarn task ran only with mapred.LocalJobRunner:
INFO mapred.LocalJobRunner: Starting task: attempt_local284299729_0001_m_000000_0
The ResourceManager and NodeManagers were accessible, and mapreduce.framework.name was set to yarn.
Setting the HADOOP_MAPRED_HOME environment variable before executing the job fixed the problem for me.
export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
cheers
dan