[ERROR SystemVerification]: could not unmarshal the JSON output of 'docker info'

I am trying to initialize a Kubernetes cluster with a YAML config file using Terraform; the initialization commands are in the instance user data. When I look in cloud-init-output.log I see this error, which I couldn't resolve.
Here is my config YAML file:
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.20.2
networking:
  serviceSubnet: "10.100.0.0/16"
  podSubnet: "10.244.0.0/16"
apiServer:
  extraArgs:
    cloud-provider: "aws"
controllerManager:
  extraArgs:
    cloud-provider: "aws"
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
bootstrapTokens:
- token: "urepc5.tzoz0wa8skdkiesf"
  description: "default kubeadm bootstrap token"
  ttl: "15m"
localAPIEndpoint:
  advertiseAddress: "10.0.0.226"
  bindPort: 6443
And here is the output of cloud-init:
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR SystemVerification]: could not unmarshal the JSON output of 'docker info':
WARNING: Error loading config file: .dockercfg: $HOME is not defined
{"ID":"NGGY:LW7B:UDDM:OZNZ:DJ7S:BVEL:AUWJ:RUQQ:4D73:ZYK2:I75V:6ZQL","Containers":0,"ContainersRunning":0,"ContainersPaused":0,"ContainersStopped":0,"Images":0,"Driver":"overlay2","DriverStatus":[["Backing Filesystem","extfs"],["Supports d_type","true"],["Native Overlay Diff","true"]],"Plugins":{"Volume":["local"],"Network":["bridge","host","ipvlan","macvlan","null","overlay"],"Authorization":null,"Log":["awslogs","fluentd","gcplogs","gelf","journald","json-file","local","logentries","splunk","syslog"]},"MemoryLimit":true,"SwapLimit":false,"KernelMemory":true,"KernelMemoryTCP":true,"CpuCfsPeriod":true,"CpuCfsQuota":true,"CPUShares":true,"CPUSet":true,"PidsLimit":true,"IPv4Forwarding":true,"BridgeNfIptables":true,"BridgeNfIp6tables":true,"Debug":false,"NFd":22,"OomKillDisable":true,"NGoroutines":34,"SystemTime":"2021-01-17T16:57:14.692026867Z","LoggingDriver":"json-file","CgroupDriver":"cgroupfs","CgroupVersion":"1","NEventsListener":0,"KernelVersion":"5.4.0-1029-aws","OperatingSystem":"Ubuntu 20.04.1 
LTS","OSVersion":"20.04","OSType":"linux","Architecture":"x86_64","IndexServerAddress":"https://index.docker.io/v1/","RegistryConfig":{"AllowNondistributableArtifactsCIDRs":[],"AllowNondistributableArtifactsHostnames":[],"InsecureRegistryCIDRs":["127.0.0.0/8"],"IndexConfigs":{"docker.io":{"Name":"docker.io","Mirrors":[],"Secure":true,"Official":true}},"Mirrors":[]},"NCPU":2,"MemTotal":4124860416,"GenericResources":null,"DockerRootDir":"/var/lib/docker","HttpProxy":"","HttpsProxy":"","NoProxy":"","Name":"ip-10-0-0-226.ec2.internal","Labels":[],"ExperimentalBuild":false,"ServerVersion":"20.10.2","Runtimes":{"io.containerd.runc.v2":{"path":"runc"},"io.containerd.runtime.v1.linux":{"path":"runc"},"runc":{"path":"runc"}},"DefaultRuntime":"runc","Swarm":{"NodeID":"","NodeAddr":"","LocalNodeState":"inactive","ControlAvailable":false,"Error":"","RemoteManagers":null},"LiveRestoreEnabled":false,"Isolation":"","InitBinary":"docker-init","ContainerdCommit":{"ID":"269548fa27e0089a8b8278fc4fc781d7f65a939b","Expected":"269548fa27e0089a8b8278fc4fc781d7f65a939b"},"RuncCommit":{"ID":"ff819c7e9184c13b7c2607fe6c30ae19403a7aff","Expected":"ff819c7e9184c13b7c2607fe6c30ae19403a7aff"},"InitCommit":{"ID":"de40ad0","Expected":"de40ad0"},"SecurityOptions":["name=apparmor","name=seccomp,profile=default"],"Warnings":["WARNING: No swap limit support","WARNING: No blkio weight support","WARNING: No blkio weight_device support"],"ClientInfo":{"Debug":false,"Context":"default","Plugins":[{"SchemaVersion":"0.1.0","Vendor":"Docker Inc.","Version":"v0.9.1-beta3","ShortDescription":"Docker App","Experimental":true,"Name":"app","Path":"/usr/libexec/docker/cli-plugins/docker-app"},{"SchemaVersion":"0.1.0","Vendor":"Docker Inc.","Version":"v0.5.1-docker","ShortDescription":"Build with BuildKit","Name":"buildx","Path":"/usr/libexec/docker/cli-plugins/docker-buildx"}],"Warnings":null}}
: invalid character 'W' looking for beginning of value
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
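The "invalid character 'W'" at the end of the message is the key: Docker prepends the "$HOME is not defined" warning to the output that kubeadm tries to parse as JSON, so the parser never reaches the '{'. Defining HOME (e.g. export HOME=/root) in the user-data script before calling kubeadm is the usual fix. A minimal Python sketch of the same parse failure (the warning text is copied from the log; the JSON is trimmed):

```python
import json

# Simulate what kubeadm sees: Docker's warning line is printed on the
# same stream as the JSON output of `docker info`.
output = (
    "WARNING: Error loading config file: .dockercfg: $HOME is not defined\n"
    '{"ID": "NGGY"}'
)

try:
    json.loads(output)
except json.JSONDecodeError as e:
    # Parsing fails on the very first character, matching kubeadm's
    # "invalid character 'W' looking for beginning of value".
    print(output[e.pos])  # W

# Dropping everything before the first '{' makes the payload parse again.
clean = output[output.index("{"):]
print(json.loads(clean)["ID"])  # NGGY
```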

Related

logstash deployment into open shift

I have deployed Logstash into OpenShift using Jenkins. I built an OpenShift Template containing a Deployment, a Secret, and a ConfigMap, but the following error is generated:
hudson.AbortException: new-project returned an error;
{
err=Error from server (AlreadyExists):
project.project.****.io "data-collector" already exists
verb=new-project, cmd=oc --server=https://api.devsaibocp.saibnet2.saib.com:6443
--insecure-skip-tls-verify
--token=XXXXX new-project data-collector
--skip-config-write, out=, status=1 }
verb=, cmd=oc --server=https://api.devsaibocp.saibnet2.saib.com:6443
--insecure-skip-tls-verify
--namespace=data-collector
--token=XXXXX
apply -f https://****/bfm/account/account-data-pipline/-/blob/main/openshift.yml,
out=, status=1 }

AKS: mount existing azure file share without manually providing storage key

I'm able to mount an existing Azure File Share in a pod by manually providing the storage key:
apiVersion: v1
kind: Secret
metadata:
  name: storage-secret
  namespace: azure
type: Opaque
data:
  azurestorageaccountname: Base64_Encoded_Value_Here
  azurestorageaccountkey: Base64_Encoded_Value_Here
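The values under data must hold base64-encoded bytes. A quick sketch of the encoding (the account name and key below are made-up placeholders):

```python
import base64

# Hypothetical plain-text values for the secret.
account_name = "mystorageaccount"

# Kubernetes Secret `data` fields are base64-encoded.
print(base64.b64encode(account_name.encode()).decode())  # bXlzdG9yYWdlYWNjb3VudA==

# kubectl can create the secret and do the encoding in one step:
#   kubectl create secret generic storage-secret -n azure \
#     --from-literal=azurestorageaccountname=mystorageaccount \
#     --from-literal=azurestorageaccountkey=<the-storage-account-key>
```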
It should also be possible for the storage key to be created automatically as a secret in AKS, if AKS has the right permissions.
-> I gave AKS (the kubelet identity) the "Storage Account Key Operator Service Role" and "Reader" roles.
The result is this error message:
Warning FailedMount 2m46s (x5 over 4m54s) kubelet MountVolume.SetUp failed for volume "myfileshare" : rpc error: code = Internal desc = accountName() or accountKey is empty
Warning FailedMount 44s (x5 over 4m53s) kubelet MountVolume.SetUp failed for volume "myfileshare" : rpc error: code = InvalidArgument desc = GetAccountInfo(csi-44a54edbcf...................) failed with error: could not get secret(azure-storage-account-mystorage-secret): secrets "azure-storage-account-mystorage-secret" not found
I also tried to create a custom StorageClass and a PersistentVolume (not a claim),
but that changed nothing. Maybe I am on the wrong track.
Can somebody help?
Additional information:
My AKS is version 1.22.6 and I use a managed identity.

Getting an error while running jenkins job to deploy java application to tomcat server using Ansible template

I am getting the following error when running an Ansible playbook from Jenkins.
Ansible Playbook:
- hosts: all_hosts
  become: true
  tasks:
    - name: copy jar/war onto tomcat servers
      copy:
        src: /opt/playbooks/wabapp/target/webapp.war
        dest: /opt/apache-tomcat-8.5.54/webapps
Error in Jenkins:
SSH: Connecting with configuration [Ansible] ...
SSH: EXEC: STDOUT/STDERR from command [ansible-playbook /opt/playbooks/file.yml] ...
ERROR! Syntax Error while loading YAML.
mapping values are not allowed in this context
The error appears to be in '/opt/playbooks/file.yml': line 6, column 13, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: copy jar/war onto tomcat servers
copy:
^ here
SSH: EXEC: completed after 801 ms
SSH: Disconnecting configuration [Ansible] ...
ERROR: Exception when publishing, exception message [Exec exit status not zero. Status [4]]
Build step 'Send files or execute commands over SSH' changed build result to UNSTABLE
Finished: UNSTABLE

Docker registry: unable to push image since dns_unresolved_hostname

I'm not able to push an image to my local registry:
$ docker image push registry.local:5000/covid-backend:60988b0-dirty
The push refers to repository [registry.local:5000/covid-backend]
eff147c1024b: Preparing
790a9d8e41bb: Preparing
20dd87a4c2ab: Preparing
78075328e0da: Preparing
9f8566ee5135: Preparing
error parsing HTTP 404 response body: invalid character '<' looking for beginning of value: "<HTML><HEAD>\r\n<TITLE>Network Error</TITLE>\r\n</HEAD>\r\n<BODY>\r\n<FONT face=\"Helvetica\">\r\n<big><strong></strong></big><BR>\r\n</FONT>\r\n<blockquote>\r\n<TABLE border=0 cellPadding=1 width=\"80% \">\r\n<TR><TD>\r\n<FONT face=\"Helvetica\">\r\n<big>Network Error (dns_unresolved_hostname)</big>\r\n<BR>\r\n<BR>\r\n</FONT>\r\n</TD></TR>\r\n<TR><TD>\r\n<FONT face=\"Helvetica\">\r\nYour requested host \"registry.local\" could not be resolved by DNS.\r\n</FONT>\r\n</TD></TR>\r\n<TR><TD>\r\n<FONT face=\"Helvetica\">\r\n\r\n</FONT>\r\n</TD></TR>\r\n<TR><TD>\r\n<FONT face=\"Helvetica\" SIZE=2>\r\n<BR>\r\nFor assistance, contact your network support team.\r\n</FONT>\r\n</TD></TR>\r\n</TABLE>\r\n</blockquote>\r\n</FONT>\r\n</BODY></HTML>\r\n"
The HTML response content contains:
Network Error (dns_unresolved_hostname)
Your requested host "registry.local" could not be resolved by DNS.
I've tried to reach it using curl:
$ curl -s registry.local:5000/v2/_catalog
{"repositories":["covid-backend","skaffold-covid-backend"]}
My /etc/hosts:
127.0.0.1 localhost registry.local
I've also tried to add it to my ~/.docker/config.json as an insecure registry:
"insecure-registries" : [
"registry.local:5000"
]
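(Note that insecure-registries is a daemon option, not a client one: it belongs in /etc/docker/daemon.json, followed by a daemon restart; ~/.docker/config.json only configures the CLI client. A sketch of the daemon file:)

```json
{
  "insecure-registries": ["registry.local:5000"]
}
```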
I've also taken a look at the Docker logs:
abr 27 09:30:25 psgd dockerd[15476]: time="2020-04-27T09:30:25.967945384+02:00" level=info msg="Attempting next endpoint for push after error: Get https://registry.local:5000/v2/: Service Unavailable"
abr 27 09:30:29 psgd dockerd[15476]: time="2020-04-27T09:30:29.121878880+02:00" level=error msg="Upload failed: error parsing HTTP 404 response body: invalid character '<' looking for beginning of value: \"<HTML><HEAD>\\r\\n<TITLE>Network Error</TITLE>\\r\\n</HEAD>\\r\\n<BODY>\\r\\n<FONT face=\\\"Helvetica\\\">\\r\\n<big><strong></strong></big><BR>\\r\\n</FONT>\\r\\n<blockquote>\\r\\n<TABLE border=0 cellPadding=1 width=\\\"80% \\\">\\r\\n<TR><TD>\\r\\n<FONT face=\\\"Helvetica\\\">\\r\\n<big>Network Error (dns_unresolved_hostname)</big>\\r\\n<BR>\\r\\n<BR>\\r\\n</FONT>\\r\\n</TD></TR>\\r\\n<TR><TD>\\r\\n<FONT face=\\\"Helvetica\\\">\\r\\nYour requested host \\\"registry.local\\\" could not be resolved by DNS.\\r\\n</FONT>\\r\\n</TD></TR>\\r\\n<TR><TD>\\r\\n<FONT face=\\\"Helvetica\\\">\\r\\n\\r\\n</FONT>\\r\\n</TD></TR>\\r\\n<TR><TD>\\r\\n<FONT face=\\\"Helvetica\\\" SIZE=2>\\r\\n<BR>\\r\\nFor assistance, contact your network support team.\\r\\n</FONT>\\r\\n</TD></TR>\\r\\n</TABLE>\\r\\n</blockquote>\\r\\n</FONT>\\r\\n</BODY></HTML>\\r\\n\""
abr 27 09:30:29 psgd dockerd[15476]: time="2020-04-27T09:30:29.122824956+02:00" level=info msg="Attempting next endpoint for push after error: error parsing HTTP 404 response body: invalid character '<' looking for beginning of value: \"<HTML><HEAD>\\r\\n<TITLE>Network Error</TITLE>\\r\\n</HEAD>\\r\\n<BODY>\\r\\n<FONT face=\\\"Helvetica\\\">\\r\\n<big><strong></strong></big><BR>\\r\\n</FONT>\\r\\n<blockquote>\\r\\n<TABLE border=0 cellPadding=1 width=\\\"80% \\\">\\r\\n<TR><TD>\\r\\n<FONT face=\\\"Helvetica\\\">\\r\\n<big>Network Error (dns_unresolved_hostname)</big>\\r\\n<BR>\\r\\n<BR>\\r\\n</FONT>\\r\\n</TD></TR>\\r\\n<TR><TD>\\r\\n<FONT face=\\\"Helvetica\\\">\\r\\nYour requested host \\\"registry.local\\\" could not be resolved by DNS.\\r\\n</FONT>\\r\\n</TD></TR>\\r\\n<TR><TD>\\r\\n<FONT face=\\\"Helvetica\\\">\\r\\n\\r\\n</FONT>\\r\\n</TD></TR>\\r\\n<TR><TD>\\r\\n<FONT face=\\\"Helvetica\\\" SIZE=2>\\r\\n<BR>\\r\\nFor assistance, contact your network support team.\\r\\n</FONT>\\r\\n</TD></TR>\\r\\n</TABLE>\\r\\n</blockquote>\\r\\n</FONT>\\r\\n</BODY></HTML>\\r\\n\""
My NO_PROXY environment variable content:
$ echo $NO_PROXY
localhost,127.0.0.1/8,::1,192.168.99.0/8,registry.local
The problem was that I needed to configure Docker correctly behind a proxy.
I thought the proxy-related environment variables (HTTP_PROXY, HTTPS_PROXY and NO_PROXY) would be enough.
You can find how to configure Docker behind a proxy here.
I've just added NO_PROXY to a file under /etc/systemd/system/docker.service.d/, like:
[Service]
Environment="HTTP_PROXY=http://<proxy-ip>:<proxy-port>/" "NO_PROXY=localhost, 127.0.0.1/8, ::1, 192.168.99.0/8, registry.local"
Solved.
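For reference, a complete drop-in might look like this (the file name http-proxy.conf is conventional, not mandated). After editing, run systemctl daemon-reload and systemctl restart docker: the daemon reads its proxy settings from its own unit environment, not from your login shell, which is why a NO_PROXY exported in the shell did not help.

```ini
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://<proxy-ip>:<proxy-port>/"
Environment="HTTPS_PROXY=http://<proxy-ip>:<proxy-port>/"
Environment="NO_PROXY=localhost,127.0.0.1/8,::1,192.168.99.0/8,registry.local"
```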

unable to start logstash

This is my first time running Logstash in a container. I'm running Logstash in the same container as Elasticsearch and Kibana, on Ubuntu.
I run my conf file using:
/usr/share/logstash/bin/logstash -f conf.d/logstash.conf
Here is my logstash.conf:
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => {
      "message" => "%{TIMESTAMP_ISO8601:logtimestamp}\s%{DATA:S_IP}\s%{WORD:s_method}\s%{DATA:cs_uri_stem}\s%{DATA:cs_uri_query}\s%{DATA:s_port}\s%{GREEDYDATA:log_message}"
    }
  }
  date {
    match => ["logtimestamp", "yyyy-MM-dd HH:mm:ss"]
    target => "#timestamp"
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => "elastic#localhost:9200"
    index => "log_iis"
    user => "*****"
    password => "*****"
  }
}
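As a rough sanity check outside Logstash, the grok capture can be approximated with a plain regex (a sketch only: grok's TIMESTAMP_ISO8601/DATA/WORD patterns are simplified here, and the IIS-style sample line is invented):

```python
import re

# Approximate regex equivalents of the grok patterns used above.
pattern = re.compile(
    r"(?P<logtimestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s"
    r"(?P<S_IP>\S+)\s"
    r"(?P<s_method>\w+)\s"
    r"(?P<cs_uri_stem>\S+)\s"
    r"(?P<cs_uri_query>\S+)\s"
    r"(?P<s_port>\S+)\s"
    r"(?P<log_message>.*)"
)

# Invented sample line in IIS log style.
line = "2020-02-10 03:30:00 10.0.0.5 GET /index.html - 80 200 OK"
m = pattern.match(line)
print(m.group("logtimestamp"))  # 2020-02-10 03:30:00
print(m.group("s_method"))      # GET
print(m.group("log_message"))   # 200 OK
```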
and it returns this error:
Java HotSpot(TM) 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.2.8.0.jar) to field java.io.FileDescriptor.fd
WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Thread.exclusive is deprecated, use Thread::Mutex
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2020-02-10 03:32:59.625 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2020-02-10 03:32:59.632 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"7.5.2"}
[INFO ] 2020-02-10 03:33:00.995 [Converge PipelineAction::Create<main>] Reflections - Reflections took 24 ms to scan 1 urls, producing 20 keys and 40 values
[ERROR] 2020-02-10 03:33:01.375 [Converge PipelineAction::Create<main>] agent - Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"Java::JavaLang::IllegalStateException", :message=>"Unable to configure plugins: Illegal character in scheme name at index 7: elastic#localhost:9200", :backtrace=>["org.logstash.config.ir.CompiledPipeline.<init>(CompiledPipeline.java:119)", "org.logstash.execution.JavaBasePipelineExt.initialize(JavaBasePipelineExt.java:60)", "org.logstash.execution.JavaBasePipelineExt$INVOKER$i$1$0$initialize.call(JavaBasePipelineExt$INVOKER$i$1$0$initialize.gen)", "org.jruby.internal.runtime.methods.JavaMethod$JavaMethodN.call(JavaMethod.java:837)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuper(IRRuntimeHelpers.java:1156)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuperSplatArgs(IRRuntimeHelpers.java:1143)", "org.jruby.ir.targets.InstanceSuperInvokeSite.invoke(InstanceSuperInvokeSite.java:39)", "usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$method$initialize$0(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:27)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:91)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:90)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:332)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:86)", "org.jruby.RubyClass.newInstance(RubyClass.java:915)", "org.jruby.RubyClass$INVOKER$i$newInstance.call(RubyClass$INVOKER$i$newInstance.gen)", "org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:183)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline_action.create.RUBY$method$execute$0(/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:36)", 
"usr.share.logstash.logstash_minus_core.lib.logstash.pipeline_action.create.RUBY$method$execute$0$__VARARGS__(/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:91)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:90)", "org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:183)", "usr.share.logstash.logstash_minus_core.lib.logstash.agent.RUBY$block$converge_state$2(/usr/share/logstash/logstash-core/lib/logstash/agent.rb:326)", "org.jruby.runtime.CompiledIRBlockBody.callDirect(CompiledIRBlockBody.java:136)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:77)", "org.jruby.runtime.Block.call(Block.java:129)", "org.jruby.RubyProc.call(RubyProc.java:295)", "org.jruby.RubyProc.call(RubyProc.java:274)", "org.jruby.RubyProc.call(RubyProc.java:270)", "org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:105)", "java.base/java.lang.Thread.run(Thread.java:830)"]}
warning: thread "Converge PipelineAction::Create<main>" terminated with exception (report_on_exception is true):
LogStash::Error: Don't know how to handle `Java::JavaLang::IllegalStateException` for `PipelineAction::Create<main>`
create at org/logstash/execution/ConvergeResultExt.java:109
add at org/logstash/execution/ConvergeResultExt.java:37
converge_state at /usr/share/logstash/logstash-core/lib/logstash/agent.rb:339
[ERROR] 2020-02-10 03:33:01.379 [Agent thread] agent - An exception happened when converging configuration {:exception=>LogStash::Error, :message=>"Don't know how to handle `Java::JavaLang::IllegalStateException` for `PipelineAction::Create<main>`", :backtrace=>["org/logstash/execution/ConvergeResultExt.java:109:in `create'", "org/logstash/execution/ConvergeResultExt.java:37:in `add'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:339:in `block in converge_state'"]}
[FATAL] 2020-02-10 03:33:01.403 [LogStash::Runner] runner - An unexpected error occurred! {:error=>#<LogStash::Error: Don't know how to handle `Java::JavaLang::IllegalStateException` for `PipelineAction::Create<main>`>, :backtrace=>["org/logstash/execution/ConvergeResultExt.java:109:in `create'", "org/logstash/execution/ConvergeResultExt.java:37:in `add'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:339:in `block in converge_state'"]}
[ERROR] 2020-02-10 03:33:01.432 [LogStash::Runner] Logstash - java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit)
Any response or explanation will be appreciated so much. Thank you.
The error log says that you get
Illegal character in scheme name at index 7: elastic#localhost:9200, which is the value of the hosts option.
I guess the problem is the hash (#). Is that needed? Anyway, if you check the documentation of the Elasticsearch output plugin [1], it says:
Any special characters present in the URLs here MUST be URL escaped! This means # should be put in as %23 for instance.
[1] https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-hosts
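The escaping the documentation asks for can be checked with Python's urllib (and note that if the # was meant to attach a user name, the separate user and password options already cover authentication):

```python
from urllib.parse import quote

# Percent-encode the '#' as the plugin documentation requires,
# while keeping ':' and '/' literal.
print(quote("elastic#localhost:9200", safe=":/"))  # elastic%23localhost:9200
```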