kube-prometheus-stack - Grafana pod stuck at "CrashLoopBackOff"

I installed kube-prometheus-stack (39.8.0) and everything went well. Now there is a requirement that the Grafana pod use a persistent volume backed by the oci-fss storage class. Below is my values.yaml file:
grafana:
  initChownData:
    enabled: false
  persistence:
    enabled: true
    type: pvc
    storageClassName: oci-fss
    accessModes:
      - ReadWriteMany
    size: 50Gi
    finalizers:
      - kubernetes.io/pvc-protection
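For reference, a values file like this is typically applied with a Helm command along these lines (the release name, namespace, and repository alias here are assumptions inferred from the pod name shown below):
# hypothetical install command; adjust release name/namespace to your setup
helm upgrade --install prometheus prometheus-community/kube-prometheus-stack \
  --version 39.8.0 --namespace prometheus -f values.yaml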
Grafana pod status is:
pod/prometheus-grafana-54cdc8774f-blfqx 2/3 CrashLoopBackOff
Grafana pod logs:
kubectl logs -f pod/prometheus-grafana-54cdc8774f-blfqx -n prometheus
Defaulted container "grafana-sc-dashboard" out of: grafana-sc-dashboard, grafana-sc-datasources, grafana
{"time": "2022-11-11T08:07:40.625012+00:00", "level": "INFO", "msg": "Starting collector"}
{"time": "2022-11-11T08:07:40.625190+00:00", "level": "WARNING", "msg": "No folder annotation was provided, defaulting to k8s-sidecar-target-directory"}
{"time": "2022-11-11T08:07:40.625329+00:00", "level": "INFO", "msg": "Loading incluster config ..."}
{"time": "2022-11-11T08:07:40.626083+00:00", "level": "INFO", "msg": "Config for cluster api at 'https://10.96.0.1:443' loaded..."}
{"time": "2022-11-11T08:07:40.626199+00:00", "level": "INFO", "msg": "Unique filenames will not be enforced."}
{"time": "2022-11-11T08:07:40.626283+00:00", "level": "INFO", "msg": "5xx response content will not be enabled."}

It was a filesystem storage issue: the Grafana container was not able to write its configuration to the storage. We can mark this as resolved.
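For anyone hitting the same thing: a common workaround (an assumption on my part, not something confirmed above) is to let the Grafana pod run with a filesystem group that can write to the NFS-backed volume, for example via the chart's securityContext values:
grafana:
  securityContext:
    runAsUser: 472   # default grafana UID in the official image
    runAsGroup: 472
    fsGroup: 472     # hypothetical GID; match whatever your oci-fss export allows
Alternatively, re-enabling initChownData lets the chart chown the mount on startup, provided the export permits it.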

Related

Running a local instance of AWS SNS gives me a "connection is already closed" error

I am trying to run a local instance of AWS SNS for some offline testing.
I am using the Docker image https://hub.docker.com/r/s12v/sns/ for this. I am on macOS, and when I start the container no logs appear; any request to port 9911 results in "connection is already closed".
I also tried the following docker-compose.yml (initially to play with SQS):
version: '3.2'
services:
  sns:
    image: s12v/sns
    container_name: sns
    platform: linux/x86_64
    ports:
      - "9911:9911"
    volumes:
      - ./config/db.json:/etc/sns/db.json
My db.json is as below:
{
  "version": 1,
  "timestamp": 1465414804110,
  "subscriptions": [
    {
      "arn": "6df4ed2b-a650-4f7c-910a-1a89c7cae5a6",
      "topicArn": "arn:aws:sns:us-east-1:1465414804035:test1",
      "endpoint": "file://tmp?fileName=sns.log",
      "owner": "",
      "protocol": "file"
    }
  ],
  "topics": [
    {
      "arn": "arn:aws:sns:us-east-1:1465414804035:test1",
      "name": "test1"
    }
  ]
}
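For context, a minimal way to exercise the endpoint (assuming the AWS CLI is installed; the region value is a dummy for this emulator) is:
aws sns list-topics --endpoint-url http://localhost:9911 --region us-east-1
This is the kind of request that comes back with "connection is already closed".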
How can I resolve this issue?

Docker exiting the node app due to inactivity despite --restart=always

I have a Docker container running a Node app on AWS, started with the following command:
sudo docker run --restart always --network='host' image-name
Dockerfile:
FROM node:14.19.0-alpine3.14
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . .
RUN npm install
EXPOSE 5000
CMD [ "node","app.js"]
I have set the restart policy to always, but when the container exits due to the Node error “The client disconnected due to inactivity”, it does not restart and the web app stops loading. I have to restart the container manually.
Docker Inspect Status After Exiting:
"State": {
"Status": "exited",
"Running": false,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 0,
"ExitCode": 1,
"Error": "",
"StartedAt": "2022-04-03T12:02:21.533507861Z",
"FinishedAt": "2022-04-03T20:06:36.98130893Z"
},
"HostConfig": {
"Binds": null,
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "host",
"PortBindings": {},
"RestartPolicy": {
"Name": "always",
"MaximumRetryCount": 0
},
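One thing worth checking (a suggestion, not part of the original post): the inspect output above already shows "Name": "always", but these commands can help rule out drift between runs, and the policy can be changed on a running container without recreating it:
docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' <container-name>   # show the effective restart policy
docker update --restart=always <container-name>                           # change it in place if needed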

Can't connect to Elasticsearch in OpenCTI stack docker install

I'm trying to make a Docker installation of OpenCTI work.
I've followed the instructions here https://github.com/OpenCTI-Platform/docker and I'm able to successfully create the Docker stack with Docker Desktop on Windows.
Here is my docker-compose.yml file:
version: '3'
services:
  redis:
    image: redis:6.2.6
    restart: always
    volumes:
      - redisdata:/data
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.1
    volumes:
      - esdata:/usr/share/elasticsearch/data
    environment:
      - discovery.type=single-node
      - xpack.ml.enabled=false
    restart: always
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    ports:
      - "9200:9200"
      - "9300:9300"
  minio:
    image: minio/minio:RELEASE.2021-10-13T00-23-17Z
    volumes:
      - s3data:/data
    ports:
      - "9000:9000"
    environment:
      MINIO_ROOT_USER: ${MINIO_ROOT_USER}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
    command: server /data
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
    restart: always
  rabbitmq:
    image: rabbitmq:3.9-management
    environment:
      - RABBITMQ_DEFAULT_USER=${RABBITMQ_DEFAULT_USER}
      - RABBITMQ_DEFAULT_PASS=${RABBITMQ_DEFAULT_PASS}
    volumes:
      - amqpdata:/var/lib/rabbitmq
    restart: always
  opencti:
    image: opencti/platform:5.0.3
    environment:
      - NODE_OPTIONS=--max-old-space-size=8096
      - APP__PORT=8080
      - APP__ADMIN__EMAIL=${OPENCTI_ADMIN_EMAIL}
      - APP__ADMIN__PASSWORD=${OPENCTI_ADMIN_PASSWORD}
      - APP__ADMIN__TOKEN=${OPENCTI_ADMIN_TOKEN}
      - APP__APP_LOGS__LOGS_LEVEL=error
      - REDIS__HOSTNAME=redis
      - REDIS__PORT=6379
      - ELASTICSEARCH__URL=http://elasticsearch:9200
      - MINIO__ENDPOINT=minio
      - MINIO__PORT=9000
      - MINIO__USE_SSL=false
      - MINIO__ACCESS_KEY=${MINIO_ROOT_USER}
      - MINIO__SECRET_KEY=${MINIO_ROOT_PASSWORD}
      - RABBITMQ__HOSTNAME=rabbitmq
      - RABBITMQ__PORT=5672
      - RABBITMQ__PORT_MANAGEMENT=15672
      - RABBITMQ__MANAGEMENT_SSL=false
      - RABBITMQ__USERNAME=${RABBITMQ_DEFAULT_USER}
      - RABBITMQ__PASSWORD=${RABBITMQ_DEFAULT_PASS}
      - SMTP__HOSTNAME=${SMTP_HOSTNAME}
      - SMTP__PORT=25
      - PROVIDERS__LOCAL__STRATEGY=LocalStrategy
    ports:
      - "8080:8080"
    depends_on:
      - redis
      - elasticsearch
      - minio
      - rabbitmq
    restart: always
    deploy:
      placement:
        constraints:
          - "node.role==manager"
  worker:
    image: opencti/worker:5.0.3
    environment:
      - OPENCTI_URL=http://opencti:8080
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - WORKER_LOG_LEVEL=info
    depends_on:
      - opencti
    deploy:
      mode: replicated
      replicas: 3
    restart: always
  connector-history:
    image: opencti/connector-history:5.0.3
    environment:
      - OPENCTI_URL=http://opencti:8080
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_HISTORY_ID} # Valid UUIDv4
      - CONNECTOR_TYPE=STREAM
      - CONNECTOR_NAME=History
      - CONNECTOR_SCOPE=history
      - CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_LOG_LEVEL=info
    restart: always
    depends_on:
      - opencti
  connector-export-file-stix:
    image: opencti/connector-export-file-stix:5.0.3
    environment:
      - OPENCTI_URL=http://opencti:8080
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_EXPORT_FILE_STIX_ID} # Valid UUIDv4
      - CONNECTOR_TYPE=INTERNAL_EXPORT_FILE
      - CONNECTOR_NAME=ExportFileStix2
      - CONNECTOR_SCOPE=application/json
      - CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_LOG_LEVEL=info
    restart: always
    depends_on:
      - opencti
  connector-export-file-csv:
    image: opencti/connector-export-file-csv:5.0.3
    environment:
      - OPENCTI_URL=http://opencti:8080
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_EXPORT_FILE_CSV_ID} # Valid UUIDv4
      - CONNECTOR_TYPE=INTERNAL_EXPORT_FILE
      - CONNECTOR_NAME=ExportFileCsv
      - CONNECTOR_SCOPE=text/csv
      - CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_LOG_LEVEL=info
    restart: always
    depends_on:
      - opencti
  connector-import-file-stix:
    image: opencti/connector-import-file-stix:5.0.3
    environment:
      - OPENCTI_URL=http://opencti:8080
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_IMPORT_FILE_STIX_ID} # Valid UUIDv4
      - CONNECTOR_TYPE=INTERNAL_IMPORT_FILE
      - CONNECTOR_NAME=ImportFileStix
      - CONNECTOR_SCOPE=application/json,text/xml
      - CONNECTOR_AUTO=false # Enable/disable auto-import of file
      - CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_LOG_LEVEL=info
    restart: always
    depends_on:
      - opencti
  connector-import-report:
    image: opencti/connector-import-report:5.0.3
    environment:
      - OPENCTI_URL=http://opencti:8080
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_IMPORT_REPORT_ID} # Valid UUIDv4
      - CONNECTOR_TYPE=INTERNAL_IMPORT_FILE
      - CONNECTOR_NAME=ImportReport
      - CONNECTOR_SCOPE=application/pdf,text/plain
      - CONNECTOR_AUTO=false # Enable/disable auto-import of file
      - CONNECTOR_ONLY_CONTEXTUAL=true # Only extract data related to an entity (a report, a threat actor, etc.)
      - CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_LOG_LEVEL=info
      - IMPORT_REPORT_CREATE_INDICATOR=false
    restart: always
    depends_on:
      - opencti
  connector-alienvault:
    image: opencti/connector-alienvault:5.0.3
    environment:
      - OPENCTI_URL=http://opencti:8080
      - OPENCTI_TOKEN=2cf0a4fe-4ded-4931-901b-20e599b7f013
      - CONNECTOR_ID=0c02a154-95a5-4624-b1cd-225582c12975
      - CONNECTOR_TYPE=EXTERNAL_IMPORT
      - CONNECTOR_NAME=AlienVault
      - CONNECTOR_SCOPE=alienvault
      - CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_UPDATE_EXISTING_DATA=false
      - CONNECTOR_LOG_LEVEL=info
      - ALIENVAULT_BASE_URL=https://otx.alienvault.com
      - ALIENVAULT_API_KEY=8f261df53d1d06c095edcd6a4b6677a6f46f72cdfcbbc4e2794758b7dca1a51d
      - ALIENVAULT_TLP=White
      - ALIENVAULT_CREATE_OBSERVABLES=true
      - ALIENVAULT_CREATE_INDICATORS=true
      - ALIENVAULT_PULSE_START_TIMESTAMP=2021-05-01T00:00:00 # BEWARE! Could be a lot of pulses!
      - ALIENVAULT_REPORT_TYPE=threat-report
      - ALIENVAULT_REPORT_STATUS=New
      - ALIENVAULT_GUESS_MALWARE=false # Use tags to guess malware.
      - ALIENVAULT_GUESS_CVE=true # Use tags to guess CVE.
      - ALIENVAULT_EXCLUDED_PULSE_INDICATOR_TYPES=FileHash-MD5,FileHash-SHA1 # Excluded Pulse indicator types.
      - ALIENVAULT_ENABLE_RELATIONSHIPS=true # Enable/Disable relationship creation between SDOs.
      - ALIENVAULT_ENABLE_ATTACK_PATTERNS_INDICATES=true # Enable/Disable "indicates" relationships between indicators and attack patterns
      - ALIENVAULT_INTERVAL_SEC=1800
    restart: always
volumes:
  esdata:
  s3data:
  redisdata:
  amqpdata:
and here is my .env file
OPENCTI_ADMIN_EMAIL=admin@opencti.io
OPENCTI_ADMIN_PASSWORD=admin
OPENCTI_ADMIN_TOKEN=2cf0a4fe-4ded-4931-901b-20e599b7f013
MINIO_ROOT_USER=RootUser
MINIO_ROOT_PASSWORD=RootPassword
RABBITMQ_DEFAULT_USER=DefaultUser
RABBITMQ_DEFAULT_PASS=DefaultPass
CONNECTOR_HISTORY_ID=168b9e3f-cbb4-4d06-91d8-073a20ce2453
CONNECTOR_EXPORT_FILE_STIX_ID=a1fed5cb-0b60-4756-95b0-f62fb67204af
CONNECTOR_EXPORT_FILE_CSV_ID=3bbb35ef-2168-4031-832b-451767f6715c
CONNECTOR_IMPORT_FILE_STIX_ID=53bf4c37-f196-401e-af63-e4a0e7194c2b
CONNECTOR_IMPORT_REPORT_ID=11c03aeb-8b8a-4c95-a531-f82b0aebb2ad
SMTP_HOSTNAME=172.17.0.1
After creating my Docker stack, since I'm unable to connect to localhost:8080, I looked at the logs of the opencti/platform container and I get the following error:
{"error":{"name":"ConfigurationError","_error":{},"_showLocations":false,"_showPath":false,"time_thrown":"2021-11-10T16:09:02.791Z","data":{"reason":"ElasticSearch seems down","http_status":500,"category":"technical","error":"connect ECONNREFUSED 172.22.0.3:9200"},"internalData":{}},"category":"APP","version":"5.0.3","level":"error","message":"[OPENCTI] Platform initialization fail","timestamp":"2021-11-10T16:09:02.793Z"}
/opt/opencti/build/src/config/errors.js:8
return new Exception();
^
ConfigurationError: A configuration error has occurred
at error (/opt/opencti/build/src/config/errors.js:8:10)
at ConfigurationError (/opt/opencti/build/src/config/errors.js:53:3)
at /opt/opencti/build/src/database/elasticSearch.js:190:15
at process.async (node:internal/process/task_queues:96:5)
at checkSystemDependencies (/opt/opencti/build/src/initialization.js:113:40)
at initialization (/opt/opencti/build/src/initialization.js:372:3)
at /opt/opencti/build/src/boot.js:7:16
which says that OpenCTI is unable to connect with the elasticsearch container.
Running curl on the host machine returns the following:
C:\Windows\system32>curl -X GET "localhost:9200/_cluster/health?pretty"
curl: (52) Empty reply from server
and running curl inside the Elasticsearch container returns the following:
sh-4.4# curl -X GET "localhost:9200/_cluster/health?pretty"
curl: (7) Failed to connect to localhost port 9200: Connection refused
And finally here is the output of the Elastic container log
c:\Tools\OpenCTI\docker>docker logs docker_elasticsearch_1
WARNING: A terminally deprecated method in java.lang.System has been called
WARNING: System::setSecurityManager has been called by org.elasticsearch.bootstrap.Elasticsearch (file:/usr/share/elasticsearch/lib/elasticsearch-7.15.1.jar)
WARNING: Please consider reporting this to the maintainers of org.elasticsearch.bootstrap.Elasticsearch
WARNING: System::setSecurityManager will be removed in a future release
WARNING: A terminally deprecated method in java.lang.System has been called
WARNING: System::setSecurityManager has been called by org.elasticsearch.bootstrap.Security (file:/usr/share/elasticsearch/lib/elasticsearch-7.15.1.jar)
WARNING: Please consider reporting this to the maintainers of org.elasticsearch.bootstrap.Security
WARNING: System::setSecurityManager will be removed in a future release
{"type": "server", "timestamp": "2021-11-10T16:12:49,261Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "329e6b6ab306", "message": "version[7.15.1], pid[8], build[default/docker/83c34f456ae29d60e94d886e455e6a3409bba9ed/2021-10-07T21:56:19.031608185Z], OS[Linux/5.10.16.3-microsoft-standard-WSL2/amd64], JVM[Eclipse Adoptium/OpenJDK 64-Bit Server VM/17/17+35]" }
{"type": "server", "timestamp": "2021-11-10T16:12:50,963Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "329e6b6ab306", "message": "JVM home [/usr/share/elasticsearch/jdk], using bundled JDK [true]" }
{"type": "server", "timestamp": "2021-11-10T16:12:50,965Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "329e6b6ab306", "message": "JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, --add-opens=java.base/java.io=ALL-UNNAMED, -XX:+UseG1GC, -Djava.io.tmpdir=/tmp/elasticsearch-3486636715162729323, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Des.cgroups.hierarchy.override=/, -Xms3170m, -Xmx3170m, -XX:MaxDirectMemorySize=1661992960, -XX:G1HeapRegionSize=4m, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=15, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=docker, -Des.bundled_jdk=true]" }
{"type": "server", "timestamp": "2021-11-10T16:24:45,864Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "329e6b6ab306", "message": "loaded module [aggs-matrix-stats]" }
{"type": "server", "timestamp": "2021-11-10T16:24:45,937Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "329e6b6ab306", "message": "loaded module [analysis-common]" }
{"type": "server", "timestamp": "2021-11-10T16:24:45,937Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "329e6b6ab306", "message": "loaded module [constant-keyword]" }
{"type": "server", "timestamp": "2021-11-10T16:28:55,893Z", "level": "WARN", "component": "o.e.t.ThreadPool", "cluster.name": "docker-cluster", "node.name": "329e6b6ab306", "message": "timer thread slept for [40.5s/40538938900ns] on relative clock which is above the warn threshold of [5000ms]" }
{"type": "server", "timestamp": "2021-11-10T16:28:56,461Z", "level": "WARN", "component": "o.e.t.ThreadPool", "cluster.name": "docker-cluster", "node.name": "329e6b6ab306", "message": "timer thread slept for [21.1s/21140ms] on absolute clock which is above the warn threshold of [5000ms]" }
{"type": "server", "timestamp": "2021-11-10T16:29:09,291Z", "level": "WARN", "component": "o.e.t.ThreadPool", "cluster.name": "docker-cluster", "node.name": "329e6b6ab306", "message": "timer thread slept for [21.1s/21189926100ns] on relative clock which is above the warn threshold of [5000ms]" }
{"type": "server", "timestamp": "2021-11-10T16:29:09,730Z", "level": "WARN", "component": "o.e.t.ThreadPool", "cluster.name": "docker-cluster", "node.name": "329e6b6ab306", "message": "timer thread slept for [13.2s/13272ms] on absolute clock which is above the warn threshold of [5000ms]" }
{"type": "server", "timestamp": "2021-11-10T16:29:09,808Z", "level": "WARN", "component": "o.e.t.ThreadPool", "cluster.name": "docker-cluster", "node.name": "329e6b6ab306", "message": "timer thread slept for [13.2s/13271423200ns] on relative clock which is above the warn threshold of [5000ms]" }
{"type": "server", "timestamp": "2021-11-10T16:29:40,059Z", "level": "WARN", "component": "o.e.t.ThreadPool", "cluster.name": "docker-cluster", "node.name": "329e6b6ab306", "message": "timer thread slept for [21s/21002ms] on absolute clock which is above the warn threshold of [5000ms]" }
{"type": "server", "timestamp": "2021-11-10T16:29:41,083Z", "level": "WARN", "component": "o.e.t.ThreadPool", "cluster.name": "docker-cluster", "node.name": "329e6b6ab306", "message": "timer thread slept for [21s/21002782100ns] on relative clock which is above the warn threshold of [5000ms]" }
{"type": "server", "timestamp": "2021-11-10T16:31:25,568Z", "level": "WARN", "component": "o.e.t.ThreadPool", "cluster.name": "docker-cluster", "node.name": "329e6b6ab306", "message": "timer thread slept for [1.3m/80947ms] on absolute clock which is above the warn threshold of [5000ms]" }
{"type": "server", "timestamp": "2021-11-10T16:31:27,646Z", "level": "WARN", "component": "o.e.t.ThreadPool", "cluster.name": "docker-cluster", "node.name": "329e6b6ab306", "message": "timer thread slept for [1.3m/80946985800ns] on relative clock which is above the warn threshold of [5000ms]" }
{"type": "server", "timestamp": "2021-11-10T16:40:11,402Z", "level": "WARN", "component": "o.e.t.ThreadPool", "cluster.name": "docker-cluster", "node.name": "329e6b6ab306", "message": "timer thread slept for [11.9s/11968ms] on absolute clock which is above the warn threshold of [5000ms]" }
{"type": "server", "timestamp": "2021-11-10T16:40:16,415Z", "level": "WARN", "component": "o.e.t.ThreadPool", "cluster.name": "docker-cluster", "node.name": "329e6b6ab306", "message": "timer thread slept for [11.9s/11967222000ns] on relative clock which is above the warn threshold of [5000ms]" }
{"type": "server", "timestamp": "2021-11-10T16:40:17,438Z", "level": "WARN", "component": "o.e.t.ThreadPool", "cluster.name": "docker-cluster", "node.name": "329e6b6ab306", "message": "timer thread slept for [6s/6050ms] on absolute clock which is above the warn threshold of [5000ms]" }
{"type": "server", "timestamp": "2021-11-10T16:40:17,489Z", "level": "WARN", "component": "o.e.t.ThreadPool", "cluster.name": "docker-cluster", "node.name": "329e6b6ab306", "message": "timer thread slept for [6s/6050442500ns] on relative clock which is above the warn threshold of [5000ms]" }
{"type": "server", "timestamp": "2021-11-10T16:42:07,964Z", "level": "WARN", "component": "o.e.t.ThreadPool", "cluster.name": "docker-cluster", "node.name": "329e6b6ab306", "message": "timer thread slept for [9.1s/9115ms] on absolute clock which is above the warn threshold of [5000ms]" }
{"type": "server", "timestamp": "2021-11-10T16:42:08,922Z", "level": "WARN", "component": "o.e.t.ThreadPool", "cluster.name": "docker-cluster", "node.name": "329e6b6ab306", "message": "timer thread slept for [9.1s/9115088600ns] on relative clock which is above the warn threshold of [5000ms]" }
{"type": "server", "timestamp": "2021-11-10T16:44:45,356Z", "level": "WARN", "component": "o.e.t.ThreadPool", "cluster.name": "docker-cluster", "node.name": "329e6b6ab306", "message": "execution of [ReschedulingRunnable{runnable=org.elasticsearch.watcher.ResourceWatcherService$ResourceMonitor#3eac944, interval=5s}] took [6225ms] which is above the warn threshold of [5000ms]" }
{"type": "server", "timestamp": "2021-11-10T16:44:45,139Z", "level": "WARN", "component": "o.e.t.ThreadPool", "cluster.name": "docker-cluster", "node.name": "329e6b6ab306", "message": "timer thread slept for [6.2s/6226ms] on absolute clock which is above the warn threshold of [5000ms]" }
{"type": "server", "timestamp": "2021-11-10T16:44:45,809Z", "level": "WARN", "component": "o.e.t.ThreadPool", "cluster.name": "docker-cluster", "node.name": "329e6b6ab306", "message": "timer thread slept for [6.2s/6225665000ns] on relative clock which is above the warn threshold of [5000ms]" }
{"type": "server", "timestamp": "2021-11-10T16:44:57,124Z", "level": "WARN", "component": "o.e.t.ThreadPool", "cluster.name": "docker-cluster", "node.name": "329e6b6ab306", "message": "timer thread slept for [6.7s/6719ms] on absolute clock which is above the warn threshold of [5000ms]" }
{"type": "server", "timestamp": "2021-11-10T16:44:57,488Z", "level": "WARN", "component": "o.e.t.ThreadPool", "cluster.name": "docker-cluster", "node.name": "329e6b6ab306", "message": "timer thread slept for [6.7s/6718734700ns] on relative clock which is above the warn threshold of [5000ms]" }
{"type": "server", "timestamp": "2021-11-10T16:44:57,181Z", "level": "WARN", "component": "o.e.t.ThreadPool", "cluster.name": "docker-cluster", "node.name": "329e6b6ab306", "message": "execution of [ReschedulingRunnable{runnable=org.elasticsearch.watcher.ResourceWatcherService$ResourceMonitor#4e9331e8, interval=1m}] took [6718ms] which is above the warn threshold of [5000ms]" }
Docker version is the following
Client:
 Cloud integration: 1.0.17
 Version: 20.10.8
 API version: 1.41
 Go version: go1.16.6
 Git commit: 3967b7d
 Built: Fri Jul 30 19:58:50 2021
 OS/Arch: windows/amd64
 Context: default
 Experimental: true
Server: Docker Engine - Community
 Engine:
  Version: 20.10.8
  API version: 1.41 (minimum version 1.12)
  Go version: go1.16.6
  Git commit: 75249d8
  Built: Fri Jul 30 19:52:31 2021
  OS/Arch: linux/amd64
  Experimental: false
 containerd:
  Version: 1.4.9
  GitCommit: e25210fe30a0a703442421b0f60afac609f950a3
 runc:
  Version: 1.0.1
  GitCommit: v1.0.1-0-g4144b63
 docker-init:
  Version: 0.19.0
  GitCommit: de40ad0
I don't have the faintest clue of what's going on! Please help!
Thank you in advance!
P.S. - I tried to use Elasticsearch's docker-compose.yml file to start another stack
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.1
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.1
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data02:/usr/share/elasticsearch/data
    networks:
      - elastic
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.1
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data03:/usr/share/elasticsearch/data
    networks:
      - elastic
volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local
networks:
  elastic:
    driver: bridge
and I'm receiving the same error
Please find the attached Diagnostics ID from docker
8A8E051B-440A-45B6-8AFC-1DCB7BB07FF3/20211110173108

kubectl create from inside pod

I have a running Jenkins pod and I am trying to execute the following command:
sudo kubectl --kubeconfig /opt/jenkins_home/admin.conf apply -f /opt/jenkins_home/ab-kubernetes/ab-back.yml
It gives the following error:
Error from server (NotFound): the server could not find the requested resource
What could be going wrong here?
ab-back.yml file
---
apiVersion: v1
kind: Service
metadata:
  name: dg-back-svc
spec:
  selector:
    app: dg-core-backend-d
  type: NodePort
  ports:
    - name: http
      protocol: TCP
      port: 8081
      nodePort: 30003
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: dg-core-backend-d
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: dg-core-backend-d
    spec:
      containers:
        - name: dg-core-java
          image: ab/dg-springboot-java:1.0
          imagePullPolicy: IfNotPresent
          command: ["sh"]
          args: ["-c", "/root/post-deployment.sh"]
          ports:
            - containerPort: 8081
          # livenessProbe:
          #   httpGet:
          #     path: /
          #     port: 8080
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: xxx
UPDATE:
kubectl version is as follows:
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-03-28T16:24:30Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
With verbosity --v=4, kubectl apply works and gives the following logs:
I0702 11:40:17.721604 1601 merged_client_builder.go:159] Using in-cluster namespace
I0702 11:40:17.734648 1601 decoder.go:224] decoding stream as YAML
service/dg-back-svc created
deployment.extensions/dg-core-backend-d created
but kubectl create gives an error:
I0702 11:41:12.265490 1631 helpers.go:201] server response object: [{
  "metadata": {},
  "status": "Failure",
  "message": "the server could not find the requested resource",
  "reason": "NotFound",
  "details": {
    "causes": [
      {
        "reason": "UnexpectedServerResponse",
        "message": "unknown"
      }
    ]
  },
  "code": 404
}]
Also, running kubectl get pods --v=10 gives this log:
Response Body: {
  "metadata": {},
  "status": "Failure",
  "message": "only the following media types are accepted: application/json, application/yaml, application/vnd.kubernetes.protobuf",
  "reason": "NotAcceptable",
  "code": 406
}
I0702 12:34:27.542564 2514 request.go:1099] body was not decodable (unable to check for Status): Object 'Kind' is missing in '{
  "metadata": {},
  "status": "Failure",
  "message": "only the following media types are accepted: application/json, application/yaml, application/vnd.kubernetes.protobuf",
  "reason": "NotAcceptable",
  "code": 406
}'
No resources found.
I0702 12:34:27.542813 2514 helpers.go:201] server response object: [{
  "metadata": {},
  "status": "Failure",
  "message": "unknown (get pods)",
  "reason": "NotAcceptable",
  "details": {
    "kind": "pods",
    "causes": [
      {
        "reason": "UnexpectedServerResponse",
        "message": "unknown"
      }
    ]
  },
  "code": 406
}]
The problem is the version skew: use an older version of the client or upgrade the server. kubectl supports a skew of only one minor version forward or backward:
From documentation
a client should be skewed no more than one minor version from the
master, but may lead the master by up to one minor version. For
example, a v1.3 master should work with v1.1, v1.2, and v1.3 nodes,
and should work with v1.2, v1.3, and v1.4 clients.
The Kubernetes server does not have the extensions/v1beta1 resources; that is why you cannot create dg-core-backend-d.
You can check this by typing kubectl api-versions.
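For example (the exact group list depends on the cluster), this is how one would compare what the server serves against the apiVersion used in the manifest:
kubectl api-versions | grep -E '^(apps|extensions)'   # which deployment API groups exist on the server
kubectl version                                       # shows the client/server skew in one shot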

Kubernetes (1.10) mountPropagation: Bidirectional not working

I'm creating a pod with a volumeMount set to mountPropagation: Bidirectional. When created, the container is mounting the volume with "Propagation": "rprivate".
From the k8s docs I would expect mountPropagation: Bidirectional to result in a volume mount propagation of rshared.
If I start the container directly with Docker, this works.
Some info:
Deployment Yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test
spec:
  selector:
    matchLabels:
      app: test
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - image: gcr.io/google_containers/busybox:1.24
          command:
            - sleep
            - "36000"
          name: test
          volumeMounts:
            - mountPath: /tmp/test
              mountPropagation: Bidirectional
              name: test-vol
      volumes:
        - name: test-vol
          hostPath:
            path: /tmp/test
Resulting mount section from docker inspect
"Mounts": [
{
"Type": "bind",
"Source": "/tmp/test",
"Destination": "/tmp/test",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}…..
Equivalent Docker run
docker run --restart=always --name test -d --net=host --privileged=true -v /tmp/test:/tmp/test:shared gcr.io/google_containers/busybox:1.24
Resulting Mounts section from docker inspect when created with docker run
"Mounts": [
{
"Type": "bind",
"Source": "/tmp/test",
"Destination": "/tmp/test",
"Mode": "shared",
"RW": true,
"Propagation": "shared"
}...
Output of kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.1", GitCommit:"d4ab47518836c750f9949b9e0d387f20fb92260b", GitTreeState:"clean", BuildDate:"2018-04-13T22:29:03Z", GoVersion:"go1.9.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.1", GitCommit:"d4ab47518836c750f9949b9e0d387f20fb92260b", GitTreeState:"clean", BuildDate:"2018-04-12T14:14:26Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Using rke version v0.1.6
This was a regression fixed in 1.10.3 by https://github.com/kubernetes/kubernetes/pull/62633.
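After moving to a fixed release, one way to verify the result (a suggestion, not part of the original answer) is to read the propagation mode back from both Docker and the host mount table:
docker inspect -f '{{json .Mounts}}' <container-id>   # same field the question inspected
findmnt -o TARGET,PROPAGATION /tmp/test               # host-side view of the bind-mounted path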
