Can balena pull images from https://hub.docker.com? - docker

I am trying to create a balena service as follows:
docker-compose.yml:
version: "2.1"
services:
  otbr-chip:
    image: connectedhomeip/otbr:sve2
    restart: always
    network_mode: host
    privileged: true
    devices:
      - "/dev/ttyACM0:/dev/radio"
    environment:
      - NAT64=1
      - DNS64=0
      - WEB_GUI=0
    entrypoint: ["/app/etc/docker/docker_entrypoint.sh"]
    command: [ "--radio-url", "spinel+hdlc+uart:///dev/radio?uart-baudrate=115200", "-B", "eth0" ]
    labels:
      io.balena.features.balena-socket: '1'
      io.balena.features.kernel-modules: '1'
      io.balena.features.firmware: '1'
      io.balena.features.dbus: '1'
      io.balena.features.sysfs: '1'
      io.balena.features.procfs: '1'
      io.balena.features.journal-logs: '1'
      io.balena.features.supervisor-api: '1'
      io.balena.features.balena-api: '1'
      io.balena.update.strategy: download-then-kill
      io.balena.update.handover-timeout: ''
I am wondering whether balena can pull images from https://hub.docker.com. At the moment "balena push" works fine, but the service fails to start, and I suspect the issue is that the device can't actually pull the image.
Any help is much appreciated.
Thanks
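In case it helps with debugging, here is a rough way to check whether the device can actually pull the image (a sketch, assuming SSH access to the device; the UUID is a placeholder, and binary/service names can vary between balenaOS versions):

# Open a shell on the device's host OS (UUID is a placeholder)
balena ssh <device-uuid>

# On the host OS, try pulling the image from Docker Hub manually with balenaEngine
balena-engine pull connectedhomeip/otbr:sve2

# Check the supervisor journal for pull/start errors (service name may differ by balenaOS version)
journalctl -u balena-supervisor --no-pager | tail -n 50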

Related

Promtail: Loki Server returned HTTP status 429 Too Many Requests

I'm running Loki for test purposes in Docker and have recently been getting the following error from the Promtail and Loki containers:
level=warn ts=2022-02-18T09:41:39.186511145Z caller=client.go:349 component=client host=loki:3100 msg="error sending batch, will retry" status=429 error="server returned HTTP status 429 Too Many Requests (429): Maximum active stream limit exceeded, reduce the number of active streams (reduce labels or reduce label values), or contact your Loki administrator to see if the limit can be increased"
I have tried increasing the limit settings (ingestion_rate_mb and ingestion_burst_size_mb) in my Loki config.
I set up two Promtail jobs: one ingests MS Exchange logs from a local directory (currently 8 TB and growing), the other receives logs spooled from syslog-ng.
I've read that reducing labels helps, but I'm only using two labels.
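For what it's worth, the 429 above appears to refer to the active-stream limit rather than the ingestion rate, so ingestion_rate_mb/ingestion_burst_size_mb may not be the relevant knobs. A minimal limits_config sketch of the stream-limit settings (values are illustrative, not recommendations):

limits_config:
  # Per-ingester active stream limit per tenant; 0 disables it
  max_streams_per_user: 0
  # Global active stream limit per tenant (typically the one behind
  # "Maximum active stream limit exceeded"); raise cautiously, since high
  # label cardinality is usually the underlying cause
  max_global_streams_per_user: 10000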
Configuration
Below are my config files (docker-compose, Loki, Promtail):
docker-compose.yaml
version: "3"
networks:
loki:
services:
loki:
image: grafana/loki:2.4.2
container_name: loki
restart: always
user: "10001:10001"
ports:
- "3100:3100"
command: -config.file=/etc/loki/local-config.yaml
volumes:
- ${DATADIR}/loki/etc:/etc/loki:rw
- ${DATADIR}/loki/chunks:/loki/chunks
networks:
- loki
promtail:
image: grafana/promtail:2.4.2
container_name: promtail
restart: always
volumes:
- /var/log/loki:/var/log/loki
- ${DATADIR}/promtail/etc:/etc/promtail
ports:
- "1514:1514" # for syslog-ng
- "9080:9080" # for http web interface
command: -config.file=/etc/promtail/config.yml
networks:
- loki
grafana:
image: grafana/grafana:8.3.4
container_name: grafana
restart: always
user: "476:0"
volumes:
- ${DATADIR}/grafana/var:/var/lib/grafana
ports:
- "3000:3000"
networks:
- loki
Loki Config
auth_enabled: false
server:
  http_listen_port: 3100
common:
  path_prefix: /loki
  storage:
    filesystem:
      chunks_directory: /loki/chunks
      rules_directory: /loki/rules
  replication_factor: 1
  ring:
    instance_addr: 127.0.0.1
    kvstore:
      store: inmemory
schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h
ruler:
  alertmanager_url: http://localhost:9093
# https://grafana.com/docs/loki/latest/configuration/#limits_config
limits_config:
  reject_old_samples: true
  reject_old_samples_max_age: 168h
  ingestion_rate_mb: 12
  ingestion_burst_size_mb: 24
  per_stream_rate_limit: 24MB
chunk_store_config:
  max_look_back_period: 336h
table_manager:
  retention_deletes_enabled: true
  retention_period: 2190h
ingester:
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_encoding: snappy
Promtail Config
server:
  http_listen_port: 9080
  grpc_listen_port: 0
positions:
  filename: /tmp/positions.yaml
clients:
  - url: http://loki:3100/loki/api/v1/push
scrape_configs:
  - job_name: exchange
    static_configs:
      - targets:
          - localhost
        labels:
          job: exchange
          __path__: /var/log/loki/exchange/*/*/*log
  - job_name: syslog-ng
    syslog:
      listen_address: 0.0.0.0:1514
      idle_timeout: 60s
      label_structured_data: yes
      labels:
        job: "syslog-ng"
    relabel_configs:
      - source_labels: ['__syslog_message_hostname']
        target_label: 'host'

How to reduce the amount of chunks to prevent running out of disk space for Loki/Promtail?

I'm currently evaluating Loki and am running out of disk space due to the number of chunks.
My instance runs in Docker containers using a docker-compose setup (Loki, Promtail, Grafana) from the official documentation (see docker-compose.yml below).
I'm more or less using the default configuration of Loki and Promtail, except for some tweaks to the retention period (I need 3 months) plus a higher ingestion rate and ingestion burst size (see configs below).
I bind-mounted a volume containing 1 TB of log files (MS Exchange logs) and set up a job in Promtail using only one label.
The resulting chunks are constantly eating up disk space, and I had to expand the VM disk incrementally up to 1 TB.
Currently, I have 0.9 TB of chunks. Shouldn't this be far less, like 25% of the initial log size? Over the last weekend I stopped the Promtail container to prevent running out of disk space. Today I started Promtail again and got the following warning:
level=warn ts=2022-01-24T08:54:57.763739304Z caller=client.go:349 component=client host=loki:3100 msg="error sending batch, will retry" status=429 error="server returned HTTP status 429 Too Many Requests (429): Ingestion rate limit exceeded (limit: 12582912 bytes/sec) while attempting to ingest '2774' lines totaling '1048373' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased"
I had this warning before, and increasing ingestion_rate_mb to 12 and ingestion_burst_size_mb to 24 fixed it...
Kind of at a dead end here.
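For completeness, the 429 in this question is the per-tenant ingestion rate limit; a hedged sketch of the limits_config knobs that govern it (values are illustrative, and ingestion_rate_strategy is an assumption about what may help in a single-instance setup):

limits_config:
  # "local" applies the MB/s limit per ingester instead of dividing a global
  # limit across the ring; with a single Loki instance the difference is small
  ingestion_rate_strategy: local
  ingestion_rate_mb: 16
  ingestion_burst_size_mb: 32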
Docker Compose
version: "3"
networks:
loki:
services:
loki:
image: grafana/loki:2.4.1
container_name: loki
restart: always
ports:
- "3100:3100"
command: -config.file=/etc/loki/local-config.yaml
volumes:
- ${DATADIR}/loki/etc:/etc/loki:rw
networks:
- loki
promtail:
image: grafana/promtail:2.4.1
container_name: promtail
restart: always
volumes:
- /var/log/exchange:/var/log
- ${DATADIR}/promtail/etc:/etc/promtail
ports:
- "1514:1514" # for syslog-ng
- "9080:9080" # for http web interface
command: -config.file=/etc/promtail/config.yml
networks:
- loki
grafana:
image: grafana/grafana:latest
container_name: grafana
restart: always
volumes:
- grafana_var:/var/lib/grafana
ports:
- "3000:3000"
networks:
- loki
volumes:
grafana_var:
Loki Config:
server:
  http_listen_port: 3100
common:
  path_prefix: /loki
  storage:
    filesystem:
      chunks_directory: /loki/chunks
      rules_directory: /loki/rules
  replication_factor: 1
  ring:
    instance_addr: 127.0.0.1
    kvstore:
      store: inmemory
schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h
ruler:
  alertmanager_url: http://localhost:9093
# https://grafana.com/docs/loki/latest/configuration/#limits_config
limits_config:
  reject_old_samples: true
  reject_old_samples_max_age: 168h
  ingestion_rate_mb: 12
  ingestion_burst_size_mb: 24
  per_stream_rate_limit: 12MB
chunk_store_config:
  max_look_back_period: 336h
table_manager:
  retention_deletes_enabled: true
  retention_period: 2190h
ingester:
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_encoding: snappy
Promtail Config
server:
  http_listen_port: 9080
  grpc_listen_port: 0
positions:
  filename: /tmp/positions.yaml
clients:
  - url: http://loki:3100/loki/api/v1/push
scrape_configs:
  - job_name: exchange
    static_configs:
      - targets:
          - localhost
        labels:
          job: exchangelog
          __path__: /var/log/*/*/*log
The issue was solved: the logs were stored on ZFS with compression enabled and were therefore listed much smaller on the file system than their uncompressed size. The chunk size was actually accurate. My bad.
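A quick way to see this discrepancy on a compressed filesystem (a sketch; the paths and dataset name are placeholders):

# Logical (uncompressed) size of the source logs, which is what Loki actually ingests
du -sh --apparent-size /var/log/exchange

# On-disk size after ZFS compression
du -sh /var/log/exchange

# Compression ratio of the ZFS dataset holding the logs (dataset name is a placeholder)
zfs get compressratio tank/logs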

Ansible template error while templating string: expected token 'end of print statement', got 'integer'

I have a task which configures my docker-compose.yml.
Here is my docker-compose.yml file, which I templated using the template module:
services:
  zeebe:
    # restart: always
    container_name: chai_maker
    image: registry.uzbekistan.uz/cicd/chai_s_molokom:0.26.0
    ports:
      - 26500:26500
      - 26501:26501
      - 26502:26502
      - 9600:9600 # monitoring
    # networks:
    #   - zibi
    environment:
      TIME_ZONE: 'Asia/Khudzhand'
      ZEEBE_BROKER_GATEWAY_ENABLE: 'true'
      ZEEBE_BROKER_NETWORK_HOST: '0.0.0.0' # this broker's host (advertised host)
      ZEEBE_BROKER_NETWORK_ADVERTISEDHOST: '{{ ansible_default_ipv4.address }}'
      ZEEBE_HOST: '0.0.0.0'
      ZEEBE_BROKER_NETWORK_MAXMESSAGESIZE: '512KB'
      ZEEBE_BROKER_NETWORK_COMMANDAPI_PORT: '26501'
      ZEEBE_BROKER_NETWORK_INTERNALAPI_PORT: '26502'
      ZEEBE_BROKER_NETWORK_MONITORINGAPI_PORT: '9600'
      ZEEBE_BROKER_CLUSTER_NODEID: '{{ '
      ZEEBE_BROKER_CLUSTER_PARTITIONSCOUNT: '1'
      ZEEBE_BROKER_CLUSTER_REPLICATIONFACTOR: '6' # set to 6
      ZEEBE_BROKER_CLUSTER_CLUSTERSIZE: '6' # set to 6
      ZEEBE_BROKER_CLUSTER_CLUSTERNAME: 'pidoras'
      ZEEBE_BROKER_THREADS_CPUTHREADCOUNT: '2'
      ZEEBE_BROKER_THREADS_IOTHREADCOUNT: '2'
      ZEEBE_BROKER_EXPORTERS_ELASTICSEARCH_ARGS_URLS: 'http://192.168.2.13:9200'
      ZEEBE_DEBUG: 'false'
      # JAVA_TOOL_OPTIONS: '-XX:MaxRAMPercentage=25.0 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/usr/local/zeebe/data -XX:ErrorFile=/usr/local/zeebe/data/zeebe_error%p.logo -XX:+ExitOnOutOfMemoryError'
      ZEEBE_BROKER_CLUSTER_INITIALCONTACTPOINTS: '10.0.88.80:26502, 10.0.88.81:26502, 10.0.88.82:26502, 10.0.88.83:26502, 10.0.88.84:26502, 10.0.88.85:26502, 10.0.88.86:26502' # comma separated host list
    volumes:
      - ${PWD}/data:/usr/local/zeebe/data
      - ${PWD}/exporters:/usr/local/zeebe/exporters
      - ${PWD}/application.yaml:/usr/local/zeebe/config/application.yaml
      - ${PWD}/log4j2.xml:/usr/local/zeebe/config/log4j2.xml
networks:
  zibi:
    driver: host
When I execute my role, it exits with this error:
FAILED! => {"changed": false, "msg": "AnsibleError: template error while templating string: expected token 'end of print statement', got 'integer'
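The unterminated expression in ZEEBE_BROKER_CLUSTER_NODEID ('{{ ') is very likely what triggers this: Jinja keeps reading past that line, treats the following quoted text as a string, then hits the bare integer 1 and fails with "got 'integer'". Two hedged patterns that do template cleanly (the node-id expression and the 'zeebe' group name are examples, not necessarily the intended values):

environment:
  # A complete, quoted Jinja expression renders without a template error
  ZEEBE_BROKER_CLUSTER_NODEID: "{{ groups['zeebe'].index(inventory_hostname) }}"
  # To emit literal braces into the rendered file, escape them
  SOME_LITERAL_VALUE: "{{ '{{' }} left as-is {{ '}}' }}"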

Docker compose/Swarm: Use network names of compose file

I work with a compose file which looks like this:
version: '3.7'
services:
  shinyproxy:
    build: /home/shinyproxy
    deploy:
      # replicas: 3
    user: root:root
    hostname: shinyproxy
    image: shinyproxy-example
    networks:
      - sp-example-net
    volumes:
      - type: bind
        source: /var/run/docker.sock
        target: /var/run/docker.sock
      - type: bind
        source: /home/shinyproxy/application.yml
        target: /opt/shinyproxy/application.yml
....
networks:
  sp-example-net:
    driver: overlay
    attachable: true
This ShinyProxy application uses the following .yml file:
proxy:
  port: 5000
  template-path: /opt/shinyproxy/templates/2col
  authentication: keycloak
  admin-groups: admins
  users:
    - name: jack
      password: password
      groups: admins
    - name: jeff
      password: password
  container-backend: docker-swarm
  docker:
    internal-networking: true
    container-network: sp-example-net
  specs:
    - id: 01_hello
      display-name: Hello Application
      description: Application which demonstrates the basics of a Shiny app
      container-cmd: ["R", "-e", "shinyproxy::run_01_hello()"]
      container-image: openanalytics/shinyproxy-demo
      container-network: "${proxy.docker.container-network}"
      access-groups: test
    - id: euler
      display-name: Euler's number
      container-cmd: ["R", "-e", "shiny::runApp('/root/euler')"]
      container-image: euler-docker
      container-network: "${proxy.docker.container-network}"
      access-groups: test
To deploy the stack I run the following command:
docker stack deploy -c docker-compose.yml test
This results in the following: Creating network test_sp-example-net
So instead of sp-example-net, my network's name is test_sp-example-net.
Is there a way to prevent this kind of prefixing of my network name?
Thank you!
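For reference, compose file format 3.5+ lets you give the network an explicit name, or you can create the network up front and mark it external, so docker stack deploy does not prefix it with the stack name. A minimal sketch, assuming the rest of the file stays as above:

networks:
  sp-example-net:
    driver: overlay
    attachable: true
    name: sp-example-net   # explicit name; requires compose file format 3.5 or newer

# Or create it once and reference it as pre-existing:
#   docker network create --driver overlay --attachable sp-example-net
# networks:
#   sp-example-net:
#     external: true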

ELK for spring boot application using docker - performance issues

We are using ELK for logging in our Spring Boot application with a Docker setup. I have configured Logstash to read the log file from a given path (where the application writes its logs) and pass it to Elasticsearch. The initial setup works fine and all logs show up in Kibana instantly. However, as the size of the logs increases (or as application logging continues), the application's response time increases exponentially and ultimately brings down the application and everything within the Docker network.
Logstash conf file:
input {
  file {
    type => "java"
    path => ["/logs/application.log"]
  }
}
filter {
  multiline {
    pattern => "^%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME}.*"
    negate => "true"
    what => "previous"
    periodic_flush => false
  }
  if [message] =~ "\tat" {
    grok {
      match => ["message", "^(\tat)"]
      add_tag => ["stacktrace"]
    }
  }
  grok {
    match => [ "message",
               "(?<timestamp>%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME}) %{LOGLEVEL:level} %{NUMBER:pid} --- \[(?<thread>[A-Za-z0-9-]+)\] [A-Za-z0-9.]*\.(?<class>[A-Za-z0-9#_]+)\s*:\s+(?<logmessage>.*)",
               "message",
               "(?<timestamp>%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME}) %{LOGLEVEL:level} %{NUMBER:pid} --- .+? :\s+(?<logmessage>.*)"
             ]
  }
  # Parsing out timestamps which are in the timestamp field thanks to the previous grok section
  date {
    match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss.SSS" ]
  }
}
output {
  # Sending properly parsed log events to elasticsearch
  elasticsearch {
    hosts => ["elasticsearch:9200"] # "elasticsearch" is the name of the service in the docker-compose file for ELK
  }
}
Logstash Dockerfile:
FROM logstash
ADD config/logstash.conf /tmp/config/logstash.conf
# The host log directory ($HOME/Documents/logs) is bind-mounted to /logs via docker-compose
VOLUME /logs
RUN touch /tmp/config/logstash.conf
EXPOSE 5000
ENTRYPOINT ["logstash", "agent", "-v", "-f", "/tmp/config/logstash.conf"]
Docker Compose for ELK:
version: '2'
services:
  elasticsearch:
    image: elasticsearch:2.3.3
    command: elasticsearch -Des.network.host=0.0.0.0
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - elk
  logstash:
    build: image/logstash
    volumes:
      - $HOME/Documents/logs:/logs
    ports:
      - "5000:5000"
    networks:
      - elk
  kibana:
    image: kibana:4.5.1
    ports:
      - "5601:5601"
    networks:
      - elk
networks:
  elk:
Note: My Spring Boot application and ELK are on different networks. The performance issue remains the same even when they are on the same network.
Is this a performance issue caused by the continuous writing/polling of the log file, which leads to read/write lock contention?
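In case the polling angle is worth ruling out, the Logstash file input exposes a couple of polling-related options; a hedged sketch with illustrative values (the option names are from the file input plugin, and whether they help depends on where the bottleneck actually is):

input {
  file {
    type => "java"
    path => ["/logs/application.log"]
    # Seconds between checks of the file for new content; raising it reduces
    # polling pressure at the cost of some ingestion latency
    stat_interval => 5
    # Persist the read position so restarts do not re-read the whole file
    sincedb_path => "/logs/.sincedb-application"
  }
}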
