I am trying to get Prometheus to scrape a uwsgi container on port 7070.
I have the following scrape job in prometheus.yaml:
- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    - action: labelmap
      regex: __meta_kubernetes_pod_label_(.+)
    - source_labels: [__meta_kubernetes_namespace]
      action: replace
      target_label: kubernetes_namespace
    - source_labels: [__meta_kubernetes_pod_name]
      action: replace
      target_label: kubernetes_pod_name
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
      action: replace
      target_label: __scheme__
      regex: (https?)
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      target_label: __address__
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
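For reference, the final rule joins `__address__` and the port annotation with a `;` before matching, stripping any existing port and appending the annotated one. The same regex can be sanity-checked in Python (a sketch, not part of the original post; Prometheus uses RE2, which agrees with Python's `re` for this pattern):

```python
import re

# Prometheus concatenates source_labels with ";" before applying the regex.
pattern = re.compile(r"([^:]+)(?::\d+)?;(\d+)")

def rewrite_address(address: str, port_annotation: str) -> str:
    """Mimic the relabel rule: drop any existing port, append the annotated one."""
    joined = f"{address};{port_annotation}"
    return pattern.sub(r"\1:\2", joined)

print(rewrite_address("10.1.2.3", "7070"))       # -> 10.1.2.3:7070
print(rewrite_address("10.1.2.3:8080", "7070"))  # -> 10.1.2.3:7070
```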
I also have these annotations on the uwsgi pod:
annotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "7070"
If I curl http://<host_ip>:7070/metrics from any container, it returns metrics in JSON format.
But when I run curl http://<host_ip>:7070/metrics | promtool check metrics,
I get a lint error: text format parsing error in line 1: invalid metric name.
I think this is because the uwsgi container exposes its metrics as JSON, and Prometheus doesn't understand JSON.
What do I need to do to make it scrapeable by Prometheus?
Managed to do this with https://github.com/timonwong/uwsgi_exporter as a sidecar container:
- name: uwsgi
  image: .....
- name: uwsgi-exporter
  image: timonwong/uwsgi-exporter:latest
  imagePullPolicy: Always
  resources:
    requests:
      cpu: 100m
      memory: 100Mi
  ports:
    - name: uwsgi-exp-port
      containerPort: 9117
      protocol: TCP
  args:
    - --stats.uri=http://localhost:7070/metrics
https://www.robustperception.io/writing-json-exporters-in-python
This should be your answer: you need to write your own JSON exporter.
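A minimal sketch of such an exporter, in the spirit of that article (the metric prefix, JSON shape, and URL are assumptions, not taken from the original post): it fetches the uwsgi JSON stats and re-emits them in the Prometheus text exposition format.

```python
import json
from urllib.request import urlopen

def json_to_exposition(stats: dict) -> str:
    """Convert a flat dict of numeric stats into Prometheus text format."""
    lines = []
    for key, value in sorted(stats.items()):
        # Metric names may not contain "." or "-"; the uwsgi_ prefix is assumed.
        metric = "uwsgi_" + key.replace(".", "_").replace("-", "_")
        lines.append(f"# TYPE {metric} gauge")
        lines.append(f"{metric} {float(value)}")
    return "\n".join(lines) + "\n"

def scrape(url: str = "http://localhost:7070/metrics") -> str:
    """Fetch the uwsgi JSON stats endpoint and convert it (URL is an assumption)."""
    with urlopen(url) as resp:
        return json_to_exposition(json.load(resp))

if __name__ == "__main__":
    # Example with a made-up stats payload:
    print(json_to_exposition({"workers.busy": 2, "requests": 1500}))
```

In practice you would serve the converted output over HTTP (e.g. with `http.server`) and point the `prometheus.io/port` annotation at that server instead of the JSON endpoint.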
I am new to ECS and I am trying to deploy it with CloudFormation.
I put together the following CloudFormation template by looking at the documentation and some examples I found in blogs and articles.
However, for some reason, it got stuck updating one of the resources and eventually timed out.
I am not sure why it gets stuck and fails.
Can someone spot the mistake I am making?
For now, my goal is just to deploy and see the app on the internet; I am not looking for advanced configuration.
I also pass the ECR URL to this via the AWS CLI upon deployment.
Thank you in advance.
AWSTemplateFormatVersion: 2010-09-09
Transform: AWS::Serverless-2016-10-31
Description: >
  ECS Service
Parameters:
  Environment:
    Type: String
    Default: alpha
    AllowedValues:
      - alpha
      - beta
      - production
  ECRDockerUri:
    Type: String
    Default: <url for ecr repo>
  ContainerPort:
    Type: Number
    Default: 8080
Resources:
  LogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: !Sub "${Environment}-fake-user-api-logGroup"
      RetentionInDays: 30
  ECSCluster:
    Type: 'AWS::ECS::Cluster'
    Properties:
      ClusterName: !Sub "${Environment}-MyFargateCluster"
  ExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Sub "${Environment}-${AWS::AccountId}-ExecutionRole"
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: ecs-tasks.amazonaws.com
            Action: 'sts:AssumeRole'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy'
  ECSService:
    Type: AWS::ECS::Service
    Properties:
      ServiceName: !Sub "${Environment}-${AWS::AccountId}-ECSService"
      Cluster: !Ref ECSCluster
      TaskDefinition: !Ref TaskDefinition
      DesiredCount: 1
  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      TaskRoleArn: !Ref ExecutionRole
      ContainerDefinitions:
        - Name: !Sub "${Environment}-${AWS::AccountId}-Container"
          Image: !Ref ECRDockerUri
          Memory: 1024
          Essential: true
          DisableNetworking: false
          Privileged: true
          ReadonlyRootFilesystem: true
          Environment:
            - Name: SPRING_PROFILES_ACTIVE
              Value: !Ref Environment
          PortMappings:
            - ContainerPort: !Ref ContainerPort
              HostPort: !Ref ContainerPort
          LogConfiguration:
            LogDriver: awslogs
            Options:
              awslogs-group: !Ref LogGroup
              awslogs-region: ca-central-1
I went through your CFN stack and found some things missing. Your cluster name is ENV-MyFargateCluster, so I am assuming your goal is to create a Fargate service. To run a Fargate service, you need to provide the networking configuration and specify the launch type as FARGATE. Also, Fargate tasks cannot be Privileged.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ecs-taskdefinition-containerdefinitions.html#cfn-ecs-taskdefinition-containerdefinition-privileged
Below is my snippet of the code:
AWSTemplateFormatVersion: 2010-09-09
Transform: 'AWS::Serverless-2016-10-31'
Description: |
  ECS Service
Parameters:
  Environment:
    Type: String
    Default: alpha
    AllowedValues:
      - alpha
      - beta
      - production
  ECRDockerUri:
    Type: String
    Default: 'image'
  ContainerPort:
    Type: Number
    Default: 80
Resources:
  LogGroup:
    Type: 'AWS::Logs::LogGroup'
    Properties:
      LogGroupName: !Sub '${Environment}-fake-user-api-logGroup'
      RetentionInDays: 30
  ECSCluster:
    Type: 'AWS::ECS::Cluster'
    Properties:
      ClusterName: !Sub '${Environment}-MyFargateCluster'
  ExecutionRole:
    Type: 'AWS::IAM::Role'
    Properties:
      RoleName: !Sub '${Environment}-${AWS::AccountId}-ExecutionRole'
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: ecs-tasks.amazonaws.com
            Action: 'sts:AssumeRole'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy'
  ECSService:
    Type: 'AWS::ECS::Service'
    Properties:
      ServiceName: !Sub '${Environment}-${AWS::AccountId}-ECSService'
      LaunchType: FARGATE
      Cluster: !Ref ECSCluster
      TaskDefinition: !Ref TaskDefinition
      DesiredCount: 1
      NetworkConfiguration:
        AwsvpcConfiguration:
          AssignPublicIp: ENABLED
          SecurityGroups:
            - sg-XXXXXXXXXX
          Subnets:
            - subnet-XXXXXXXXXX
  TaskDefinition:
    Type: 'AWS::ECS::TaskDefinition'
    Properties:
      RequiresCompatibilities:
        - FARGATE
      TaskRoleArn: !Ref ExecutionRole
      ExecutionRoleArn: !Ref ExecutionRole
      ContainerDefinitions:
        - Name: !Sub '${Environment}-${AWS::AccountId}-Container'
          Image: !Ref ECRDockerUri
          Memory: 1024
          Essential: true
          DisableNetworking: false
          Privileged: false
          ReadonlyRootFilesystem: true
          Environment:
            - Name: SPRING_PROFILES_ACTIVE
              Value: !Ref Environment
          PortMappings:
            - ContainerPort: !Ref ContainerPort
          LogConfiguration:
            LogDriver: awslogs
            Options:
              awslogs-group: !Ref LogGroup
              awslogs-region: ca-central-1
              awslogs-stream-prefix: test
      Cpu: '1024'
      Memory: '2048'
      NetworkMode: awsvpc
Unable to route external IP/port
I'm following this guide and am unable to route the external IP to the subdomain; nothing loads, while the internal port (Portainer installed in Docker) seems to be working. Exposing it to the public is my next plan. Below is my YAML config.
docker-compose.yml
I exposed ports 80 and 443 from the router; the DNS nameserver points at DigitalOcean, and traefik-dashboard.local.example.com -> public-IP via an A record.
version: '3'

services:
  traefik:
    image: traefik:latest
    container_name: traefik
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    networks:
      - proxy
    ports:
      - 80:80
      - 443:443
    environment:
      - DO_AUTH_TOKEN=TOKEN
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /root/container/traefik/data/traefik.yml:/traefik.yml:ro
      - /root/container/traefik/data/acme.json:/acme.json
      - /root/container/traefik/data/config.yml:/config.yml:ro
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.traefik.entrypoints=http"
      - "traefik.http.routers.traefik.rule=Host(`traefik-dashboard.local.example.com`)"
      - "traefik.http.middlewares.traefik-auth.basicauth.users=admin:password"
      - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https"
      - "traefik.http.middlewares.sslheader.headers.customrequestheaders.X-Forwarded-Proto=https"
      - "traefik.http.routers.traefik.middlewares=traefik-https-redirect"
      - "traefik.http.routers.traefik-secure.entrypoints=https"
      - "traefik.http.routers.traefik-secure.rule=Host(`traefik-dashboard.local.example.com`)"
      - "traefik.http.routers.traefik-secure.middlewares=traefik-auth"
      - "traefik.http.routers.traefik-secure.tls=true"
      - "traefik.http.routers.traefik-secure.tls.certresolver=digitalocean"
      - "traefik.http.routers.traefik-secure.tls.domains[0].main=local.example.com"
      - "traefik.http.routers.traefik-secure.tls.domains[0].sans=*.local.example.com"
      - "traefik.http.routers.traefik-secure.service=api@internal"
    expose:
      - 80

networks:
  proxy:
    external: true
data/traefik.yml
api:
  dashboard: true
  debug: true
  insecure: true

entryPoints:
  http:
    address: ":80"
  https:
    address: ":443"

serversTransport:
  insecureSkipVerify: true

providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: false
  file:
    filename: /config.yml

log:
  level: debug

certificatesResolvers:
  digitalocean:
    acme:
      email: email@example.com
      storage: acme.json
      dnsChallenge:
        provider: digitalocean
        delayBeforeCheck: 0
        # resolvers:
        #   - "1.1.1.1:53"
        #   - "1.0.0.1:53"

pilot:
  dashboard: false

metrics:
  prometheus:
    entryPoint: traefik

accessLog:
  filePath: "/var/log/traefik/access.log"
  filters:
    statusCodes:
      - "400-600"
The dashboard appears with a healthy server.
data/config.yml
I've toggled this config on and off, and the problem persists.
http:
  routers:
    pihole:
      entryPoints:
        - "https"
      rule: "Host(`pihole.local.example.com`)"
      middlewares:
        - default-headers
        - addprefix-pihole
      tls: {}
      service: pihole
  services:
    pihole:
      loadBalancer:
        servers:
          - url: "http://192.168.0.20:80"
        passHostHeader: true
  middlewares:
    addprefix-pihole:
      addPrefix:
        prefix: "/admin"
    https-redirect:
      redirectScheme:
        scheme: https
    default-headers:
      headers:
        frameDeny: true
        sslRedirect: true
        browserXssFilter: true
        contentTypeNosniff: true
        forceSTSHeader: true
        stsIncludeSubdomains: true
        stsPreload: true
        stsSeconds: 15552000
        customFrameOptionsValue: SAMEORIGIN
        customRequestHeaders:
          X-Forwarded-Proto: https
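For context, the addprefix-pihole middleware rewrites the request path before it reaches Pi-hole. A tiny Python sketch of the semantics (not Traefik code, just an illustration):

```python
def add_prefix(path: str, prefix: str = "/admin") -> str:
    """Mimic Traefik's addPrefix middleware: prepend the prefix to the request path."""
    return prefix + path

print(add_prefix("/"))           # -> /admin/
print(add_prefix("/api/stats"))  # -> /admin/api/stats
```

So a request for https://pihole.local.example.com/ is forwarded to the Pi-hole backend as /admin/.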
I already assigned an A record for Pi-hole in DNS, but it doesn't seem to do anything; pihole.local.example.com still comes up blank.
Let me know if there's anything wrong with my config.
Attached below is my log:
https://pastebin.com/raw/R1w9jR7U
I am testing Traefik to set it up for exposing my Docker containers with SSL.
It seems to mostly work, but I am having some issues with the HTTP to HTTPS redirect. I have middleware that shows up in the dashboard as successful, but when I go to the HTTP version of the address I get a 404.
Here is my docker-compose.yml for traefik
version: "3.3"

services:
  traefik:
    image: traefik:v2.5
    restart: always
    container_name: traefik
    ports:
      - "80:80"     # <== http
      - "8080:8080" # <== :8080 is where the dashboard runs on
      - "443:443"   # <== https
    command:
      - --api.insecure=false # <== Enabling insecure api, NOT RECOMMENDED FOR PRODUCTION
      - --api.dashboard=true # <== Enabling the dashboard to view services, middlewares, routers, etc.
      - --api.debug=true # <== Enabling additional endpoints for debugging and profiling
      - --log.level=DEBUG # <== Setting the level of the logs from traefik
      - --providers.docker=true # <== Enabling docker as the provider for traefik
      - --providers.docker.exposedbydefault=false # <== Don't expose every container to traefik
      - --providers.file.filename=/config/dynamic.yaml # <== Referring to a dynamic configuration file
      - --providers.docker.network=web # <== Operate on the docker network named web
      - --entrypoints.web.address=:80 # <== Defining an entrypoint for port :80 named web
      - --entrypoints.web.http.redirections.entryPoint.to=web-secure
      - --entrypoints.web.http.redirections.entryPoint.scheme=https
      - --entrypoints.web.http.redirections.entrypoint.permanent=true
      - --entrypoints.web-secured.address=:443 # <== Defining an entrypoint for https on port :443 (not really needed)
      - --certificatesresolvers.mytlschallenge.acme.tlschallenge=true # <== Enable TLS-ALPN-01 (not really needed)
      - --certificatesresolvers.mytlschallenge.acme.email=email@domain.com # <== Set your email (not really needed)
      - --certificatesresolvers.mytlschallenge.acme.storage=/letsencrypt/acme.json # <== SSL stuff we don't need.
    volumes:
      - ./letsencrypt:/letsencrypt # <== Volume for certs (TLS) (not really needed)
      - /var/run/docker.sock:/var/run/docker.sock # <== Volume for docker admin
      - ./config/:/config # <== Volume for dynamic conf file, **ref: line 27
    networks:
      - web # <== Placing traefik on the network named web, to access containers on this network
    labels:
      - "traefik.enable=true" # <== Enable traefik on itself to view dashboard and assign subdomain to$
      - "traefik.http.routers.api.rule=Host(`traefik.testing.domain.com`)" # <== Setting the domain for the dashboard
      - "traefik.http.routers.api.service=api@internal" # <== Enabling the api to be a service to acce$

networks:
  web:
    external: true
    name: web
Here is the config/dynamic.yaml for traefik to set up middleware
## Setting up the middleware for redirect to https ##
http:
  middlewares:
    httpsredirect:
      redirectScheme:
        scheme: https
        permanent: true
And here is the test container's docker-compose.yml:
version: '3.3'

services:
  whoami:
    # A container that exposes an API to show its IP address
    image: traefik/whoami
    networks:
      - web
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.entrypoints=web,web-secure"
      - "traefik.http.routers.whoami.rule=Host(`whoami.testing.domain.com`)"
      - "traefik.http.routers.whoami.tls=true"
      - "traefik.http.routers.whoami.middlewares=httpsredirect@file" # <== This is a middleware to redirect to https
      - "traefik.http.routers.whoami.tls.certresolver=mytlschallenge"

networks:
  web:
    external: true
    name: web
Try the following, based on the redirectRegex middleware.
Docker
# Redirect with domain replacement
# Note: all dollar signs need to be doubled for escaping.
labels:
  - "traefik.http.middlewares.test-redirectregex.redirectregex.regex=^https://localhost/(.*)"
  - "traefik.http.middlewares.test-redirectregex.redirectregex.replacement=http://mydomain/$${1}"
For Kubernetes:
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: http-to-https-redirect
spec:
  redirectRegex:
    regex: ^http://(www.)?yourdomain.com/(.*)
    replacement: https://yourdomain.com
    permanent: true
And you inject the middleware into your IngressRoute:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingressroute
spec:
  tls: {}
  entryPoints:
    - web
    - websecure
  routes:
    - match: "HostRegexp(`{sub:(www.)?}yourdomain.com`) && PathPrefix(`/`)"
      kind: Rule
      middlewares:
        - name: http-to-https-redirect
      services:
        - name: your-service
          port: 80
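The redirect regex can be sanity-checked outside Traefik, e.g. in Python (a sketch, not part of the original answer; Traefik's Go regexp behaves the same for this pattern, and `yourdomain.com` is the placeholder from the example above):

```python
import re

# Same pattern as in the Middleware above; dots are left unescaped as in the example.
pattern = re.compile(r"^http://(www.)?yourdomain.com/(.*)")

def redirect(url: str) -> str:
    """Return the redirect target if the pattern matches, else the URL unchanged."""
    if pattern.match(url):
        return "https://yourdomain.com"
    return url

print(redirect("http://www.yourdomain.com/some/page"))  # -> https://yourdomain.com
print(redirect("https://yourdomain.com/ok"))            # unchanged: no http:// prefix
```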
My SNMP exporter is hosted somewhere using Kubernetes. I can access it and get metrics for a specified target through a URL like this: https://some.kube.server.name/api/snmp-exporter/snmp?target=AFACG1
My list of targets is in a targets.json file, loaded dynamically via file_sd_configs in my prometheus.yml.
My prometheus.yml looks like this:
scrape_configs:
  - job_name: 'snmp'
    scrape_interval: 120s
    file_sd_configs:
      - files:
          - /etc/prometheus/targets.json
    metrics_path: /snmp
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: https://some.kube.server.name/api/snmp-exporter/ # The SNMP exporter's real hostname:port.
And my targets.json file looks like this:
[
  {
    "labels": {
      "job": "snmp"
    },
    "targets": [
      "AFACG1",
      "AFACG3",
      "AFACG5",
      "AFACG7",
      "AFACG8",
      "AFACG9"
    ]
  }
]
However, when I run Prometheus, I get the error: "https://some.kube.server.name/api/snmp-exporter" is not a valid hostname.
What modifications do I need to make in prometheus.yml to get metrics for the targets in targets.json?
After some reading, I figured out an answer to my question, and it is working well. Here is my modified scrape_configs:
scrape_configs:
  - job_name: 'snmp'
    scheme: https
    scrape_interval: 120s
    tls_config:
      insecure_skip_verify: true
    file_sd_configs:
      - files:
          - /etc/prometheus/targets.json
    metrics_path: /api/snmp-exporter/snmp
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: some.kube.server.name
I hope this helps other people who face a similar problem.
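To see why this works, the relabel rules can be traced by hand; here is a small Python sketch of the transformation for one target (an illustration, not part of the original answer):

```python
def build_scrape_url(target: str) -> str:
    """Trace the relabel rules above for one file_sd target."""
    labels = {"__address__": target}                   # from targets.json
    labels["__param_target"] = labels["__address__"]   # rule 1: target becomes the ?target= param
    labels["instance"] = labels["__param_target"]      # rule 2: readable instance label
    labels["__address__"] = "some.kube.server.name"    # rule 3: scrape the exporter itself
    # scheme + __address__ + metrics_path + params give the final scrape URL:
    return (f"https://{labels['__address__']}"
            f"/api/snmp-exporter/snmp?target={labels['__param_target']}")

print(build_scrape_url("AFACG1"))
# -> https://some.kube.server.name/api/snmp-exporter/snmp?target=AFACG1
```

The key point is that __address__ must be a plain host (optionally with a port), while the scheme and path go into `scheme` and `metrics_path`.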
I am trying to set the Docker log tag for an Amazon ECS task definition with Ansible, but unfortunately I am getting the error shown below.
I want the container name to show up in the Docker logs.
playbook.yml:
tasks:
  - name: Create task definition
    ecs_taskdefinition:
      containers:
        - name: hello-world-1
          cpu: "2"
          essential: true
          image: "nginx"
          memory: "128"
          portMappings:
            - containerPort: "80"
              hostPort: "0"
          logConfiguration:
            logDriver: syslog
            options:
              syslog-address: udp://127.0.0.1:514
              tag: '{{.Name}}'
      family: "{{ taskfamily_name }}"
      state: present
    register: task_output
error:
TASK [Create task definition] ***************************************************************************
task path: /home/ubuntu/ansible/ecs_random.yml:14
fatal: [localhost]: FAILED! => {
"msg": "template error while templating string: unexpected '.'. String: {{.Name}}"
}
The following expression works for me:
tag: "{{ '{{' }}.Name{{ '}}' }}"
task:
tasks:
  - name: Create task definition
    ecs_taskdefinition:
      containers:
        - name: hello-world-1
          cpu: "2"
          essential: true
          image: "nginx"
          memory: "128"
          portMappings:
            - containerPort: "80"
              hostPort: "0"
          logConfiguration:
            logDriver: syslog
            options:
              syslog-address: udp://127.0.0.1:514
              tag: "{{ '{{' }}.Name{{ '}}' }}"
Related Question:
Escaping double curly braces in Ansible
Related Documentation:
http://jinja.pocoo.org/docs/dev/templates/#escaping
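As an alternative to the escaping dance (my own suggestion, not from the original answer), Ansible's `!unsafe` YAML tag marks a value so Jinja2 never templates it, letting Docker's Go template reach the log driver untouched:

```yaml
# Sketch: !unsafe disables Jinja2 templating for this scalar,
# so {{.Name}} is passed through literally to the syslog driver.
logConfiguration:
  logDriver: syslog
  options:
    syslog-address: udp://127.0.0.1:514
    tag: !unsafe '{{.Name}}'
```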