Redirection in Envoy - Docker

I'm testing an Envoy configuration in my staging environment. I need to redirect to a custom page "/oops" whenever a 5xx error occurs while calling test1.com. The path "http://test1.com/oops" is directly accessible. Can anybody suggest ideas?
```yaml
static_resources:
  listeners:
  - name: test-listener
    address:
      socket_address: { address: 0.0.0.0, port_value: 30000 }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          access_log:
          - name: envoy.file_access_log
            config:
              path: "/dev/stdout"
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["test1.com"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: test1_service }
            - name: local_service2
              domains: ["test2.com"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: google_service }
          http_filters:
          - name: envoy.router
  clusters:
  - name: test1_service
    connect_timeout: 0.25s
    type: LOGICAL_DNS
    dns_lookup_family: V4_ONLY
    lb_policy: ROUND_ROBIN
    hosts: [{ socket_address: { address: 172.17.0.3, port_value: 80 }}]
  - name: google_service
    connect_timeout: 0.25s
    type: LOGICAL_DNS
    dns_lookup_family: V4_ONLY
    lb_policy: ROUND_ROBIN
    hosts: [{ socket_address: { address: google.com, port_value: 80 }}]
```

You can use HTTP routing in Envoy to accomplish this. Also refer to the `route.VirtualHost` configuration.
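Route matching alone won't rewrite an upstream 5xx response into a redirect, though. One commonly used approach, shown here only as a sketch (it is not part of the answer above, and the exact filter API should be verified against your Envoy version), is Envoy's Lua HTTP filter, which can inspect the response status and turn a 5xx into a redirect to the custom page:

```yaml
http_filters:
# Assumption: the Lua filter is available in this Envoy build; it runs before
# envoy.router in the filter chain and sees every response on the way out.
- name: envoy.lua
  config:
    inline_code: |
      function envoy_on_response(response_handle)
        local status = tonumber(response_handle:headers():get(":status"))
        if status and status >= 500 and status < 600 then
          -- Rewrite the upstream 5xx into a temporary redirect to the custom page
          response_handle:headers():replace(":status", "302")
          response_handle:headers():add("location", "/oops")
        end
      end
- name: envoy.router
```

This keeps the routing table untouched; the redirect is applied uniformly to any 5xx that passes through the connection manager.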

Related

SAM template for an EventBridge that triggers SQS

I have an EventBridge rule that receives events, and I want to publish them to an SQS queue that triggers a Lambda function, using a SAM template. Putting events on EventBridge works, but the SQS queue is not being triggered by EventBridge. Do I have any errors in the following YAML?
```yaml
eventSqsQueue:
  Type: AWS::SQS::Queue

eventSynchronizer:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: build/eventSynchronizer
    Handler: eventSynchronizer.Handler
    Events:
      MySQSEvent:
        Type: SQS
        Properties:
          Queue: !GetAtt eventSqsQueue.Arn
          BatchSize: 10

eventEventRule:
  Type: AWS::Events::Rule
  Properties:
    EventPattern:
      account:
        - !Sub '${AWS::AccountId}'
      source:
        - "microserviceName"
      DetailType:
        - "event Created"
        - "event Updated"
        - "event Deleted"
    Targets:
      - Arn: !GetAtt eventSqsQueue.Arn
        Id: "SQSqueue"

EventBridgeToToSqsPolicy:
  Type: AWS::SQS::QueuePolicy
  Properties:
    PolicyDocument:
      Statement:
        - Effect: Allow
          Principal:
            Service: events.amazonaws.com
          Action: SQS:SendMessage
          Resource: !GetAtt eventSqsQueue.Arn
    Queues:
      - Ref: eventSqsQueue
```
`DetailType` should be `detail-type`, and some of the extra code can be deleted. Here is the final solution:
```yaml
eventSqsQueue:
  Type: AWS::SQS::Queue

eventSynchronizer:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: build/eventSynchronizer
    Handler: eventSynchronizer.Handler
    # ReservedConcurrentExecutions: 1
    Events:
      MySQSEvent:
        Type: SQS
        Properties:
          Queue: !GetAtt eventSqsQueue.Arn
          BatchSize: 10

eventEventRule:
  Type: AWS::Events::Rule
  Properties:
    Description: "eventEventRule"
    EventPattern:
      source:
        - "microserviceName"
      detail-type:
        - "event Created"
        - "event Updated"
        - "event Deleted"
    Targets:
      - Arn: !GetAtt eventSqsQueue.Arn
        Id: "SQSqueue"

EventBridgeToToSqsPolicy:
  Type: AWS::SQS::QueuePolicy
  Properties:
    PolicyDocument:
      Statement:
        - Effect: Allow
          Principal:
            Service: events.amazonaws.com
          Action: SQS:SendMessage
          Resource: !GetAtt eventSqsQueue.Arn
    Queues:
      - Ref: eventSqsQueue
```
If you want to check your template for errors, use:

```shell
$ sam validate
```

More info here:
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-validate.html
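To verify the rule end to end, you can send a test event whose `Source` and `DetailType` match the pattern (the `Detail` payload here is a made-up example), e.g. with `aws events put-events --entries file://entry.json`:

```json
[
  {
    "Source": "microserviceName",
    "DetailType": "event Created",
    "Detail": "{\"id\": \"123\"}"
  }
]
```

If the pattern matches, the event should land on the SQS queue and trigger the Lambda function.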

POST request not working in Express app deployed in Kubernetes locally with Minikube

When I send a request to the POST handler in the Express app, it returns Cannot POST /, even though the POST handler is defined there. When I run this app through docker-compose it works perfectly, but not in a Kubernetes cluster using Minikube. Here is the Express app code.
```javascript
const keys = require('./keys');

// Express App Setup
const express = require('express');
const bodyParser = require('body-parser');
const cors = require('cors');

const app = express();
app.use(cors());
app.use(bodyParser.json());

// Postgres Client Setup
const { Pool, Client } = require('pg');
const pgClient = new Pool({
  user: keys.pgUser,
  host: keys.pgHost,
  database: keys.pgDatabase,
  password: keys.pgPassword,
  port: keys.pgPort
});
pgClient.on('error', () => console.log('Lost PG connection'));
pgClient
  .query('CREATE TABLE IF NOT EXISTS values (number INT)')
  .catch(err => console.log(err));

// Redis Client Setup
const redis = require('redis');
const redisClient = redis.createClient({
  host: keys.redisHost,
  port: keys.redisPort,
  retry_strategy: () => 1000
});
const redisPublisher = redisClient.duplicate();

// Express route handlers
app.get('/', (req, res) => {
  res.send('Hi');
});

app.get('/values/all', async (req, res) => {
  const { rows } = await pgClient.query('SELECT * from values');
  console.log('DATA IN POSTGRES', rows);
  res.send(rows);
});

app.get('/values/current', async (req, res) => {
  redisClient.hgetall('values', (err, values) => {
    res.send(values);
  });
});

app.post('/data', (req, res) => {
  res.send(req.body.data);
});

app.post('/values', (req, res) => {
  console.log('FORM DATA', req.body);
  const index = req.body.index;
  if (parseInt(index) > 40) {
    return res.status(422).send('Index too high');
  }
  redisClient.hset('values', index, 'Nothing yet!');
  redisPublisher.publish('insert', index);
  pgClient.query('INSERT INTO values(number) VALUES($1)', [index], (err, ress) => {
    if (err) {
      console.log('PG ERROR ON INSERT');
    } else {
      console.log('PG DATA', ress.rows[0]);
      res.send({ working: true });
    }
  });
});

app.listen(5000, err => {
  console.log('Listening');
});
```
Here is the server deployment config
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      component: server
  template:
    metadata:
      labels:
        component: server
    spec:
      containers:
        - name: server
          image: rajneesh4736/multi-server:v4.0.3
          ports:
            - containerPort: 5000
          env:
            - name: REDIS_HOST
              value: redis-cluster-ip-service
            - name: REDIS_PORT
              value: "6379"
            - name: PGUSER
              value: postgres
            - name: PGHOST
              value: postgres-cluster-ip-service
            - name: PGPORT
              value: "5432"
            - name: PGDATABASE
              value: postgres
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: pgpassword
                  key: PGPASSWORD
```
Here is the server cluster IP config
```yaml
apiVersion: v1
kind: Service
metadata:
  name: server-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: server
  ports:
    - port: 5000
      targetPort: 5000
```
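The question doesn't show how the app is being reached from outside the cluster; if traffic goes through an ingress or another proxy, the POST may never arrive at this service. While debugging, one way to hit the pods directly is a temporary NodePort service (a sketch only; the service name and `nodePort` value are arbitrary):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: server-nodeport-service
spec:
  type: NodePort
  selector:
    component: server
  ports:
    - port: 5000
      targetPort: 5000
      nodePort: 31515   # any free port in the 30000-32767 NodePort range
```

With this applied, `minikube service server-nodeport-service --url` prints a URL that can be exercised with `curl -X POST` to confirm whether the handler itself works inside the cluster.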

Filebeat + Kubernetes + Elasticsearch not saving specific fields

I created a namespace to collect logs with Filebeat and save them to Elasticsearch. Why are the Kubernetes fields, like the example below, not being saved to Elasticsearch?

Kubernetes fields:
```json
"kubernetes" : {
  "labels" : {
    "app" : "MY-APP",
    "pod-template-hash" : "959f54cd",
    "serving" : "true",
    "version" : "1.0",
    "visualize" : "true"
  },
  "pod" : {
    "uid" : "e20173cb-3c5f-11ea-836e-02c1ee65b375",
    "name" : "MY-APP-959f54cd-lhd5p"
  },
  "node" : {
    "name" : "ip-xxx-xx-xx-xxx.ec2.internal"
  },
  "container" : {
    "name" : "istio"
  },
  "namespace" : "production",
  "replicaset" : {
    "name" : "MY-APP-959f54cd"
  }
}
```
Currently it is being saved like this:
```json
"_source" : {
  "@timestamp" : "2020-01-23T12:33:14.235Z",
  "ecs" : {
    "version" : "1.0.0"
  },
  "host" : {
    "name" : "worker-node1"
  },
  "agent" : {
    "hostname" : "worker-node1",
    "id" : "xxxxx-xxxx-xxx-xxxx-xxxxxxxxxxxxxx",
    "version" : "7.1.1",
    "type" : "filebeat",
    "ephemeral_id" : "xxxx-xxxx-xxxx-xxxxxxxxxxxxx"
  },
  "log" : {
    "offset" : xxxxxxxx,
    "file" : {
      "path" : "/var/lib/docker/containers/xxxx96ec2bfd9a3e4f4ac83581ad90/7fd55e1249aa009df3f8e3250c967bbe541c9596xxxxxac83581ad90-json.log"
    }
  },
  "stream" : "stdout",
  "message" : "xxxxxxxx",
  "input" : {
    "type" : "docker"
  }
}
```
Here is my filebeat.config:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        # Mounted `filebeat-inputs` configmap:
        path: ${path.config}/inputs.d/*.yml
        # Reload inputs configs as they change:
        reload.enabled: false
        multiline.pattern: '^[[:space:]]'
        multiline.negate: false
        multiline.match: after
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false

    # To enable hints based autodiscover, remove `filebeat.config.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      hints.enabled: true

    processors:
      - add_cloud_metadata:
      - add_kubernetes_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      protocol: "http"

    setup.ilm.enabled: false
    ilm.enabled: false
    xpack.monitoring:
      enabled: true
```
The DaemonSet is shown below:
```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      hostNetwork: true
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat-oss:7.1.1
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: xxxxxxxxxxxxx
        - name: ELASTICSEARCH_PORT
          value: "9200"
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
```
Before applying the config to Kubernetes, I removed every Filebeat registry from Elasticsearch.
As already stated in my comment, it looks like your ConfigMap is missing the `paths:` to the containers' logs. It should be something like this:

```yaml
- type: container
  paths:
    - /var/log/containers/*${data.kubernetes.container.id}.log
```

Compare your config file with this one. I hope it helps.
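Following that suggestion, a complete `filebeat-inputs` ConfigMap might look like the sketch below. This is modeled on Elastic's reference Kubernetes manifest, not taken from the question, so verify the input type and paths against your Filebeat version:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    # Read container logs and enrich each event with pod/namespace/labels metadata
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
```

Note that `/var/log/containers` holds symlinks into `/var/lib/docker/containers`, so the DaemonSet may also need `/var/log` mounted for the input to resolve them.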
I had the same problem; I resolved it by removing the `hostNetwork: true` setting from the DaemonSet. With host networking, the pod name was the same as the node name, which you can see in the Filebeat startup log.

Docker Hub in Kubernetes gives "unauthorized: incorrect username or password" with the right credentials

I'm trying to pull a private image from Docker Hub, and every time I get the error "ImagePullBackOff". Using describe on the pods, I see the error "unauthorized: incorrect username or password". I created the secret in the cluster using the following guide: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ using the CLI method with the correct credentials (I checked, and I can log in on the website with them). This is my YAML file.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-typescript
  labels:
    app: app-typescript
spec:
  selector:
    matchLabels:
      app: app-typescript
  replicas: 1
  minReadySeconds: 15
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: app-typescript
    spec:
      containers:
        - name: api
          image: dockerhuborg/api:latest
          imagePullPolicy: Always
          env:
            - name: "ENV_TYPE"
              value: "production"
            - name: "NODE_ENV"
              value: "production"
            - name: "MONGODB_URI"
              value: "mongodb://mongo-mongodb/db"
          ports:
            - containerPort: 4000
      imagePullSecrets:
        - name: regcred
```
I found a solution. Apparently the problem is that Docker Hub uses different domains for login and for pulling containers, so you must edit the secret created with the kubectl command and replace the base64 of `.dockerconfigjson` with a base64-encoded version of this JSON (I may have added more domains than strictly necessary, but after about two days of debugging I didn't have the patience to narrow down the exact ones):
```json
{
  "auths": {
    "https://index.docker.io/v1/": {
      "username": "user",
      "password": "password",
      "email": "yourdockeremail@gmail.com",
      "auth": "base64 of string user:password"
    },
    "auth.docker.io": {
      "username": "user",
      "password": "password",
      "email": "yourdockeremail@gmail.com",
      "auth": "base64 of string user:password"
    },
    "registry.docker.io": {
      "username": "user",
      "password": "password",
      "email": "yourdockeremail@gmail.com",
      "auth": "base64 of string user:password"
    },
    "docker.io": {
      "username": "user",
      "password": "password",
      "email": "yourdockeremail@gmail.com",
      "auth": "base64 of string user:password"
    },
    "https://registry-1.docker.io/v2/": {
      "username": "user",
      "password": "password",
      "email": "yourdockeremail@gmail.com",
      "auth": "base64 of string user:password"
    },
    "registry-1.docker.io/v2/": {
      "username": "user",
      "password": "password",
      "email": "yourdockeremail@gmail.com",
      "auth": "base64 of string user:password"
    },
    "registry-1.docker.io": {
      "username": "user",
      "password": "password",
      "email": "yourdockeremail@gmail.com",
      "auth": "base64 of string user:password"
    },
    "https://registry-1.docker.io": {
      "username": "user",
      "password": "password",
      "email": "yourdockeremail@gmail.com",
      "auth": "base64 of string user:password"
    }
  }
}
```
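For reference, the "base64 of string user:password" placeholder in each entry is computed exactly as it sounds (the credentials here are placeholders, not real ones):

```shell
# The "auth" field in each .dockerconfigjson entry is base64("username:password").
# -n is important: a trailing newline would corrupt the encoded credential.
echo -n 'user:password' | base64
# prints: dXNlcjpwYXNzd29yZA==
```

The whole JSON document is then base64-encoded the same way before being placed in the secret's `.dockerconfigjson` field.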

Ansible - define a variable's value depending on another variable

Today I have a loop that allows me to start multiple Docker containers:
```yaml
- name: start container current
  docker_container:
    name: "{{ item.name }}"
    image: "{{ item.name }}:{{ item.version }}"
    state: started
    recreate: true
    ports:
      - "{{ item.ports }}"
    volumes:
      - /opt/application/i99/{{ item.type }}/logs:/opt/application/i99/{{ item.type }}/logs
    env_file: /opt/application/i99/{{ item.type }}/{{ item.name }}/{{ item.name }}-PreProd-config.list
    env:
      LOG_FILE_WS: "/opt/application/i99/{{ item.type }}/logs/{{ hostname }}_WS.log"
  with_items:
    - { name: 'backend', ports: '8000:8000', type: 'current', version: '{{RCD_VERSION_CURRENT}}' }
    - { name: 'connecteur', ports: '8400:8400', type: 'current', version: '{{RCD_VERSION_CURRENT}}' }
    - { name: 'api-alerting', ports: '8100:8100', type: 'current', version: '{{RCD_VERSION_CURRENT}}' }
    - { name: 'api-tracking', ports: '8200:8200', type: 'current', version: '{{RCD_VERSION_CURRENT}}' }
```
I have an extra variable `{{ RCD_APIS }}` that contains a comma-separated list of all my container names. I would like to loop over that list, define the following variables conditionally on the name, and run the containers. Vars to define: ports, type, version. I want to do something like:
```yaml
- name: start container current
  docker_container:
    name: "{{ item }}"
    image: "{{ item }}:{{ version }}"
    state: started
    user: adi99api
    recreate: true
    ports:
      - "{{ ports }}"
    volumes:
      - /opt/application/i99/{{ type }}/logs:/opt/application/i99/{{ type }}/logs
    env_file: /opt/application/i99/{{ type }}/{{ item }}/{{ name }}-PreProd-config.list
    env:
      LOG_FILE_WS: "/opt/application/i99/{{ type }}/logs/{{ hostname }}_WS.log"
  with_items: "{{ RCD_APIS.split(',') }}"
  when: ( item == "backend", ports: '8000:8000', type: 'current', version: '{{RCD_VERSION_CURRENT}}') or
        ( item == "connecteur", ports: '8400:8400', type: 'pilote', version: '{{RCD_VERSION_PILOTE}}')
```
```yaml
# in a vars file, or a `vars` section
---
docker_containers_config:
  backend:
    ports: '8000:8000'
    type: current
    version: '{{RCD_VERSION_CURRENT}}'
  connecteur:
    ports: '8400:8400'
    type: current
    version: '{{RCD_VERSION_CURRENT}}'
  api-alerting:
    ports: '8100:8100'
    type: 'current'
    version: '{{RCD_VERSION_CURRENT}}'
  api-tracking:
    ports: '8200:8200'
    type: 'current'
    version: '{{RCD_VERSION_CURRENT}}'
```
```yaml
# In your tasks
- name: start container current
  docker_container:
    name: "{{ item }}"
    image: "{{ item }}:{{ docker_containers_config[item].version }}"
    state: started
    recreate: true
    ports:
      - "{{ docker_containers_config[item].ports }}"
    volumes:
      # `item` is now a plain string, so the lookup is needed here too
      - /opt/application/i99/{{ docker_containers_config[item].type }}/logs:/opt/application/i99/{{ docker_containers_config[item].type }}/logs
    env_file: /opt/application/i99/{{ docker_containers_config[item].type }}/{{ item }}/{{ item }}-PreProd-config.list
    env:
      LOG_FILE_WS: "/opt/application/i99/{{ docker_containers_config[item].type }}/logs/{{ hostname }}_WS.log"
  with_items: "{{ RCD_APIS.split(',') }}"
```
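For the `split(',')` loop to work, `RCD_APIS` only needs to be a comma-separated string of container names. For example (values assumed from the question, not prescribed), defined in a vars file or passed as an extra var:

```yaml
# group_vars/all.yml, or on the command line:
#   ansible-playbook play.yml -e "RCD_APIS=backend,connecteur,api-alerting,api-tracking"
RCD_APIS: "backend,connecteur,api-alerting,api-tracking"
```

Any name in the list that is missing from `docker_containers_config` will make the lookup fail, so the two should be kept in sync.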
