Serverless Lambda: socket hang up with Docker Compose

We have a Lambda with a POST event. Before deploying we need to test the whole flow locally, so we're using serverless-offline. Our main app runs inside a docker-compose setup, and we are trying to call this Lambda from it. We're getting this error: Error: socket hang up
I first thought it could be a Docker misconfiguration in the Dockerfile or docker-compose.yml, but we tested an Express app using the same Dockerfile as the Lambda and I can hit that endpoint from the main app. So now we know it is not a Docker issue but rather something in the serverless.yml:
service: csv-report-generator

custom:
  serverless-offline:
    useDocker: true
    dockerHost: host.docker.internal
    httpPort: 9000
    lambdaPort: 9000

provider:
  name: aws
  runtime: nodejs14.x
  stage: local

functions:
  payments:
    handler: index.handler
    events:
      - http:
          path: /my-endpoint
          method: post
          cors:
            origin: '*'

plugins:
  - serverless-offline
  - serverless-dotenv-plugin
Here is our current configuration; we've tried different variations without success. Any idea how to hit the lambda?

Related

How to avoid dev path when deploying to AWS with serverless locally?

When I deploy my api to AWS with serverless ($ serverless deploy), then there is always the stage added to the api URLs, for example:
foo.execute-api.eu-central-1.amazonaws.com/dev/v1/myfunc
Same thing happens when I run this locally ($ serverless offline):
localhost/dev/v1/myfunc
This is fine when deploying to AWS but for localhost I do not want this.
Question: Is there a way to remove the dev part in the url for localhost only so that the url looks like this?:
localhost/v1/myfunc
I already tried to remove the stage setting in serverless.yml but the default seems to be dev, so it doesn't matter if I specify the stage there or not.
service: my-service
frameworkVersion: "3"

provider:
  name: aws
  runtime: nodejs16.x
  stage: dev
  region: eu-central-1
  apiGateway:
    apiKeys:
      - name: my-apikey
        value: ${ssm:my-apikey}

functions:
  myfunc:
    handler: src/v1/myfunc/index.get
    events:
      - http:
          path: /v1/myfunc
          method: get
          private: true

plugins:
  - serverless-esbuild
  - serverless-offline
  - serverless-dotenv-plugin
The solution was to use --noPrependStageInUrl like this:
$ serverless offline --noPrependStageInUrl
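If you'd rather not pass the flag on every invocation, serverless-offline also accepts its CLI options in `serverless.yml`; a minimal sketch, assuming the same `noPrependStageInUrl` option set under the plugin's `custom` block:

```yaml
# serverless.yml — sketch: setting the serverless-offline CLI option
# noPrependStageInUrl permanently instead of passing it each run
custom:
  serverless-offline:
    noPrependStageInUrl: true
```

With this in place, a plain `serverless offline` serves `localhost/v1/myfunc` without the stage prefix, while deployed URLs are unaffected.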

Error in using `gcloud.auth.application-default.login` cmd in gcloud CLI

I am learning Kubernetes in GCP. To deploy the cluster into google cloud, I used skaffold. Following is the YAML file:
apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  kubectl:
    manifests:
      - ./infra/k8s/*
build:
  # local:
  #   push: false
  googleCloudBuild:
    projectId: ticketing-dev-368011
  artifacts:
    - image: gcr.io/{project-id}/auth
      context: auth
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: "src/**/*.ts"
            dest: .
In the Google Cloud CLI, when I run the command skaffold dev, this error pops up:
getting cloudbuild client: google: could not find default credentials.
Then I ran the command gcloud auth application-default login in my local terminal. I was prompted for consent in the browser, gave it, and was redirected to a "successful authentication" page. But back in the terminal there is an error message:
Error saving Application Default Credentials: Unable to write file [C:\Users\Admin\AppData\Roaming\gcloud\application_default_credentials.json]: [Errno 9] Bad file descriptor
And I found that no such file was created in the above directory. Can someone please help me with what I have done wrong?

How to collect docker logs using Filebeats?

I am trying to collect this kind of logs from a docker container:
[1620579277][642e7adc-74e1-4b89-a705-d271846f7ebc][channel1]
[afca2a976fa482f429fff4a38e2ea49f337a8af1b5dca0de90410ecc792fd5a4][usecase_cc][set] ex02 set
[1620579277][ac9f99b7-0126-45ed-8a74-6adc3a9d6bc5][channel1]
[afca2a976fa482f429fff4a38e2ea49f337a8af1b5dca0de90410ecc792fd5a4][usecase_cc][set][Transaction] Aval
=201 Bval =301 after performing the transaction
[1620579277][9211a9d4-3fe6-49db-b245-91ddd3a11cd3][channel1]
[afca2a976fa482f429fff4a38e2ea49f337a8af1b5dca0de90410ecc792fd5a4][usecase_cc][set][Transaction]
Transaction makes payment of X units from A to B
[1620579280][0391d2ce-06c1-481b-9140-e143067a9c2d][channel1]
[1f5752224da4481e1dc4d23dec0938fd65f6ae7b989aaa26daa6b2aeea370084][usecase_cc][get] Query Response:
{"Name":"a","Amount":"200"}
I have set the filebeat.yml in this way:
filebeat.inputs:
- type: container
  paths:
    - '/var/lib/docker/containers/container-id/container-id.log'
  processors:
    - add_docker_metadata:
        host: "unix:///var/run/docker.sock"
    - dissect:
        tokenizer: '{"log":"[%{time}][%{uuid}][%{channel}][%{id}][%{chaincode}][%{method}] %{specificinfo}\"\n%{}'
        field: "message"
        target_prefix: ""

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  username: "elastic"
  password: "changeme"
  indices:
    - index: "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"

logging.json: true
logging.metrics.enabled: false
Although elasticsearch and kibana are deployed successfully, I am getting this error when a new log is generated:
{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index
[filebeat]","resource.type":"index_or_alias","resource.id":"filebeat","index_uuid":"_na_",
"index":"filebeat"}],"type":"index_not_found_exception","reason":"no such index
[filebeat]","resource.type":"index_or_alias","resource.id":"filebeat","index_uuid":"_na_",
"index":"filebeat"},"status":404}
Note: I am using version 7.12.1, and Kibana, Elasticsearch and Logstash are deployed in Docker.
I ended up using Logstash as an alternative to Filebeat. The root cause, however, was a mistake in the Filebeat configuration file: the path the logs are read from was mapped incorrectly. To solve the issue:
- I created an environment variable pointing to the right place;
- I passed that environment variable as part of the Docker volume;
- I pointed the path in the configuration file to the path of the volume inside the container.
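The steps above can be sketched in docker-compose terms; note that the `LOGS_PATH` variable name and the mount paths here are hypothetical, not taken from the original setup:

```yaml
# docker-compose.yml — sketch of the three steps above
# step 1: in .env (or the shell): LOGS_PATH=/var/lib/docker/containers
services:
  filebeat:
    image: docker.elastic.co/beats/filebeat:7.12.1
    volumes:
      # step 2: pass the env var as part of the volume mapping
      - ${LOGS_PATH}:/usr/share/dockerlogs:ro
      - ./filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
```

Step 3 is then to make `paths:` in filebeat.yml point at `/usr/share/dockerlogs/...`, i.e. the volume path inside the container rather than the host path.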

ddev: Call the endpoint of a certain port of the web container from another container

I set up a Shopware 6 project with ddev. Now I want to write cypress tests for one of my plugins. The shopware testsuite starts a node express server on port 8005 in the web container. I have configured the port for ddev so that I can open the express endpoint in my browser: http://my.ddev.site:8005/cleanup. That is working.
For cypress I have created a new ddev container with a new docker-compose file:
version: '3.6'
services:
  cypress:
    container_name: ddev-${DDEV_SITENAME}-cypress
    image: cypress/included:4.10.0
    tty: true
    ipc: host
    links:
      - web:web
    environment:
      - CYPRESS_baseUrl=https://web
      - DISPLAY
    labels:
      com.ddev.site-name: ${DDEV_SITENAME}
      com.ddev.approot: $DDEV_APPROOT
    volumes:
      # Project root
      - ../shopware:/project
      # Storefront and Administration
      - ../shopware/vendor/shopware/platform/src/Storefront/Resources/app/storefront/test/e2e:/e2e-Storefront
      - ../shopware/vendor/shopware/platform/src/Administration/Resources/app/administration/test/e2e:/e2e-Administration
      # Custom plugins
      - ../shopware/custom/plugins/MyPlugin/src/Resources/app/administration/test/e2e:/e2e-MyPlugin
      # for Cypress to communicate with the X11 server pass this socket file
      # in addition to any other mapped volumes
      - /tmp/.X11-unix:/tmp/.X11-unix
    entrypoint: /bin/bash
I can now successfully open the cypress interface and I see my tests. The problem is that before every cypress test is executed, the express endpoint is called (with the URL from above), and the cypress container seems to have no access to the endpoint. This is the output:
cy.request() failed trying to load:
http://my.ddev.site:8005/cleanup
We attempted to make an http request to this URL but the request failed without a response.
We received this error at the network level:
> Error: connect ECONNREFUSED 127.0.0.1:8005
-----------------------------------------------------------
The request we sent was:
Method: GET
URL: http://my.ddev.site:8005/cleanup
So I can call this endpoint in my browser, but cypress can't. Is there any configuration in the cypress container missing to call the port 8005 from the web container?
You need to add this to the cypress service:
external_links:
  - "ddev-router:${DDEV_HOSTNAME}"
and then your http URL will be accessed through the router via ".ddev.site".
If you need a trusted https URL it's a little more complicated, but for http this should work fine.
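Putting that together with the compose file from the question, the cypress service would look roughly like this (abridged to the relevant keys):

```yaml
# docker-compose.cypress.yaml — abridged sketch with the fix applied
services:
  cypress:
    image: cypress/included:4.10.0
    external_links:
      # resolve *.ddev.site hostnames through the ddev router container,
      # so http://my.ddev.site:8005 reaches the web container's exposed port
      - "ddev-router:${DDEV_HOSTNAME}"
    environment:
      - CYPRESS_baseUrl=https://web
```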

Kubernetes certbot standalone not working

I'm trying to generate an SSL certificate with the certbot/certbot Docker container in Kubernetes. I am using a Job controller for this purpose, which looks like the most suitable option. When I run the standalone option, I get the following error:
Failed authorization procedure. staging.ishankhare.com (http-01):
urn:ietf:params:acme:error:connection :: The server could not connect
to the client to verify the domain :: Fetching
http://staging.ishankhare.com/.well-known/acme-challenge/tpumqbcDWudT7EBsgC7IvtSzZvMAuooQ3PmSPh9yng8:
Timeout during connect (likely firewall problem)
I've made sure that this isn't due to misconfigured DNS entries by running a simple nginx container, and it resolves properly. Following is my Jobs file:
apiVersion: batch/v1
kind: Job
metadata:
  #labels:
  #  app: certbot-generator
  name: certbot
spec:
  template:
    metadata:
      labels:
        app: certbot-generate
    spec:
      volumes:
        - name: certs
      containers:
        - name: certbot
          image: certbot/certbot
          command: ["certbot"]
          #command: ["yes"]
          args: ["certonly", "--noninteractive", "--agree-tos", "--staging", "--standalone", "-d", "staging.ishankhare.com", "-m", "me#ishankhare.com"]
          volumeMounts:
            - name: certs
              mountPath: "/etc/letsencrypt/"
            #- name: certs
            #  mountPath: "/opt/"
          ports:
            - containerPort: 80
            - containerPort: 443
      restartPolicy: "OnFailure"
and my service:
apiVersion: v1
kind: Service
metadata:
  name: certbot-lb
  labels:
    app: certbot-lb
spec:
  type: LoadBalancer
  loadBalancerIP: 35.189.170.149
  ports:
    - port: 80
      name: "http"
      protocol: TCP
    - port: 443
      name: "tls"
      protocol: TCP
  selector:
    app: certbot-generator
the full error message is something like this:
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for staging.ishankhare.com
Waiting for verification...
Cleaning up challenges
Failed authorization procedure. staging.ishankhare.com (http-01): urn:ietf:params:acme:error:connection :: The server could not connect to the client to verify the domain :: Fetching http://staging.ishankhare.com/.well-known/acme-challenge/tpumqbcDWudT7EBsgC7IvtSzZvMAuooQ3PmSPh9yng8: Timeout during connect (likely firewall problem)
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: staging.ishankhare.com
Type: connection
Detail: Fetching
http://staging.ishankhare.com/.well-known/acme-challenge/tpumqbcDWudT7EBsgC7IvtSzZvMAuooQ3PmSPh9yng8:
Timeout during connect (likely firewall problem)
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address. Additionally, please check that
your computer has a publicly routable IP address and that no
firewalls are preventing the server from communicating with the
client. If you're using the webroot plugin, you should also verify
that you are serving files from the webroot path you provided.
- Your account credentials have been saved in your Certbot
configuration directory at /etc/letsencrypt. You should make a
secure backup of this folder now. This configuration directory will
also contain certificates and private keys obtained by Certbot so
making regular backups of this folder is ideal.
I've also tried running this as a simple Pod, but that didn't help either. Although I still feel running it as a Job to completion is the way to go.
First, be aware that your Job definition is valid, but the spec.template.metadata.labels.app: certbot-generate value does not match your Service definition's spec.selector.app: certbot-generator: one is certbot-generate, the other certbot-generator. So the pod run by the job controller is never added as an endpoint to the service.
Adjust one or the other, but they have to match, and that might just work :)
Although, I'm not sure using a Service with a selector targeting short-lived pods from a Job controller would work, neither with a simple Pod as you tested. The certbot-randomId pod created by the job (or whatever simple pod you create) takes about 15 seconds total to run/fail, and the HTTP validation challenge is triggered after just a few seconds of the pod life: it's not clear to me that would be enough time for kubernetes proxying to be already working between the service and the pod.
We can safely assume that the Service is actually working because you mentioned that you tested DNS resolution, so you can easily ensure that's not a timing issue by adding a sleep 10 (or more!) to give more time for the pod to be added as an endpoint to the service and being proxied appropriately before the HTTP challenge is triggered by certbot. Just change your Job command and args for those:
command: ["/bin/sh"]
args: ["-c", "sleep 10 && certbot certonly --noninteractive --agree-tos --staging --standalone -d staging.ishankhare.com -m me#ishankhare.com"]
And here too, that might just work :)
That being said, I'd warmly recommend you to use cert-manager which you can install easily through its stable Helm chart: the Certificate custom resource that it introduces will store your certificate in a Secret which will make it straightforward to reuse from whatever K8s resource, and it takes care of renewal automatically so you can just forget about it all.
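For illustration, a cert-manager Certificate for this domain might look like the following sketch (using cert-manager's cert-manager.io/v1 API; the issuer name is hypothetical and would have to reference a ClusterIssuer you've configured):

```yaml
# certificate.yaml — sketch; assumes a ClusterIssuer named letsencrypt-staging exists
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: staging-ishankhare-com
spec:
  # cert-manager stores the resulting key + cert in this Secret
  secretName: staging-ishankhare-com-tls
  dnsNames:
    - staging.ishankhare.com
  issuerRef:
    name: letsencrypt-staging   # hypothetical issuer name
    kind: ClusterIssuer
```

Any Ingress or pod can then mount or reference the `staging-ishankhare-com-tls` Secret, and cert-manager handles renewal on its own.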