How to avoid dev path when deploying to AWS with serverless locally?

When I deploy my API to AWS with Serverless ($ serverless deploy), the stage is always added to the API URLs, for example:
foo.execute-api.eu-central-1.amazonaws.com/dev/v1/myfunc
The same thing happens when I run it locally ($ serverless offline):
localhost/dev/v1/myfunc
This is fine when deploying to AWS but for localhost I do not want this.
Question: Is there a way to remove the dev part of the URL for localhost only, so that the URL looks like this?
localhost/v1/myfunc
I already tried removing the stage setting in serverless.yml, but the default seems to be dev, so it makes no difference whether I specify the stage there or not.
service: my-service
frameworkVersion: "3"

provider:
  name: aws
  runtime: nodejs16.x
  stage: dev
  region: eu-central-1
  apiGateway:
    apiKeys:
      - name: my-apikey
        value: ${ssm:my-apikey}

functions:
  myfunc:
    handler: src/v1/myfunc/index.get
    events:
      - http:
          path: /v1/myfunc
          method: get
          private: true

plugins:
  - serverless-esbuild
  - serverless-offline
  - serverless-dotenv-plugin

The solution was to use --noPrependStageInUrl like this:
$ serverless offline --noPrependStageInUrl
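If you do not want to pass the flag on every run, serverless-offline also reads the same option from the custom section of serverless.yml (assuming a reasonably recent plugin version), for example:

custom:
  serverless-offline:
    noPrependStageInUrl: true

With that in place, a plain $ serverless offline serves localhost/v1/myfunc, while the deployed API keeps its stage prefix.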

Related

Error in using `gcloud.auth.application-default.login` cmd in gcloud CLI

I am learning Kubernetes on GCP. To deploy to the cluster in Google Cloud, I used Skaffold. Following is the YAML file:
apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  kubectl:
    manifests:
      - ./infra/k8s/*
build:
  # local:
  #   push: false
  googleCloudBuild:
    projectId: ticketing-dev-368011
  artifacts:
    - image: gcr.io/{project-id}/auth
      context: auth
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: "src/**/*.ts"
            dest: .
In the Google Cloud CLI, when I run skaffold dev, an error pops up saying:
getting cloudbuild client: google: could not find default credentials.
Then I ran gcloud auth application-default login in my local terminal. I was prompted to give consent in the browser, I gave consent, and the page redirected to a "successful authentication" page. But when I looked at my terminal, there was an error message:
Error saving Application Default Credentials: Unable to write file [C:\Users\Admin\AppData\Roaming\gcloud\application_default_credentials.json]: [Errno 9] Bad file descriptor
And I found that no such file was created in that directory. Can someone please help me figure out what I have done wrong?
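Not a definitive fix, but one way to narrow this down is to point gcloud at a config directory the current shell can definitely write to and then check whether a token can actually be issued; CLOUDSDK_CONFIG and print-access-token are standard gcloud features, while the directory below is only an example:

:: in a Windows cmd shell; C:\gcloud-config is an example path
set CLOUDSDK_CONFIG=C:\gcloud-config
gcloud auth application-default login
:: if the credentials file was written, this prints a token instead of an error
gcloud auth application-default print-access-token

If the token prints, skaffold dev should pick up the credentials via Application Default Credentials.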

Serverless Lambda: Socket hangout Docker Compose

We have a lambda that has a post event. Before deploying we need to test the whole flow locally, so we're using serverless-offline. We have our main app inside a docker-compose setup and we are trying to call this lambda from the main app. We're getting this error: Error: socket hang up
I first thought it could be a Docker configuration issue in the Dockerfile or docker-compose.yml, but we tested with an Express app using the same Dockerfile as the lambda and I can hit that endpoint from the main app. So now we know it is not a Docker issue but rather something in serverless.yml.
service: csv-report-generator

custom:
  serverless-offline:
    useDocker: true
    dockerHost: host.docker.internal
    httpPort: 9000
    lambdaPort: 9000

provider:
  name: aws
  runtime: nodejs14.x
  stage: local

functions:
  payments:
    handler: index.handler
    events:
      - http:
          path: /my-endpoint
          method: post
          cors:
            origin: '*'

plugins:
  - serverless-offline
  - serverless-dotenv-plugin
This is our current configuration; we've tried different variations without success. Any idea how we can hit the lambda?
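One thing worth checking, as a sketch rather than a definitive answer: serverless-offline binds to localhost by default, which containers on the Compose network cannot reach, and httpPort and lambdaPort are set to the same value here, which may conflict. Binding the offline server to all interfaces and calling the stage-prefixed path from the main app would look roughly like this (ports and payload are examples):

# on the host: expose serverless-offline outside localhost
serverless offline --host 0.0.0.0 --httpPort 9000 --lambdaPort 3002

# from a container in the docker-compose network (stage "local" is prepended by default)
curl -X POST http://host.docker.internal:9000/local/my-endpoint -d '{}'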

How to fetch secrets from vault to my jenkins configuration as code installation with helm?

I am trying to deploy Jenkins using Helm with JCasC to get Vault secrets. I am using a local minikube to create my k8s cluster and a local Vault instance on my machine (not in the k8s cluster).
Even though I am using initContainerEnv and ContainerEnv, I am not able to reach the Vault values. For the CASC_VAULT_TOKEN value I am using the Vault root token.
This is the Helm command I run locally:
helm upgrade --install -f values.yml mijenkins jenkins/jenkins
And here is my values.yml file:
controller:
  installPlugins:
    # need to add this configuration-as-code due to a known jenkins issue: https://github.com/jenkinsci/helm-charts/issues/595
    - "configuration-as-code:1414.v878271fc496f"
    - "hashicorp-vault-plugin:latest"
  # passing initial environment values to the basic docker container
  initContainerEnv:
    - name: CASC_VAULT_TOKEN
      value: "my-vault-root-token"
    - name: CASC_VAULT_URL
      value: "http://localhost:8200"
    - name: CASC_VAULT_PATHS
      value: "cubbyhole/jenkins"
    - name: CASC_VAULT_ENGINE_VERSION
      value: "2"
  ContainerEnv:
    - name: CASC_VAULT_TOKEN
      value: "my-vault-root-token"
    - name: CASC_VAULT_URL
      value: "http://localhost:8200"
    - name: CASC_VAULT_PATHS
      value: "cubbyhole/jenkins"
    - name: CASC_VAULT_ENGINE_VERSION
      value: "2"
  JCasC:
    configScripts:
      here-is-the-user-security: |
        jenkins:
          securityRealm:
            local:
              allowsSignup: false
              enableCaptcha: false
              users:
                - id: "${JENKINS_ADMIN_ID}"
                  password: "${JENKINS_ADMIN_PASSWORD}"
And in my local Vault I can see/reach the values:
> vault kv get cubbyhole/jenkins
============= Data =============
Key                       Value
---                       -----
JENKINS_ADMIN_ID          alan
JENKINS_ADMIN_PASSWORD    acosta
Do any of you have an idea what I could be doing wrong?
I haven't used Vault with Jenkins, so I'm not exactly sure about your particular situation, but I am very familiar with how finicky the Jenkins Helm chart is. I was able to configure my securityRealm (with the Google Login plugin) by first creating a k8s secret with the needed values:
kubectl create secret generic googleoauth --namespace jenkins \
--from-literal=clientid=${GOOGLE_OAUTH_CLIENT_ID} \
--from-literal=clientsecret=${GOOGLE_OAUTH_SECRET}
then passing those values into helm chart values.yml via:
controller:
  additionalExistingSecrets:
    - name: googleoauth
      keyName: clientid
    - name: googleoauth
      keyName: clientsecret
then reading them into JCasC like so:
...
JCasC:
  configScripts:
    authentication: |
      jenkins:
        securityRealm:
          googleOAuth2:
            clientId: ${googleoauth-clientid}
            clientSecret: ${googleoauth-clientsecret}
In order for that to work the values.yml also needs to include the following settings:
serviceAccount:
  name: jenkins
rbac:
  readSecrets: true # allows jenkins serviceAccount to read k8s secrets
Note that I am running Jenkins as a k8s serviceAccount called jenkins in the namespace jenkins.
After debugging my Jenkins installation I figured out that the main issue was neither my values.yml nor my JCasC integration, as I was able to see the ContainerEnv values when I went inside my Jenkins pod with:
kubectl exec -ti mijenkins-0 -- sh
So I needed to expose my Vault server so that Jenkins is able to reach it; I used this Vault tutorial to achieve it. In brief, instead of using the normal:
vault server -dev
We need to use:
vault server -dev -dev-root-token-id root -dev-listen-address 0.0.0.0:8200
Then we need to export an environment variable for the vault CLI to address the Vault server.
export VAULT_ADDR=http://0.0.0.0:8200
After that, we need to determine the Vault address to which we are going to direct Jenkins. To do that we need to start a minikube SSH session:
minikube ssh
Within this SSH session, retrieve the value of the Minikube host.
$ dig +short host.docker.internal
192.168.65.2
After retrieving the value, we check the status of the Vault server to verify network connectivity.
$ dig +short host.docker.internal | xargs -I{} curl -s http://{}:8200/v1/sys/seal-status
And now we can connect our Jenkins pod to our Vault; we just need to change CASC_VAULT_URL to http://192.168.65.2:8200 in our values.yml file, like this:
- name: CASC_VAULT_URL
  value: "http://192.168.65.2:8200"

scdf 2.1 k8s config security context non root no fs writable

I need to configure the SCDF 2 Skipper, SCDF, and app pods to run without root and with no writes to the pod filesystem.
I made changes to the config YAMLs:
data:
  application.yaml: |-
    spring:
      cloud:
        skipper:
          server:
            platform:
              kubernetes:
                accounts:
                  default:
                    namespace: default
                    deploymentServiceAccountName: scdf2-server-data-flow
                    securityContext:
                      runAsUser: 2000
                      allowPrivilegeEscalation: false
                    limits:
                      Colla
And SCDF starts and runs as user "2000" (there was a problem with the writable local Maven repo, fixed with an NFS PVC)...
But the app pods always start as the root user, not as user 2000.
I've changed the Skipper config with the securityContext. Any clues?
Thanks.
What you set as deploymentServiceAccountName is one of the Kubernetes deployer properties that can be used for deploying streaming applications or launching task applications.
It looks like the above configuration is not applied to your SCDF or Skipper server configuration properties; it should at least get applied when deploying applications.
For the SCDF and Skipper servers, you need to explicitly set serviceAccountName in your SCDF/Skipper server deployment configurations (not deploymentServiceAccountName; as its name suggests, deploymentServiceAccountName is internally converted into the actual serviceAccountName for the respective stream/task apps when they get deployed).
Got it. We use it in the Skipper/SCDF deployment, not in the pod deployments.
Per your request:
In the SCDF/Skipper deployment config we have:
spec:
  containers:
    - name: {{ template "scdf.fullname" . }}-server
      image: {{ .Values.server.image }}:{{ .Values.server.version }}
      imagePullPolicy: {{ .Values.server.imagePullPolicy }}
      volumeMounts:
        ...
  serviceAccountName: {{ template "scdf.serviceAccountName" . }}
Are you telling me to change the SCDF/Skipper config map for tasks and streams? Is there another property to set in or before the deployment config?
What is the relation between the "serviceAccount" and the user running the process inside the pod?
How is the serviceAccount related to running the process as user "2000"?
I can't understand it.
Please help; it is very important to run without root and without using the local filesystem from the pod, except for "tmp" files.
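For the deployed stream/task app pods specifically, a possible direction, sketched here under the assumption that your deployer version exposes a pod security context property (verify the exact property name against your SCDF/Skipper 2.1 documentation), is to set it on the platform account so it applies to every app that Skipper deploys:

spring:
  cloud:
    skipper:
      server:
        platform:
          kubernetes:
            accounts:
              default:
                deploymentServiceAccountName: scdf2-server-data-flow
                podSecurityContext:   # assumed deployer property, not taken from the question
                  runAsUser: 2000
                  fsGroup: 2000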

Concourse pending for long time before running task

I have a Concourse Pipeline with a Task using a Docker image that is stored in our local Artifactory server. Every time I start the Pipeline it takes about 5 mins until the tasks are finally run. The log looks like this:
I assume that Concourse somehow checks for newer versions of the Docker image. Unfortunately I have no way to debug this, since the log files on the Concourse worker VM offer no usable information.
My Questions:
How can I debug what's going on when Concourse says "preparing build" and the status is "pending"?
Is there any way to prevent Concourse from checking for a newer version of the Docker image? I tagged the Docker image with latest; might this be an issue?
Any further ideas on how I could speed things up?
Here is the detailed configuration of my pipeline and tasks:
pipeline.yml:
---
resources:
  - name: concourse-image
    type: docker-image
    source:
      repository: OUR_DOMAIN/subpath/concourse
      username: ...
      password: ...
      insecure_registries:
        - OUR_DOMAIN
# ...
jobs:
  - name: deploy
    public: true
    plan:
      - get: concourse-image
      - task: create-manifest
        image: concourse-image
        file: concourse/tasks/create-manifest/task.yml
        params:
          # ...
task.yml:
---
platform: linux
inputs:
  - name: git
  - name: concourse
outputs:
  - name: deployment-manifest
run:
  path: concourse/tasks/create-and-upload-cloud-config/task.sh
The reason for this problem was that we pulled the Docker image from an internal Docker registry, which is running on HTTP only. Concourse tried to pull the image using HTTPS and it took around 5 mins until Concourse switched to HTTP (that's what a tcpdump on the worker showed us).
Changing the resource configuration to the following solved the problem:
resources:
  - name: concourse-image
    type: docker-image
    source:
      repository: OUR_SERVER:80/subpath/concourse
      username: docker-readonly
      password: docker-readonly
      insecure_registries:
        - OUR_SERVER:80
So basically the fix was adding the port explicitly to both the repository and the insecure_registries entries.
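For the second question (stopping Concourse from re-checking the image so often), one option worth trying, sketched here and not taken from the original answer, is to pin an explicit tag instead of latest and lower the resource check frequency with check_every:

resources:
  - name: concourse-image
    type: docker-image
    check_every: 24h          # poll the registry less frequently
    source:
      repository: OUR_SERVER:80/subpath/concourse
      tag: 1.0.0              # example tag; pinning avoids resolving "latest" each time
      insecure_registries:
        - OUR_SERVER:80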
