In a legacy AWS Lambda Node.js project there is a publish property in serverless.yml:
service:
  name: service-name
  publish: false
I didn't pay attention to it before, but after upgrading Serverless to v2.0.0 I get this warning on serverless deploy:
Serverless: Configuration warning at 'service': unrecognized property 'publish'
It seems this property has become deprecated, but what did it do? Is it safe to remove it from serverless.yml?
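For reference, if it does turn out to be safe to drop, the cleaned-up block is simply the snippet above without that key (a sketch only; it does not answer what publish originally did):

service:
  name: service-name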
When deploying a Cloud Run service via a YAML file from the command line, it fails with this error:
ERROR: (gcloud.run.services.replace) spec.template.spec.containers should contain exactly 1 container
This is because the documentation for adding an environment variable is wrong, or confusing at best.
The env node should sit inside the container entry alongside image, not directly under the containers node as it says here:
https://cloud.google.com/run/docs/configuring/environment-variables#yaml
This is correct:
- image: us-east1-docker.pkg.dev/proj/repo/image:r1
  env:
  - name: SOMETHING
    value: Xyz
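For context, here is the same snippet embedded in a full Cloud Run service manifest as used with gcloud run services replace; the metadata name is a placeholder, and the image and env values are taken from the snippet above:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service   # placeholder name
spec:
  template:
    spec:
      containers:
      - image: us-east1-docker.pkg.dev/proj/repo/image:r1
        env:
        - name: SOMETHING
          value: Xyz

The env list sits inside the single container entry, next to image, which is presumably why a misplaced env item listed directly under containers triggers the "should contain exactly 1 container" error.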
I get this warning in a GitHub Action:
Serverless: Deprecation warning: Service configuration is expected to be placed in a root of a service (working directory). All paths, function handlers in a configuration are resolved against service directory.
Starting from next major Serverless will no longer permit configurations nested in sub directories.
Does it mean I have to put serverless.yml (Service configuration) in the working directory?
If yes, which one is the working directory?
.github/workflows/deploy.yml
service: myservice
jobs:
  deploy:
    # other steps here #
    - name: Serverless
      uses: serverless/github-action@master
      with:
        args: deploy --config ./src/MyStuff/serverless.yml
I store the serverless.yml in that path because it is related to Stuff.
I want to use multiple serverless.yml.
For AnotherStuff I will create src/AnotherStuff/serverless.yml.
So, what is the error, and the right way to do it?
[edit 21/02/2022]
I'm using the following workaround.
In GitHub Actions I have this job step in my build:
- name: Serverless preparation
  run: |
    # --config wants the serverless files in the root, so I move them there
    echo move configuration file to the root folder
    mv ./serverless/serverless.fsharp.yml ./serverless.fsharp.yml
Essentially, they want the file in the root folder... I put the file in the root folder.
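The deploy step that follows this preparation can then point at the moved file in the root; a sketch, with the step name and action reference mirroring the workflow above:

- name: Serverless
  uses: serverless/github-action@master
  with:
    args: deploy --config ./serverless.fsharp.yml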
We are deploying Java microservices to AWS ('ECR > EKS') using Helm 3 and a Jenkins CI/CD pipeline. However, if we re-run the Jenkins job to re-install the deployment/pod, the pod is not re-installed when there are no code changes; the old pod keeps running as is. The use case here is that the AWS Secrets Manager configuration for a DB secret pulled during deployment has changed, so the service needs to be redeployed by re-triggering the Jenkins job.
Approach 1: https://helm.sh/docs/helm/helm_upgrade/
I tried using 'helm upgrade --install --force ....' as suggested in the Helm 3 upgrade documentation, but it fails with the below error in the Jenkins log:
"Error: UPGRADE FAILED: failed to replace object: Service "dbservice" is invalid: spec.clusterIP: Invalid value: "": field is immutable"
Approach 2: using --recreate-pods from earlier Helm versions
With 'helm upgrade --install --recreate-pods ....', I get the below warning in the Jenkins log:
"Flag --recreate-pods has been deprecated, functionality will no longer be updated. Consult the documentation for other methods to recreate pods"
However, the pod does get recreated. But as we know, --recreate-pods is not a soft restart, so we would have downtime, which breaks the microservice principle.
Helm version used:
version.BuildInfo{Version:"v3.4.0", GitCommit:"7090a89efc8a18f3d8178bf47d2462450349a004", GitTreeState:"clean", GoVersion:"go1.14.10"}
Questions
How can --force be used with helm upgrade in Helm 3 without hitting the above error?
How can a soft restart be achieved now that --recreate-pods is deprecated?
This is nicely described in Helm documentation: https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments
Below is how I configured it. Thanks to @vasili-angapov for pointing me to the correct documentation section.
In deployment.yaml, I added a rollme annotation under the pod template metadata:
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        rollme: {{ randAlphaNum 5 | quote }}
As per the documentation, each invocation of the template function randAlphaNum generates a unique random string, so the annotation value always changes and causes the deployment to roll.
The other way described in the documentation is to use an annotation based on the SHA value of a file, so the deployment rolls whenever that file changes.
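For example, the checksum variant from that same documentation page looks roughly like this, assuming the chart has a templates/configmap.yaml whose changes should trigger a rollout:

kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}

Unlike rollme, this only rolls the deployment when the rendered ConfigMap actually changes, so a plain helm upgrade with no changes leaves the pods alone.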
In the past, Helm recommended using the --recreate-pods flag as another option. This flag was marked as deprecated in Helm 3 in favor of the more declarative method above.
I just reinstalled Fabric Samples v2.2.0 from the Hyperledger Fabric repository according to the documentation.
But when I try to run the asset-transfer-basic application located in the fabric-samples/asset-transfer-basic/application-javascript directory by running node app.js, the wallet is created and an admin and a user are registered. But then it tries to invoke the function as given in app.js and shows this error:
error: [Transaction]: Error: No valid responses from any peers. Errors:
peer=peer0.org1.example.com:7051, status=500, message=error in simulation: failed to execute transaction
aa705c10403cb65cecbd360c13337d03aac97a8f233a466975773586fe1086f6: could not launch chaincode
basic_1.0:b359a077730d7f44d6a437ad49d1da951f6a01c6d1eed4f85b8b1f5a08617fe7: error starting container:
error starting container: API error (404): network _test not found
Response of a transaction to invoke a function
This error never occurred before. But somehow, after reinstalling Docker and the Hyperledger Fabric fabric-samples, it never seems to find the network _test.
N.B.: Before reinstalling, the name of the network was net_test. But now when I run docker network ls it shows a network called docker_test. I am using Windows Subsystem for Linux (WSL) version 1.
NETWORK ID     NAME          DRIVER    SCOPE
b7ac05456f46   bridge        bridge    local
acaa5856b871   docker_test   bridge    local
866f58b9078d   host          host      local
4812f94efb15   none          null      local
How can I fix the issue occurring when I try to run the application?
In my opinion, the CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE setting seems to be wrong.
You can check docker-compose.yaml or core.yaml.
1. docker-compose.yaml
I will explain using fabric-samples/test-network as the target, since that matches your current situation.
You can check CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE in docker-compose.yaml.
Perhaps in your case (fabric-samples/test-network) the value of ${COMPOSE_PROJECT_NAME} was not set properly, so the network mode was set to _test.
Make sure the value is set correctly and change it to your network name.
# hyperledger/fabric-samples/test-network/docker/docker-compose-test-net.yaml
# based on v2.2
...
  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    image: hyperledger/fabric-peer:2.2
    environment:
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      # - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_test
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=docker_test
...
2. core.yaml
If you have not set the value in the peer section of docker-compose.yaml, you need to check the core.yaml referenced by the peer.
You can find the networkMode parameter in core.yaml:
# core.yaml
...
vm:
  docker:
    hostConfig:
      # NetworkMode: host
      NetworkMode: docker_test
...
If neither is set, the default value is used. However, as you see _test being logged, the wrong value has been set in one of the two places, and you need to correct it to the value you intended.
This issue is related to Docker networking, and complements the response from @nezuko above.
Create a file named ".env" in the same directory where your docker-compose file exists.
Add the following line to it:
COMPOSE_PROJECT_NAME=net
Use docker-compose up to update the containers with the new configuration.
Or bring the HL network down (./network.sh down) and up again (./network.sh up), restarting the test-network.
Otherwise you'll still get the same error even after creating the ".env" file.
More explanation about docker networking
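To make the naming concrete: docker-compose prefixes network names with the project name. Assuming the compose file declares a network key called test, as the test-network samples do, a sketch of the relevant pieces looks like this:

# sketch: relevant parts of the compose file (network key assumed from the test-network samples)
networks:
  test:

services:
  peer0.org1.example.com:
    networks:
      - test

With COMPOSE_PROJECT_NAME=net, compose creates the network as net_test; started from the docker directory without the variable, the default project name gives docker_test instead, which is why a peer configured with ${COMPOSE_PROJECT_NAME}_test while the variable is empty ends up looking for a network literally named _test.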
run ./network down
then
export COMPOSE_PROJECT_NAME=net
afterwards
./network start
I copied this from someone else. This one worked for me!
Please create a file named ".env" in the same directory where your docker-compose file exists, and add the following line to the ".env" file:
COMPOSE_PROJECT_NAME=net
This worked for me:
export COMPOSE_PROJECT_NAME=net
I am using the latest Helm stable/jenkins chart, installed on my single-node cluster for testing.
Install the NFS provisioner:
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm install nfs-client-provisioner stable/nfs-client-provisioner --version 1.2.8 --set nfs.server=*** --set nfs.path=/k8snfs --set storageClass.name=nfs --wait
Install stable/jenkins. The only custom values were serviceType and storageClass.
helm install jenkins stable/jenkins -f newJenkins.values -n jenkins
The newJenkins.values file has the following:
master:
  adminPassword: admin
  serviceType: NodePort
  initContainerEnv:
    - name: http_proxy
      value: "http://***:80"
    - name: https_proxy
      value: "http://***:80"
    - name: no_proxy
      value: "***"
  containerEnv:
    - name: http_proxy
      value: "http://***:80"
    - name: https_proxy
      value: "http://***:80"
    - name: no_proxy
      value: "***"
  javaOpts: >-
    -Dhttp.proxyHost=***
    -Dhttp.proxyPort=80
    -Dhttps.proxyHost=***
    -Dhttps.proxyPort=80
persistence:
  storageClass: nfs
Log in to Jenkins and create a Jenkins credential of type "Kubernetes Service Account".
Under "Configure Clouds", I leave all defaults and press "Test Connection". Test fails.
In the credentials dropdown, I chose 'secret-text' and pressed button again. Still fail.
The error reported was.
Error testing connection https://kubernetes.default: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
When I check the pod logs, the only error I see is the following:
2020-05-06 01:35:13.173+0000 [id=19] INFO o.c.j.p.k.KubernetesClientProvider$SaveableListenerImpl#onChange: Invalidating Kubernetes client: kubernetes null
I've been googling for a while and many sites mention service account settings, but nothing works.
$ kubectl version --short
Client Version: v1.12.7+1.2.3.el7
Server Version: v1.12.7+1.2.3.el7
$ helm version --short
v3.1.0+gb29d20b
Is there another step?
That is a common error message reported by the Java Virtual Machine. It occurs when the Java environment does not have the information needed to verify that the HTTPS server presents a valid certificate. Sometimes the certificate is issued by an internal root CA or is a self-signed certificate, which can confuse the JVM because the issuer is not on Java's "trusted" list of certificate providers.
Try adding your Java options; in the values.yaml file it should look like this:
javaOpts: >-
  -Dhttp.proxyHost=***
  -Dhttp.proxyPort=80
  -Dhttps.proxyHost=***
  -Dhttps.proxyPort=80
  -Djavax.net.ssl.trustStore=$JAVA_HOME/jre/lib/security/cacerts
  -Djavax.net.ssl.trustStorePassword=changeit
EDIT:
Try changing the location of the trust store file and add the debug option (-Djavax.net.debug=ssl) to see a more detailed view of the logs; without that parameter we won't be able to see the detailed logs:
javaOpts: >-
  -Dhttp.proxyHost=***
  -Dhttp.proxyPort=80
  -Dhttps.proxyHost=***
  -Dhttps.proxyPort=80
  -Djavax.net.ssl.trustStore=$JAVA_HOME/lib/security/cacerts
  -Djavax.net.ssl.trustStorePassword=changeit
  -Djavax.net.debug=ssl
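If you pass these through the same newJenkins.values file as in the question, they sit under the master key, mirroring the layout shown earlier (a sketch, not chart-specific advice):

master:
  javaOpts: >-
    -Dhttp.proxyHost=***
    -Dhttp.proxyPort=80
    -Dhttps.proxyHost=***
    -Dhttps.proxyPort=80
    -Djavax.net.ssl.trustStore=$JAVA_HOME/lib/security/cacerts
    -Djavax.net.ssl.trustStorePassword=changeit
    -Djavax.net.debug=ssl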
If security is not a core concern on this box, you may go in the Jenkins web UI to Manage Jenkins > Manage Plugins > tab Available and search for the "skip-certificate-check" plugin.
Installing this should fix the issue. Use this plugin with caution, since it is not advisable from a security perspective.
Also, the stable repo is going to be deprecated very soon and is not being updated, so I suggest using the jenkins chart from the Helm Hub.
Please take a look at: certification-path-jenkins, adding-ca-cert, adding-path-certs.