In the CONTRIBUTING guide, there is a section on Docker that says:
This allows you to run the CDK in a CDK-compatible directory with a command like:
What does "CDK-compatible" mean in this context?
In this context, CDK-compatible means the directory contains a valid CDK project, i.e. one with a cdk.json file.
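As a quick sanity check (a sketch only; the project path is hypothetical), the CDK CLI should be able to find cdk.json and list the stacks from such a directory:

cd my-cdk-app
ls cdk.json    # created by cdk init; tells the CLI how to execute the app
npx cdk ls     # lists the stacks if the directory is CDK-compatible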
I am using Spring Boot 2.7.1 with native configuration, following the guide at the link below.
Spring native official doc
My problem is that when running bootBuildImage, the buildpack ["gcr.io/paketo-buildpacks/java-native-image:7.19.0"] tries to download the external dependency paketo-buildpacks/bellsoft-liberica from https://download.bell-sw.com/vm/22.3.0/bellsoft-liberica-vm-core-openjdk17.0.5+8-22.3.0+2-linux-amd64.tar.gz, which is not allowed by the company firewall.
I then found that you can configure dependency-mapping bindings for these dependencies within the required buildpack, at least when using the pack CLI, as described in this pack CLI guide.
But when using the pack CLI directly, the Gradle bootBuildImage task becomes somewhat redundant, and I would then have to use an external tool to build the native container and image. I would like to use only bootBuildImage to map these dependency bindings.
I found the binding function in the Gradle bootBuildImage docs, but I am not sure what string it expects, or whether the path should look like the pack CLI config; I can't find any relevant info.
Here is the bootBuildImage config:
bootBuildImage {
    builder = 'docker.io/paketobuildpacks/builder:tiny'
    runImage = 'docker.io/paketobuildpacks/run:tiny-cnb'
    buildpacks = ['gcr.io/paketo-buildpacks/java-native-image']
    binding("bindnings/bellsoft-jre-config:/platform/bindings/bellsoft-jre-config")
    environment = [
        "BP_NATIVE_IMAGE" : "true",
    ]
}
The dependency-mapping config contains two files.
The type file was created with:
echo "dependency-mapping" >> type
The sha256 file, named after the bellsoft-liberica checksum 3dea0f7a9312c738d22b5e399b6ce9abe13b45b2bc2c04346beb941a94e8a932, was created with:
echo "https://download.bell-sw.com/vm/22.3.0/bellsoft-liberica-vm-core-openjdk17.0.5+8-22.3.0+2-linux-amd64.tar.gz" >> 3dea0f7a9312c738d22b5e399b6ce9abe13b45b2bc2c04346beb941a94e8a932
And yes, I'm aware that this is the exact same URL, but that is just to test that the binding config is set up correctly; if it is, the build should instead fail with an untrusted-certificate error when downloading.
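In shell terms, the layout looks roughly like this (a sketch only; the directory name has to match the local path passed to binding() above):

# create the binding directory and its two files
mkdir -p bindings/bellsoft-jre-config
cd bindings/bellsoft-jre-config
echo "dependency-mapping" > type
echo "https://download.bell-sw.com/vm/22.3.0/bellsoft-liberica-vm-core-openjdk17.0.5+8-22.3.0+2-linux-amd64.tar.gz" > 3dea0f7a9312c738d22b5e399b6ce9abe13b45b2bc2c04346beb941a94e8a932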
Currently the build fails with:
Caused by: org.springframework.boot.buildpack.platform.docker.transport.DockerEngineException: Docker API call to 'localhost/v1.24/containers/create' failed with status code 400 "Bad Request"
at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.execute(HttpClientTransport.java:156)
at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.execute(HttpClientTransport.java:136)
at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.post(HttpClientTransport.java:108)
at org.springframework.boot.buildpack.platform.docker.DockerApi$ContainerApi.createContainer(DockerApi.java:340)
at org.springframework.boot.buildpack.platform.docker.DockerApi$ContainerApi.create(DockerApi.java:331)
at org.springframework.boot.buildpack.platform.build.Lifecycle.createContainer(Lifecycle.java:237)
at org.springframework.boot.buildpack.platform.build.Lifecycle.run(Lifecycle.java:217)
at org.springframework.boot.buildpack.platform.build.Lifecycle.execute(Lifecycle.java:151)
at org.springframework.boot.buildpack.platform.build.Builder.executeLifecycle(Builder.java:157)
at org.springframework.boot.buildpack.platform.build.Builder.build(Builder.java:115)
at org.springframework.boot.gradle.tasks.bundling.BootBuildImage.buildImage(BootBuildImage.java:521)
I assume this is caused by an invalid binding config, but I can't find what it should be.
Paketo configuration (binding)
Dependency mapping bindings can be tricky. There are a number of things that have to be just right, or the buildpacks won't pick up the binding and won't map dependencies.
While there has been talk of changing this in buildpacks to make swapping out dependencies easier, the short-term solution is to use binding-tool.
You can run bt dm -b paketo-buildpacks/bellsoft-liberica and it will go download the dependencies from the specified buildpack and generate the binding files for you.
It will by default download dependencies and write the bindings to $PWD/bindings but you can change that. For example, I like to put my dependencies in my home directory so I can share them across apps. Ex: SERVICE_BINDING_ROOT=~/.bt/bindings bt dm ..., or export SERVICE_BINDING_ROOT=~/.bt/bindings (or whatever command you run to set an env variable in your shell).
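A minimal sketch of that flow, assuming binding-tool (bt) is installed:

# write dependency-mapping bindings for bellsoft-liberica to ./bindings (the default)
bt dm -b paketo-buildpacks/bellsoft-liberica

# or keep the bindings in your home directory so they can be shared across apps
export SERVICE_BINDING_ROOT=~/.bt/bindings
bt dm -b paketo-buildpacks/bellsoft-liberica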
Once you have the bindings created, you just need to point your app to them. How you set the property differs between Maven & Gradle, but the value of the property is the same. It should be <local-path>:<container-path>.
The local path should be the full or relative path to where you created the bindings with bt dm. The container path should almost always be /platform/bindings. This maps your full set of bindings locally to the full set of bindings that the buildpacks will consume. In other words, put all of your bindings into the same directory locally, map that to /platform/bindings and the buildpacks will see everything.
For example with Gradle: binding("bindings/:/platform/bindings").
You can adjust the container path by setting SERVICE_BINDING_ROOT in the container as well, but it doesn't offer a lot of advantage.
You can also set multiple entries for bindings, so long as the paths are unique. So you could set binding("/home/user/.bt/bindings/foo:/platform/bindings/foo") and also binding("bindings/bar:/platform/bindings/bar"). That would let you take bindings from two different locations locally and map them into the /platform/bindings directory so both would be visible to buildpacks. This gives you more fine-grained control but as you can see becomes pretty verbose.
Details on configuring Maven and configuring Gradle for buildpacks can be found at those links.
When using the following command to deploy a new Cloud Run revision,
gcloud run services replace service.yaml
The deployment fails with this error
ERROR: (gcloud.run.services.replace) ALREADY_EXISTS: Revision named 'yourservicename-00001-soj' with different configuration already exists.
This occurs when you have followed Google's documentation which instructs you to pull down the current service YAML description into a file, make edits and then redeploy it.
This is because the documentation is wrong, or Google's service has regressed since it was authored.
Edit the YAML and remove spec.template.metadata.name and try again.
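A sketch of the full round trip (the service name is a placeholder, and yq is just one way to drop the field; any editor works):

# export the current service definition
gcloud run services describe my-service --format export > service.yaml
# drop the pinned revision name so Cloud Run generates a fresh one
yq -i 'del(.spec.template.metadata.name)' service.yaml
# redeploy
gcloud run services replace service.yaml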
I encountered an issue while creating a database backup using the create_backup_from_onprem script in the OCI CLI. I noticed that the object storage namespace is not correct when executing the backup script.
[oracle@oracledev oci-cli-scripts]$ ./create_backup_from_onprem --config-file /home/oracle/.oci/config --display-name testimport01 --availability-domain $AD --edition STANDARD_EDITION --opc-installer-dir /home/oracle/migrate --tmp-dir /home/oracle/migrate/onprem_upload --compartment-id $C --rman-password *****
oci._vendor.requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://swiftobjectstorage.ap-mumbai-1.oraclecloud.com/v1/dbbackupbom/iF0ydees7V0yWxyuAYtF/parameter.log
and parameter.log contains:
Either the bucket named 'iF0ydees7V0yWxyuAYtF' does not exist in the namespace 'dbbackupbom' or you are not authorized to access it
My correct namespace is bmnoo8fd7ute
[oracle@oracledev oci-cli-scripts]$ oci os ns get
{
"data": "bmnoo8fd7ute"
}
I'm not sure how to correct the object storage namespace in the CLI. Could anyone please help me with this?
Adding a cross-reference to the GitHub issue on the OCI CLI repo in case the OCI database team can answer: https://github.com/oracle/oci-cli/issues/201.
You have to change the tenancy OCID in the OCI config file, whose default location is ~/.oci/config. You can do it manually or by using the oci setup config command. You can overwrite the current values, or create a new profile that you then refer to in your oci calls.
For more information please see the CLI config file doc.
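For reference, a sketch of what the config file looks like; all values here are placeholders, and the tenancy entry is the one that has to point at the tenancy owning your namespace:

cat > ~/.oci/config <<'EOF'
[DEFAULT]
user=ocid1.user.oc1..<your-user-ocid>
fingerprint=<your-api-key-fingerprint>
key_file=~/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..<your-tenancy-ocid>
region=ap-mumbai-1
EOF

# verify which namespace the CLI now resolves
oci os ns get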
In case the oci config file already contains the correct value, you need to rerun the oci_install installer, specifying the right tenancy OCID for -tOCID (in this case, the OCID of the tenancy whose namespace is bmnoo8fd7ute):
java -jar oci_install.jar -host swiftobjectstorage.ap-mumbai-1.oraclecloud.com -pvtKeyFile oci_private_key -pubFingerPrint oci_public_fingerprint -uOCID user_ocid -tOCID tenancy_ocid -walletDir /wallet_directory -libDir /library_directory
Update:
As dbbackupbom is an internal resource ID, you can't change it by reinstalling oci_install. This is more likely an authorization issue.
Please check if you have the right policies in place. If not, create a policy like this:
Name of the policy: ObjectStorageAccess
Add the statements below:
Allow group ObjectAdmins to manage buckets in tenancy
Allow group ObjectAdmins to manage objects in tenancy
Finally add your user to ObjectAdmins, or use a different group which you are already part of.
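If you prefer the CLI over the console, the policy can also be created with something along these lines (the compartment/tenancy OCID is a placeholder):

oci iam policy create \
  --compartment-id ocid1.tenancy.oc1..<your-tenancy-ocid> \
  --name ObjectStorageAccess \
  --description "Object Storage access for ObjectAdmins" \
  --statements '["Allow group ObjectAdmins to manage buckets in tenancy","Allow group ObjectAdmins to manage objects in tenancy"]'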
I finally found the correct answer and resolution for this issue.
Cause: The error occurred because the newly launched region had an internal bug. I used the Mumbai region, which was launched recently.
Resolution: Choose another, stable region. The Ashburn region worked for me.
I have just created a .env file to separate my environment variables from my main docker-compose file. I can run this fine on my local machine with no errors or issues, but when I try to run it through my CD pipeline I get the following error.
[error]Top level object in 'C:\BuildAgent_work\r38\a\"Myproject Name"\drop\ .env' needs to be an object not 'class 'str'.
I first thought this was because I had set up my build/CI process wrong, but I have played around with it and had no luck.
I have also done some research online to find others with the same problem, but none of it relates to DevOps in any way, so it has been unhelpful.
I am not sure how to reproduce this problem, but if anyone knows how, I can try to provide some of my code if needed.
Edit:
Here is a snippet of my .env file. Check the comment below for my thoughts.
ContainerInfrastructure_Version=6.7.93-beta.1
ContainerInfrastructureCore_Version=6.7.41-beta.1
AuthenticationWebService_Version=6.7.52-beta.1
CRM_Version=6.7.52-beta.1
Expected result:
Deploys successfully
What I'm getting during the docker-compose task:
[error]Top level object in 'C:\BuildAgent_work\r38\a\Goldpine.ReleaseManagement\drop.env' needs to be an object not 'class 'str'.
OK, so I figured it out. I'm not sure how to explain this briefly, but I'll do my best.
The problem was within DevOps itself, not my code. It turns out a .env file only gets picked up if you run the docker-compose command from within the directory where the docker-compose.yml file exists.
In DevOps the command was not being run from within the downloaded artifact directory; instead, the compose file was referenced by path using the -f flag.
So, long story short: if you use a .env file, you need to set the working directory within the CD pipeline to your artifact folder so that docker-compose can see the .env file correctly.
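In pipeline terms the difference looks roughly like this (the paths are placeholders for the artifact layout):

# .env is NOT picked up: compose runs from a different working directory
docker-compose -f /path/to/drop/docker-compose.yml up -d

# .env IS picked up: compose runs from the folder containing both files
cd /path/to/drop
docker-compose up -d

Newer Compose versions also have an --env-file flag that can point at the file explicitly, if the build agent's version supports it.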
I hope this is clear enough; if not, just let me know and I'll try to change it accordingly :)
I would like to deploy some JS files to Bintray. Reading the instructions at http://docs.travis-ci.com/user/deployment/bintray/, I see that the name of the version is configured in the descriptor file. I would like it to use the git tag instead, so that to make a release I only need to push a tag, without having to modify a configuration file on each release. Is this possible?
You need to use the built-in environment variables. The one that you are looking for is TRAVIS_TAG. The full list is here.
You have to generate the descriptor file as part of your build. There you can read the environment variable using e.g. Python or shell, as in the sketch below.
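A minimal shell sketch of that idea, assuming a hypothetical template file with a __VERSION__ placeholder (both file names are made up for illustration):

# substitute the git tag into the Bintray descriptor before the deploy step runs
sed "s/__VERSION__/${TRAVIS_TAG}/g" descriptor.tpl.json > descriptor.json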