Using Pulumi vs AWS CDK for infrastructure provisioning - devops

I recently came to know about Pulumi and its capabilities for provisioning infrastructure. I want to draw a comparison between Pulumi and AWS CDK.
Can anyone tell me which features Pulumi supports that AWS CDK can't? I went through the documentation and various blogs, but I was not able to find a good comparison between them. And why should I use Pulumi?
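For anyone unfamiliar with Pulumi, a minimal sketch of its CLI workflow looks like this (assuming the Pulumi CLI is installed and cloud credentials are configured; the project and stack names below are hypothetical):

```sh
# Scaffold a new project from Pulumi's AWS TypeScript template
# (project and stack names here are hypothetical examples)
pulumi new aws-typescript --name my-infra --stack dev

# Preview the planned changes, then provision them
pulumi preview
pulumi up
```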

Related

Is there a way to deploy Cloud Functions directly as ZIP artifacts to Google Cloud Platform, and not rely on the default Cloud Build?

The default setup for Firebase functions is to run firebase deploy, which will:
Upload the whole project to Cloud Build
Extract the functions in Cloud Build
Run npm install
Create the ZIP artifacts
Upload the ZIP artifacts to the cloud
The question is whether you know of a way to create these ZIP artifacts on our side and upload them directly.
(Screenshots: default Cloud Build steps; list of the Cloud Build deployments.)
From my point of view, there are plenty of options for deploying one or more Cloud Functions.
The Deploying Cloud Functions documentation provides some initial context.
The easiest way, from my point of view, is to use the gcloud functions deploy command - see Cloud SDK CLI - gcloud functions deploy.
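For example, a deploy from local source might look like this (the function name, runtime, and region below are placeholder values):

```sh
# Deploy an HTTP-triggered function from the current directory
# (function name, runtime, and region are hypothetical examples)
gcloud functions deploy my-function \
  --runtime nodejs18 \
  --trigger-http \
  --region us-central1 \
  --source .
```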
As a side note, from my personal point of view, the idea of using Cloud Build is not bad, and it has many benefits (security, organization of CI/CD, etc.), but it is up to you. Personally, I use Cloud Build with Terraform, and I configured the deployment in such a way that only updated Cloud Functions are redeployed.
There are different "levels" of Cloud Build to take into consideration.
If it's only the first step, I mean creating a ZIP with the code of your function, no problem, you can do it on your side. Then you can deploy the ZIP through the console or with API calls.
In both cases, you need a ZIP and you deploy it. And if you use the gcloud functions deploy command, it does exactly the same thing: it creates a ZIP, sends it to storage, and deploys the function from that ZIP!
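As a sketch of that manual path (the bucket and function names are hypothetical):

```sh
# Create the ZIP yourself; include package.json so npm install can run during the build
zip -r function.zip index.js package.json

# Upload the ZIP to a Cloud Storage bucket you control
# (bucket name is a placeholder)
gsutil cp function.zip gs://my-artifacts-bucket/function.zip

# Deploy the function directly from that ZIP
gcloud functions deploy my-function \
  --runtime nodejs18 \
  --trigger-http \
  --source gs://my-artifacts-bucket/function.zip
```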
That was the first stage, where you manage the ZIP creation and the upload to the cloud.
HOWEVER, to deploy the ZIP code to Google Cloud Platform, you need to package that code into something runnable, because you only have a function, and a function on its own isn't runnable and can't handle HTTP requests.
Therefore, Google Cloud runs a Cloud Build under the hood and uses Buildpacks to package your code in a container (yes, everything is a container at Google Cloud), and deploys that container on the Cloud Functions platform.
You can't skip that container creation; without a container, your Cloud Function can't run. Alternatively, you can use Cloud Run, build your own container on your side, and deploy it without Cloud Build.
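A rough sketch of that Cloud Run alternative (the project, image, and service names are hypothetical):

```sh
# Build and push the container yourself, so no Cloud Build runs on Google's side
# (project, image, and tag are placeholder values)
docker build -t gcr.io/my-project/my-service:v1 .
docker push gcr.io/my-project/my-service:v1

# Deploy the prebuilt image to Cloud Run
gcloud run deploy my-service \
  --image gcr.io/my-project/my-service:v1 \
  --region us-central1 \
  --allow-unauthenticated
```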

How to deploy a frontend Angular app and a backend Django app on GCP at the same time?

I have an Angular project and a Django (backend) project. As of now, I am using GitLab CI/CD to deploy each app individually on Google Cloud Platform. But in a production environment, in the future, I want to deploy both at the same time. How can I do this on Google Cloud Platform?
There are several tools for CI/CD on Google Cloud Platform. You could use Google App Engine with Cloud Build; you can find a pretty straightforward tutorial here. Or you could take advantage of the GitLab Google Kubernetes Engine integration; you can find an example in the official documentation here.
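With the App Engine route, for example, both apps can be deployed as separate services in a single command by passing both app.yaml files (the paths below are hypothetical):

```sh
# Deploy the Angular frontend and the Django backend together,
# each as its own App Engine service (paths are placeholder values)
gcloud app deploy frontend/app.yaml backend/app.yaml
```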

How to set up multiple environments (dev, stage, prod) using Jenkins?

I have a scenario where I am given an AWS instance (production, staging, testing) and Docker for development, and I need to work on an infrastructure workflow. We also need to take care of continuous deployment and integration using Jenkins.
Can you please help me figure out a robust workflow to create an environment with the above tech stack?
Man, with these tools it is possible to implement a lot of different CI and CD strategies. I recommend adding Git and Jenkins to your deck of development tools.
Start simple: first build your application, then create new Docker images and deliver them to a dev server, then think about how to deliver to the other environments.
After that, store these images in a private repository using Nexus or similar, and try to think about a hierarchy of images (base images for the app and ready-to-run images).
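A minimal sketch of that build-and-push flow, assuming a private Nexus registry at a hypothetical address:

```sh
# Build the application image and tag it for the dev environment
# (registry address, image name, and tag are hypothetical)
docker build -t nexus.example.com/myapp:1.0-dev .

# Push it to the private registry so other environments can pull the same image
docker push nexus.example.com/myapp:1.0-dev
```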
It all depends on your goals.

Staging in Cloud Foundry - .cfignore

We have a .cfignore file listing the files that should not be staged in Cloud Foundry.
Through the CF CLI tool, cf push takes the files listed in .cfignore into account and ignores them before pushing files to Cloud Foundry.
But Jenkins, configured using the push to Cloud Foundry plugin, does not ignore the files listed in .cfignore.
Because of this, Cloud Foundry picks up some undesirable files at staging time, which makes staging fail.
How can this be resolved?
Through the CF CLI tool, cf push takes the files listed in .cfignore into account and ignores them before pushing files to Cloud Foundry.
Yes, this is the standard behavior of the official cf cli.
https://github.com/cloudfoundry/cli
But Jenkins, configured using the push to Cloud Foundry plugin, does not ignore the files listed in .cfignore.
The official cf cli is only one of many clients that can interact with the Cloud Foundry API. Just because the cf cli behaves one way does not guarantee that other clients will behave the same way. If the client you've chosen does not behave the way you want, open a bug/feature request with the author/owner of your client, or switch to a different client that offers the functionality you want.
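One practical workaround is to have Jenkins call the official cf CLI from a shell step instead of using the plugin, so .cfignore is honored (the API endpoint, org, space, and app name below are hypothetical):

```sh
# Push with the official cf CLI so .cfignore is respected
# (endpoint, org, space, and app name are placeholder values;
# CF_USER and CF_PASS would come from Jenkins credentials)
cf login -a https://api.example.com -u "$CF_USER" -p "$CF_PASS" -o my-org -s my-space
cf push my-app
```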
Hope that helps!

Concourse integration with Azure Key Vault

For a cloud-native greenfield project, we are planning to use Concourse.
In the project, Azure Key Vault has been selected as the secret management tool, and I need to integrate Concourse with it. In the documentation and examples, I could not find any guidance on using the two together.
Is there a way or a workaround to integrate the two?
Thanks.
