How to use Terraform to deploy a Google Cloud Run service from source?

I'm currently deploying a Cloud Run service using the following command line:
gcloud run deploy webserver --source webserver/ --other-flags...
The Terraform configuration for Cloud Functions could build from a .zip in a google_storage_bucket_object using the source block, but the cloud_run_service documentation doesn't have a similar block.
The gcloud run deploy --source command reports that it's equivalent to gcloud builds submit --pack image=[IMAGE] webserver/ and gcloud run deploy webserver --image [IMAGE], but the Cloud Build resources don't appear to allow submitting builds at all.
What's the simplest way to translate this gcloud run deploy --source command to Terraform?
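One common workaround is to split the command the way gcloud itself describes: run the buildpack build outside of Terraform (or shell out to it from a null_resource), then point a google_cloud_run_service at the resulting image. The following is only a sketch under assumptions, not an official pattern; the project ID my-project, the region us-central1, and the gcr.io image path are all placeholders, and the timestamp() trigger rebuilds on every apply (a real setup would hash the source directory instead):

```hcl
# Sketch: build the image out-of-band with Cloud Build, then deploy it
# with Terraform. The local-exec provisioner mirrors the first half of
# what `gcloud run deploy --source` does internally.

resource "null_resource" "webserver_build" {
  triggers = {
    # Placeholder trigger: re-runs the build on every apply.
    always = timestamp()
  }

  provisioner "local-exec" {
    command = "gcloud builds submit --pack image=gcr.io/my-project/webserver webserver/"
  }
}

resource "google_cloud_run_service" "webserver" {
  name     = "webserver"
  location = "us-central1"

  template {
    spec {
      containers {
        image = "gcr.io/my-project/webserver"
      }
    }
  }

  depends_on = [null_resource.webserver_build]
}
```

This keeps the service itself fully managed by Terraform while leaving the source-to-image step to Cloud Build, at the cost of Terraform not detecting source changes on its own.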

Related

How to run Servicemix commands inside docker in Karaf console using deploy script?

Colleagues, as it happens, I'm now working with ServiceMix 7.0 in one old project. There are several commands that I run manually.
Build the ServiceMix image:
docker build --no-cache -t servicemix-http:latest .
Start the container, mounting the local source folder and the local .m2 folder:
docker run --rm -it -v %cd%:/source -v %USERPROFILE%/.m2:/root/.m2 servicemix-http
In the ServiceMix console, run:
feature:repo-add file:/../source/source/transport/feature/target/http-feature.xml
and then:
feature:install http-feature
Copy the deploy files from the local folder into the ServiceMix deploy folder:
docker cp /configs/. :/deploy
Update the ServiceMix image:
docker commit servicemix-http
Now I'm describing the gitlab-ci.yml for deployment, and my question concerns the ServiceMix commands that were launched from the Karaf console:
feature:repo-add
feature:install
Is there any way to script them?
If all you need is to install features from specific feature repositories on startup, you can add the features and feature repositories to etc/org.apache.karaf.features.cfg under the Karaf home directory.
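For example, something along these lines in that file (a sketch: the repository URL is the one from the question, and in practice you would append to the existing comma-separated lists rather than replace them):

```properties
# etc/org.apache.karaf.features.cfg (fragment)

# Feature repositories registered on startup (comma-separated list):
featuresRepositories = file:/../source/source/transport/feature/target/http-feature.xml

# Features installed automatically on startup (comma-separated list):
featuresBoot = http-feature
```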
You can use ssh with private-key to pass commands to Apache karaf.
ssh karaf@localhost -p 8101 -i ~/.ssh/karaf_private_key -- bundle:list
You can also try running commands through Karaf_home/bin/client, although I had no luck using it on Windows 10 with ServiceMix 7.0.1 when I tried; it kept throwing a NullPointerException, so I'm guessing my Java installation is missing some security-related configuration.
It works well when running newer Karaf installations on Docker, though, and can be used through docker exec as well.
# Using password - (doesn't seem to be an option for servicemix)
docker exec -it karaf /opt/apache-karaf/bin/client -u karaf -p karaf -- bundle:list
# Using private-key
docker exec -it karaf /opt/apache-karaf/bin/client -u karaf -k /keys/karaf_key -- bundle:list
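Putting the pieces together, the feature commands from the question can be scripted in a CI job through a small wrapper around docker exec and bin/client. This is a sketch under assumptions: the container name servicemix and the client path /opt/apache-karaf/bin/client are placeholders (a ServiceMix image will likely have a different path), and for ServiceMix you may need the private-key variant (-k) instead of plain -u:

```shell
# Wrapper that passes a single Karaf console command into a running
# container via bin/client. Container name and client path are
# hypothetical; adjust both for your image.
run_karaf() {
  docker exec servicemix /opt/apache-karaf/bin/client -u karaf -- "$1"
}
```

A CI step would then call it once per console command, e.g. `run_karaf "feature:repo-add file:/../source/source/transport/feature/target/http-feature.xml"` followed by `run_karaf "feature:install http-feature"`.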

Continues integration for docker on TFS

I need some help with docker configuration on TFS. I've got some experience but not much. I have configured classic CI for .NET projects few times where these steps were used:
- get sources
- run tests
- build
- copy files to server
Recently I've started to use Docker, and I would like to automate this process, because right now I have to copy files manually to the remote machine and then run these commands:
dotnet publish --configuration=Release -o pub
docker build . -t netcore-rest
docker run -e Version=1.1 -p 9000:80 netcore-rest
I saw few tutorials for VSTS, but I'd like to configure it for classic TFS.
I don't have Docker Hub. What interests me is:
- how to kill/remove the currently running container
- how to build a new image from the copied files
- how to run a new container
Thank you
PS. I have already installed docker on my agent machine, so it's able to build an image.
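The three bullet points above can be sketched as one deploy script to run on the agent machine. This is only an illustration: the image name netcore-rest comes from the question, while the container name netcore-rest-app is a hypothetical fixed name chosen so that the next deployment can find and replace the running container:

```shell
# Sketch of a deploy step for the build agent: replace the running
# container, rebuild the image from the copied files, start a new one.
redeploy() {
  image="$1"      # e.g. netcore-rest (from the question)
  container="$2"  # e.g. netcore-rest-app (hypothetical fixed name)

  # Stop and remove the currently running container; ignore errors
  # if it does not exist yet (first deployment).
  docker stop "$container" 2>/dev/null || true
  docker rm "$container" 2>/dev/null || true

  # Build a fresh image from the copied sources in the current directory.
  docker build . -t "$image"

  # Run the new container detached under the fixed name.
  docker run -d --name "$container" -e Version=1.1 -p 9000:80 "$image"
}
```

A TFS build step would then call `redeploy netcore-rest netcore-rest-app` after copying the published files to the agent.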

Run a script inside docker container using octopus deploy

I'm trying to do a config transformation once a Docker container has been created, and the docker cp command does not allow wildcard or file-type searches. While testing manually, I found it was possible to solve this by running the docker exec command and running PowerShell inside our container. After some preliminary tests, it doesn't look like this works out of the box with Octopus Deploy. Is there a way to run process steps inside a container with Octopus Deploy?
It turns out you can run PowerShell scripts that already exist in the container with the exec command:
docker exec <container> powershell script.ps1 -argument foo
This command will run the script as you would expect from the command line.

How does the Jenkins CloudBees Docker Build Plugin set its Shell Path

I'm working with a Jenkins install I've inherited. This install has the CloudBees Docker Custom Build Environment Plugin installed. We think this plugin gives us a nifty Build inside a Docker container checkbox in our build configuration. When we configure jobs with this option, it looks like (based on Jenkins console output) Jenkins runs the build with the following command
Docker container 548ab230991a452af639deed5eccb304026105804b38194d98250583bd7bb83q started to host the build
[WIP-Tests] $ docker exec -t 548ab230991a452af639deed5eccb304026105804b38194d98250583bd7bb83q /bin/sh -xe /tmp/hudson7939624860798651072.sh
However, we've found that this runs /bin/sh with a very limited set of environment variables, including a $PATH that doesn't include /bin! So:
How does the CloudBees Docker Custom Build Environment Plugin set up its /bin/sh environment? Is this user-configurable via the UI (and if so, where)?
It looks like Jenkins is using docker exec, which I think means that it must have (invisibly) set up a container with a long-running process using docker run. Does anyone know how the CloudBees Docker Custom Build Environment Plugin invokes docker run, and whether this is user-manageable?
Considering this plugin is "up for adoption", I would recommend the official JENKINS/Docker Pipeline Plugin instead.
Its source code shows very few recent commits.
But don't forget any container has a default entrypoint set to /bin/sh
ENTRYPOINT ["/bin/sh", "-c"]
Then:
The docker container is ran after SCM has been checked-out into a slave workspace, then all later build commands are executed within the container thanks to docker exec introduced in Docker 1.3
That means the image you will be pulling or building/running must have a shell, to allow the docker exec to execute a command in it.
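Since docker exec inherits the environment baked into the image, one way to address the limited $PATH described in the question is to set a full PATH in the build image itself. A minimal sketch, assuming you control the image used for builds (the base image and PATH value here are illustrative):

```dockerfile
# Sketch: build image with a shell and a usable PATH for the
# /bin/sh -xe scripts that Jenkins injects via docker exec.
FROM ubuntu:20.04

# docker exec inherits ENV from the image, so standard locations
# such as /bin become visible to the injected build script.
ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
```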

Steps required to run a Docker image using Kubernetes

I have developed a simple Docker image. This can be run using command
docker run -e VOLUMEDIR=agentsvolume -v /c/Users/abcd/config:/agentsvolume app-agent
If I want to run the same thing using Kubernetes, can someone guide me through the steps?
Must I create a Pod, a controller, or a Service? I'm not able to find clear steps for running it with Kubernetes.
This Kubernetes command is the equivalent of your docker run command:
kubectl run --image=app-agent app-agent --env="VOLUMEDIR=agentsvolume"
This will create a deployment called app-agent.
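For completeness, the declarative equivalent is a manifest you apply with kubectl apply -f. This is a sketch under assumptions: the image app-agent must be pullable by your cluster's nodes, and the hostPath volume mirrors the -v flag from the docker run command but only works on a node that actually has that directory:

```yaml
# deployment.yaml -- declarative equivalent of the docker run command
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-agent
  template:
    metadata:
      labels:
        app: app-agent
    spec:
      containers:
        - name: app-agent
          image: app-agent
          env:
            - name: VOLUMEDIR          # mirrors -e VOLUMEDIR=agentsvolume
              value: agentsvolume
          volumeMounts:
            - name: agentsvolume       # mirrors the container side of -v
              mountPath: /agentsvolume
      volumes:
        - name: agentsvolume
          hostPath:
            path: /c/Users/abcd/config # mirrors the host side of -v
```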
