codeship stuck using old image of codeship/google-cloud-deployment - docker

I've been updating one of our projects that is built and deployed using Codeship Pro. We use the codeship/google-cloud-deployment docker image to deploy Google Cloud Functions. I need features that are only available in a recent version of the gcloud SDK, but Codeship always uses an old version of the SDK and seems stuck fetching a cached version of the image.
codeship-services.yml
googlecloudproductiondeployment:
  image: codeship/google-cloud-deployment
  encrypted_env_file: deploy/deploy-production.env.encrypted
  cached: false
  volumes:
    - ./:/deploy
codeship-steps.yml
- name: Deploy CF to prod
  tag: ^deploy-production$
  service: googlecloudproductiondeployment
  command: /deploy/deploy/google-deploy-cf.sh
deploy/google-deploy-cf.sh
#!/bin/bash
set -e

PROJECT=my-project
FUNCTION_NAME=my-function
SOURCE_REPO=my-repo

# Authenticate with the Google Cloud SDK
codeship_google authenticate

# Re-deploy the CF
gcloud version
gcloud beta functions deploy $FUNCTION_NAME \
  --region europe-west1 \
  --runtime nodejs8 \
  --env-vars-file /deploy/deploy/cf-env.production.yaml \
  --trigger-http \
  --source https://source.developers.google.com/projects/${PROJECT}/repos/${SOURCE_REPO}/fixed-aliases/${CI_BRANCH} \
  --memory 128MB \
  --entry-point run \
  --timeout 540s
Output observed in codeship:
googlecloudproductiondeployment build/pull started
googlecloudproductiondeployment build/pull finished successfully
googlecloudproductiondeployment Activated service account credentials for: [***@***.iam.gserviceaccount.com]
googlecloudproductiondeployment Google Cloud SDK 204.0.0
googlecloudproductiondeployment alpha 2017.09.15
googlecloudproductiondeployment beta 2017.09.15
googlecloudproductiondeployment bq 2.0.34
googlecloudproductiondeployment core 2018.06.04
googlecloudproductiondeployment gsutil 4.31
googlecloudproductiondeployment kubectl
googlecloudproductiondeployment deploying
googlecloudproductiondeployment ERROR: (gcloud.beta.functions.deploy) unrecognized arguments:
googlecloudproductiondeployment --runtime (did you mean '--timeout'?)
googlecloudproductiondeployment nodejs8
googlecloudproductiondeployment --env-vars-file
googlecloudproductiondeployment /deploy/deploy/cf-env.production.yaml
Expected output:
I expect to see Google Cloud SDK 218.0.0, the version noted in the last commit in Codeship's google-cloud-deployment GitHub repo.
Steps tried:
Adding :latest to the image in codeship-services.yml.
Clicking on Reset Cache on the project page on codeship.
Even after resetting the cache, I always see Image exists, using cached image in the logs for my googlecloudproductiondeployment service on Codeship.
Using jet locally, I can force Codeship to pull the latest version by running docker rmi codeship/google-cloud-deployment before jet steps, as sketched below. However, I do not have control over the docker cache on Codeship.
It seems Codeship is stuck using an old version of the codeship/google-cloud-deployment image. On Docker Hub this image has no tags other than latest, so I don't know how to force Codeship to fetch a specific version. Please help!
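For reference, the local workaround looks like this (run locally only; the jet --tag flag is from memory and worth double-checking against the jet docs):
# Drop the stale local copy so jet has to pull :latest again.
docker rmi codeship/google-cloud-deployment
# Re-run the steps locally; --tag simulates the git tag that triggers the deploy step.
jet steps --tag deploy-production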

Apologies for the trouble.
We've gone ahead and ensured that versions of codeship/google-cloud-deployment will remain current.
In general we will track behind the most current Google Cloud SDK by two to three weeks, but this will maintain much closer parity with the most recent versions. We can also expedite updates of the Google Cloud SDK now as needed.
If you reset your project cache and restart your build, you will note (as of this time of writing) that the Google Cloud SDK is now set to version 219.0.1 for the codeship/google-cloud-deployment image.

Related

Finding deployed Google Tag Manager server-side version in GCP

I've recently joined a new company which already has a version of Google Tag Manager server-side up and running. I am new to Google Cloud Platform (GCP), and I have not been able to find the supposed docker image in the image repository for our account. At the very least, I am trying to figure out how to check whether there is one, and how to correlate its digest to the image we've deployed, which is located at gcr.io/cloud-tagging-10302018/gtm-cloud-image.
I've tried deploying it myself, both automatically provisioned in my own cloud account and by running the manual steps, and got it working. But I can't for the life of me figure out which version we have deployed at our company, as it is already live.
I suspect it is quite an old version (unless it auto-updates?), seeing as the GTM server-side docker repository has had frequent updates.
Being new to container imaging with docker, I figured I could use Cloud Shell to check it that way, but it seems that when setting up the specific App Engine instance with the provided shell script (located here), it doesn't really "load" a docker image the way it would if you'd deployed it yourself. At least I don't think so, because I can't find any info using docker commands in the Cloud Shell of the GCP project running the flexible App Engine environment.
Any guidance on how to find out which version of GTM server-side is running in our Appengine instance?
To check which docker images your App Engine Flex instance uses, SSH into the instance. You can SSH into an App Engine instance by going to the Instances tab, choosing the correct service and version, and clicking the SSH button, or by using this gcloud command in your terminal or Cloud Shell:
gcloud app instances ssh "INSTANCE_ID" --service "SERVICE_NAME" --version "VERSION_ID" --project "PROJECT_ID"
Once you have successfully SSHed into your instance, run the docker images command to list your docker images.
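As a concrete sketch (all IDs below are hypothetical placeholders; substitute the values shown on your Instances tab):
# SSH into the Flex instance (placeholder IDs).
gcloud app instances ssh "aef-default-1" --service "default" --version "20190101t000000" --project "my-gcp-project"
# On the instance: list images together with their repo digests.
docker images --digests
The digest column can then be matched against the digests published for gcr.io/cloud-tagging-10302018/gtm-cloud-image to work out which GTM server-side version is live.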

cloud run deploy fails with permission error

Running gcloud beta run deploy --image gcr.io/mynippets-dev/web:latest when the gcloud project is set to 'mysnippets-dev' returns the following:
ERROR: (gcloud.beta.run.deploy) Google Cloud Run Service Agent must have permission to read the image, gcr.io/mynippets-dev/web:latest. Ensure that the provided container image URL is correct and that the above account has permission to access the image. If you just enabled the Cloud Run API, the permissions might take a few minutes to propagate. Note that [mynippets-dev/web] is not in project [mysnippets-dev]. Permission must be granted to the Google Cloud Run Service Agent from this project
It should be noted that both the GCR image and the Cloud Run account live in project 'mysnippets-dev'. But for some reason it treats this as a cross-project deployment, perhaps taking 'mynippets-dev/web' (with /web, the GCR repository) as the project.
I can also repro the same issue in Cloud Run UI.
Deployment should succeed.
This looks like it is most likely a typo: mynippets-dev vs. mysnippets-dev (missing an 's').
Cloud Run interprets this as a cross-project deployment, which is allowed, but requires sufficient permissions.
If this isn't intended to be a cross-project deployment, it should succeed with this command:
gcloud beta run deploy --image gcr.io/mysnippets-dev/web:latest
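And if a cross-project deployment is actually intended, the deploying project's Cloud Run service agent needs read access to the image project's registry storage. A hedged sketch (project number and bucket name are placeholders; for gcr.io images the backing bucket is artifacts.<project-id>.appspot.com):
# Grant the Cloud Run service agent of the deploying project read access
# to the GCR bucket of the project that hosts the image (placeholder values).
gsutil iam ch \
  serviceAccount:service-123456789@serverless-robot-prod.iam.gserviceaccount.com:objectViewer \
  gs://artifacts.image-host-project.appspot.com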

gcloud ERROR: (gcloud.app.deploy) Error Response: [3]

After running gcloud app deploy I get the following error trying to deploy my application to a container using gcloud and the Google Cloud API.
Step 5 : CMD npm start
---> Running in cb3b29e90183
---> 296d95a6ac52
Removing intermediate container cb3b29e90183
Successfully built 296d95a6ac52
PUSH
The push refers to a repository [us.gcr.io/<PROJECT_ID>/appengine/default.20160906t225412] (len: 1)
296d95a6ac52: Preparing
296d95a6ac52: Pushing
296d95a6ac52: Pushed
d6a5f487b829: Preparing
d6a5f487b829: Pushing
d6a5f487b829: Pushed
b71be5d9c21a: Preparing
b71be5d9c21a: Pushing
b71be5d9c21a: Pushed
75d5a58c171b: Preparing
75d5a58c171b: Pushing
75d5a58c171b: Pushed
9ff051f37ab2: Image already exists
363507e00b22: Image already exists
818131a74c7c: Image already exists
cc57a274adf5: Image already exists
c7c7a273971f: Image already exists
b21b3e3bc691: Image already exists
latest: digest: sha256:70668fb04a90187c890eb6ba3119b6af46838a5518f7a96e8996f1d5fda6dc52 size: 33255
DONE
Updating service [default]...failed.
ERROR: (gcloud.app.deploy) Error Response: [3] Docker image us.gcr.io/<PROJECT_ID>/appengine/default.20160906t225412:latest was either not found, or you do not have access to it.
I just recently updated my Google Cloud SDK from version 122.0.0 to version 124.0.0. I am running this on my local machine (macOS); this is the complete version list:
gcloud --version
Google Cloud SDK 124.0.0
bq 2.0.24
bq-nix 2.0.24
core 2016.08.29
core-nix 2016.08.29
gcloud
gsutil 4.21
gsutil-nix 4.21
I found the error and the solution: apparently the gcloud SDK upgrade from 122.0.0 to 124.0.0 corrupted my project ID in the gcloud portal.
I tried to switch back from 124.0.0 to 122.0.0 unsuccessfully, and also to upgrade again to 126.0.0, but finally I found that creating a new project and migrating all my containers did the trick, and once there everything worked correctly!
I have to say it: gcloud is a very useful and powerful tool, but with an error like this, and finding out that there is actually little support provided by Google, it makes me think of moving back to AWS.
App Engine no longer supports Docker V1 format images for new deployments. It looks like the error message doesn't really convey this.
Here are the docs on how to tell which docker format an image is in:
https://cloud.google.com/container-registry/docs/ui
We'll work on getting the error message fixed. Sorry about the hassle.
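For a command-line check, one hedged option is to ask the registry's standard v2 API for the manifest; a V1-only image reports "schemaVersion": 1, a V2 image reports 2 (image path taken from the error above, <PROJECT_ID> left as a placeholder):
# Fetch the manifest, preferring the V2 media type, and inspect its schema fields.
curl -s -u "_token:$(gcloud auth print-access-token)" \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  "https://us.gcr.io/v2/<PROJECT_ID>/appengine/default.20160906t225412/manifests/latest" \
  | grep -E '"schemaVersion"|"mediaType"'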

gcloud docker push reliability

I have been having a lot of problems pushing images with gcloud docker push over the past few weeks. I've read through the many Stack Overflow discussions, GitHub issues, and workarounds, but I haven't come across a solution to the inconsistency yet.
Typically I will attempt to push a container image or two. The first push will almost always fail with retry-until-timeout output (screenshot not reproduced here).
I can only get around it with gcloud auth login. At most 5 minutes later I will attempt to push a second image, and will again see the retry-until-timeout issue. I will see this on every attempt until I gcloud auth login again.
Often I will have to manually retry several more times immediately after authenticating before the image is actually pushed.
Am I actually being logged out (I can still access pods, instances, etc. with kubectl and gcloud)? If so, why is being logged out inconsistent, and what does building docker containers do that would invalidate my local gcloud session?
If not, why can't I gcloud docker push until I authenticate again? And after that, why is this still inconsistent? (I suspect it may have little or nothing to do with the real issue.)
Is there a way to make pushing images on OSX with docker-machine and gcloud docker push reliable? Is there another way to get images to the cloud repository (preferably from the command line)?
gcloud --version
alpha 2016.01.12
beta 2016.01.12
bq 2.0.18
bq-nix 2.0.18
core 2016.02.11
core-nix 2016.02.05
gcloud
gsutil 4.16
gsutil-nix 4.15
kubectl
kubectl-darwin-x86_64 1.1.7
docker --version
Docker version 1.10.1, build 9e83765
docker-machine --version
docker-machine version 0.6.0, build e27fb87
virtualbox version 5.0.14 r105127
I had the same or similar problem. After a few minutes of the retry loop depicted in the screenshot above, the command would fail with net/http: TLS handshake timeout.
The solution that fixed it for me was editing the docker daemon configuration with
DOCKER_OPTS="--max-concurrent-uploads=1"
I had a feeling this issue was connected with docker clogging up the network, as I noticed even browsing to Gmail could get a timeout(!)
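On daemons that read daemon.json, the equivalent setting looks like the sketch below (paths and restart commands vary by platform, and with docker-machine this has to happen inside the VM):
# Equivalent to DOCKER_OPTS="--max-concurrent-uploads=1"; merge with any existing keys.
echo '{ "max-concurrent-uploads": 1 }' | sudo tee /etc/docker/daemon.json
sudo service docker restart   # or: sudo systemctl restart docker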
Switching to regular docker push doesn't help with timeouts. This appears to be related to your ISP and uploading assets.
I was receiving the same error. After moving the Docker build process to the cloud (which has a much larger pipe), gcloud docker builds and deploys the image just fine.
I never faced the problems you mentioned with gcloud docker, but regarding your last point,
Is there another way to get images to the cloud repository (preferably from the command line)?
it is indeed possible to push to the gcr.io repos without going through gcloud, e.g.:
docker login -e dummy@example.com -p $(gcloud auth print-access-token) -u _token https://gcr.io
docker push [your-image]
Credits to mattmoor; more info in the original answer here:
Access google container registry without the gcloud client
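For completeness, the image needs a gcr.io tag for the push to land in the registry; with hypothetical names:
# Tag the local image with the registry path, then push it.
docker tag my-image gcr.io/my-project/my-image
docker push gcr.io/my-project/my-image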

Google Cloud Container Registry Issues while pushing docker images

I have a Google project on which I am one of the owners. It was created by another developer, and he added me as an owner. Within that project I created a VM instance, on which I installed docker. After installing docker, I created an image of my node.js application by providing the git repository as the argument.
However, after setting the gcloud config parameters, it gives me a 500 error while trying to push that docker image:
Error: Status 500 trying to push repository <project-id>/<image-name>: "Internal Error."
My gcloud and docker version information:
Google Cloud SDK 0.9.71
Docker version 1.7.1, build 786b29d
You were probably hit by the Google Cloud Storage outage that was going on last night: https://status.cloud.google.com/incident/storage/16027
Would you mind trying again?
Sorry for the inconvenience!
Jeffrey van Gogh
Google Container Registry Team
