GCP Cloud Build: Exception in Remote-Builder hello world - docker

I'm trying to set up a remote build of my App Engine app with the Remote Builder image.
Here is my cloudbuild.yaml:
steps:
- name: gcr.io/{PROJECT_NAME}/remote-builder
  env:
  - ZONE=us-east1-b
  - INSTANCE_NAME=Remote_Cloud_Build
  - INSTANCE_ARGS=--image-project cos-cloud --image-family cos-stable
I've taken these values from the remote-builder example.
But when I try to deploy it with gcloud builds submit --config cloudbuild.yaml
I get an error:
/bin/run-builder.sh: line 2: $'\r': command not found
Could you please help me with it?
Thanks in advance!

The issue was caused by Windows-style line endings in run-builder.sh. I changed the file from CRLF to LF (e.g. with dos2unix run-builder.sh or your editor's line-ending setting) and it works now.

Related

CircleCI Serverless framework build fails

CircleCI Install Serverless CLI build suddenly fails; I have not changed anything in the config file.
version: 2.1
orbs:
  aws-cli: circleci/aws-cli@3.1
  build-tools: circleci/build-tools@3.0.0
  node: circleci/node@5.0.3
  python: circleci/python@2.1.1
  docker: circleci/docker@2.2.0
  serverless-framework: circleci/serverless-framework@2.0.0
- serverless-framework/setup
- node/install-packages
- setup_remote_docker
I was able to fix it by updating the serverless-framework orb version to serverless-framework: circleci/serverless-framework@2.0.1
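For reference, a hedged sketch of the orbs stanza with only that version bumped (the other orbs stay exactly as in the question):

version: 2.1
orbs:
  aws-cli: circleci/aws-cli@3.1
  build-tools: circleci/build-tools@3.0.0
  node: circleci/node@5.0.3
  python: circleci/python@2.1.1
  docker: circleci/docker@2.2.0
  serverless-framework: circleci/serverless-framework@2.0.1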

Amplify save environment variables to backend

Following the docs, I set my environment variable in the console ($CLIENT_ID).
In the console I added an echo command to try to insert the variable into a .env file.
The error I keep getting is "There was an issue connecting to your repo provider." When I remove the echo line, the build passes. I've tried single and double quotes, and putting the line above/below the other lines under the build commands phase.
Here's the backend section for the build process.
backend:
  phases:
    build:
      commands:
        - echo 'CLIENT_ID=$CLIENT_ID' >> backend/.env
        - '# Execute Amplify CLI with the helper script'
        - amplifyPush --simple
I wrote a comment, but to make it easier, I'll quote from the answers to this question:
build:
  commands:
    - npm run build
    - VARIABLE_NAME_1=$VARIABLE_NAME_1 # it works like this
    - VARIABLE_NAME_2=${VARIABLE_NAME_2} # it also works this way
Please upvote the original answers, and flag this question as a duplicate.
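Applied to the backend section from the question, a hedged sketch would look like this; whether the value actually reaches the backend build environment is the open feature request linked below:

backend:
  phases:
    build:
      commands:
        - CLIENT_ID=${CLIENT_ID}
        # double quotes so the shell expands the value before writing it
        - echo "CLIENT_ID=${CLIENT_ID}" >> backend/.env
        - '# Execute Amplify CLI with the helper script'
        - amplifyPush --simple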
It seems this is a feature request:
https://github.com/aws-amplify/amplify-cli/issues/4347

Docker image deployed to Google Compute Engine keeps restarting

I built an image with Google Cloud Build using Docker Compose. In my cloudbuild.yml file I have the following steps:
- Build the Docker image using Docker Compose
- Tag the built image
- Create an instance template
- Create instance group
Now here is the problem: every time a new instance gets built, the container created from the image keeps restarting and never actually boots up. Despite this, I can build the image and start it as a container on the instance myself, independent of the image from Cloud Build.
I managed to find some clues from the logs:
E1219 19:13:52 7f28dce6d700 api_server.cc:184 Metadata request unsuccessful: Server responded with 'Forbidden' (403): Transport endpoint is not connected
oauth2.cc:289 Getting auth token from metadata server docker
I also got some clues by running the following on the instance:
docker start -a -i <container_id>
Output: Unrecognized input header: 99
The cloudbuild.yml file looks like this (I've replaced some variables with ...):
#cloudbuild.yaml
steps:
- name: 'docker/compose:1.22.0'
  args: ['-f', 'docker/docker-compose.tb.prod.yml', 'up', '-d']
- name: 'gcr.io/cloud-builders/docker'
  args: ['tag', 'tb:latest', '...']
- name: 'gcr.io/cloud-builders/gcloud'
  args: [
    'beta', 'compute', '--project=...', 'instance-templates', 'create-with-container',
    'tb-app-staging-${COMMIT_SHA}',
    '--machine-type=n1-standard-2', '--network=...', '--network-tier=PREMIUM', '--metadata=google-logging-enabled=true',
    '--maintenance-policy=MIGRATE', '--service-account=...',
    '--scopes=https://www.googleapis.com/auth/cloud-platform,https://www.googleapis.com/auth/devstorage.read_only,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring.write,https://www.googleapis.com/auth/servicecontrol,https://www.googleapis.com/auth/service.management.readonly,https://www.googleapis.com/auth/trace.append',
    '--tags=http-server,https-server', '--image=cos-stable-69-10895-62-0', '--image-project=cos-cloud', '--boot-disk-size=20GB', '--boot-disk-type=pd-standard',
    '--container-restart-policy=always', '--labels=container-vm=cos-stable-69-10895-62-0',
    '--boot-disk-device-name=...',
    '--container-image=...',
  ]
- name: 'gcr.io/cloud-builders/gcloud'
  args: [
    'beta', 'compute', '--project=...', 'instance-groups',
    'managed', 'rolling-action', 'start-update',
    'tb-app-staging',
    '--version',
    'template=...',
    '--zone=europe-west1-b',
    '--max-surge=20',
    '--max-unavailable=9999'
  ]
images: ['...']
timeout: 1200s
I found the issue and I'll answer this question myself, just in case someone else runs into the same problem.
The problem was that in my docker-compose.yml I had stdin_open and tty set to true, but my cloudbuild.yml file did not accept them and was failing silently (annoying!).
To fix the issue you will need to use the flags --container-stdin and --container-tty on the create-with-container command.
More details can be found in the Google docs: https://cloud.google.com/compute/docs/containers/configuring-options-to-run-containers
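For illustration, a trimmed sketch of the create-with-container step from the cloudbuild.yaml above with the two flags added (the '...' placeholders are kept from the original, and the remaining flags are omitted here for brevity):

- name: 'gcr.io/cloud-builders/gcloud'
  args: [
    'beta', 'compute', '--project=...', 'instance-templates', 'create-with-container',
    'tb-app-staging-${COMMIT_SHA}',
    '--container-image=...',
    '--container-restart-policy=always',
    '--container-stdin', '--container-tty',
  ]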
I had a similar issue; the reason was setting USER in the Dockerfile. I was changing the user to 'node', which is a user available in the official Node.js images, but it does not work on Google Cloud containers.
FROM node:current-buster-slim
USER node

How to copy a file or jar file that has been built by Jenkins to a different host server

I have a Jenkins job where I am building a jar file. After the build is done I need to copy that jar file to a different server and deploy it there.
I am trying this YAML file to achieve the same, but it looks for the file on the remote server rather than on the Jenkins server.
---
# ansible_ssh_private_key_file: "{{inventory_dir}}/private_key"
- hosts: host
  remote_user: xuser
  tasks:
    - service: name=nginx state=started
      become: yes
      become_method: sudo
    - name: test a shell script
      command: sh /home/user/test.sh
    - name: copy files
      synchronize:
        src: /var/jenkins_home/hadoop_id_rsa
        dest: /home/user/
Could you please suggest whether there is any other way, or what the approach could be, to copy a build file to the server using Jenkins in order to deploy it.
Thanks.
Hi, as far as I know you can use the Publish Over SSH plugin in Jenkins. Actually, I am not fully clear about your problem, but I hope this can help you. Plugin details: https://wiki.jenkins-ci.org/display/JENKINS/Publish+Over+SSH+Plugin. If it doesn't help, please comment and be more specific (a screenshot if possible).
Use a remote SSH script in the build step; no plugin is required:
scp -P 22 Desktop/url.txt user@192.168.1.50:~/Desktop/url.txt
Set up passwordless authentication; use the link below for help:
https://www.howtogeek.com/66776/how-to-remotely-copy-files-over-ssh-without-entering-your-password/
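If you would rather stay with the Ansible approach from the question, here is a minimal sketch of a play that pushes the built jar from the Jenkins host to the target server; the jar path and destination below are assumptions for illustration:

- hosts: host
  remote_user: xuser
  tasks:
    - name: copy the built jar to the target server
      copy:
        # src is read on the machine running Ansible (the Jenkins server),
        # dest is written on the remote host
        src: /var/jenkins_home/workspace/my-job/target/app.jar
        dest: /home/user/app.jar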

Getting an error while trying to use a command under the lifecycle tag on kubernetes

I'm successfully running Kubernetes, gcloud and Postgres, but I want to make some modifications after pod startup. I'm trying to move some files, so I tried these 3 options:
1
image: paunin/postgresql-cluster-pgsql
lifecycle:
  postStart:
    exec:
      command: [/bin/cp /var/lib/postgres/data /tmpdatavolume/]
2
image: paunin/postgresql-cluster-pgsql
lifecycle:
  postStart:
    exec:
      command:
        - "cp"
        - "/var/lib/postgres/data"
        - "/tmpdatavolume/"
3
image: paunin/postgresql-cluster-pgsql
lifecycle:
  postStart:
    exec:
      command: ["/bin/cp "]
      args: ["/var/lib/postgres/data","/tmpdatavolume/"]
On options 1 and 2, I'm getting the same errors (from kubectl get events):
Killing container with docker id f436e40f5df2: PostStart handler: Error executing in Docker Container: -1
And on option 3 it won't even let me apply the YAML file, giving me this error:
error validating "postgres-master.yaml": error validating data: found invalid field args for v1.ExecAction; if you choose to ignore these errors, turn validation off with --validate=false
Any help would be appreciated! Thanks.
PS: I just pasted part of my YAML file, since I wasn't getting any errors until I added those new lines.
Here's the document about lifecycle hooks you might find useful.
Your option 1 won't work and should give you the error you saw; it should be ["/bin/cp","/var/lib/postgres/data","/tmpdatavolume/"] instead. Option 2 is also the right way to specify it. Can you kubectl exec into your pod and run those commands to see what error messages they generate? Do something like kubectl exec <pod-name> -i -t -- bash -il
The error message shown in option 3 means that you're not passing a valid configuration to the API server. To learn the API definition, see v1.Lifecycle; after a few clicks into its child fields you'll find that args isn't valid under lifecycle.postStart.exec.
Alternatively, you can find those API definition using kubectl explain, e.g. kubectl explain pods.spec.containers.lifecycle.postStart.exec in this case.
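Putting the answers together, a minimal sketch of a postStart hook that should validate and run (the -r flag is my assumption, since /var/lib/postgres/data is a directory):

image: paunin/postgresql-cluster-pgsql
lifecycle:
  postStart:
    exec:
      # one array element per argument; exec has no separate args field
      command: ["/bin/cp", "-r", "/var/lib/postgres/data", "/tmpdatavolume/"]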
