How to copy a file or jar file that has been built in Jenkins to a different host server - jenkins

I have a Jenkins job in which I build a jar file. After the build is done I need to copy that jar file to a different server and deploy it there.
I am trying this YAML file to achieve that, but it looks for the file on the other server rather than on the Jenkins server.
---
# ansible_ssh_private_key_file: "{{inventory_dir}}/private_key"
- hosts: host
  remote_user: xuser
  tasks:
    - service: name=nginx state=started
      become: yes
      become_method: sudo

    - name: test a shell script
      command: sh /home/user/test.sh

    - name: copy files
      synchronize:
        src: /var/jenkins_home/hadoop_id_rsa
        dest: /home/user/
Could you please suggest whether there is another way, or what the approach should be, to copy a build artifact to the server from Jenkins for deployment?
Thanks.

Hi, as far as I know you can use the Publish Over SSH plugin in Jenkins. Actually, I am not clear about your problem, but I hope this can help you. Plugin details: https://wiki.jenkins-ci.org/display/JENKINS/Publish+Over+SSH+Plugin If it doesn't help you, please comment. Could you please be more specific? (screenshot if possible)

Use a remote SSH command in the build step; no plugin is required:
scp -P 22 Desktop/url.txt user@192.168.1.50:~/Desktop/url.txt
Set up passwordless authentication; use the link below for help:
https://www.howtogeek.com/66776/how-to-remotely-copy-files-over-ssh-without-entering-your-password/
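If it helps, here is a minimal sketch (not from the original answer) of a build-step script that copies the built jar and then runs a deploy script over SSH. It assumes passwordless authentication is already set up; the jar path, deploy path, hostname and script name are placeholders:

# Copy the jar produced in the Jenkins workspace to the target host
scp -P 22 target/myapp.jar user@192.168.1.50:/home/user/deploy/

# Run the deployment script on the target host
ssh -p 22 user@192.168.1.50 'sh /home/user/deploy/deploy.sh'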

Related

Bitbucket auto deploy to Linux server (DigitalOcean Droplet)

I have encountered a problem while attempting to deploy my code to a Droplet server (running Ubuntu) using Bitbucket Pipelines.
I have set the necessary environment variables (SSH_PRIVATE_KEY, SSH_USER, SSH_HOST) and added the public key of the SSH_PRIVATE_KEY to the ~/.ssh/authorized_keys file on the server. When I manually deploy from the server, there are no issues with cloning or pulling. However, during the automatic CI deployment stage, I am encountering the error shown in the attached image.
This is my .yml configuration.
Thanks in advance for your help.
To refer to the values of the variables defined in the configuration, your script should use $VARIABLE_NAME, not VARIABLE_NAME, as the latter is just the literal string.
- pipe: atlassian/ssh-run:latest
  variables:
    SSH_KEY: $SSH_KEY
    SSH_USER: $SSH_USER
    SERVER: $SSH_HOST
    COMMAND: "/app/deploy_dev01.sh"
Also, note that some pitfalls exist when using an explicit $SSH_KEY; it is generally easier and safer to use the default key provided by Bitbucket, see Load key "/root/.ssh/pipelines_id": invalid format.
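For context, a rough sketch of how that pipe might sit inside a bitbucket-pipelines.yml step, here relying on the repository's default SSH key rather than an explicit SSH_KEY as suggested above (the step name and the master branch trigger are placeholders, not from the original post):

pipelines:
  branches:
    master:
      - step:
          name: Deploy to Droplet
          script:
            - pipe: atlassian/ssh-run:latest
              variables:
                SSH_USER: $SSH_USER
                SERVER: $SSH_HOST
                COMMAND: "/app/deploy_dev01.sh"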

Bitbucket pipelines: Why does the pipeline not seem to be using my custom docker image?

In my pipelines yml file, I specify a custom image to use from my AWS ECR repository. When the pipeline runs, the "Build setup" logs suggests that the image was pulled in and used without issue:
Images used:
build : 123456789.dkr.ecr.ca-central-1.amazonaws.com/my-image@sha256:346c49ea675d8a0469ae1ddb0b21155ce35538855e07a4541a0de0d286fe4e80
I had worked through some issues locally relating to having my Cypress E2E test suite run properly in the container. Having fixed those issues, I expected everything to run the same in the pipeline. However, looking at the pipeline logs it seems that it was being run with an image other than the one I specified (I suspect it's using the Atlassian default image). Here is the source of my suspicion:
STDERR: /opt/atlassian/pipelines/agent/build/packages/server/node_modules/.cache/mongodb-memory-server/mongodb-binaries/4.0.14/mongod: /usr/lib/x86_64-linux-gnu/libcurl.so.4: version `CURL_OPENSSL_3' not found (required by /opt/atlassian/pipelines/agent/build/packages/server/node_modules/.cache/mongodb-memory-server/mongodb-binaries/4.0.14/mongod)
I know the working directory of the default Atlassian image is "/opt/atlassian/pipelines/agent/build/". Is there a reason that this image would be used and not the one I specified? Here is my pipelines config:
image:
  name: 123456789.dkr.ecr.ca-central-1.amazonaws.com/my-image:1.4
  aws:
    access-key: $AWS_ACCESS_KEY_ID
    secret-key: $AWS_SECRET_ACCESS_KEY

cypress-e2e: &cypress-e2e
  name: "Cypress E2E tests"
  caches:
    - cypress
    - nodecustom
    - yarn
  script:
    - yarn pull-dev-secrets
    - yarn install
    - $(npm bin)/cypress verify || $(npm bin)/cypress install && $(npm bin)/cypress verify
    - yarn build:e2e
    - MONGOMS_DEBUG=1 yarn start:e2e && yarn workspace e2e e2e:run
  artifacts:
    - packages/e2e/cypress/screenshots/**
    - packages/e2e/cypress/videos/**

pipelines:
  custom:
    cypress-e2e:
      - step:
          <<: *cypress-e2e
For anyone who happens to stumble across this, I suspect that the repository is mounted into the pipeline container at "/opt/atlassian/pipelines/agent/build" rather than the working directory specified in the image. I ran a "pwd" which gave "/opt/atlassian/pipelines/agent/build", though I also ran a "cat /etc/os-release" which led me to the conclusion that it was in fact running the image I specified. I'm still not entirely sure why, even testing everything locally in the exact same container, I was getting that error.
For posterity: I was using an in-memory mongo database from this project "https://github.com/nodkz/mongodb-memory-server". It generally works by automatically downloading a mongod executable into your node_modules and using it to spin up a mongo instance. I was running into a similar error locally, which I fixed by upgrading my base image from a Debian 9 to a Debian 10 based image. Again, still not sure why it didn't run the same in the pipeline, I suppose there might be some peculiarities with how containers are run in pipelines that I'm unaware of. Ultimately my solution was installing mongod into the image itself, and forcing mongodb-memory-server to use that executable rather than the one in node_modules.
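As an illustration only (not the poster's actual change): mongodb-memory-server can reportedly be pointed at a system-installed binary via its MONGOMS_SYSTEM_BINARY setting, so the pipeline script line might become something like the following, assuming mongod has been installed into the image at /usr/bin/mongod:

    - MONGOMS_SYSTEM_BINARY=/usr/bin/mongod MONGOMS_DEBUG=1 yarn start:e2e && yarn workspace e2e e2e:run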

How to both run an AWS SAM template locally and deploy it to AWS successfully (with Java Lambdas)?

I'm trying to build an AWS application using SAM (Serverless Application Model) with the Lambdas written in Java.
I was able to get it running locally by using a resource definition like this in the template:
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: HelloWorldFunction
      Handler: helloworld.App::handleRequest
      Runtime: java8
      Events:
        HelloWorld:
          Type: Api
          Properties:
            Path: /hello
            Method: get
But to get the sam package phase to upload only the actual code (and not the whole project directory) to S3 I had to change it to this:
...
    Properties:
      CodeUri: HelloWorldFunction/target/HelloWorld-1.0.jar
...
as documented in the AWS SAM example project README.
However, this breaks the ability to run the application locally with sam build followed by sam local start-api.
I tried to get around this by giving the CodeUri value as a parameter (with --parameter-overrides) and this works locally but breaks the packaging phase because of a known issue with the SAM translator.
Is there a way to make both the local build and the real AWS deployment working, preferably with the same template file?
The only workaround I've come up with myself so far is to use different template files for local development and for actual packaging and deployment.
To avoid maintaining two almost equal template files I wrote a script for running the service locally:
#!/bin/bash
echo "Copying template..."
sed 's/CodeUri: .*/CodeUri: HelloWorldFunction/' template.yaml > template-local.yaml
echo "Building..."
if sam build -t template-local.yaml
then
    echo "Serving local API..."
    sam local start-api
else
    echo "Build failed, not running service."
fi
This feels less than optimal but does the trick. Would love to hear better alternatives, still.
Another idea that came to mind was extending a common base template with separate CodeUri values for these cases, but I don't think SAM templates support anything like that.
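For illustration only (not the poster's exact template): parameterizing CodeUri as mentioned in the question might look roughly like this, with a made-up parameter name CodeUriParam; as noted above, the question reports that this approach currently breaks the packaging phase because of the SAM translator issue:

Parameters:
  CodeUriParam:
    Type: String
    Default: HelloWorldFunction
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: !Ref CodeUriParam
      Handler: helloworld.App::handleRequest
      Runtime: java8

Locally one would then override the value, e.g. with --parameter-overrides CodeUriParam=HelloWorldFunction, and pass the jar path for packaging.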

How to write a Dockerfile that properly pulls code from my GitHub

I'm working on building a website in Go, which is hosted on my home server via Docker.
What I'm trying to do:
I make changes to my website/server locally, then push them to GitHub. I'd like to write a Dockerfile that pulls this code from my GitHub repo and builds the image, which my docker-compose file will then use to create the container.
Unfortunately, all of my attempts have been somewhat close but wrong.
FROM golang:1.8-onbuild
MAINTAINER <my info>
RUN go get <my github url>
ENV webserver_path /website/
ENV PATH $PATH: webserver_path
COPY website/ .
RUN go build .
ENTRYPOINT ./website
EXPOSE <ports>
This file is kind of a combination of a few small guides I found through google searches, but none quite gave me the information I needed and it never quite worked.
I'm hoping somebody with decent docker experience can just put a Dockerfile together for me to use as a guide so I can find what I'm doing wrong? I think what I'm looking for can be done in only a few lines, and mine is a little more verbose than needed.
ADDITIONAL BUT PROBABLY UNNECESSARY INFORMATION BELOW
Project layout:
Data: where my Go files are. Side note: this was throwing errors when trying to build the image, something about not being in the environment path. Not sure if that is helpful.
Static: CSS, JS, images
TPL: Go template files
Main.go: launches the server/website
There are several strategies:
Using a pre-built app. Build your app with the
go build command for the target system architecture and OS (using the GOOS and GOARCH environment variables, for example), then use the COPY Docker instruction to move the built binary (with assets and templates) into your WORKDIR, and finally run it via CMD or ENTRYPOINT (the latter is preferable). The Dockerfile for this example will look like:
FROM scratch
ENV PORT 8000
EXPOSE $PORT
COPY advent /
CMD ["/advent"]
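A possible usage sketch for this first strategy (the binary name advent and the Linux/amd64 target are just examples, not from the answer):

# Cross-compile a static Linux binary so it can run in a scratch image
GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o advent .
# Build the image from the Dockerfile above and run it
docker build -t advent .
docker run -p 8000:8000 advent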
Build by Dockerfile. A typical Dockerfile:
# Start from a Debian image with the latest version of Go installed
# and a workspace (GOPATH) configured at /go.
FROM golang
# Copy the local package files to the container's workspace.
ADD . /go/src/github.com/golang/example/outyet
# Build the outyet command inside the container.
# (You may fetch or manage dependencies here,
# either manually or with a tool like "godep".)
RUN go install github.com/golang/example/outyet
# Run the outyet command by default when the container starts.
ENTRYPOINT /go/bin/outyet
# Document that the service listens on port 8080.
EXPOSE 8080
Using GitHub. Build your app and publish it to Docker Hub as a ready-to-use image (for example via an automated build linked to your GitHub repo).
Github supports Webhooks which can be used to do all sorts of things automagically when you push to a git repo. Since you're already running a web server on your home box, why don't you have Github send a POST request to that when it receives a commit on master and have your home box re-download the git repo and restart web services from that?
I was able to solve my issue by just creating an automated build through docker hub, and just using this for my dockerfile:
FROM golang:onbuild
EXPOSE <ports>
It isn't exactly the correct answer to my question, but it is an effective workaround. The automated build connects with my github repo the way I was hoping my dockerfile would.
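For anyone looking for the build-time-clone approach the question originally asked about, a minimal sketch might look like the following; the import path github.com/youruser/website and the exposed port are hypothetical placeholders:

FROM golang:1.8
# Fetch the source from GitHub at build time (hypothetical import path)
RUN go get -d -v github.com/youruser/website
WORKDIR /go/src/github.com/youruser/website
# Compile the site binary into the image
RUN go build -o /website .
ENTRYPOINT ["/website"]
EXPOSE 8080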

AWS Elastic Beanstalk Post Deploy Script

I'm attempting to add a post-deployment script to my Elastic Beanstalk environment. I've read a few blog posts (see the references below) that suggest adding a config file to the .elasticbeanstalk directory. The config file is supposed to create a shell script and copy it to the directory:
/opt/elasticbeanstalk/hooks/appdeploy/post
In my .elasticbeanstalk folder, I have the following files:
config.yml
branch-defaults:
  staging:
    environment: env-name
global:
  application_name: app-name
  default_ec2_keyname: aws-eb
  default_platform: 64bit Amazon Linux 2015.03 v1.3.1 running Ruby 2.2 (Passenger Standalone)
  default_region: us-west-2
  profile: eb-cli
  sc: git
test.config (the config I added to test out how the post deployment script works):
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/01_test.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      # This is my test shell script
After I run eb deploy, though, the shell script doesn't get copied to the above directory. My /opt/elasticbeanstalk/hooks/appdeploy/post directory is empty. The .elasticbeanstalk folder is present in the /var/app/current/ directory. I also cannot see anything in the log files that references the new script or an error. I've checked eb-tools.log, eb-activity.log, cron, and grepped all logs for keywords related to those files. Still, no luck.
Thanks in advance for any help/suggestions!
References:
https://forums.aws.amazon.com/thread.jspa?threadID=137136
http://www.dannemanne.com/posts/post-deployment_script_on_elastic_beanstalk_restart_delayed_job
The right place to drop your config file is the .ebextensions directory in your app, not .elasticbeanstalk. You should then see your shell script in the /opt/elasticbeanstalk/hooks/appdeploy/post directory.
As noted in the forum post
"Dropping files directly into the hooks directories is risky as this is not the documented method, is different in some containers and may change in the future"
