Jenkins CI deploying a Docker image on Heroku

I have a Jenkins CI and use it to build (mvn) and containerize (docker) my app using a Jenkins scripted pipeline. Lastly, I want to deploy the container to a Heroku dyno (I have already created an app).
I have followed this documentation https://devcenter.heroku.com/articles/container-registry-and-runtime and have successfully pushed my docker image to registry.heroku.com/sunset-sailing-4049/web.
The issue is that since this announcement https://devcenter.heroku.com/changelog-items/1426 I now need to explicitly execute "heroku container:release web" to get my docker container running from the registry onto the app dyno. This is where I am royally stuck. See my two issues below:
heroku is not recognized by Jenkins. (My Jenkins runs on EC2; I installed the Heroku CLI as the ec2-user user, but Jenkins throws the error heroku: command not found.) How do I resolve this?
How do I do "heroku login" from Jenkins, since the login command prompts for a browser login? I have added an SSH key, but I do not know how to use it from the command line, hence from a Jenkins "shell script" step.
The only other way I can think of is deploying via a Heroku pipeline using a dummy git repo to which Jenkins pushes the source code on a successful build.
Would really appreciate your help solving the above two issues.
Thanks in advance.

You need to install heroku as the user under which Jenkins is running. Or, if you installed it globally, it may not be on the PATH of the user under which Jenkins is running.
There are multiple options for setting PATH:
Set PATH for a specific command.
If your job is a pipeline, just wrap the heroku command in a withEnv closure:
withEnv(['PATH+HEROKU=/usr/local/bin']) {
    // your heroku command here, e.g.:
    sh 'heroku container:release web'
}
Set PATH for a Jenkins slave: go to [Manage Jenkins] -> [Manage Nodes], configure your node and set the environment variable PATH to $PATH:/usr/local/bin. This way all jobs running on the slave will get the environment variable injected.
For automated CLI interactions heroku supports API tokens. You can either put the token in ~/.netrc on the build machine or supply it as the HEROKU_API_KEY environment variable.
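For example, a minimal scripted-pipeline sketch of the token approach, assuming the token is stored as a Jenkins "Secret text" credential under the hypothetical id heroku-api-token (the Heroku CLI reads HEROKU_API_KEY from the environment, so no interactive login is needed):

// 'heroku-api-token' is a hypothetical credential id; create a Secret text
// credential holding your Heroku API token
withCredentials([string(credentialsId: 'heroku-api-token', variable: 'HEROKU_API_KEY')]) {
    // single quotes so the secret is passed via the environment,
    // not interpolated into the shell command by Groovy
    sh 'heroku container:release web --app your_app_name'
}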

(Writing here in case someone is facing the same scenario.)
OK, I took @vladimir's suggestion and did the below:
Heroku command (for Jenkins running on EC2):
The command below is needed to release a built docker image pushed to Heroku from Jenkins or another CI/CD tool; because of a recent change (https://devcenter.heroku.com/changelog-items/1426), pushing to the Heroku registry alone is no longer sufficient. To execute the command you need to install the Heroku CLI.
heroku container:release web
Install snap on Amazon Linux as below:
Follow the instructions to enable EPEL: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/add-repositories.html
Then modify /etc/yum.repos.d/epel.repo: under the section marked [epel], change enabled=0 to enabled=1.
Then do
sudo yum install epel-release
sudo yum install yum-plugin-copr
sudo yum copr enable ngompa/snapcore-el7
sudo yum -y install snapd
sudo systemctl enable --now snapd.socket
Then install the Heroku CLI:
sudo snap install --classic heroku
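A quick sanity check after the install (snap usually links the CLI into /snap/bin, which should be on PATH):

# verify the CLI is available to the Jenkins user
heroku --version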
Deploying the docker image to Heroku:
In Jenkins scripted pipeline:
withCredentials([string(credentialsId: 'heroku-api-cred', variable: 'herokuRegistryApiCred')]) {
    sh "docker login -u email@example.com -p ${herokuRegistryApiCred} registry.heroku.com"
}
// Tag the docker image (in my case it was an image on Docker Hub);
// the last path segment is the process type, e.g. web
sh "docker tag dockerhubusername/pvtreponame:${imageTag} registry.heroku.com/your_app_name/web"
sh "docker push registry.heroku.com/your_app_name/web"
sh "/usr/local/bin/heroku container:release web --app=your_app_name"
sh "docker logout registry.heroku.com"
In order to run the app inside docker (in my case a Java app) I had to change the Dockerfile command as below. Otherwise it crashed, because (1) the app must bind to the port Heroku assigns via $PORT, and (2) an exec-form instruction such as ENTRYPOINT ["java","-jar","my_spring_boot_app-0.0.1-SNAPSHOT.jar"] does not expand environment variables, so it does not work on Heroku:
CMD ["web", "java $JAVA_OPTS -Dserver.port=$PORT -jar /usr/app/my_spring_boot_app-0.0.1-SNAPSHOT.jar"]

Related

Retrieve GitLab pipeline Docker image to debug a job

I've got a build script that runs in GitLab and generates some files that are used later in the build process.
The problem is that the GitLab pipeline fails and the failure is not reproducible locally. Is there a way to troubleshoot it?
As far as I know, GitLab pipelines run in Docker containers.
Is there a way to get the docker image of the failed GitLab pipeline and analyze it locally (e.g. take a look at the generated files)?
When the job container exits, it is removed automatically, so retrieving it afterwards is not feasible.
However, you might have a few other options to debug your job:
Interactive web session
If you are using self-hosted runners, the best way to do this would probably be with an interactive web session, which gives you an interactive shell inside the job container. (Be aware that you may have to edit the job to sleep for some time in order to keep the container alive long enough to inspect it, as sketched below.)
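A hedged sketch of that keep-alive trick (the job name and generate-files command are illustrative):

debug-job:
  script:
    # on failure, keep the container alive for an hour so it can be inspected
    # via the interactive web session (note this masks the job's failure status)
    - generate-files || sleep 3600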
Artifact files
If you're not using self-hosted runners, another option is to save the generated files as artifacts on failure:
artifacts:
  when: on_failure
  paths:
    - path/to/generated-files/**/*
You can then download the artifacts and debug them locally.
Use the job script to debug
Yet another option would be to add debugging output to the job itself.
script:
  - generate-files
  # this is just an example; you can make this more helpful,
  # depending on what information you need for debugging
  - cat path/to/generated-files/*
Because debugging output may be noisy, you can consider using collapsible sections to collapse debug output by default.
script:
  - generate-files
  - echo "Starting debug section"
  # https://docs.gitlab.com/ee/ci/jobs/index.html#custom-collapsible-sections
  - echo -e "\e[0Ksection_start:`date +%s`:debugging[collapsed=true]\r\e[0KGenerated File Output"
  - cat path/to/generated-files/*
  - echo -e "\e[0Ksection_end:`date +%s`:debugging\r\e[0K"
Use the gitlab-runner locally
You can run jobs locally with, more or less, the same behavior as the GitLab runner by installing gitlab-runner locally and using the gitlab-runner exec command.
In this case, you can run your job locally and then docker exec into it:
In your local repo, start the job by running the gitlab-runner exec command, providing the name of the job
In another shell, run docker ps to find the container ID of the job started by gitlab-runner exec
exec into the container using its ID: docker exec -it <CONTAINER_ID> /bin/bash (assuming bash is available in your image)
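Putting those steps together as a shell sketch (the job name build and the docker executor are assumptions):

# run the job locally with the docker executor ("build" is an example job name)
gitlab-runner exec docker build

# in another shell: find the job's container ID, then open a shell inside it
docker ps
docker exec -it <CONTAINER_ID> /bin/bash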

When using sudo in a Jenkins pipeline sh command, it always asks for a password even though I've set sudoers to NOPASSWD

I'm running Jenkins 2.319.2 installed from Debian Bullseye repository and set up some nodes to run my tasks.
In my Jenkins pipeline job, which runs on a node rather than on the master, I set up several stages, including checkout, build, and deploy; finally I have to restart a system service using systemctl. The last step needs root privileges, so in the sudoers config I allowed the node's running user to run systemctl without a password (NOPASSWD). However, the final step always asks for a password, and hence fails.
If I log in directly as that user over SSH, I can run sudo systemctl without having to enter a password. In my other freestyle job I also run sudo systemctl restart myservice the same way in the "execute shell" step, without any problem. But in the pipeline stage it always asks for a password. No idea why.
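For reference, the kind of sudoers entry being described; this is a hypothetical /etc/sudoers.d/ fragment, with the user and service names assumed:

# hypothetical /etc/sudoers.d/jenkins entry; adjust the user and service name
jenkins ALL=(root) NOPASSWD: /bin/systemctl restart myservice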

Jenkins, run job in Docker

I am using Jenkins for my CI/CD and want to run most of my jobs in Docker.
I installed the plugin "CloudBees Docker Custom Build Environment Plugin", which allows me to run my jobs in a given Docker container.
When I check logs I see this:
docker exec --tty --user 996:994 890fd5fc166283923e61ea515d5f49a149e508c231281c39dc05e14d6ab43a09
The UID 996 is the jenkins user, which does not even exist inside the container.
That is a problem because I can't do anything once inside the container (apt update, apt install).
Do you have any idea how, using that plugin, I can use the real user inside the container? (In this case the user must be "node".)
Thanks

Jenkins pipeline issue with Docker

When I was trying to run a Jenkins pipeline project, it failed at the "docker pull node:6-alpine" step with this message:
.jenkins/workspace/simple-node-js-react-npm-app@tmp/durable-431710c5/script.sh: line 2: docker: command not found
script returned exit code 127
I have no idea what's going on here, and I couldn't access the directory mentioned in the error. I am pretty new to Jenkins.
Using the Jenkins Docker Plugin or the Jenkins Docker Pipeline Plugin is not enough to allow a node to use docker.
You still need to install docker on the node itself.
Please follow the steps below:
Install the Docker engine (yum install docker) on the server where Jenkins is running.
Verify docker is installed: run which docker.
In Manage Jenkins -> Manage Plugins, install the Docker plugin.
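A shell sketch of the host-side part (assuming a yum-based distribution, as in the answer):

# install and start the Docker engine on the Jenkins host
sudo yum install -y docker
sudo systemctl enable --now docker
# verify the binary is on PATH for the Jenkins user
which docker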

How can I run docker-compose up as shell script by Jenkins

I am trying to run the command docker-compose up -d as a build step of a Jenkins job, in "execute shell". The job fails and gives me the following console log:
docker-compose up --build -d
Couldn't connect to Docker daemon at http+docker://localunixsocket - is it running?
If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
Build step 'Execute shell' marked build as failure
Finished: FAILURE
When I cd into the Jenkins workspace (/var/lib/jenkins/workspace/app/) and try to run docker-compose up, at first I could get a normal build. Right now I get this error in the console: ERROR: Error processing tar file(archive/tar: invalid tar header). Of course the app builds and runs normally in my home directory when invoked from the console.
Docker is running on the host, and a regular user can run docker-compose. I did add the jenkins user to the docker group. I even tried following a tutorial (http://blog.csdn.net/qiyueqinglian/article/details/46559825) that had me change DOCKER_OPTS in /etc/default/docker, but after restarting the Docker service it was not listening on port 4243, so either I misunderstood the translation or it does not work on Ubuntu 16.04 (the host system).
Jenkins is not running in a container; it is installed directly on the host (no VM, no Docker). I tried completely removing Docker and Jenkins from the host (purge etc.) and reinstalling; still the same errors.
Any ideas?
Run this command as root and try again:
usermod -aG docker jenkins
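Note that group membership only applies to new sessions, so the Jenkins process has to be restarted after the change; a short sketch:

# add the jenkins user to the docker group (run as root)
usermod -aG docker jenkins
# group changes only apply to new sessions, so restart Jenkins
systemctl restart jenkins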
