After jibBuildTar, what command to use to load to AWS ECR?

Is there an AWS CLI command to simply load my image tar to AWS? (I do not have Docker on my computer at all, and I do not want to increase the image size of our CD pipeline; I prefer to keep it very small since a larger image adds to the startup time in CircleCI/GitHub CI.)
Is there a way to do this?
If not, what is the way to do this with Docker so that I do not have to load the tar into a local Docker registry first and can instead push it directly to AWS ECR?
Context:
Our CI job builds on the PR and writes a version/build number onto the image and into a file on the PR as well, BUT NONE OF THIS should be deployed until it is on the master branch. A PR cannot merge into master until it is up to date with master, and merging master into the PR triggers CI again, so the PR is guaranteed to be current when it lands on master. This CI job produces the artifact that CD can deploy (i.e. there is no need to rebuild it all over again, which sometimes takes a while).
Our CD job triggers on merge to master, reads the artifacts, and needs to deploy them to AWS ECR. The image for our CD job is very small, containing just the AWS tooling right now (no need for Java, no need for Gradle, etc.).

I am sorry because I don't fully understand your requirements, but maybe you could use a tool like skopeo.
Among other things, it allows you to copy your container tar to different registries, including AWS ECR, without needing to install Docker; something like this:
skopeo login -u AWS -p $(aws ecr get-login-password) 111111111.dkr.ecr.us-east-1.amazonaws.com
skopeo copy docker-archive:build/your-jib-image.tar docker://111111111.dkr.ecr.us-east-1.amazonaws.com/myimage:latest
This approach is also documented as a possible solution in a GitHub issue in the GoogleContainerTools repository.
In that issue they recommend another tool, named crane, which looks similar in purpose to skopeo, although I have never tested it.
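For reference, the crane equivalent would look roughly like this (a sketch I have not tested, reusing the registry and tar path from the skopeo example above):
crane auth login 111111111.dkr.ecr.us-east-1.amazonaws.com -u AWS -p "$(aws ecr get-login-password)"
crane push build/your-jib-image.tar 111111111.dkr.ecr.us-east-1.amazonaws.com/myimage:latest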

Related

Build multiple projects

I'm exploring the Jenkins world to see if it can fit my needs for this case.
I need to build two git repositories (backend and frontend). For the backend, I would need:
Choose the branch we want to build from a list
Checkout the branch and build Docker image using the Dockerfile
push to ECR
release to a specific Kubernetes deployment
After backend build, we have to build the frontend by doing:
Choose the branch we want to build from a list
Checkout the branch and run npm script to build
deploy to S3 folder
Build of the project should be triggered only manually, by the project owner (who is not a developer).
Is Jenkins the right way to go? And if yes, could you point me to how you would do it?
Thanks
Yes, you can definitely implement what you need with Jenkins. There are different ways to implement each step, but here are some things you can consider using.
For branch listing, you can consider a plugin like the List Git Branches Plugin.
For Docker image building and pushing, you can use the Jenkins Docker steps.
For the Kubernetes part, you can probably use a shell script or something like Kubecli.
For the S3 part, you can use the S3 Publisher Plugin.
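As a rough sketch, the backend stages could boil down to shell steps like these (the account ID, region, repository, deployment and container names are placeholders you would replace):
#!/usr/bin/env bash
# Sketch: build the chosen branch, push to ECR, roll out to Kubernetes.
set -euo pipefail
BRANCH="$1"    # the branch selected from the list parameter
TAG="123456789012.dkr.ecr.us-east-1.amazonaws.com/backend:${BRANCH}"
git checkout "$BRANCH"
docker build -t "$TAG" .
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker push "$TAG"
kubectl set image deployment/backend backend="$TAG"    # release to the target deployment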

Using jenkins and docker to deploy to server

Hey, I am currently learning Jenkins pipelines for CI and CD.
I successfully deployed my Express.js app with Jenkins to a server on my local machine.
The problem is that my ENV values are exposed in my public repository.
I am trying to understand how to hide those ENV values in Jenkins using variables. Is it also possible to use variables in the Dockerfile to hide my ENV?
In my Jenkins pipeline I pass the values with docker run -p -e myEnV=key.
I would love to hide my ENV values so people cannot see my keys inside my Jenkinsfile and Dockerfile.
I am using a multibranch pipeline in Jenkins, following the Hackernoon article on deploying a React and Node.js app with Jenkins.
Also, what is the advantage of pushing our container image to Docker Hub?
If we push it there and later want to move our server to another server, we just need to pull our Docker Hub repo on the new server, because everything we have built gets pushed to our Docker Hub repo every time, right?
For your first question, you should use the EnvInject Plugin, or, if you are running Docker from the pipeline, set the environment variable in Jenkins and then access it in the docker run command.
In the pipeline, you can access an environment variable like this:
${env.DEVOPS_KEY}
So your docker run command will be
docker run -p -e myEnV=${env.DEVOPS_KEY}
But make sure you have set DEVOPS_KEY on the Jenkins server.
Using EnvInject is pretty simple.
You can also inject variables from a file.
For your second question: yes, just pull the image from Docker Hub and use it.
Anyone on your team can pull and run it, so only the Jenkins server has to build and push the image. This saves time for everyone else, and the image stays up to date and is also available remotely.
Never push or keep sensitive data in your Docker image.
Using Docker Hub or any kind of registry like Sonatype Nexus, Registry, JFrog Artifactory helps you to keep your images with their tags and share it with anyone. It also means that the images are safe there. If your local environment goes down, the images will stay there. It also helps for version control. If you are using multibranch pipelines, that means that you probably will generate different images with different tags.
Running Jenkins, the jobs, and the deployments all on the same server is not a good practice. In my experience from previous work, typical examples are: the server starts getting bloated after some time, Jenkins stops working at exactly the moments you need it most, and the application you have deployed stops working because the Jenkins jobs take all the resources.
Currently, I am running separate servers for the Jenkins master and slaves. The master instance does not run any jobs; only the slave instances do. This keeps Jenkins alive all the time. If a slave goes down, you can simply set up another one.
For deployment, I am using Ansible, which can deploy the same Docker image to multiple servers simultaneously. It is easy to use and, in my opinion, quite safe as well.
For sensitive data such as keys, passwords, and API keys, you are right about using the -e flag. You can also use --env-file. This way, you keep the values outside of the Docker image and in a file. For passwords, I prefer to have a shell script that generates the passwords into environment files.
If you are planning to use the environment variables as they are, you can keep the values you are going to set safely inside Jenkins and then read them as variables. You can see how on the Jenkins website.
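As a small illustration of the --env-file approach (the file name, variable and image name here are made up):
echo "myEnV=key" > app.env    # kept outside the image and out of version control
docker run --env-file app.env myimage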

dockerhub automated build from single repo with single dockerfile building multiple images

I have a single git repository on github with:
a Dockerfile which builds multiple images meant to be used together (a Maven build produces a war file and sql files by downloading them from artifact repositories; a multi-stage build then creates a slim Tomcat image with the war and a slim MySQL image with the sql data preloaded).
a docker-compose.yml file that uses the "target" instruction to build and run containers on the images from the multi-stage build.
This works well during development. But it forces users to build images on their computer. I want the users to be able to only download images. The setup should also be using dockerhub's automated build to keep images up to date.
How can I set it up to achieve this? What command(s) or file(s) do I give the users to allow them to download images and run containers? If it is not possible, what can I do to make it possible (split the repo? copy-paste the Dockerfile? publish intermediate images to Docker Hub and ensure a correct build order? not use Docker Hub's automated build?)
To use dockerhub's automated builds you would need to build one image per Dockerfile and have one Dockerfile per repo. The image name comes from the source repository name (with the github org/user name as the docker image user name and the github repo name as the docker image name). Multistage builds work in automated builds but only one image is published per Dockerfile (the final image of the build).
You could build the images in your CI or even on your local machine and then push to dockerhub. You'd just need to have an account on dockerhub and be logged in to that account when you use the docker push command. When doing this push there doesn't have to be any mapping to GitHub repos but your image names should start with <dockerhub_user>/ as a kind of prefix (explained at https://docs.docker.com/docker-hub/repos/). It's ok if they are built under a different name as you could rename by retagging before pushing. This way you can also build the images however you like.
When you have images in dockerhub you can just refer to them in the docker-compose file using the form image: <dockerhub_user>/<dockerhub_image_name>:<tag>. The images will automatically be pulled when the user does docker-compose up.
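For instance, the build, retag and push could look like this (the placeholders follow the same form as above):
docker login    # log in with your Docker Hub account
docker build -t localname .    # build under any local name you like
docker tag localname <dockerhub_user>/<dockerhub_image_name>:<tag>
docker push <dockerhub_user>/<dockerhub_image_name>:<tag>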
Here are some tips and links that should help your situation:
Automated builds are a convenient way to deploy your images.
This part is pretty easy. You'll need accounts with Docker Hub and Github. You can register these accounts for free.
When you create a repository on Docker Hub you can link it to your GitHub repository to automate the build.
Recommendations:
Split your services into separate Dockerfiles. Ideally you should use separate repositories: Docker Compose will pull them together at the end. A division of services will also help if anyone wants to implement e.g. a cloud database backend for their deployment.
Don't store database files inside a container. Containers should be ephemeral.
For a robust design, test your builds.
Docker Hub automated builds are very flexible with the use of build hooks.
This part is a little tricky because I haven't found the best documentation. It also might not be necessary if you split your Dockerfile.
I've successfully created automated builds with multiple tags and targets using a hook at hooks/build but after reading the documentation it looks like you should also be able to use hooks/post_build.
Your hook could simply build the correct layer and push the tag to Docker Hub.
For your repository that should look like:
#!/usr/bin/env bash
docker build --target lutece-mysql -t lutece/mysql .
docker push lutece/mysql
If you end up using hooks/build you might need to build the final target as the last step.
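For example, a hooks/build along these lines would publish the intermediate image and still build the final target last under the name Docker Hub expects (a sketch based on the lutece names above; $IMAGE_NAME is an environment variable Docker Hub provides to build hooks):
#!/usr/bin/env bash
docker build --target lutece-mysql -t lutece/mysql .
docker push lutece/mysql
docker build -t "$IMAGE_NAME" .    # final target, published by Docker Hub itself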
Recommendations:
If you need multiple tags for an image use a hook at hooks/post_push to add additional tags. That way each tag should link users to the same image. e.g.
#!/usr/bin/env bash
docker tag lutece/tomcat:master lutece/tomcat:latest
docker push lutece/tomcat:latest
Additionally you can use build hooks to label your image with things like build date and git commit.
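For instance, a hooks/build could pass build metadata as labels (a sketch; SOURCE_COMMIT and IMAGE_NAME are environment variables Docker Hub makes available to build hooks):
#!/usr/bin/env bash
docker build \
  --label org.opencontainers.image.revision="$SOURCE_COMMIT" \
  --label org.opencontainers.image.created="$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  -t "$IMAGE_NAME" .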
Deployment with Docker Compose
Unfortunately I haven't done this part so I can't confirm how to get this to work.
With your repositories in Docker Hub and a working docker-compose.yml your clients may only need to run docker-compose up in the directory with your docker-compose.yml file. Docker Compose should pull in the images from Docker Hub.
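In that case the client-side steps should reduce to something like:
docker-compose pull    # fetch the published images from Docker Hub
docker-compose up -d   # start the containers in the background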

Docker: updating image and registry

What is the right workflow for updating and storing images?
For example:
I download source code from GitHub (project with Docker files, docker-compose.yml)
I run "docker build"
And I push new image to Docker Hub (or AWS ECR)
I make some changes in source code
Push changes to GitHub
And what should I do now to update the registry (Docker Hub)?
A) Should I run again "docker build" and then push new image (with new tag) to registry?
B) Should I somehow commit changes to existing image and update existing image on Docker Hub?
This will depend on what you will use your Docker image for and what releasing policy you adopt.
My recommendation is that you sync the tags you keep on Docker Hub with the releases or tags you have on GitHub, and automate as much of the process as you can with a continuous integration tool like Jenkins and GitHub webhooks.
Then your flow becomes:
You make your code modifications and integrate them in GitHub, ideally using a pull request scheme. This means your code gets merged into your master branch.
Your Jenkins is configured so that when master changes it builds from your Dockerfile and pushes the image to Docker Hub. This overwrites your "latest" tag and makes sure the latest tag on Docker Hub is always in sync with your master branch on GitHub.
If you need to keep additional tags, typically for different branches or releases of your software, you do the same as above, with the tag hooked up through Jenkins and GitHub webhooks to a non-master branch. For this, take a look at how the official libraries are organized on GitHub (for example the Postgres or MySQL images).
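A minimal sketch of the shell step such a Jenkins job could run on a master build (the Docker Hub user and image names are placeholders):
#!/usr/bin/env bash
set -euo pipefail
GIT_TAG=$(git describe --tags --always)    # mirror the GitHub tag on Docker Hub
docker build -t mydockerhubuser/myapp:latest -t "mydockerhubuser/myapp:${GIT_TAG}" .
docker push mydockerhubuser/myapp:latest
docker push "mydockerhubuser/myapp:${GIT_TAG}"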

Git to docker export

I'd like some opinions on this workflow. The intention is to semi-automate and revision control the creation/export of docker containers.
I have some docker directories with a Dockerfile etc. inside (enough to build a Docker image from). At the moment, I've set up a process where this becomes a local git repo, and then I set up a bare repo on a remote server. Then I add an 'update' hook to the remote repo that takes the name of the repo and calls a script that clones the repo, builds the Docker image, starts a container, exports the container, and deletes the repo. I then end up with a .tar of my Docker container every time I push an update to that repo.
The only issue is that I have to manually copy the hook to each remote repo I set up (considering .git/hooks doesn't get pushed from local).
So I'm looking for some feedback on whether this whole process has any intelligence to it or if I am going about it the completely wrong way.
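For context, the 'update' hook described above could be sketched roughly like this (paths and names are illustrative, not the actual script):
#!/usr/bin/env bash
# Sketch of the described hook: clone, build, create/export a container, clean up.
set -euo pipefail
repo_name=$(basename "$PWD" .git)
workdir=$(mktemp -d)
git clone "$PWD" "$workdir"
docker build -t "$repo_name" "$workdir"
container_id=$(docker create "$repo_name")
docker export -o "/var/exports/${repo_name}.tar" "$container_id"
docker rm "$container_id"
rm -rf "$workdir"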
What you are looking for is called "Continuous Integration".
There are multiple ways to achieve it, but here's how I do it:
Set up a Jenkins server
Put all docker files into one git repo, as modules if necessary
Have Jenkins check for changes in the repo every few minutes
Have Jenkins build the docker images after pulling in the changes
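As a sketch, the build step Jenkins runs after pulling the changes could be a short shell script like this (image and file names are illustrative):
#!/usr/bin/env bash
# Build the image from the checked-out repo and archive it as a tar.
set -euo pipefail
docker build -t myimage:latest .
docker save -o myimage.tar myimage:latest    # archive the image for later transfer/import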
