Directory structure for a project with Dockerfile, Jenkinsfile, Kubernetes deployment YAML, pip requirements.txt, and test scripts? - docker

Would the following directory structure work?
The goal is to have Jenkins trigger off GitHub commits and run Multi-branch Pipelines that build and test containers. (I have everything running on Kubernetes, including Jenkins)
/project
    .git
    README.md
    Jenkinsfile
    /docker_image_1
        Dockerfile
        app1.py
        requirements.txt
        /unit_tests
            unit_test1.py
            unit_test2.py
    /docker_image_2
        Dockerfile
        app2.py
        requirements.txt
        /unit_tests
            unit_test1.py
            unit_test2.py
    /k8s
        /dev
            deployment.yaml
        /production
            deployment.yaml
    /component_tests
        component_tests.py
Is the k8s folder that holds the deployment.yamls in the right place?
Are the test folders in good locations? The tests in "component_tests" will ideally do more end-to-end, integrated testing that involves multiple containers.
I see a lot of repos that have the Jenkinsfile and Dockerfile at the same directory level. What are the pros and cons of that?
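To make the layout concrete, here is a minimal sketch of what the Jenkinsfile at the repo root might look like for a Multi-branch Pipeline over this structure. The image names, registry, branch names, and test commands are illustrative placeholders, not prescribed values:

```groovy
// Hypothetical declarative Jenkinsfile; adjust names and commands to your setup.
pipeline {
    agent any
    stages {
        stage('Build images') {
            steps {
                // one docker build per image subfolder
                sh 'docker build -t myregistry/app1:${GIT_COMMIT} docker_image_1'
                sh 'docker build -t myregistry/app2:${GIT_COMMIT} docker_image_2'
            }
        }
        stage('Unit tests') {
            steps {
                // run each image's unit tests inside its own container
                sh 'docker run --rm myregistry/app1:${GIT_COMMIT} python -m pytest unit_tests'
                sh 'docker run --rm myregistry/app2:${GIT_COMMIT} python -m pytest unit_tests'
            }
        }
        stage('Component tests') {
            steps {
                // end-to-end tests that exercise multiple containers together
                sh 'python -m pytest component_tests'
            }
        }
        stage('Deploy to dev') {
            when { branch 'develop' }
            steps {
                sh 'kubectl apply -f k8s/dev/deployment.yaml'
            }
        }
    }
}
```

The per-branch `when` condition is what lets one Jenkinsfile drive both the dev and production manifests from the k8s folder.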

There's no good answer to this question currently.
Kubernetes provides a standard API for deployment, but as a technology it relies on additional third-party tooling to manage the build part of the ALM workflow. There are lots of options available for turning your source code into a container running on Kubernetes. Each has its own consequences for how your source code is organised and how a deployment might be invoked from a CI/CD server like Jenkins.
I offer the following collection of options for your consideration, roughly categorized. It represents my current evaluation list.
"Platform as a service" tools
Tooling that manages the entire ALM lifecycle of your code. Powerful but more complex and opinionated.
Deis workflow
Openshift
Fabric8 (See also Openshift.io)
Build and deploy tools
Tools useful for the code/test/retest workflow common during development. They can also be invoked from Jenkins to abstract your build process.
Draft
Forge
Kompose
Fabric8 Maven plugin (Java)
Psykube
YAML templating tools
Kubernetes YAML was never designed to be written by human beings. There are several initiatives to make this process simpler and more standardized.
Helm
Ksonnet
Deployment monitoring tools
These tools have conventions where they expect to find Kubernetes manifest files (or helm charts) located in your source code repository.
Keel
Kube-applier
Kubediff
Landscaper
Kit
CI/CD tools with k8s support
Spinnaker
Gitlab
Jenkins + Kubernetes CI plugin
Jenkins + Kubernetes plugin

This is really left largely to your preference.
In our projects we tend to split services into separate repositories rather than subfolders, but we also had a case where a bunch of Scala microservices were managed in a similar way (although the Docker images were built with the sbt docker plugin).
One big piece of advice I would give you: in the long run, managing your Kubernetes manifests like that can become a serious pain. I went through this, and my suggestion is to use Helm charts from day one.
I assume that your "component_tests" are end-to-end tests. Other than naming, I see no problem with that. For cases where we test solutions that span multiple repos, we keep the tests in a separate repo as well, though.
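As a rough illustration of the Helm-from-day-one suggestion, the raw k8s/dev and k8s/production folders could be collapsed into a single templated chart. The layout and file names below are a hypothetical sketch, not a required convention:

```
/project
    /chart
        Chart.yaml
        values.yaml              # defaults shared by all environments
        values-dev.yaml          # dev overrides (e.g. replicas, image tag)
        values-production.yaml   # production overrides
        /templates
            deployment.yaml      # one templated manifest instead of two copies
```

Deployment then becomes a single parameterized command per environment, e.g. `helm upgrade --install myapp ./chart -f chart/values-dev.yaml`, rather than maintaining near-duplicate YAML files by hand.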

Related

Run a gitlab CI pipeline in Docker container

Absolute beginner in DevOps here. I have a GitLab repo that I would like to build, and I'd like to run its tests in the GitLab CI pipeline.
So far, I'm only testing locally on my machine with a specific runner. There's a lot of information out there and I'm starting to get lost with what to use and how to use it.
How would I go about creating a container with the tools that I need (VS compiler, CMake, Git, etc.)?
My application contains an SDK that only works on Windows, so I'm not sure building on another platform would work at all; how do I select a Windows-based container?
How would I use that container in the GitLab YAML file so that I can build my solution and run my tests?
Any specific documentation links or suggestions are welcome and appreciated.
How would I go about creating a container with the tools that I need (VS compiler, CMake, Git, etc.)?
You can install those tools before the pipeline script runs. I usually do this in before_script.
If there are large-ish packages that need to be installed on every pipeline run, I'd recommend that you make your own image with all the required build dependencies, push it to the GitLab container registry, and then just use it as your job image.
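Both approaches can be sketched in one .gitlab-ci.yml. Everything below is illustrative: the registry path assumes you have already built and pushed a build image to this project's container registry, and the build commands assume a CMake project:

```yaml
# Hypothetical .gitlab-ci.yml; image path and commands are placeholders.
image: registry.gitlab.com/mygroup/myproject/build-image:latest

stages:
  - build
  - test

build:
  stage: build
  before_script:
    # small on-the-fly setup can go here instead of baking it into the image
    - cmake --version && git --version
  script:
    - cmake -S . -B build
    - cmake --build build

test:
  stage: test
  script:
    - ctest --test-dir build
```

The trade-off is speed vs. maintenance: before_script installs run on every pipeline, while a prebuilt job image is faster but has to be rebuilt when dependencies change.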
My application contains an SDK that only works on Windows, so I'm not sure building on another platform would work at all; how do I select a Windows-based container?
If you're using gitlab.com, Windows runners are currently in beta but available for use.
SaaS runners on Windows are in beta and shouldn’t be used for production workloads.
During this beta period, the shared runner quota for CI/CD minutes applies for groups and projects in the same manner as Linux runners. This may change when the beta period ends, as discussed in this related issue.
If you're self-hosting, set up your own runner on Windows.
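Either way, the job is routed to a Windows runner via tags. This is a hedged sketch: the exact tag names depend on your setup (the ones below were the gitlab.com shared Windows runner tags during the beta), and the build command assumes msbuild is on the runner's PATH:

```yaml
# Hypothetical job targeting Windows runners; tags and command are assumptions.
windows-build:
  tags:
    - shared-windows
    - windows
  script:
    - msbuild MySolution.sln
```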
How would I use that container in the GitLab YAML file so that I can build my solution and run my tests?
This really depends on:
the previous parts (whether you're on GitLab.com or self-hosted)
how your application is built
what infrastructure you have access to
What I'm trying to say is that I don't feel I can give you a good answer without quite a bit more information.

Configuring different build tools for Jenkins Instance in Openshift

We provide Jenkins as a Service using OpenShift as the orchestration platform in our corporate environment. Different teams use different tools, and different versions of them, to configure jobs.
For instance, we have 3 different combinations of Java and Maven, 5 different versions of npm, and 2 different versions of Python.
I wanted to know what the best practice is for configuring different tools.
Do I need to create and use a slave image for each combination and each version of a tool?
Is it good practice to keep a simple slave image per JDK version (1.7, 1.8, etc.), configure the JDK, npm, Maven, and Python packages as tools, and use a persistent volume on the slave, so that during a build these packages are set up on the fly in the PVC?
Is it an anti-pattern to use tools this way in Docker slave images?
I have accomplished this by creating a git repository called jenkins; the structure of the repository looks like:
master/
    plugins.txt
    config-stuff
agents/
    base/
    nodejs8/
    nodejs10/
    nodejs12/
    maven/
    java8/
openshift/
    templates/
        build.yaml
        deploy.yaml (this includes the deployment and configmaps to attach the agents)
    params/
        build
        deploy
We are able to build each agent and the master independently. We place the deployment template on the OpenShift cluster so the user only has to run oc process openshift//jenkins | oc apply -f - to install Jenkins in a namespace. However, you should also look into Helm for installing Jenkins as a Helm chart.
In my view it is better to create separate tool images for specific apps: only Java tools for Java apps, only Python tools for Python apps. You can use Docker Compose so that all tools are available from a single host. You will preserve volume data when containers are created.
Compose supports variables in the compose file. You can use these variables to customize your composition for different environments or different users.
Compose preserves all volumes used by your services. When docker-compose up runs, if it finds any containers from previous runs, it copies the volumes from the old container to the new container. This process ensures that any data you’ve created in volumes isn’t lost.
Compose caches the configuration used to create a container. When you restart a service that has not changed, Compose re-uses the existing containers. Re-using containers means that you can make changes to your environment very quickly.
An example compose file can be found in the Compose file reference in the Docker documentation.
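A minimal sketch of that idea, with one tool container per stack, follows. Service names, images, mount paths, and the environment variable are all hypothetical placeholders; the named volumes are what preserve caches between container re-creations:

```yaml
# Hypothetical docker-compose.yml: per-stack tool containers with cached volumes.
version: "3.8"
services:
  java-tools:
    image: maven:3-openjdk-11
    volumes:
      - m2-cache:/root/.m2        # Maven cache survives container re-creation
      - ./:/workspace
    working_dir: /workspace
  python-tools:
    image: python:3.9
    volumes:
      - pip-cache:/root/.cache/pip
      - ./:/workspace
    working_dir: /workspace
    environment:
      # Compose variable with a default, customizable per environment/user
      - PIP_INDEX_URL=${PIP_INDEX_URL:-https://pypi.org/simple}
volumes:
  m2-cache:
  pip-cache:
```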

Continuous Integration and Continuous Delivery for react app with Jenkins

I want to setup a CI and CD processes for a React App for the company I'm working for, the following technologies are used:
React for frontend
Flask for backend
Docker
GitHub for source control management
Currently we are using a script to build the app and then deploy it manually to an AWS S3 bucket. I've read some articles and watched tutorials, and almost all of them cover Java-based projects and use Maven as a build tool to package the project before deploying.
I'd appreciate it if you could help.
I agree that the question is a bit broad, but generally speaking you should have a different CI pipeline for your frontend and backend applications.
This has many implications, since it will allow you to:
Use different release cycles for your backend/frontend application
Reduce build time
You might however at some point run an integration step to make sure everything holds together. Generally speaking, your pipeline should look like this (run on every commit):
Build the Docker image
Linter (to ensure a minimum of code formatting and quality)
Unit testing
Code coverage (code coverage per se is a bit useless, but tracking how it evolves and enforcing a minimum % might help with quality)
Functional testing (this makes more sense for your backend stack, if it uses a database for instance)
If everything passes, push to DockerHub
Deploy the recently built image to the corresponding environment; for example, merging to develop implies deployment to your staging environment
Also make sure you choose a CI/CD tool that doesn't get in your way and that's flexible enough (e.g. GitLab, Jenkins).
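The stage list above could be sketched as a GitLab CI configuration for the React frontend. Job names, the Node/Docker image tags, the DockerHub repository, and the S3 bucket are all illustrative assumptions:

```yaml
# Hypothetical pipeline for the frontend; adapt names and targets to your project.
stages: [lint, test, build, deploy]

lint:
  stage: lint
  image: node:16
  script: [npm ci, npm run lint]

unit-test:
  stage: test
  image: node:16
  script: [npm ci, npm test -- --coverage]

build-image:
  stage: build
  image: docker:20.10
  services: [docker:20.10-dind]   # docker-in-docker to build the image
  script:
    - docker build -t mycompany/frontend:$CI_COMMIT_SHORT_SHA .
    - docker push mycompany/frontend:$CI_COMMIT_SHORT_SHA

deploy-staging:
  stage: deploy
  only: [develop]                 # merging to develop deploys to staging
  script:
    - aws s3 sync build/ s3://my-staging-bucket
```

The Flask backend would get its own, parallel pipeline with the same shape but its own test and deploy jobs.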

DevOps vs Docker

I am wondering how exactly Docker fits into CI/CD.
I understand that with the help of containers, you may focus on code rather than dependencies/environments. But once you check in your code, you will expect tools like TeamCity, Jenkins or Bamboo to take care of the integration build, integration/unit tests, and deployment to target servers (after approvals), where you will expect the same Docker container image to run the built code.
However, in all of the above, Docker is nowhere in the CI/CD cycle, though it comes into play when execution happens at the server. So why do I see articles listing it as one of the things for DevOps?
I could be wrong, as I am not a DevOps guru; please enlighten me!
Docker is just another tool available to DevOps Engineers, DevOps practitioners, or whatever you want to call them. What Docker does is encapsulate code and code dependencies in a single unit (a container) that can be run anywhere the Docker engine is installed. Why is this useful? For multiple reasons; but in terms of CI/CD it can help Engineers separate Configuration from Code, decrease the amount of time spent on dependency management, and scale (with the help of some other tools, of course). The list goes on.
For example: If I had a single code repository, in my build script I could pull in environment specific dependencies to create a Container that functionally behaves the same in each environment, as I'm building from the same source repository, but it can contain a set of environment specific certificates and configuration files etc.
Another example: If you have multiple build servers, you can create a bunch of utility Docker containers that can be used in your CI/CD Pipeline to do a certain operation by pulling down a Container to do something during a stage. The only dependency on your build server now becomes Docker Engine. And you can change, add, modify, these utility containers independent of any other operation performed by another utility container.
Having said all of that, there really is a great deal you can do to utilize Docker in your CI/CD Pipelines. I think an understanding of what Docker is and what Docker can do is more important than a "how to use Docker in your CI/CD" guide. While there are some common patterns out there, it all comes down to the problem(s) you are trying to solve, and certain patterns may not apply to a certain use case.
Docker facilitates the notion of "configuration as code". I can write a Dockerfile that specifies a particular base image that has all the frameworks I need, along with the custom configuration files that are checked into my repository. I can then build that image using the Dockerfile, push it to my docker registry, then tell my target host to pull the latest image, and then run the image. I can do all of this automatically, using target hosts that have nothing but Linux installed on them.
This is a simple scenario that illustrates how Docker can contribute to CI/CD.
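That "configuration as code" scenario can be sketched as a Dockerfile. The base image, paths, and start command are illustrative assumptions for a Python app like the ones in the original question:

```dockerfile
# Hypothetical Dockerfile: pinned base image plus config files versioned
# alongside the code, so the whole environment is reproducible from the repo.
FROM python:3.9-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# custom configuration files checked into the same repository
COPY config/ /etc/myapp/
COPY . .

CMD ["python", "app.py"]
```

The automated loop described above is then `docker build` on the CI server, `docker push` to a registry, and `docker pull` followed by `docker run` on target hosts that need nothing but Linux and the Docker engine.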
Docker is also useful for building your applications. If you have multiple applications with different dependencies, you can avoid having a lot of dependencies and conflicts on your CI machine by building everything in Docker containers that have the necessary dependencies. If you need to scale in the future, all you need is another machine running your CI tool (like a Jenkins slave) and an installation of Docker.
When using microservices this is very important. One application can depend on an old version of a framework while another needs the new version. With containers, that's not a problem.
Docker is a DevOps Enabler, Not DevOps Itself: Using Docker, developers can support new development, enhancement, and production support tasks easily. Docker containers define the exact versions of software in use, this means we can decouple a developer’s environment from the application that needs to be serviced or enhanced.
Without Pervasive Automation, Docker Won’t Do Much for You: You can’t achieve DevOps with bad code. You must first ensure that the code being delivered is of the highest quality by automating all developer code delivery tasks, such as Unit testing, Integration testing, Automated acceptance testing (AAT), Static code analysis, code review sign offs & pull request workflow, and security analysis.
Leapfrogging to Docker without Virtualization Know-How Won’t Work: Leapfrogging as an IT strategy rarely works. More often than not new technologies bring about abstractions over existing technologies. It is true that such abstractions increase productivity, but they are not an excuse to skip the part where we must understand how a piece of technology works.
Docker is a First-Class Citizen on All Computing Platforms: This is the right time to jump on to the Docker bandwagon. For the first time ever, Docker is supported on all major computing platforms in the world. There are two kinds of servers: Linux servers and Windows servers. Native Docker support for Linux has existed from day 1; since then, Linux support has been steadily optimized.
Agile is a Must to Achieve DevOps: DevOps is a must to achieve Agile. The point of Agile is adding and demonstrating value iteratively to all stakeholders; without DevOps you likely won’t be able to demonstrate the value you’re adding to stakeholders in a timely manner. So why is Agile also a must to achieve DevOps? It takes a lot of discipline to create a stream of continuous improvement, and an Agile framework like Scrum defines fundamental qualities that a team must possess to begin delivering iteratively.
Docker saves your organization capital and resources by containerizing applications. Containers on a single host are isolated from each other and share the same OS kernel; this frees up RAM, CPU, storage, etc. Docker makes it easy to package our application along with all the required dependencies in an image. For most applications there are readily available base images, and one can create a customized base image as well. We build our own custom image by writing a simple Dockerfile. We can ship this image to a central registry, from where we can pull it to deploy into various environments like QA, STAGE and PROD. All these activities can be automated by CI tools like Jenkins.
In a CI/CD pipeline you can expect Docker to come into the picture when the build is ready. Initially the CI server (Jenkins) will check out the code from SCM into a temporary workspace where the application is built. Once you have the build artifact ready, you can package it as an image with its dependencies. Jenkins does this by executing simple docker build commands.
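The build-then-package step described above might look like the following Jenkins pipeline stage. The registry hostname, image name, and credential ID are hypothetical placeholders:

```groovy
// Hypothetical declarative pipeline stage; names are placeholders.
stage('Package and push image') {
    steps {
        // package the built artifact as an image, tagged with the build number
        sh 'docker build -t registry.example.com/myapp:${BUILD_NUMBER} .'
        withCredentials([usernamePassword(credentialsId: 'registry-creds',
                                          usernameVariable: 'REG_USER',
                                          passwordVariable: 'REG_PASS')]) {
            sh 'docker login -u $REG_USER -p $REG_PASS registry.example.com'
            sh 'docker push registry.example.com/myapp:${BUILD_NUMBER}'
        }
    }
}
```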
Docker removes the well-known "matrix from hell" problem by making environments independent with its container technology. The open source Docker project changed the game by simplifying container workflows, and this has resulted in a lot of excitement around using containers in all stages of the software delivery lifecycle, from development to production.
It is not just about containers, it involves building Docker images, managing your images and dependencies on any Docker registry, deploying to an orchestration platform, etc. and it all comes under CI/CD process.
DevOps is a culture, methodology, or set of procedures for delivering development very fast. Docker is one of the tools in the DevOps toolbox: it deploys applications as containers, which use fewer resources.
Docker packages the developer environment so it runs on other systems, meaning developers need not worry about code that works on their machine but fails in production due to differences in environment and operating system.
It simply makes the code portable to other environments.

Chef and Docker

I am a bit confused. As part of a course we are supposed to set up a CI and CD solution using Jenkins, Docker and Chef; how the flow should work is not specified.
We have been setting up Jenkins so that, for every new git commit, it creates a Jenkins slave that spins up the specific containers needed for a test, then tears them down and reports the result.
So, I have been looking around today for information on using Chef and Docker for continuous delivery/deployment. The use case that I see is the following: specify in Chef the machine deployment options, how many machines for each server, database, and so on. When the Jenkins slave successfully builds and tests the application, it is time to deploy: remove any old containers, build new containers, and handle configuration and other necessary management in Chef.
I have been looking around for information on similar use cases and there does not seem to be much about it. I have been tinkering with the chef-provisioning plugin with chef-provisioning-docker, but the information regarding, for example, the docker plugin is not very intuitive. Then I stumbled across this article (https://coderanger.net/provisioning/), which basically recommends that new projects not start with the chef-provisioning plugin.
Is there something I am missing? Is this kind of use case not that popular, or even just stupid? Are there any other plugins that I have missed, or another setup with Chef that is more suitable?
Cheers in advance!
This kind of purely procedural stuff isn't really what Chef is for. You would want to use something integrated directly with Jenkins as a plugin probably. Or if you're talking about cookbook integration tests there are the kitchen-docker and kitchen-dokken drivers which can handle the container management for you.
EDIT: The above was not really what the question was about, new answer.
The tool you're looking for is usually called a resource scheduler or cluster orchestrator. Chef can do this either via chef-provisioning or the docker cookbook. Between those two I would use the latter. But that said, Chef is really not the best tool for this job. There is a whole generation of dedicated schedulers including Mesos+Marathon, Kubernetes, Nomad, and docker-compose+swarm. Of all of those, Nomad is probably the simplest but Kube has a huge community following and is growing quickly. I would consider using Chef for this an intermediary step at best.
I would suggest using container orchestration platforms like Kubernetes, Docker Swarm or Mesos. Personally I would recommend Kubernetes, since it is the leading platform of the three listed.
Chef is a good configuration management tool, and using it for provisioning containers would work, but it is not the best solution. You would run into issues like deciding where containers should be provisioned, monitoring container status, and handling container failures. A platform like Kubernetes handles this for you.
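As a small illustration of what Kubernetes takes off your hands, here is a minimal Deployment manifest. The name, image, replica count, and health endpoint are hypothetical; the point is that scheduling, restarts, and replacement of failed containers are declared once rather than scripted:

```yaml
# Minimal hypothetical Deployment: the scheduler decides placement, and
# failed pods are restarted/replaced to maintain the declared replica count.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                # desired count; failed pods are replaced automatically
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.0
          livenessProbe:     # kubelet restarts the container if this check fails
            httpGet:
              path: /healthz
              port: 8080
```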
This would be useful to get some insights:
https://www.youtube.com/watch?v=24X18e4GVbk
more to read:
http://www.devoperandi.com/how-we-do-builds-in-kubernetes/
