CI/CD for a poly-repo project - TFS

I have a project in which the front-end and back-end live in different TFS 2018 repositories.
The front-end is written in ReactJS and the back-end in Java. I want to set up CI/CD so that when a new commit is pushed to the front-end repo, it creates a new bundle via the npm run build command and drops it into a specific folder inside the back-end repo.
Is this possible with TFS 2018? If so, where can I find tutorials or examples of CI/CD setups for projects with this structure?

Usually you don't push the built UI into the back-end repo.
It depends on how you deploy your application, but in the case of container or image deployments you could use tools such as Jenkins or Kubernetes to build a single image containing the back-end together with the front-end.
To make the deployment process more efficient you could split the pipeline into separate builds:
front-end change -> npm run build -> mv ${FRONT_END}/build/* ${BACK_END}/public -> deploy
back-end change -> mvn install
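TFS 2018 builds are normally assembled in the web-based build designer rather than in YAML, but whatever the tooling, the front-end half of the sketch above reduces to a few shell steps. A minimal sketch, written as a generic YAML pipeline; FRONT_END, BACK_END, the public folder and deploy.sh are placeholders for your own paths and deploy step:

frontend_build:
  script:
    - cd "$FRONT_END"
    - npm ci                                      # reproducible dependency install
    - npm run build                               # writes the bundle to $FRONT_END/build
    - mv "$FRONT_END"/build/* "$BACK_END"/public  # hypothetical target folder in the back-end checkout
    - ./deploy.sh                                 # placeholder for your actual deploy step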

Related

Run a GitLab CI pipeline in a Docker container

Absolute beginner in DevOps here. I have a GitLab repo that I would like to build and test in the GitLab CI pipeline.
So far, I'm only testing locally on my machine with a specific runner. There's a lot of information out there and I'm starting to get lost about what to use and how to use it.
How would I go about creating a container with the tools that I need (VS compiler, CMake, Git, etc.)?
My application contains an SDK that only works on Windows, so I'm not sure building on another platform would work at all. How do I select a Windows-based container?
How would I use that container in the GitLab YAML file so that I can build my solution and run my tests?
Any specific documentation links or suggestions are welcome and appreciated.
How would I go about creating a container with the tools that I need (VS compiler, CMake, Git, etc.)?
You can install those tools before the pipeline script runs; I usually do this in before_script.
If there are large-ish packages that need to be installed on every pipeline run, I'd recommend building your own image with all the required build dependencies, pushing it to the GitLab registry, and then just using it as your job image.
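A minimal .gitlab-ci.yml sketch of both variants; the image name and package list are only examples, and a Linux image is shown for brevity (the same idea applies to a Windows image or runner):

build:
  # Option A: a pre-built image from your project's container registry with
  # all build dependencies baked in (hypothetical image name)
  image: registry.gitlab.com/your-group/your-project/build-image:latest
  # Option B: install the tools on every run instead
  # before_script:
  #   - apt-get update && apt-get install -y git cmake g++
  script:
    - cmake -S . -B build
    - cmake --build build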
My application contains an SDK that only works on Windows, so I'm not sure building on another platform would work at all. How do I select a Windows-based container?
If you're using gitlab.com, Windows runners are currently in beta but available for use.
SaaS runners on Windows are in beta and shouldn’t be used for production workloads.
During this beta period, the shared runner quota for CI/CD minutes applies for groups and projects in the same manner as Linux runners. This may change when the beta period ends, as discussed in this related issue.
If you're self-hosting, set up your own runner on Windows.
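Either way, you point the job at a Windows runner with tags. A sketch, assuming a self-hosted runner that you registered with a windows tag (for the gitlab.com beta runners, use the tags listed in the SaaS runner documentation); the CMake generator is an example, match it to your Visual Studio version:

build_windows:
  tags:
    - windows                                          # assumed tag given to your Windows runner
  script:
    - cmake -S . -B build -G "Visual Studio 16 2019"   # example generator
    - cmake --build build --config Release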
How would I use that container in the GitLab YAML file so that I can build my solution and run my tests?
This really depends on:
the previous parts (whether you're on GitLab.com or self-hosted)
how your application is built
what infrastructure you have access to
What I'm trying to say is that I can't give you a good answer without quite a bit more information.

Best practice for running a Symfony 5 project with Docker and Docker Swarm

I have an existing Symfony 5 project with a MySQL database and an nginx web server. I want to dockerize this project, but on the web I have found different opinions on how to do that.
My plan is to write a multi-stage Dockerfile with at least a dev and a prod stage and build it for Docker Swarm. In my opinion it makes sense to install the complete code during the build and to have multiple composer.json files (one for every stage). On the web I have found opinions advising against installing the app fresh on every build and instead copying the vendor and var folders into the container. Another opinion was to start the installation after the container build has finished, but I think then the service is not ready when the app is deployed.
What do you think is the best practice here?
Build exactly the same image for all environments
Do not build two different images for prod and dev. One of the main Docker benefits is that you can provide exactly the same environment for production and dev.
You should control behaviour with environment variables; for example, you can enable Xdebug for dev and disable it for prod.
Composer has an option to skip dev packages for production installs (--no-dev). You should use this feature.
If you decide to install some dev-only packages, try to use the same Dockerfile for both environments. Do not use Dockerfile.prod and Dockerfile.dev; it will introduce a mess in the future.
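A sketch of how that can look in a compose/stack file. APP_ENV and XDEBUG_MODE follow the usual Symfony and Xdebug 3 conventions, and the build arg is assumed to drive a composer install --no-dev inside your single Dockerfile:

version: "3.8"
services:
  app:
    build:
      context: .
      args:
        APP_ENV: ${APP_ENV:-prod}        # same Dockerfile, behaviour switched by the arg
    environment:
      APP_ENV: ${APP_ENV:-prod}
      XDEBUG_MODE: ${XDEBUG_MODE:-off}   # "debug" on a dev machine, "off" in production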
Multistage build
You can do a multi-stage build, as described in the official Docker documentation, if your build environment requires many more dependencies than your runtime.
An example is compiling a program: during compilation you need a lot of libraries, and you produce a single binary, so your runtime does not need all the dev libraries.
The first stage does the build; in the second stage you just copy the binary, and that's it.
Build all packages into the Docker image
You should build your application while the Docker image is being built. All libraries and packages should be copied into the image; you should not install them when the application starts. Reasons:
The application starts faster when everything is already installed.
Some of the libraries may change or be removed in the future; you will be in trouble and will probably spend a lot of time debugging.
Implement health checks
You should implement a health check. Applications require external dependencies, like passwords, API keys, and some non-sensitive data. Usually we inject these with environment variables.
You should check that all required variables are passed and well formed before your application starts. You can do this in a health check or in the entrypoint.
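For a Swarm stack this can be a healthcheck on the service plus a simple variable check in the entrypoint. A sketch; the /health endpoint, image name and variable names are examples, and it assumes curl is available in the image:

services:
  app:
    image: registry.example.com/myapp:1.2.3
    environment:
      - APP_ENV=prod
      - DATABASE_URL=${DATABASE_URL}     # verify it is set and well formed in your entrypoint
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/health"]
      interval: 30s
      timeout: 5s
      retries: 3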
Test your solution before it is released
You should implement a mechanism for testing your images. For example, in CI:
Run your unit tests before the image is built.
Build the Docker image.
Start the new application image with dummy data; if you require a PostgreSQL DB, you can start another container.
Run the integration tests.
Publish the new version of the image only if all tests pass.
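Expressed as a GitLab-CI-style pipeline, that sequence could look roughly like this. Job names, image tags and test commands are illustrative, and it assumes a runner that can run docker commands and a PHPUnit suite split into unit and integration groups:

stages: [test, build, integration, publish]

unit_tests:
  stage: test
  image: php:8.1-cli
  script:
    - composer install
    - vendor/bin/phpunit --testsuite unit

build_image:
  stage: build
  script:
    - docker build -t registry.example.com/myapp:$CI_COMMIT_SHORT_SHA .

integration_tests:
  stage: integration
  script:
    - docker network create ci-net
    - docker run -d --name db --network ci-net -e POSTGRES_PASSWORD=test postgres:14   # throwaway DB
    - docker run --rm --network ci-net -e DATABASE_URL=postgresql://postgres:test@db:5432/postgres registry.example.com/myapp:$CI_COMMIT_SHORT_SHA vendor/bin/phpunit --testsuite integration

publish:
  stage: publish
  script:
    - docker push registry.example.com/myapp:$CI_COMMIT_SHORT_SHA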

Possible to have a job depend on another repo?

We have a project where the integration tests for the web services are implemented in the mobile projects. Would it be possible, for instance, to build and run a test target in the iOS repo every time the back-end is deployed? If so, how would one go about this?
This is not supported out of the box.
You could, however, use the "trigger a build" feature during a build on the back-end repository, so that a build on the front-end (mobile) repository is started.
See https://docs.travis-ci.com/user/triggering-builds/ for the required building blocks.
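For example, the last stage of the back-end .travis.yml could POST a build request for the mobile repo via the API described there. The repo slug, branch name and the $TRAVIS_API_TOKEN variable are placeholders you would configure yourself:

jobs:
  include:
    - stage: trigger-mobile-build
      script:
        - >
          curl -s -X POST
          -H "Content-Type: application/json"
          -H "Travis-API-Version: 3"
          -H "Authorization: token $TRAVIS_API_TOKEN"
          -d '{ "request": { "branch": "main" } }'
          https://api.travis-ci.com/repo/your-org%2Fios-app/requests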
The downside is that you will not see a build error in the right place.
An alternative is to clone the front-end repository in a separate job on the back-end repository and run the tests there. This way, breaking changes in the back-end will be visible in the GitHub UI.
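A sketch of that alternative as an extra job in the back-end .travis.yml; the repo URL and the test script are placeholders, and the job would also need the right os/language settings for the mobile toolchain:

jobs:
  include:
    - stage: mobile-integration-tests
      script:
        - git clone https://github.com/your-org/ios-app.git      # hypothetical mobile repo
        - cd ios-app && ./run_integration_tests.sh               # placeholder for your test command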

Directory structure for a project with Dockerfile, Jenkinsfile, Kubernetes deployment YAML, pip requirements.txt, and test scripts?

Would the following directory structure work?
The goal is to have Jenkins trigger on GitHub commits and run multi-branch pipelines that build and test containers. (I have everything running on Kubernetes, including Jenkins.)
/project
  .git
  README.md
  Jenkinsfile
  /docker_image_1
    Dockerfile
    app1.py
    requirements.txt
    /unit_tests
      unit_test1.py
      unit_test2.py
  /docker_image_2
    Dockerfile
    app2.py
    requirements.txt
    /unit_tests
      unit_test1.py
      unit_test2.py
  /k8s
    /dev
      deployment.yaml
    /production
      deployment.yaml
  /component_tests
    component_tests.py
Is the k8s folder that holds the deployment.yaml files in the right place?
Are the test folders in good locations? The tests in "component_tests" will ideally do more end-to-end, integrated testing that involves multiple containers.
I see that a lot of repos have the Jenkinsfile and the Dockerfile at the same directory level. What are the pros and cons of that?
There's no good answer to this question currently.
Kubernetes provides a standard API for deployment, but as a technology it relies on additional third-party tooling to manage the build part of the ALM workflow. There are lots of options available for turning your source code into a container running on Kubernetes. Each has its own consequences for how your source code is organised and how a deployment might be invoked from a CI/CD server like Jenkins.
I offer the following collection of options for your consideration, roughly categorized; it represents my current evaluation list.
"Platform as a service" tools
Tooling that manages the entire ALM lifecycle of your code. Powerful, but more complex and opinionated.
Deis Workflow
OpenShift
Fabric8 (see also OpenShift.io)
Build and deploy tools
Tools useful for the code/test/code/retest workflow common during development. Can also be invoked from Jenkins to abstract your build process.
Draft
Forge
Kompose
Fabric8 Maven plugin (Java)
Psykube
YAML templating tools
Kubernetes YAML was never designed to be written by human beings. There are several initiatives to make this process simpler and more standardized.
Helm
Ksonnet
Deployment monitoring tools
These tools have conventions for where they expect to find Kubernetes manifest files (or Helm charts) in your source code repository.
Keel
Kube-applier
Kubediff
Landscaper
Kit
CI/CD tools with k8s support
Spinnaker
GitLab
Jenkins + Kubernetes CI plugin
Jenkins + Kubernetes plugin
This is really left largely to your preference.
In our projects we tend to split services into separate repositories rather than subfolders, but we also had a case where a bunch of Scala microservices were managed in a similar way (although the Docker images were built with an sbt Docker plugin).
One big piece of advice I would give you: in the long run, managing your Kubernetes manifests like that can become a serious pain. I went through this, and my suggestion is to use Helm charts from day one.
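Applied to the layout above, a Helm-based structure could look roughly like this; the chart and file names just follow Helm conventions, and the per-environment values files replace the /k8s/dev and /k8s/production folders:

/project
  Jenkinsfile
  /docker_image_1
  /docker_image_2
  /charts
    /project-chart
      Chart.yaml
      values.yaml              # defaults
      values-dev.yaml          # dev overrides
      values-production.yaml   # production overrides
      /templates
        deployment.yaml
        service.yaml

Deployment from Jenkins is then a single command per environment, e.g. helm upgrade --install project charts/project-chart -f charts/project-chart/values-dev.yaml.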
I assume that your "component_tests" are end-to-end tests. Other than naming, I see no problem with that. For cases where we test solutions that span multiple repos, though, we keep the tests in a separate repo as well.

Deploy apps from a release server

I don't like the moment when I have to release my projects to the production server. Maybe I just don't have enough experience; nobody taught me how to do this the right way.
For now I have several Scala repos (on top of Spray). I have everything needed to build and run these projects on my local machine (of course, since I develop them). So I installed Jenkins on my production server in order to sync from Git, build, and run. It works for now, but I don't like it, because I need to install Jenkins on every machine where I want to run my projects. What if I want to show my project to a friend in a cafe?
So I've come up with an idea: what if I run the tests before building the app, make a portable build (e.g. with sbt-native-packager), and save it on a remote "release server"? That server just keeps these ready-to-launch apps.
Then I go to the production server and run a bash script that downloads the executables from the release server and runs my project on the machine.
In the future I want to:
download and run the projects inside Docker containers.
keep the ready-to-serve static files for the front-end, and run a Docker container with nginx and a linked volume containing the static files.
I heard about Nexus (http://www.sonatype.org/nexus/), which is used to store artifacts (binaries, images, and so on). I believe there should be open-source projects built around an idea like mine.
Any help is appreciated!
A common anti-pattern, in my opinion, is to build the software every time you perform a deployment. You are best advised to separate the build process from the act of deployment by introducing a binary repository manager (you've mentioned one such example, Nexus).
Best Practice - Using a Repository Manager
Binary repository manager
How can I automatically deploy a war from Nexus to Tomcat?
Only successfully tested builds get pushed to the repository, so you can treat each successful build as a mini-release. A by-product of this is that your production server does not need to have all the build software pre-installed (Jenkins, Ant, Maven, etc.).
It should be noted that modern repository managers like Nexus and Artifactory now support Docker registries as well, so you can use them for deploying Docker images too.
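A sketch of that flow as a generic CI job plus a deployment one-liner. The Nexus URL and repository path are placeholders (an upload like this assumes a raw hosted repository), and Universal/packageBin is the sbt-native-packager task that produces the portable zip mentioned in the question:

release:
  script:
    - sbt test                             # only builds that pass the tests are released
    - sbt Universal/packageBin             # produces target/universal/myapp-1.0.0.zip
    - >
      curl --fail -u "$NEXUS_USER:$NEXUS_PASS"
      --upload-file target/universal/myapp-1.0.0.zip
      https://nexus.example.com/repository/releases/myapp/1.0.0/myapp-1.0.0.zip

# On the production machine (or a friend's laptop in the cafe), deployment is
# then just download, unpack and run:
#   curl -u "$NEXUS_USER:$NEXUS_PASS" -O https://nexus.example.com/repository/releases/myapp/1.0.0/myapp-1.0.0.zip
#   unzip myapp-1.0.0.zip && ./myapp-1.0.0/bin/myapp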
Update
A related question about Chef, a technology where there is no intermediate binary file (like a jar). In this case the software is still "released" by creating a tar distribution stored in the repository.
chef cookbook delivery - chef server vs. artifactory + berkshelf
