I'm working on a React/Spring application and I'm supposed to create a continuous integration pipeline to automate the whole process. I'm almost done, but I'm still confused about Docker's role in the whole process. I thought that using it would allow me to talk about continuous deployment, but after some research I found I was completely wrong. Can anyone tell me whether there is anything wrong with this architecture, and especially with Docker's role within it?
Continuous Integration Architecture
I've been given a small node.js website to test.
Initially, I tried to keep it all JavaScript and even managed to write a few tests and weave those into a CI YAML file that instructs GitLab to deploy the container, build the site, run the tests...
However, my tests are getting more and more complicated and I must resort to my Java skills.
Now, the problem is that I don't know how to structure the CI jobs: there is no single container that has all the needed technology (nor is that what containers are for, anyway).
On the other hand, I don't know how to use more than one image in a job.
Somehow, I imagine I could deploy it like this: one container has the Node.js stuff and builds and runs the site, exposing an endpoint.
Another container has the Java-Maven-Chrome stuff and builds and runs the tests, which access the site via the exposed endpoint.
Or maybe I have the whole concept wrong?
I would appreciate learning what the professional solution is here. Surely I'm not the first Java QA guy trying to test a Node.js website!
I would really appreciate an example of the YAML file, because I can only imagine it having a single "image" field at the beginning, which is where my container goes, leaving no room for another.
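What I'm hoping for is something along these lines, though I don't know if this is even valid GitLab syntax (all names, versions and the site.url property below are just my guesses, not a working setup):

stages:
  - build
  - test

build_site:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE/site:latest .
    - docker push $CI_REGISTRY_IMAGE/site:latest

java_tests:
  stage: test
  image: maven:3-jdk-11
  services:
    # the site image built above; its default command is assumed to start the server on port 3000
    - name: $CI_REGISTRY_IMAGE/site:latest
      alias: site
  script:
    # site.url is a made-up system property for pointing the tests at the exposed endpoint
    - mvn -B test -Dsite.url=http://site:3000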
Currently I am developing a software which uses Camunda and interacts with a certain Camunda process.
All of this is intended to be delivered as a single 'package'. To do this, I plan on simply using a Camunda Docker container and another container which runs my software. To put it all together I intend to use docker-compose.
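Roughly, the docker-compose.yml I have in mind looks like this (service names and build paths are just placeholders):

version: "3"
services:
  camunda:
    build: ./camunda        # built from the Dockerfile shown below
    ports:
      - "8080:8080"
  my-software:
    build: ./my-software    # my own application
    depends_on:
      - camunda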
For the Camunda container I currently use the 'official' image which you can find here: https://hub.docker.com/r/camunda/camunda-bpm-platform/
I know that it is not for production use yet, but currently it works fine.
To clean it up a little, I derive from the official image and delete certain files and folders. This is my Dockerfile:
FROM camunda/camunda-bpm-platform:7.11.0
RUN rm -r /camunda/webapps/camunda-invoice
RUN rm -r /camunda/webapps/examples
The problem I am encountering right now is that, as far as I know, you have to use the REST API of Camunda to deploy a process. I did not find any information on deploying it by putting it in a certain directory or something.
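As far as I understand it, a deployment over the REST API would look roughly like this (deployment and file names are made up):

curl -F "deployment-name=my-process" \
     -F "process.bpmn=@process.bpmn" \
     http://localhost:8080/engine-rest/deployment/create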
For my use case it would be perfect if the Camunda process were deployed while building the Docker image. But I cannot think of a way to do that.
Theoretically, I have to start the Camunda engine during building, deploy the process and stop it afterwards. Did anyone try that already with the official Camunda docker image? Or do you have a better solution for my problem?
An alternative would be to deploy the process via the software I am developing. But I think that is a rather ugly solution and I would like to avoid that.
Thanks in advance,
Timo
Docker seems to be the incredible new tool to solve all developer headaches when it comes to packaging and releasing an application, yet I'm unable to find simple solutions for just upgrading an existing application without having to build or buy into whole "cloud" systems.
I don't want a Kubernetes cluster or Docker Swarm to deploy hundreds of microservices. I just want to replace an existing deployment process with a container for better encapsulation and upgradability.
Then maybe upgrade this in the future, if the need for more containers grows to the point where handling them manually no longer makes sense.
Essentially, the direct app dependencies (language, runtime, dependencies) should be bundled up without the need to "litter" the host server with them.
Lower-level static services, like the database, should stay on the host system, as well as an entry router/load balancer (a simple nginx proxy).
Does it even make sense to use it this way? And if so, is there any "best practice" for doing something like this?
Update:
For the application I want to use it on, I'm already using GitLab CI.
Tests are already run inside a Docker environment via GitLab CI, but deployment still happens the "old way" (syncing the git repo to the server and automatically restarting the app, etc.).
Containerizing the application itself is not an issue, and I've also used full Docker deployments via cloud services (mostly Heroku), but for this project something like that is overkill. There is no point in paying hundreds of dollars for a cloud server environment if I need pretty much none of its advantages.
I've found several "install your own Heroku" kinds of systems, but I don't need or want to manage the complexity of a dynamic system.
I suppose a couple of remote bash commands for updating and restarting a Docker container on the server (after it's been pushed to a registry by the CI) could already do the job, though probably rather unreliably compared to the current way.
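Roughly, what I have in mind is something like this running from the CI after the image has been pushed (all names below are placeholders):

ssh deploy@myserver <<'EOF'
  docker pull registry.example.com/myapp:latest
  docker stop myapp || true
  docker rm myapp || true
  # bind to localhost only, so the host nginx proxy stays the public entry point
  docker run -d --name myapp -p 127.0.0.1:3000:3000 registry.example.com/myapp:latest
EOF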
Unfortunately, the "best practice" is highly subjective, as it depends entirely on your setup and your organization.
It seems like you're looking for an extremely minimalist approach to Docker containers. You want to simply put source code and dependencies into a container and push that out to a system. This is definitely possible with Docker, but the manner of doing this is going to require research from you to see what fits best.
Here are the questions I think you should be asking to get started:
1) Is there a CI tool that will help me package together these containers, possibly something I'm already using? (Jenkins, GitLab CI, CircleCI, TravisCI, etc...)
2) Can I use the official Docker images available at Dockerhub (https://hub.docker.com/), or do I need to make my own?
3) How am I going to store Docker images? Will I host a basic Docker registry (https://hub.docker.com/_/registry/), or do I want something with a bit more access control (GitLab Container Registry, Harbor, etc.)?
That really only focuses on the Continuous Integration part of your question. Once you figure this out, then you can start to think about how you want to deploy those images (Possibly even using one of the tools above).
Note: Docker doesn't eliminate all developer headaches. Does it solve some of the problems? Absolutely. But what Docker, and the accompanying container mindset, does best is shift many of those issues to the left. What this means is that you see many of the problems in your processes early, instead of those problems appearing when you're pushing to prod and you suddenly have a fire drill. Again, Docker should not be seen as a cure-all. If you go into Docker thinking it will solve everything, then you're setting yourself up for failure.
I would like a Jenkins master and slave setup for running specs on standard Rails apps (PostgreSQL, Sidekiq/Redis, RSpec, capybara-webkit, a common Rails stack), using Docker so it can be put on other machines as well. I've got a few good stationary machines collecting dust.
Can anybody share an executable Docker/Jenkins/Rails stack example?
What prevents that from being done?
Preferably with a master-slave setup too.
Preface:
After days online, following several tutorials with no success, I am about to abandon the project. I have a basic understanding of Docker, docker-machine, Docker Compose and volumes, and I have a Docker registry with a few simple apps.
I know next to nothing about Jenkins, but I've used Docker pretty extensively on other CI platforms. So I'll just write about that. The level of difficulty is going to vary a lot based on your app's dependencies and quirks. I'll try and give an outline that's pretty generally useful, and leave handling application quirks up to you.
I don't think the problem you describe should require you to mess about with docker-machine. docker build and docker-compose should be sufficient.
First, you'll need to build an image for your application. If your application has a comprehensive Gemfile, and not too many dependencies relating to infrastructure etc (e.g. files living in particular places that the application doesn't set up for itself), then you'll have a pretty easy time. If not, then setting up those dependencies will get complicated. Here's a guide from the Docker folks for a simple Rails app that will help get you started.
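As a very rough sketch (the Ruby version and commands are assumptions you would adjust for your app), such an image can be as simple as:

FROM ruby:2.6
WORKDIR /app
# install gems first so this layer is cached between builds
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .
EXPOSE 3000
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]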
Once the image is built, push it to a repository such as Docker Hub. Log in to Docker Hub and create a repo, then use docker login and docker push <image-name> to make the image accessible to other machines. This will be important if you want to build the image on one machine and test it on others.
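For example (replace the user and image names with your own):

docker login
docker tag myapp:latest <dockerhub-user>/myapp:latest
docker push <dockerhub-user>/myapp:latest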
It's probably worth spinning off a job to run your app's unit tests inside the image once the image is built and pushed. That'll let you fail early and avoid wasting precious execution time on a buggy revision :)
Next you'll need to satisfy the app's external dependencies, such as Redis and postgres. This is where the Docker Compose file comes in. Use it to specify all the services your app needs, and the environment variables etc that you'll set in order to run the application for testing (e.g. RAILS_ENV).
You might find it useful to provide fakes of some non-essential services such as in-memory caches, or just leave them out entirely. This will reduce the complexity of your setup, and be less demanding on your CI system.
The guide from the link above also has an example compose file, but you'll need to expand on it. The most important thing to note is that the name you give a service (e.g. db in the example from the guide) is used as a hostname in the image. As @tomwj suggested, you can search on Docker Hub for common images like postgres and redis and find them pretty easily. You'll probably need to configure a new Rails environment with new hostnames and so on in order to get all the service hostnames configured correctly.
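A minimal sketch of what that compose file might look like (image tags, passwords and URLs below are assumptions, not taken from your setup):

version: "3"
services:
  db:
    image: postgres:11
    environment:
      POSTGRES_PASSWORD: password
  redis:
    image: redis:5
  app:
    image: <dockerhub-user>/myapp:latest
    environment:
      RAILS_ENV: test
      DATABASE_URL: postgres://postgres:password@db:5432/myapp_test
      REDIS_URL: redis://redis:6379/0
    depends_on:
      - db
      - redis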
You're starting all your services from scratch here, including your database, so you'll need to migrate and seed it (and any other data stores) on every run. Because you're starting from an empty postgres instance, expect that to take some time. As a shortcut, you could restore a backup from a previous version before migrating. In any case, you'll need to do some work to get your data stores into shape, so that your test results give you useful information.
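With a compose setup like the sketch above, that boils down to something like:

docker-compose run --rm app bundle exec rails db:create db:migrate db:seed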
One of the tricky bits will be getting Capybara to run inside your application Docker image, which won't have any X displays by default. xvfb (X Virtual Frame Buffer) can help with this. I haven't tried it, but building on top of an image like this one may be of some help.
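If you go the xvfb route, the usual trick is to wrap the test command so a virtual display is available, e.g.:

xvfb-run -a bundle exec rspec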
Best of luck with this. If you have the time to persist with it, it will really help you learn about what your application really depends on in order to work. It certainly did for me and my team!
There's quite a lot to unpack in that question; this is a guide on how to get started and where to look for help.
In short, there's nothing preventing it, although it's reasonably complex and bespoke to set up, hence no off-the-shelf solution.
Assuming your aim is to have Jenkins build, deploy to Docker, then test a Rails application in a Dockerised environment.
Provision the stationary machines; I'd suggest using Ansible Galaxy roles.
Install Jenkins
Install Docker
Setup a local Docker registry
Set up the Docker environment. The way to bring up multiple containers is to use Docker Compose; this will allow you to bring up the DB, Redis, Rails, etc. using the public Docker Hub images.
Create a Jenkins pipeline (see the Jenkinsfile sketch after this list).
Build the Rails app Docker image; this will contain the Rails app.
Deploy the application; this updates the application in the Docker swarm from the local Docker registry.
Test: run the tests against the application that is now running.
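A skeleton declarative Jenkinsfile for the pipeline step above might look like this (registry address, image name and compose file are placeholders):

pipeline {
  agent any
  stages {
    stage('Build image') {
      steps {
        sh 'docker build -t localhost:5000/rails-app:$BUILD_NUMBER .'
        sh 'docker push localhost:5000/rails-app:$BUILD_NUMBER'
      }
    }
    stage('Deploy') {
      steps {
        sh 'docker-compose -f docker-compose.ci.yml up -d'
      }
    }
    stage('Test') {
      steps {
        sh 'docker-compose -f docker-compose.ci.yml run --rm app bundle exec rspec'
      }
    }
  }
}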
I've left out the Jenkins master/slave config because if you're only running on one machine you can increase the number of executors. E.g. the master can execute more jobs at the expense of speed.
Docker says "it possible to build, ship and run any app, anywhere"
Dockerising the app looks promising solution to ship and run any app, anywhere with less pain.
But how it's going to help us in building an application?
There are a few interesting build-time use cases for Docker.
You could use Docker to bring up a database with a known state inside a container for your integration tests to hit, using the Docker Maven Plugin.
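For example, with the fabric8 docker-maven-plugin, a Postgres container can be started before the integration tests and stopped afterwards; a rough sketch (image, password and configuration are placeholders, details trimmed):

<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <configuration>
    <images>
      <image>
        <name>postgres:11</name>
        <run>
          <ports>
            <port>5432:5432</port>
          </ports>
          <env>
            <POSTGRES_PASSWORD>test</POSTGRES_PASSWORD>
          </env>
        </run>
      </image>
    </images>
  </configuration>
  <executions>
    <execution>
      <id>start-db</id>
      <phase>pre-integration-test</phase>
      <goals>
        <goal>start</goal>
      </goals>
    </execution>
    <execution>
      <id>stop-db</id>
      <phase>post-integration-test</phase>
      <goals>
        <goal>stop</goal>
      </goals>
    </execution>
  </executions>
</plugin>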
Having a predefined container for your application which will not change during the development cycle is useful, especially as you can use the same container when you go to production deployments. This is different from, say, Vagrant, which you would not use for your production deployment.
Since there are so many Docker images already available, you would not need to spend time figuring out how to deploy and manage all the various tools and services your deployment may need.