I am trying to find easy-to-set-up Docker hosting. What I have is a private Git repository with an application that I can get running locally just by checking it out and running docker-compose up -d. I am not looking for a production-ready solution at the moment, just for a way to get it running somewhere so that a few potential customers can see the progress, play with the app a little and suggest improvements. So: any service where it is not too much hassle to get it running and accessible from the web.
Solution 1
You could use play-with-docker. This is a free online Docker environment accessible via the web, and the docker-compose tool is also available. The only downside is that the environment expires after 4 hours. Another similar free online service is Katacoda.
Solution 2
Create an AWS account and deploy a Linux VM in the free tier. The free tier lets you run a VM with limited resources for one year.
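For example, on a fresh Ubuntu free-tier instance, something along these lines should get the app up (a rough sketch; the package names assume Ubuntu, and the repository URL is a placeholder):

    # Install Docker and docker-compose (Ubuntu; adjust for other distributions)
    sudo apt-get update
    sudo apt-get install -y docker.io docker-compose
    sudo usermod -aG docker ubuntu    # log out and back in so the group change applies

    # Fetch and start the application
    git clone https://github.com/your-org/your-app.git    # placeholder URL
    cd your-app
    docker-compose up -d

    # Finally, open the app's port (e.g. 80) in the instance's security group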
Solution 3
Prepare a VirtualBox VM with everything needed to run your application.
If needed, I can provide further details about any of the above solutions.
Related
Docker seems to be the incredible new tool to solve all developer headaches when it comes to packaging and releasing an application, yet I'm unable to find simple solutions for just upgrading an existing application without having to build or buy into whole "cloud" systems.
I don't want a Kubernetes cluster or Docker Swarm to deploy hundreds of microservices; I just want to replace an existing deployment process with a container, for better encapsulation and upgradability.
I might upgrade this in the future if the need for more containers increases to the point where manual handling no longer makes sense.
Essentially, the app's direct dependencies (language, runtime, libraries) should be bundled up without the need to "litter" the host server with them.
Lower-level static services, like the database, should still live on the host system, as well as an entry router/load balancer (a simple nginx proxy).
Does it even make sense to use it this way? And if so, is there any "best practice" for doing something like this?
Update:
For the application I want to use it on, I'm already using GitLab CI.
Tests are already run inside a Docker environment via GitLab CI, but deployment still happens the "old way" (syncing the Git repo to the server, automatically restarting the app, etc.).
Containerizing the application itself is not an issue, and I've also used full Docker deployments via cloud services (mostly Heroku), but for this project something like that is overkill. There's no point in paying hundreds of $$ for a cloud server environment if I need pretty much none of its advantages.
I've found several "install your own Heroku" kind of systems, but I don't need or want to manage the complexity of a dynamic system.
I suppose a couple of remote bash commands for updating and restarting a Docker container on the server (after it's been pushed to a registry by the CI) could already do the job - though probably pretty unreliably compared to the current way.
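Something like this is roughly what I have in mind (a rough sketch; the host, path and compose setup are placeholders):

    # Run from the CI job, after the new image has been pushed to the registry
    ssh deploy@myserver.example.com <<'EOF'
      cd /srv/myapp               # directory with the docker-compose.yml
      docker-compose pull         # fetch the image the CI just pushed
      docker-compose up -d        # recreate only containers whose image changed
      docker image prune -f       # optional cleanup of old image layers
    EOF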
Unfortunately, the "best practice" is highly subjective, as it depends entirely on your setup and your organization.
It seems like you're looking for an extremely minimalist approach to Docker containers. You want to simply put source code and dependencies into a container and push that out to a system. This is definitely possible with Docker, but the manner of doing this is going to require research from you to see what fits best.
Here are the questions I think you should be asking to get started:
1) Is there a CI tool that will help me package together these containers, possibly something I'm already using? (Jenkins, GitLab CI, CircleCI, TravisCI, etc...)
2) Can I use the official Docker images available at Dockerhub (https://hub.docker.com/), or do I need to make my own?
3) How am I going to store Docker images? Will I host a basic Docker registry (https://hub.docker.com/_/registry/), or do I want something with a bit more access control (GitLab Container Registry, Harbor, etc.)?
That really only focuses on the Continuous Integration part of your question. Once you figure this out, then you can start to think about how you want to deploy those images (Possibly even using one of the tools above).
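To make 3) concrete: since you are already on GitLab, its built-in Container Registry only needs a few commands in a CI job, using variables GitLab CI already injects into the runner (a sketch, not a complete pipeline definition):

    # These CI_* variables are predefined by GitLab CI
    docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"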
Note: Also, Docker doesn't eliminate all developer headaches. Does it solve some of the problems? Absolutely. But what Docker, and the accompanying Container mindset, does best is shift many of those issues to the left. What this means is that you see many of the problems in your processes early, instead of those problems appearing when you're pushing to prod and you suddenly have a fire drill. Again, Docker should not be seen as a solve-all. If you go into Docker thinking it will be a solve-all, then you're setting yourself up for failure.
I would like a Jenkins master and slave setup for running specs on standard Rails apps (PostgreSQL, Sidekiq/Redis, RSpec, capybara-webkit; a common Rails stack), using Docker so it can be put on other machines as well. I have a few good stationary machines collecting dust.
Can anybody share an executable docker jenkins rails stack example?
What prevents that from being done?
Preferably with a master-slave setup too.
Preface:
After days online, following several tutorials with no success, I am about to abandon the project. I have a basic understanding of Docker, docker-machine, Docker Compose and volumes, and I have a Docker registry with a few simple apps.
I know next to nothing about Jenkins, but I've used Docker pretty extensively on other CI platforms. So I'll just write about that. The level of difficulty is going to vary a lot based on your app's dependencies and quirks. I'll try and give an outline that's pretty generally useful, and leave handling application quirks up to you.
I don't think the problem you describe should require you to mess about with docker-machine. docker build and docker-compose should be sufficient.
First, you'll need to build an image for your application. If your application has a comprehensive Gemfile, and not too many dependencies relating to infrastructure etc (e.g. files living in particular places that the application doesn't set up for itself), then you'll have a pretty easy time. If not, then setting up those dependencies will get complicated. Here's a guide from the Docker folks for a simple Rails app that will help get you started.
Once the image is built, push it to a repository such as Docker Hub. Log in to Docker Hub and create a repo, then use docker login and docker push <image-name> to make the image accessible to other machines. This will be important if you want to build the image on one machine and test it on others.
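Concretely, that step looks something like this (the user and image names are placeholders):

    docker build -t yourhubuser/myapp:latest .    # build from your Dockerfile
    docker login                                  # authenticate against Docker Hub
    docker push yourhubuser/myapp:latest          # make the image pullable elsewhere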
It's probably worth spinning off a job to run your app's unit tests inside the image once the image is built and pushed. That'll let you fail early and avoid wasting precious execution time on a buggy revision :)
Next you'll need to satisfy the app's external dependencies, such as Redis and postgres. This is where the Docker Compose file comes in. Use it to specify all the services your app needs, and the environment variables etc that you'll set in order to run the application for testing (e.g. RAILS_ENV).
You might find it useful to provide fakes of some non-essential services such as in-memory caches, or just leave them out entirely. This will reduce the complexity of your setup, and be less demanding on your CI system.
The guide from the link above also has an example compose file, but you'll need to expand on it. The most important thing to note is that the name you give a service (e.g. db in the example from the guide) is used as a hostname inside the other containers. As @tomwj suggested, you can search Docker Hub for common images like postgres and Redis and find them pretty easily. You'll probably need to configure a new Rails environment with the new hostnames and so on in order to get all the service hostnames configured correctly.
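As an illustration, a minimal compose file for this kind of stack might look like the following; the image names, versions and environment variables are assumptions you will need to adapt to your app:

    cat > docker-compose.yml <<'EOF'
    version: '3'
    services:
      db:
        image: postgres:10
        environment:
          POSTGRES_PASSWORD: password
      redis:
        image: redis:4
      web:
        image: yourhubuser/myapp:latest   # the image you pushed earlier
        depends_on: [db, redis]
        environment:
          RAILS_ENV: test
          DATABASE_HOST: db               # service names double as hostnames
          REDIS_URL: redis://redis:6379/0
    EOF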
You're starting all your services from scratch here, including your database, so you'll need to migrate and seed it (and any other data stores) on every run. Because you're starting from an empty postgres instance, expect that to take some time. As a shortcut, you could restore a backup from a previous version before migrating. In any case, you'll need to do some work to get your data stores into shape, so that your test results give you useful information.
One of the tricky bits will be getting Capybara to run inside your application Docker image, which won't have any X displays by default. xvfb (X Virtual Frame Buffer) can help with this. I haven't tried it, but building on top of an image like this one may be of some help.
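Putting the last two points together, a test run could then look roughly like this (assuming xvfb is installed in your application image and that you use the standard Rake tasks):

    docker-compose up -d db redis                    # start the backing services
    docker-compose run --rm web bundle exec rake db:create db:migrate db:seed
    docker-compose run --rm web xvfb-run -a bundle exec rspec
    docker-compose down -v                           # tear everything down afterwards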
Best of luck with this. If you have the time to persist with it, it will really help you learn about what your application really depends on in order to work. It certainly did for me and my team!
There's quite a lot to unpack in that question; this is a guide to getting started and where to look for help.
In short, there's nothing preventing it, although it's reasonably complex and bespoke to set up, hence no off-the-shelf solution.
I'm assuming your aim is to have Jenkins build, deploy to Docker, and then test a Rails application in a Dockerised environment.
Provision the stationary machines; I'd suggest using Ansible Galaxy roles.
Install Jenkins
Install Docker
Set up a local Docker registry
Set up the Docker environment: the way to bring up multiple containers is to use Docker Compose, which will allow you to bring up the DB, Redis, Rails, etc. using public Docker Hub images.
Create a Jenkins pipeline
Build the Rails app Docker image; this will contain the Rails app.
Deploy the application; this updates the application in the Docker swarm from the local Docker registry.
Test: run the tests against the now-running application (sketched below).
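Sketched as shell steps (the image name, registry port and service name are placeholders, and $BUILD_NUMBER is provided by Jenkins):

    # Build and push the Rails image to the local registry (assumed on localhost:5000)
    docker build -t localhost:5000/railsapp:$BUILD_NUMBER .
    docker push localhost:5000/railsapp:$BUILD_NUMBER

    # Deploy: point the running swarm service at the freshly pushed image
    docker service update --image localhost:5000/railsapp:$BUILD_NUMBER railsapp_web

    # Test against the application that is now running
    docker run --rm localhost:5000/railsapp:$BUILD_NUMBER bundle exec rspec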
I've left out the Jenkins master/slave config because if you're only running on one machine you can increase the number of executors. E.g. the master can execute more jobs at the expense of speed.
I am trying to help a sysadmin group reduce server & service downtime on the projects they manage. Their biggest issue is that they have to take down a service, install the upgrade or change the configuration, and then restart it and hope it works.
I have heard that docker is a solution to this problem, but usually from developer circles in the context of deploying their node/python/ruby/c#/java, etc. applications to production.
The group I am trying to help is using vendor software that requires a lot of configuration and management. Can docker still be used in this case? Can we install any random software on a container? Then keep that in a private repository, upgrade versions, etc.?
This is a Windows environment, if that makes any difference.
Docker excels at stateless applications. You can use it for applications with persistent data, but that requires the use of volumes.
Can docker still be used in this case?
Yes, but it depends on the application. It should be installable headless, and there are a couple of other fairly specific requirements (e.g. talking to third-party servers to obtain a license can create issues).
Can we install any random software on a container?
Yes... but remember that when the container is recreated, that software will be gone. It's better to bake it into an image and then deploy that. See my example below.
Then keep that in a private repository, upgrade versions, etc.?
Yes.
Here is an example pipeline:
Create a Dockerfile for the OS and what steps it takes to install the application. (Should be headless)
Build the image (at this point, it's called an image, not a container)
Test the image locally by creating a local container. This container is what has the configuration data such as environment variables, the volumes for persistent data it needs, etc.
If it satisfies the local developers' wants, then you can either:
Let your build servers create the image and publish it to an internal Docker registry (best practice), or
Let your local developer publish it to an internal Docker registry
At that point, your next-level environments can pull the image down from the Docker registry, configure it, and create the container.
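In shell terms, the pipeline above boils down to something like this; the registry address, image name, environment variable and mount path are all placeholders for whatever the vendor software actually needs:

    # Build the image from the Dockerfile describing the headless install
    docker build -t registry.internal.example/vendor-app:2.1 .

    # Test locally: the container gets its configuration via env vars and volumes
    docker run -d --name vendor-app-test \
        -e LICENSE_SERVER=licenses.internal.example \
        -v vendor_app_data:/var/lib/vendor-app \
        registry.internal.example/vendor-app:2.1

    # If it behaves, publish it to the internal registry for the other environments
    docker push registry.internal.example/vendor-app:2.1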
In short, it will require a lot of elbow grease but is possible.
Can we install any random software on a container?
Generally yes, but you can have many problems with legacy software which was developed to work on bare metal.
First, there can be a persistence problem, but it can be solved using volumes.
Second, a program that works well on a full OS may not work as well in a container. Containers differ from VMs and bare metal in a few ways; for example, due to the missing init process, some containers suffer from a zombie-process issue. You can read about other differences here.
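One concrete mitigation for the zombie-process issue: recent Docker versions can run a minimal init process as PID 1 for you (the image name below is just a placeholder):

    # --init makes Docker start a tiny init (tini) as PID 1, which reaps zombie children
    docker run --init -d legacy-app:latest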
Docker is a big win for stateless apps, but some heavy legacy apps do not work so well inside containers and should be tested thoroughly before being used in production.
I have an AWS EC2 account where I am running a couple of web apps on nginx. I don't know much about Docker, except that it is a container system that takes a snapshot of the filesystem. Now, for some reason, I am forced to switch accounts, and I have opened a new AWS EC2 account. Can I use Docker to set up a container on my old virtual system, then take an image and deploy it on my new system? That way I could avoid the headache of having to install many components and configure nginx and all the applications on the new system. Can I do that? If so, how?
According to Docker best practices and its CaaS model, images are not supposed to "virtualize" a whole lot of services; on the contrary, Docker does not aim at taking a snapshot of the system (it uses filesystem overlays to create images, but these are not snapshots).
So basically, if your (yet unclear) question is: "Can I virtualize my whole system into one image" the answer is: "No".
What you can do is use an image for each of your services (you'll find everything you need on Docker Hub) to keep a clean system on the new account.
Another solution would be to list all the installed Linux packages on your old system, install them on the new one, and copy over all the configuration files.
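On Debian/Ubuntu, that last approach looks roughly like this (host names and paths are placeholders, and it assumes both instances run the same distribution release):

    # On the old instance: dump the package selection
    dpkg --get-selections > packages.txt

    # On the new instance: replay it
    sudo dpkg --set-selections < packages.txt
    sudo apt-get -y dselect-upgrade

    # Copy configuration over (nginx shown as an example)
    rsync -avz old-host:/etc/nginx/ /etc/nginx/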
When trying to move a web container (Tomcat) to the latest technologies for better growth and support, I came across this blog. This part seems ideal for my needs:
... we are also incorporating Kubernetes into Mesos to manage the deployment of Docker workloads. Together, we provide customers with a commercial-grade, highly-available and production-ready compute fabric.
Now, how do I set up a local test environment to try this out? All these technologies seem interchangeable! I can run Docker on Mesos, Mesos on Docker, etc. Prepackaged instances let me run on other clouds, and other videos also make this seem great! Running out on the cloud, however, is not a viable (allowed) option for me. Unfortunately, I cannot find instructions on how to set up the configuration described/marketed/advertised.
Given that I am new to these technologies, and know there will be a learning curve, is there a way to get started on such a "simple task": running a Tomcat container on a Docker machine that is running Mesos/Kubernetes? That is, without spending days trying to learn and figure out each individual part! The blog post referenced above includes a diagram of this architecture.
Assuming that I "only" know how to create Docker containers (for, say, CentOS 7): what commands, in what order (i.e. the secret "code"), do I need to use to configure a small (2 or 3 node) local environment to try out running Tomcat?
Although I searched quite a bit, apparently not enough! Someone pointed me to this:
https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/mesos-docker.md
which is pretty close to exactly what I was looking for.
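Once you have a local cluster running from that guide, trying Tomcat out should only take a couple of kubectl commands. A rough sketch (the image tag is an assumption; on older kubectl versions, kubectl run creates a deployment rather than a pod, in which case expose the deployment instead):

    # Start the official Tomcat image and expose it outside the cluster
    kubectl run tomcat --image=tomcat:8.0 --port=8080
    kubectl expose pod tomcat --type=NodePort --port=8080
    kubectl get pods,services   # note the assigned NodePort, then browse to it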