So, I have been running into this problem constantly --
I usually spin up EC2 machines temporarily for running some benchmarks or small projects and then shut them down when the work is done.
However, every time I spin up a new machine, I have to set up my environment all over again.
Here are the things I want to be able to configure easily:
I have custom .vimrc, .tmux.conf, and .zshrc files that need to be set up.
I have to re-install all the basic packages on Ubuntu.
I have to re-install all the Vim plugins.
Sometimes I have to partition/format the hard disk and do other sysadmin work.
I've used Docker before, but I find it too invasive for what I need. It's additional software I have to run, and I have to mount filesystems, set up an extra networking bridge, configure SSH in and out of it, and so on. So I would like to avoid Docker for this if possible.
I think Vagrant has similar problems.
I am wondering if I should just create an EC2 AMI for this. Is that the best solution to this problem?
Thanks!
Just dump your configuration to an S3 bucket and fetch it back when you need it. You can also create init scripts on your machines to install things the way you like (or fetch the data directly as you need it). You can get even more advanced, but I guess that's not needed.
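For example, a minimal user-data sketch along those lines (the bucket name is a placeholder, and it assumes an instance profile that allows s3:GetObject on that bucket):

    #!/bin/bash
    # Install the basics, then pull dotfiles back out of S3
    set -euo pipefail

    apt-get update
    apt-get install -y vim tmux zsh git awscli

    # Copy the dotfiles into the default user's home directory
    aws s3 cp s3://my-dotfiles-bucket/.vimrc     /home/ubuntu/.vimrc
    aws s3 cp s3://my-dotfiles-bucket/.tmux.conf /home/ubuntu/.tmux.conf
    aws s3 cp s3://my-dotfiles-bucket/.zshrc     /home/ubuntu/.zshrc
    chown ubuntu:ubuntu /home/ubuntu/.vimrc /home/ubuntu/.tmux.conf /home/ubuntu/.zshrc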
There are a few options you could use.
One option is to create a startup script that installs everything you like. You can then launch new machines that are fully configured. However, it takes a bit of work to get the script right.
Another option is to Stop the instance when not in use and start it later. You won't be charged for EC2, but you will be charged for the EBS volume storage.
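For example, with the AWS CLI (the instance ID is a placeholder):

    # Stop the instance when you're done; compute charges stop, EBS charges continue
    aws ec2 stop-instances --instance-ids i-0123456789abcdef0

    # Start it again when you need it
    aws ec2 start-instances --instance-ids i-0123456789abcdef0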
Or, you could create an AMI of the instance, then launch a new instance later from the AMI. This can be slightly cheaper because the AMI only keeps the storage blocks that are in-use and AMI/snapshot storage is cheaper than EBS storage.
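A sketch of that flow with the AWS CLI (IDs, names, and instance type are placeholders):

    # Create an AMI from the configured instance
    aws ec2 create-image --instance-id i-0123456789abcdef0 --name "my-dev-env-v1"

    # Later, launch a fresh, fully configured instance from that AMI
    aws ec2 run-instances --image-id ami-0abc1234def567890 --instance-type t3.medium --key-name my-key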
Google Cloud Run is new. Is it possible to run a WordPress Docker container on it? Perhaps using GCE to host the MySQL/MariaDB database. I can't find any discussion on this.
Although I think this is possible, it's not a good use of your time to go through this exercise. Cloud Run might not be the right tool for the job.
UPDATE someone blogged a tutorial about this (use at your own risk): https://medium.com/acadevmy/how-to-install-a-wordpress-site-on-google-cloud-run-828bdc0d0e96
Here are a few points to consider:
(UPDATE: this is not true anymore.) At the time of writing, Cloud Run didn't support natively connecting to Cloud SQL (MySQL). There were some hacks, like spinning up a cloudsql_proxy inside the container (see "How to securely connect to Cloud SQL from Cloud Run?"), which could work OK.
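Since that update, Cloud Run can attach a Cloud SQL instance directly; roughly something like this (service, project, region, and connection names are placeholders, and the env var name assumes the official wordpress image):

    # Deploy WordPress with a Cloud SQL instance attached; the database is then
    # reachable over the unix socket /cloudsql/<CONNECTION_NAME>
    gcloud run deploy my-wordpress \
      --image gcr.io/my-project/my-wordpress \
      --region us-central1 \
      --add-cloudsql-instances my-project:us-central1:my-sql-instance \
      --set-env-vars WORDPRESS_DB_HOST=localhost:/cloudsql/my-project:us-central1:my-sql-instance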
You need to prepare your wp-config.php beforehand and bake it into your container image. Since your container will be wiped away every now and then, you should install your blog (creates a wp-config.php) and bake the resulting file into the container image, so that when the container restarts, it doesn't lose your wp-config.php.
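A minimal sketch of baking it in, assuming the official wordpress image from Docker Hub (written as a shell heredoc; the image name is a placeholder):

    # Bake a pre-generated wp-config.php into an image based on the official wordpress image
    cat > Dockerfile <<'EOF'
    FROM wordpress:latest
    # Generated once by a local install; copied in so container restarts don't lose it
    COPY wp-config.php /var/www/html/wp-config.php
    EOF

    docker build -t gcr.io/my-project/my-wordpress .
    docker push gcr.io/my-project/my-wordpress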
Persistent storage might be a problem: similar to point #2, restarting a container will delete any files saved to the container after it started. You need to make sure that things like installed plugins and image uploads do not write to the container's local filesystem. (I'm not sure whether WordPress lets you write such files to other places like GCS/S3 buckets.) To achieve this, you'd probably end up using something like the https://wordpress.org/plugins/wp-stateless/ plugin or gcs-media-plugin.
Any file written to the local filesystem of a Cloud Run container also counts towards your container's available memory, so your application may run out of memory if you keep writing files to it.
Long story short, if you can make sure your WP installation doesn't write/modify files on its local disk, it should work fine.
I think Cloud Run might be the wrong tool for the job here since it runs "stateless" containers, and it's pretty damn hard to make WordPress stateless, especially if you're installing themes/plugins, configuring things etc. Not to mention, your Cloud SQL server won't be "serverless", and you'll be paying for it while it's not getting any requests as well.
(P.S. This would be a good exercise to try out and write a blog post about! If you do that, add it to the awesome-cloudrun repo.)
I would like a Jenkins master and slave setup for running specs on standard Rails apps (PostgreSQL, Sidekiq/Redis, RSpec, capybara-webkit, a common Rails stack), using Docker so it can be put on other machines as well. I've got a few good stationary machines collecting dust.
Can anybody share an executable docker jenkins rails stack example?
What prevents that from being done?
Preferably with a master-slave setup too.
Preface:
After days online following several tutorials with no success, I am about to abandon the project. I have a basic understanding of Docker, docker-machine, docker-compose, and volumes, and I have a Docker registry with a few simple apps.
I know next to nothing about Jenkins, but I've used Docker pretty extensively on other CI platforms. So I'll just write about that. The level of difficulty is going to vary a lot based on your app's dependencies and quirks. I'll try and give an outline that's pretty generally useful, and leave handling application quirks up to you.
I don't think the problem you describe should require you to mess about with docker-machine. docker build and docker-compose should be sufficient.
First, you'll need to build an image for your application. If your application has a comprehensive Gemfile, and not too many dependencies relating to infrastructure etc (e.g. files living in particular places that the application doesn't set up for itself), then you'll have a pretty easy time. If not, then setting up those dependencies will get complicated. Here's a guide from the Docker folks for a simple Rails app that will help get you started.
Once the image is built, push it to a repository such as Docker Hub. Log in to Docker Hub and create a repo, then use docker login and docker push <image-name> to make the image accessible to other machines. This will be important if you want to build the image on one machine and test it on others.
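For example (the image and repository names are placeholders):

    # Build the application image from the Dockerfile at the repo root
    docker build -t myuser/myapp:latest .

    # Log in to Docker Hub and push the image so other machines can pull it
    docker login
    docker push myuser/myapp:latest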
It's probably worth spinning off a job to run your app's unit tests inside the image once the image is built and pushed. That'll let you fail early and avoid wasting precious execution time on a buggy revision :)
Next you'll need to satisfy the app's external dependencies, such as Redis and postgres. This is where the Docker Compose file comes in. Use it to specify all the services your app needs, and the environment variables etc that you'll set in order to run the application for testing (e.g. RAILS_ENV).
You might find it useful to provide fakes of some non-essential services such as in-memory caches, or just leave them out entirely. This will reduce the complexity of your setup, and be less demanding on your CI system.
The guide from the link above also has an example compose file, but you'll need to expand on it. The most important thing to note is that the name you give a service (e.g. db in the example from the guide) is used as a hostname in the image. As #tomwj suggested, you can search on Docker Hub for common images like postgres and Redis and find them pretty easily. You'll probably need to configure a new Rails environment with new hostnames and so on in order to get all the service hostnames configured correctly.
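A rough compose sketch along those lines, written as a shell heredoc so it can be dropped straight into a CI step (service names, image tags, and environment variables are assumptions):

    # A minimal compose file; "db" and "redis" become hostnames inside the web container
    cat > docker-compose.ci.yml <<'EOF'
    version: "3"
    services:
      db:
        image: postgres:11
        environment:
          POSTGRES_PASSWORD: password
      redis:
        image: redis:5
      web:
        image: myuser/myapp:latest
        environment:
          RAILS_ENV: test
          DATABASE_URL: postgres://postgres:password@db:5432/myapp_test
          REDIS_URL: redis://redis:6379/0
        depends_on:
          - db
          - redis
    EOF

    # Pull images and bring up the backing services
    docker-compose -f docker-compose.ci.yml pull
    docker-compose -f docker-compose.ci.yml up -d db redis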
You're starting all your services from scratch here, including your database, so you'll need to migrate and seed it (and any other data stores) on every run. Because you're starting from an empty postgres instance, expect that to take some time. As a shortcut, you could restore a backup from a previous version before migrating. In any case, you'll need to do some work to get your data stores into shape, so that your test results give you useful information.
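Using the compose sketch above, that preparation step might look like this (task names assume a standard Rails setup):

    # Create, migrate, and seed the test database before running the suite
    docker-compose -f docker-compose.ci.yml run --rm web \
      bundle exec rake db:create db:migrate db:seed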
One of the tricky bits will be getting Capybara to run inside your application Docker image, which won't have any X displays by default. xvfb (X Virtual Frame Buffer) can help with this. I haven't tried it, but building on top of an image like this one may be of some help.
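If you go the xvfb route, the test invocation inside the container might look like this (assuming xvfb is installed in the image):

    # Run the RSpec suite under a virtual X display so capybara-webkit can render pages
    docker-compose -f docker-compose.ci.yml run --rm web \
      xvfb-run -a bundle exec rspec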
Best of luck with this. If you have the time to persist with it, it will really help you learn about what your application really depends on in order to work. It certainly did for me and my team!
There's quite a lot to unpack in that question; this is a guide on how to get started and where to look for help.
In short, there's nothing preventing it, although it's reasonably complex and bespoke to set up, hence no off-the-shelf solution.
I'm assuming your aim is to have Jenkins build, deploy to Docker, and then test a Rails application in a Dockerised environment.
Provision the stationary machines; I'd suggest using Ansible Galaxy roles.
Install Jenkins
Install Docker
Set up a local Docker registry (see the sketch after this list).
Set up the Docker environment: the way to bring up multiple containers is to use docker-compose, which will allow you to bring up the DB, Redis, Rails, etc. using the public Docker Hub images.
Create a Jenkins pipeline
Build the Rails app Docker image; this will contain the Rails app.
Deploy the application; this updates the application in the Docker swarm from the local Docker registry.
Test: run the tests against the now-running application.
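A rough sketch of the local-registry, build/push, and deploy steps (hostnames, image names, and tags are placeholders):

    # Run a local registry on one of the machines
    docker run -d -p 5000:5000 --restart=always --name registry registry:2

    # Build, tag, and push the Rails app image to it
    docker build -t localhost:5000/myapp:build-42 .
    docker push localhost:5000/myapp:build-42

    # Update the running service in the swarm to the new image from the registry
    docker service update --image localhost:5000/myapp:build-42 myapp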
I've left out the Jenkins master/slave config because if you're only running on one machine you can increase the number of executors. E.g. the master can execute more jobs at the expense of speed.
I am trying to help a sysadmin group reduce server and service downtime on the projects they manage. Their biggest issue is that they have to take down a service, install the upgrade or new configuration, and then restart it and hope it works.
I have heard that docker is a solution to this problem, but usually from developer circles in the context of deploying their node/python/ruby/c#/java, etc. applications to production.
The group I am trying to help is using vendor software that requires a lot of configuration and management. Can docker still be used in this case? Can we install any random software on a container? Then keep that in a private repository, upgrade versions, etc.?
This is a Windows environment, if that makes any difference.
Docker excels at stateless applications. You can use it for applications with persistent data, but that requires the use of volumes.
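For example, a quick sketch of keeping application data in a named volume (image name and paths are placeholders):

    # Create a named volume so the app's data survives container restarts and redeploys
    docker volume create vendorapp-data

    # Mount it into the container (for Windows containers the target would be a path like C:\data)
    docker run -d --name vendorapp -v vendorapp-data:/data vendorapp:1.0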
Can docker still be used in this case?
Yes, but it depends on the application. It should be able to be installed headless (unattended), and there are a couple of other fairly specific requirements (e.g. talking to third-party servers to obtain a license can create issues).
Can we install any random software on a container?
Yes... but remember that when the container restarts, that software will be gone. It's better to bake it into an image and then deploy that. See my example below.
Then keep that in a private repository, upgrade versions, etc.?
Yes.
Here is an example pipeline:
Create a Dockerfile for the OS and what steps it takes to install the application. (Should be headless)
Build the image (at this point, it's called an image, not a container)
Test the image locally by creating a local container. This container is what has the configuration data such as environment variables, the volumes for persistent data it needs, etc.
If it satisfies the local developers' wants, then you can either:
Let your build servers create the image and publish it to an internal Docker registry (best practice), or
Let your local developer publish it to an internal Docker registry.
At that point, your next-level environments can pull the image down from the Docker registry, configure it, and create the container; see the sketch below.
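A rough sketch of that pipeline (registry host, image name, tag, and variables are placeholders):

    # Build the image from the Dockerfile that encodes the vendor install steps
    docker build -t registry.internal:5000/vendorapp:2.1.0 .

    # Test it locally; configuration (env vars) and persistent data (volume) stay outside the image
    docker run -d --name vendorapp-test \
      -e LICENSE_SERVER=license.internal \
      -v vendorapp-data:/data \
      registry.internal:5000/vendorapp:2.1.0

    # Publish to the internal registry so the next environments can pull and run it
    docker push registry.internal:5000/vendorapp:2.1.0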
In short, it will require a lot of elbow grease but is possible.
Can we install any random software on a container?
Generally yes, but you can run into many problems with legacy software that was developed to run on bare metal.
First, there can be persistence problems, but those can be solved using volumes.
Second, a program that works well on a full OS may not work as well in a container. Containers have some differences from VMs or bare metal. For example, due to the missing init process, some containers have zombie-process issues. You can read about other differences here.
Docker brings big benefits for stateless apps, but some heavy legacy apps may not work so well inside containers and should be tested thoroughly before using them in production.
I have an AWS EC2 account where I am running a couple of web apps on nginx. I don't know much about Docker, except that it is a container system that takes a snapshot of the filesystem. Now, for some reason, I am forced to switch accounts, and I have opened a new AWS EC2 account. Can I use Docker to set up a container in my old virtual system, then take an image and deploy it in my new system? This way I could avoid the headache of having to install many components and configure nginx and all the applications in my new system again. Can I do that? If so, how?
According to the best practices of Docker and its CaaS model, images are not supposed to "virtualize" a whole lot of services; on the contrary, Docker does not aim to take a snapshot of the system (it uses filesystem overlays to create images, but these are not snapshots).
So basically, if your (still unclear) question is "Can I virtualize my whole system into one image?", the answer is no.
What you can do is use an image for each of your services (you'll find everything you need on Docker Hub) to keep a clean system on your new account.
Another solution would be to list all the installed Linux packages on your old system, install them on the new one, and copy over all the configuration files.
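On Debian/Ubuntu, a quick sketch of that approach (hostnames and paths are placeholders):

    # On the old instance: record the installed package selections
    dpkg --get-selections > packages.list

    # On the new instance: replay the selections and install everything
    sudo dpkg --set-selections < packages.list
    sudo apt-get update
    sudo apt-get dselect-upgrade -y

    # Then copy over configuration, e.g. the nginx config, with rsync/scp
    rsync -avz /etc/nginx/ new-host:/etc/nginx/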
My goal is to simulate a cluster environment where I can test my applications and tools.
I need to have a minimum of 3 Docker nodes (not containers) running, and to have access to them over SSH.
I have tried the following:
1 - Installing multiple VMs from the Ubuntu Minimal CD.
Result: I ended up with huge files to maintain, and repeating the process is really painful and unpleasant.
2 - Downloading a Vagrant box that has Docker inside (there are some here).
Result: I can't access them over SSH, and can't really fire up more than one box (OK, I can, but it is still not optimal).
3 - Tried to run "Kitematic" multiple times, but had no success with it.
What do you do to test your clustering tools for docker?
My only "easy" solution is to run multiple instances from some provider and pay per hour usage, but that is really not that easy when I am offline, and when I just don't want to pay.
I don't need to run multiple "containers", but multiple "hosts", which I can then join together into a single cluster to simulate a distributed data center.
You could use docker-machine to create a few VMs locally. You can connect to all of them by changing the environment variables.
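A minimal sketch with the VirtualBox driver (node names are arbitrary):

    # Create three local Docker hosts as VirtualBox VMs
    for n in node1 node2 node3; do
      docker-machine create -d virtualbox "$n"
    done

    # Point the docker CLI at one of them via environment variables
    eval "$(docker-machine env node1)"
    docker info

    # Each host is reachable over SSH as well
    docker-machine ssh node2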
You might also be interested in something like https://github.com/dnephin/compose-swarm-sandbox/. It creates multiple docker hosts inside containers using https://github.com/dnephin/docker-swarm-slave.
If you are using something other than swarm, you would just remove that service from /srv/.
I would recommend using docker-machine for this purpose, as the machines it creates are lightweight and very easy to install, run, and manage.
Try creating 3-4 Docker machines, pull the swarm image onto them, make a cluster, and use docker-compose to manage the cluster in one go.
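That refers to the classic swarm image; with newer Docker you can instead use the built-in swarm mode, roughly like this (node names assume machines created with docker-machine as above):

    # Initialise swarm mode on the first machine
    eval "$(docker-machine env node1)"
    docker swarm init --advertise-addr "$(docker-machine ip node1)"

    # Grab the worker join token and join the remaining machines to the cluster
    TOKEN="$(docker swarm join-token -q worker)"
    for n in node2 node3; do
      eval "$(docker-machine env $n)"
      docker swarm join --token "$TOKEN" "$(docker-machine ip node1):2377"
    done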
Option 2 should be valid, but what you looked at was a VM box using the Docker provisioner. I would recommend looking at the Vagrant Docker provider instead: you do not need a Vagrant box in this scenario, just Docker images. The Vagrantfile is still there, though, and you can easily set up your multiple machines from that single Vagrantfile.
Here is a nice blog post, but I am sure there are plenty of other good articles that explain it in detail.
I recommend Running CoreOS on Vagrant; it has been designed for exactly this kind of request, with clustering enabled and 3 instances started by default.
With etcd and fleetd, you should be able to get the cluster working properly.