Can I run a series of Linux commands with Jenkins without deploying an application?

I want to automate running a series of Linux commands in a Linux environment, either on a newly launched EC2 instance with the Amazon Linux 2 AMI or in a container with the Amazon Linux 2 image. I won't be deploying an application. Is this something I can accomplish with Jenkins? I basically need Jenkins to launch a new EC2 instance or a container, run some commands, and possibly even terminate the instance/container once it finishes running the commands (this last part is optional). Thank you.

Related

Is there a way to convert Dockerfile to an EC2 instance (for example)?
I.e., a script that interprets the Dockerfile and installs the correct versions of all dependencies, plus any other deployment operations, on a bare-metal EC2 instance.
I do not mean running the Docker image inside Docker, but deploying it directly on the instance.
I do not think there is a tool that does this for you, but you can do it with the help of the Dockerfile itself.
First, choose the same OS for your EC2 instance as the one used in the Dockerfile, which you can find in the FROM line at the top. For example, if the Dockerfile says FROM ubuntu, launch an Ubuntu EC2 machine; the rest of the commands can then be run on the instance just as they appear in the Dockerfile.
But we also want Docker-like behaviour, meaning we create the environment once and run it on different EC2 machines, possibly in different regions. For this, launch an instance, prepare and test it, and then create an AWS AMI from that EC2 instance. You can now treat this AWS AMI like a Docker image.
Amazon Machine Image (AMI)
An Amazon Machine Image (AMI) provides the information required to launch an instance. You must specify an AMI when you launch an instance. You can launch multiple instances from a single AMI when you need multiple instances with the same configuration. You can use different AMIs to launch instances when you need instances with different configurations.
(See the AWS documentation on creating an AMI.)
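As a rough sketch, assuming the instance has already been prepared, baking and reusing an AMI from the command line could look like this (the instance ID, image name, and instance type are placeholders):

    # Create an AMI from the prepared instance (standard AWS CLI call).
    aws ec2 create-image \
      --instance-id i-0123456789abcdef0 \
      --name "my-prepared-app-image" \
      --description "Image baked from a manually prepared EC2 instance"

    # Later, launch as many instances as needed from that AMI, much like
    # starting containers from a Docker image.
    aws ec2 run-instances \
      --image-id ami-0example1234567890 \
      --instance-type t3.micro \
      --count 1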
The second option is to put the complete script in the instance's user-data section; you can think of this as the Docker entrypoint, where you prepare things at run time.
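A minimal user-data sketch, assuming an Ubuntu instance and nginx as a stand-in for whatever the Dockerfile would have installed:

    #!/bin/bash
    # Hypothetical user-data script: runs once at first boot,
    # playing roughly the same role as a Dockerfile's RUN/ENTRYPOINT steps.
    apt-get update
    apt-get install -y nginx
    systemctl enable --now nginx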

Best Practices for Cron on Docker

I've been running cron with Docker for some time, but I'm not sure my setup is optimal. I have one cron container that runs about 12 different scripts. I can edit the schedule of the scripts, but in order to deploy a new version of the software (some scripts run for about half a day) I have to create a new container to run some of the scripts while the others finish.
One option I'm considering is running one container per script (the containers would share everything in the image except the crontab), but this still makes it hard to coordinate updates across multiple containers that share some of the same code.
The other alternative I'm considering is running cron on the host machine, with each crontab entry being a docker run command. Doing this would let me control which image the next run uses via an environment variable in the crontab.
Does anybody have any experience with either of these two solutions? Are there any other solutions that could help?
If you are just running standalone Docker (a single host) and need to run a bunch of cron jobs without thinking too much about their impact on the host, then keeping it simple and running them on the host works just fine.
It would make sense to run them in Docker if you benefit from Docker features like limiting memory and CPU usage (so they don't do anything disruptive). If you also use a log driver that ships container logs to an external logging service, you can easily monitor the jobs, which is another good reason to do it. The last (but obvious) advantage is that deploying new software as a Docker image, instead of messing around on the host, is often a winner.
It's a lot cleaner to build one single image containing all the code you need. Then you trigger docker run commands from the host's cron daemon and override the command/entrypoint. The container will then die and delete itself after the job is done (you might need to capture the container output to logs on the host, depending on what logging driver is configured). Try not to pass in config values or parameters you change often, so you keep your cron setup as static as possible; it can get messy if a new image also means you have to edit your cron data on the host.
When you use docker run like this you don't have to worry about updating images while jobs are running. Just make sure you tag them with, for example, latest so that the next job will use the new image.
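A minimal sketch of what such a host crontab entry could look like (the image name, script path, and resource limits are placeholders):

    # Run a job container every night at 02:00.
    # --rm removes the container when the job finishes; the :latest tag means
    # the next run automatically picks up a newly pushed image.
    0 2 * * *  docker run --rm --memory 256m --cpus 0.5 myorg/cron-jobs:latest /jobs/nightly-report.sh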
Having 12 containers running in the background, each with its own cron daemon, also wastes some memory, but the worst part is that cron doesn't use the environment variables from the parent process, so if you are injecting config with env vars you'll have to hack around that mess (write them to disk when the container starts, and such).
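One common shape of that workaround, sketched as a hypothetical container entrypoint (the cron invocation and paths may differ for your base image):

    #!/bin/bash
    # Cron jobs don't inherit the container's environment, so persist it
    # to a file at startup...
    declare -px > /etc/job-env.sh
    # ...and have every crontab entry source it first, e.g.:
    #   0 * * * *  . /etc/job-env.sh && /jobs/sync.sh
    exec cron -f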
If you worry about jobs running in parallel, there are tons of task-scheduling services out there you can use, but that might be overkill for a single standalone Docker host.

What is image "containersol/minimesos" in minimesos?

I was able to set up the minimesos cluster on my laptop and could also deploy a small command-line utility. Now the questions:
What is the image "containersol/minimesos" used for? It gets pulled, but I don't see it running when I do "docker ps"; "docker images" does list it.
How come when I run "top" inside the mesos-agent container, I see all the processes running on my host (laptop)? This is a bit strange.
I was trying to figure out what's inside the minimesos script. I see that there's just one "docker run ..." command. I would really appreciate knowing what that command does that results in 4 containers (1 master, 1 slave, 1 ZooKeeper, 1 Marathon) running on my laptop.
containersol/minimesos runs the Java code that is the core of minimesos. It only runs until it has executed the command from the CLI. When you do minimesos up, the command name and the minimesosFile are passed to this container. The container in turn executes the Java code that creates the other containers forming the Mesos cluster specified in the minimesosFile. That should answer #3 as well. Take a look at the MesosCluster class; that's the root of where the magic happens.
I don't know the answer to #2; I will get back to you when I find out.
Every minimesos command runs as a short-lived container whose image is containersol/minimesos.
When you run 'minimesos up' it launches containersol/minimesos with 'up' as the argument. It then launches a cluster by starting other containers such as containersol/mesos-agent and containersol/mesos-master. After the cluster is up, the containersol/minimesos container exits and is removed.
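To make the pattern concrete, a wrapper script that drives a short-lived CLI container typically looks something like the sketch below; this is only an illustration of the general technique, not the literal contents of the minimesos script, and the mounts shown are assumptions:

    # Forward the CLI arguments to a throwaway container. Mounting the Docker
    # socket is what lets the containerized CLI create sibling containers
    # (master, agent, ZooKeeper, Marathon) on the host's Docker daemon.
    docker run --rm \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v "$(pwd)":/tmp/minimesos \
      containersol/minimesos "$@"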
We have separated the CLI and the minimesos core as a refactoring to prepare for the upcoming API module. We are creating an API to support clients in different programming languages. The first client will be a Golang client.
In this new setup minimesos will launch a long-running API server, and the minimesos CLI commands will call that API. The clients will also launch the API server and call the API.

How can my friend and I share an exact development environment together while on different operating systems?

I use a Mac for development and deployment, and I need to create an isolated environment. I've been exploring Vagrant and Docker, and it seems that in order to run Docker I need to be on a Linux environment. I'm running a Vagrant instance with Ubuntu, the same distribution my partner uses on their desktop.
My question is: can my partner run the Docker container directly on their Ubuntu machine instead of having to set up Vagrant like I did? Do my server and app run inside my Docker container? (I'm using the MEAN stack.)
I'm trying to build a workflow and piece it all together.
He could probably get Docker to run directly, but packaging it all inside a Vagrant VM really is the way to go, as that will keep it transportable across the board.
You can skip the Vagrantfile and just share the Docker images. There should be no detectable host differences from within the container.
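A minimal sketch of sharing an image without a registry (the image name and ports are placeholders for whatever the MEAN app uses):

    # On the machine that built the image: export it to a tarball.
    docker save -o mean-app.tar myteam/mean-app:dev

    # On the partner's Ubuntu machine: load the exact same image and run it.
    docker load -i mean-app.tar
    docker run -d -p 3000:3000 myteam/mean-app:dev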

Running and Deploying Rails to Docker Container

I am a total noob to Linux containers and have been spending some time learning about Docker, so forgive any confusion throughout this question. Currently, I have a Rails app in production deployed via Capistrano. My cloud servers are maintained with Opscode Chef on the Debian Wheezy distribution. For development, I have a Vagrant VM preinstalled with the app and its services.
If I were to employ Docker, where would my app sit? The container or the host? How would I deploy (production) and share directories (development)? Can I run all my additional services, i.e. memcache, Redis, PostgreSQL, etc., on the same server using Docker? I can maybe envision the potential of Docker, but I'm having trouble seeing its practical use.
Seems like containers are part of the future. Any guidance for someone making the switch from virtualization?
If I were to employ Docker, where would my app sit?
It could sit inside the container or it could sit on the host (you can use docker build to copy the app into the container).
How would I deploy (production) and share directories (development)?
Deploying your app would mean committing your local container into an image, publishing it, and running a container from the published image on your servers. I have not tried sharing directories between host and container, but you can try this: https://gist.github.com/jpetazzo/5668338 . You can also write a Dockerfile that copies a directory to a target location in the container; Docker's docs on building images will help you there.
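A rough sketch of that deploy cycle with plain Docker commands (the container, image, and registry names are placeholders):

    # Turn a prepared container into an image and publish it.
    docker commit rails-dev myregistry.example.com/myapp:v1
    docker push myregistry.example.com/myapp:v1

    # On the production server, run a container from the published image.
    docker run -d --name myapp -p 80:3000 myregistry.example.com/myapp:v1

    # For development, mount a host directory into the container instead.
    docker run -it -v "$(pwd)":/app -p 3000:3000 myregistry.example.com/myapp:v1 bash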
Can I run all my additional services, i.e. memcache, Redis, PostgreSQL, etc., on the same server using Docker?
Yes. You will be running multiple containers on the same server.
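For example, each service would simply be its own container on the same host (the images, ports, and password here are illustrative):

    # One container per service, all on the same server.
    docker run -d --name redis -p 6379:6379 redis
    docker run -d --name postgres -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres
    docker run -d --name memcached -p 11211:11211 memcached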
I'm no expert and I haven't even used Docker myself, but as I understand it, your app sits inside a Docker container. Ideally you would deploy a whole container with your own Ruby version installed, and so on.
The big benefit is that you can test in your staging system exactly the same container that you're going to ship to production. So you're able to test the complete system with all installed C extensions, the exact same ls command, and so on.
