I'm trying to create a lab of virtual machines on the VMware hypervisor to install a RHEL cluster. I've seen that Packer and Terraform look very interesting, but I can't find clear and/or detailed samples of their workflow, specifically how to create the image with Packer and then have Terraform consume it.
I have seen that Packer has build capabilities and even some kind of deployment ones, and I don't understand whether these overlap with Terraform. I've read that some automation is possible via another HashiCorp product, Atlas, but I don't want to use it, at least at this stage of studying and trialing the software.
So what I'd like to do is create VMware-compatible virtual machine images with Packer (a RHEL base plus other capabilities) and pass them to a Terraform configuration that creates the VMs on my ESXi host.
Hope to find guidance.
I am not sure of your specific use case, but Terraform does have data sources that make it easier to connect images built by Packer to Terraform: https://www.terraform.io/docs/configuration/data-sources.html
Here is a partial example of using a Packer-built AMI for an AWS EC2 instance:
data "aws_ami" "bastion" {
filter {
name = "state"
values = ["available"]
}
filter {
name = "tag:Name"
values = ["Bastion"]
}
most_recent = true
}
resource "aws_instance" "bastion" {
ami = "${data.aws_ami.bastion.id}"
# ...
}
I have also used bash scripts to parse out Packer-generated values and dump them into tfvars files, which Terraform then consumed.
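For instance (just a sketch; the variable name and AMI ID below are placeholders), such a script might write a terraform.tfvars file, and Terraform would pick the value up through a declared variable:

# terraform.tfvars, written by the bash script (placeholder AMI ID)
bastion_ami = "ami-0123456789abcdef0"

# variables.tf
variable "bastion_ami" {}

# main.tf
resource "aws_instance" "bastion" {
  ami = "${var.bastion_ami}"
  # ...
}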
Is there a way to convert a Dockerfile to an EC2 instance (for example)?
That is, a script that interprets the Dockerfile, installs the correct versions of all dependencies, and performs any other deployment operations on a bare-metal EC2 instance.
I do not mean running the Docker image inside Docker, but deploying it directly on the instance.
I do not think you can do this with an existing tool, but you can do it with the help of the Dockerfile itself.
First, choose the OS for your EC2 launch based on the one used in the Dockerfile, which you can find at the start of the Dockerfile. Suppose it says FROM ubuntu; then choose Ubuntu for your EC2 machine. The rest of the commands will be the same ones you run in the Dockerfile.
But we also want Docker-like behaviour, meaning we create the image once and run it on different EC2 machines in different regions. For this, launch and prepare one instance, test it accordingly, and then create an AWS AMI from that EC2 instance. Now you can treat this AMI like a Docker image.
Amazon Machine Image (AMI)
An Amazon Machine Image (AMI) provides the information required to launch an instance. You must specify an AMI when you launch an instance. You can launch multiple instances from a single AMI when you need multiple instances with the same configuration. You can use different AMIs to launch instances when you need instances with different configurations.
creating-an-ami
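If you want to capture that AMI-creation step in code rather than doing it by hand in the console, Terraform's aws_ami_from_instance resource can snapshot a prepared instance into an AMI and launch copies from it (purely a sketch; the instance ID, names, and instance type are placeholders):

# Sketch only: the source instance ID stands in for your prepared machine.
resource "aws_ami_from_instance" "golden" {
  name               = "my-app-golden-ami"
  source_instance_id = "i-0123456789abcdef0"
}

# Any number of copies can now be launched from that AMI, much like
# running several containers from one Docker image.
resource "aws_instance" "clone" {
  ami           = "${aws_ami_from_instance.golden.id}"
  instance_type = "t2.micro"
}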
The second option is to put the complete script in the user-data section; you can think of this as the Docker ENTRYPOINT, where we prepare things at run time.
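If you provision the instance with Terraform (as in the earlier example), a minimal sketch of passing such a script via user_data could look like this; the AMI ID, instance type, and packages are placeholders:

# Sketch only: AMI ID, instance type and packages are placeholders.
resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0" # hypothetical Ubuntu AMI
  instance_type = "t2.micro"

  # Runs once at first boot, roughly like the RUN/ENTRYPOINT steps of a Dockerfile.
  user_data = <<-EOF
    #!/bin/bash
    apt-get update
    apt-get install -y nginx
    systemctl enable --now nginx
  EOF
}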
I work on a VM on Google Cloud for my machine learning work.
In order to avoid installing all the libraries and modules from scratch every time I create a new VM on GCP or elsewhere, I want to save the VM that I created on Google Cloud to GitHub as a Docker image, so that next time I can just load and run it as a Docker image and have my VM ready for work.
Is this a straightforward task? Any ideas on how to do that, please?
When you create a Compute Engine instance, it is built from an artifact called an "image". Google provides some OS images from which you can build. If you then modify these images by (for example) installing packages or performing other configuration, you can then create a new custom image based upon your current VM state.
The recipe for this task is fully documented within the Compute Engine documentation here:
https://cloud.google.com/compute/docs/images/create-delete-deprecate-private-images
Once you have created a custom image, you can instantiate new VM instances from these custom images.
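If you later want to automate that flow, here is a minimal sketch assuming you use Terraform (the project, zone, disk path, and machine names are all placeholders): a custom image is declared from the prepared VM's disk, and new instances boot from it.

# Sketch only: the source disk path and all names are placeholders.
resource "google_compute_image" "ml_base" {
  name        = "ml-base-image"
  source_disk = "projects/my-project/zones/us-central1-a/disks/my-prepared-vm"
}

resource "google_compute_instance" "worker" {
  name         = "ml-worker"
  machine_type = "n1-standard-4"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "${google_compute_image.ml_base.self_link}"
    }
  }

  network_interface {
    network = "default"
  }
}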
I have tests that I run locally using a docker-compose environment.
I would like to implement these tests as part of our CI using Jenkins with Kubernetes on Google Cloud (following this setup).
I have been unsuccessful because docker-in-docker does not work.
It seems that right now there is no solution for this use case. I have found other questions related to this issue, here and here.
I am looking for solutions that will let me run docker-compose. I have found solutions for running docker, but not for running docker-compose.
I am hoping someone else has had this use-case and found a solution.
Edit: Let me clarify my use-case:
1. When I detect a valid trigger (i.e. a push to the repo) I need to start a new job.
2. I need to set up an environment with multiple dockers/instances (docker-compose).
3. The instances in this environment need access to code from git (mount volumes / create new images with the data).
4. I need to run tests in this environment.
5. I then need to retrieve results from these instances (JUnit test results for Jenkins to parse).
The problems I am having are with steps 2 and 3.
For step 2 there is a problem running this in parallel (more than one job), since the Docker context is shared (docker-in-docker issues). If this runs on more than one node I get clashes because of shared resources (ports, for example). My workaround is to limit it to one running instance and queue the rest (not ideal for CI).
For step 3 there is a problem mounting volumes, since the Docker context is shared (docker-in-docker issues). I cannot mount the code that I check out in the job, because it is not present on the host that is responsible for running the Docker instances I trigger. My workaround is to build a new image from my template and just copy the code into the new image, then use that for the tests (this works, but means I need docker cp tricks to get data back out, which is also not ideal).
I think the better way is to use pure Kubernetes resources to run the tests directly on Kubernetes, not via docker-compose.
You can convert your docker-compose files into Kubernetes resources using the kompose utility.
You will probably need to adapt the conversion result, or you may have to convert your docker-compose objects into Kubernetes objects manually. Possibly, you can just use Jobs with multiple containers instead of a combination of Deployments + Services.
Anyway, I definitely recommend using Kubernetes abstractions instead of running tools like docker-compose inside Kubernetes.
Moreover, you will still be able to run the tests locally, using Minikube to spawn a small all-in-one cluster right on your PC.
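As a rough sketch of the Job idea (the registry, image names, and commands are placeholders, and the same thing can be written as a plain YAML manifest applied with kubectl), a test Job declared with the Terraform Kubernetes provider might look like this:

# Sketch only: images and commands are placeholders for your services and test runner.
resource "kubernetes_job" "integration_tests" {
  metadata {
    name = "integration-tests"
  }

  spec {
    template {
      metadata {}
      spec {
        # The service under test and the test runner share one Pod, so the
        # runner reaches it on localhost, much like docker-compose networking.
        container {
          name  = "app"
          image = "registry.example.com/my-app:latest"
        }

        # Note: for the Job to complete, the long-running app container has to
        # be stopped once the tests finish (or be started by the test script itself).
        container {
          name    = "tests"
          image   = "registry.example.com/my-tests:latest"
          command = ["./run-tests.sh"]
        }

        restart_policy = "Never"
      }
    }

    backoff_limit = 0
  }
}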
I am wondering how to make the machines that host Docker easily replaceable. I would like something like a Dockerfile that contains instructions on how to set up the machine that will host Docker. Is there a way to do that?
The naive solution would be to create an official "docker host" binary image to install on new machines, but I would like to have something that is reproducible and transparent, like a Dockerfile.
It seems like tools like Vagrant, Puppet, or Chef may be useful, but they appear to be aimed at virtual machine procurement and they all seem to require setting up some sort of "master node" server. I am not going to be spinning machines up and tearing them down regularly, so a master server is a waste of a server; I just want something that is reproducible in the event I need to set up or replace a machine.
This is basically what Docker Machine does for you: https://docs.docker.com/machine/overview/
Other "orchestration" systems will make this automated and easier as well.
There are lots of solutions to this, with no real one-size-fits-all answer.
Chef and Puppet are the popular configuration management tools that typically use a centralized server. Ansible is another option that typically runs without a server and just connects over SSH to configure the host. All three of these work very similarly, so if your concern is simply managing the CM server, Ansible may be the best option for you.
For VMs, Vagrant is the typical solution, and it can be combined with other tools like Ansible to provision the VM after creating it.
In the cloud space, there are tools like Terraform or vendor-specific tools like CloudFormation.
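As a rough sketch of the Terraform route (the AMI ID, key name, and instance type are placeholders), the entire definition of a Docker host can live in code, so replacing the machine is just another apply:

# Sketch only: AMI ID, key name, and instance type are placeholders.
resource "aws_instance" "docker_host" {
  ami           = "ami-0123456789abcdef0"
  instance_type = "t2.medium"
  key_name      = "my-key"

  # Everything needed to turn a plain VM into a Docker host is declared here,
  # so rebuilding the machine is reproducible and transparent.
  user_data = <<-EOF
    #!/bin/bash
    curl -fsSL https://get.docker.com | sh
    systemctl enable --now docker
  EOF
}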
Docker is working on a project called InfraKit to deploy infrastructure the way Compose deploys containers. It includes hooks for several of the above tools, including Terraform and Vagrant. For your requirements, this may be overkill.
Lastly, for designing VM images, Docker recently open-sourced its Moby project, which creates a VM image containing a minimal container OS, the same one used under the covers in Docker for Windows, Docker for Mac, and possibly some of the cloud hosting providers.
We automate Docker installation on hosts using Ansible + Jenkins. Given proper SSH access, provisioning a new Docker host is a matter of triggering a Jenkins job.
I just came across Docker and was looking through its docs to figure out how to use it to distribute a Java project across multiple nodes, while keeping the distribution platform-independent, i.e. the nodes can be running any platform. Currently I'm sending classes to the different nodes and running them there, with the assumption that these nodes have the same environment as the client. I couldn't quite figure out how to do this; any suggestions would be greatly appreciated.
I do something similar. In my humble opinion, whether or not to use Docker is not your biggest problem. However, using Docker images for this purpose can and will save you a lot of headaches.
We have a build pipeline where a very large Java project is built using Maven. The outcome of this is a single large JAR file that contains the software we need to run on our nodes.
But some of our nodes also need to run third-party software such as ZooKeeper and Cassandra. So after the Maven build we use packer.io to create a Docker image that contains all the needed components, which ends up on a web server that can be reached only from within our private cloud infrastructure.
If we want to roll out our system, we use a combination of Python scripts that talk to the OpenStack API to create virtual machines on our cloud, and Puppet, which performs the actual software provisioning inside the VMs. Our VMs are CentOS 7 images, so what Puppet actually does is add the Docker yum repos, install Docker through yum, pull the Docker image from our repository server, and finally use a custom bash script to launch our Docker image.
For each of these steps there are certainly even more elegant ways of doing it.