I'm new to Terraform and have just done all the tutorials I could find about it. I have set up multiple Docker containers and a network, which I currently start with a shell script. The general plan is to be able to start my testbed and all its components with Terraform (such as ONOS with Containernet, routers, ...).
My first question is: is Terraform made for that kind of task, or would you suggest something different? I thought using Terraform would make it easy to write new scenarios.
At this point I use the shell scripts to build and run the Docker containers. Does it make sense to let Terraform do the run (not the build) task?
Thanks for your help & opinions.
I'm new to Stack Overflow; it would be awesome if you explained a downvote, so I can learn to do it better.
Edit: build file deleted (unnecessary).
The general plan would be, to be able to start my testbed and all its components with Terraform
TL;DR
Don't do this.
This isn't what Terraform is for. Terraform provisions infrastructure. So (as an example) if you want an Azure Function, you write a Terraform file that describes what it looks like, then run Terraform to create it. It doesn't (nor should it) run your function. It simply describes how Azure should create those resources before they are run.
It seems you actually want a build pipeline that uses Terraform as part of that process to provision the infrastructure. The build script would then run the containers once Terraform has done its job.
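To illustrate that split, here is a rough sketch of a pipeline step, assuming your Terraform files describe the network/hosts and that start_testbed.sh is a hypothetical name for your existing run script:
#!/bin/sh
set -e
# 1. Provision the infrastructure described in the *.tf files (networks, hosts, ...)
terraform init
terraform apply -auto-approve
# 2. Start the testbed containers - this stays in the shell script, not in Terraform
./start_testbed.sh   # hypothetical name for your existing docker run script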
I am coming from the PHP world. Our project is in a Docker container, and you can use PHPUnit locally, on Travis, etc.: you just tell it to search for files that contain .tests. We are switching over to Go, so I am trying to learn this language, and what I am struggling to understand is testing. I have a service that is compiled, and its binary is copied into Docker and run from there. Locally I can run go test; however, because it runs outside of my Docker container, it can't connect to my database. The other problem is setting up the deployment process. I assume I can't just tell Travis to search for files that end with test.go, because they are not in the Docker container. So does anyone have good material on how this works in the Go world? Am I supposed to copy not just the binary but all the source files, and run the tests inside Docker?
Two suggestions:
1. Run go test before go build in the Docker build.
2. Use gomock if the tests need a different port and URL than the real services (for example, the database).
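A common way to do point 1 is a multi-stage Dockerfile that runs the tests as a build step, so the image only builds if the tests pass. This is a sketch with placeholder image tags and package paths, not taken from the question; tests that need the database would either have to reach it at build time or mock it (point 2):
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN go test ./...                                 # build fails here if any test fails
RUN CGO_ENABLED=0 go build -o /app ./cmd/server   # placeholder package path

FROM alpine:3.19
COPY --from=build /app /app
ENTRYPOINT ["/app"]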
I am just getting started using Ansible and would like to try and run a playbook inside of a GitLab CI/CD job. However, I'm just finding out that the ansible/ansible image on the Docker Hub does not contain ansible-playbook. I'm getting cross-eyed looking at all of the unofficial ones out there, and I'm not sure what to pick. Can someone point me to an easy/readily-available Docker image I could pull that contains ansible-playbook just so that I can experiment? I'd prefer not to have to set up a whole separate Dockerfile etc. just to get started playing around.
Just build your own. Your Dockerfile could be as simple as:
FROM python:3.9
RUN pip install ansible
You say " I'd prefer not to have to set up a whole separate Dockerfile etc", but I don't think that adds much complexity. Update as necessary for Python or Ansible dependencies that are appropriate to your particular playbooks.
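Once that image is built and pushed to a registry, the GitLab CI job only has to reference it. A minimal sketch, where the registry path, inventory and playbook names are placeholders:
run-playbook:
  image: registry.example.com/tools/ansible:latest
  script:
    - ansible-playbook -i inventory.ini site.yml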
As a new starter in the world of DevOps, I've tried to find a one-pager that explains, side by side, what each of the following technologies does best and how they are orchestrated together in a typical deployment scenario.
It's all a bit overwhelming coming in cold.
It seems like there's a technology for every single step of the deployment. Have some been superseded by others? Are they differentiated by the granularity of the artifact?
No opinions please on which is better, just resources on how they are used together.
Docker
Kubernetes
Helm
Terraform
Rancher
Docker is the de facto standard for building containers and running them in various environments.
Kubernetes is a complex framework for orchestrating containers.
Helm is a package manager for Kubernetes, used to define and install apps on a cluster.
(Rancher is a framework for managing and orchestrating containers. It can also manage Kubernetes clusters.)
A typical 'devops' scenario would involve building Docker images from source and running them in a Kubernetes cluster in production, as described in a Helm chart. The underlying infrastructure for Kubernetes could be deployed with Terraform.
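As a rough sketch of how those pieces chain together in a pipeline (the image name, chart path and values are placeholders, and the Terraform configuration is assumed to describe the cluster):
# Docker builds and publishes the artifact
docker build -t registry.example.com/myapp:1.0.0 .
docker push registry.example.com/myapp:1.0.0

# Terraform creates/updates the underlying infrastructure (e.g. the Kubernetes cluster)
terraform apply -auto-approve

# Helm deploys the app onto the cluster from a chart
helm upgrade --install myapp ./chart --set image.tag=1.0.0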
I am a bit confused about Docker EE. In my case I have a fully working Kubernetes setup. It has a few dev/test application containers, and now we want to move to production containers (apps) that are client facing, so the developers were talking about Docker EE for prod. How will this affect my existing Kubernetes infrastructure? Do I need any additional configuration for my Kubernetes, or is it just the way developers create container images that will change?
Does anything change in how the existing Kubernetes infrastructure is maintained?
Docker EE has its own way to install and set up Kubernetes.
It's simpler than the usual Kubernetes setup, in my opinion. But one thing I noticed in Docker EE: the kubelet runs as a container managed by Swarm. Most, if not all, Kubernetes control-plane components are managed by Swarm, not systemd.
I'm building an application that people will run on their own server, similar to Moodle or WordPress. I'm assuming the people running the application will be familiar with executing commands on the command line, but I can't assume they are familiar with Docker.
What I'm thinking of doing is giving them instructions on how to install Docker and docker-compose. Most installations will be small enough that both the web server and the database can run on the same machine, so they can just put the compose file in a directory and then run docker-compose up -d.
Would this be a good way to distribute the application? Of course, the docker-compose file would take into account all the considerations for running docker-compose in production.
You have two tasks:
1. Install Docker on the server
You can use something like Ansible, or just write a good manual page for them.
2. Run containers, build the application, etc.
It is very easy to create a Makefile with basic commands:
make install
make reinstall
make build
make start
make stop
If you use Ansible for step 1, you can use it for both steps 1 and 2.
If you don't need to automate step 1, a Makefile is enough. It is simple and fast, and your users can understand what the Makefile does.
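A minimal sketch of such a Makefile, assuming a docker-compose.yml next to it (target names follow the list above; recipe lines must be indented with tabs):
# Hypothetical Makefile wrapping docker-compose; adjust to your project
install:
	docker-compose pull
build:
	docker-compose build
start:
	docker-compose up -d
stop:
	docker-compose down
reinstall: stop install start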
I think: why not? If your end users are OK with using Docker, I think that's a cool way to do it.
It lets your end users avoid version and hardware differences, as you need, and you are able to push new versions of your containers, so you can do updates easily.
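For illustration, such a compose file might look roughly like this (image names, ports and credentials are placeholders); pinning the application image tag means an update is just editing the tag and re-running docker-compose up -d:
version: "3.8"
services:
  web:
    image: registry.example.com/myapp:1.2.0   # pinned version; bump to update
    ports:
      - "80:8080"
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: change-me            # placeholder, use a real secret
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data: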