I would like to run docker-compose projects from Golang using the docker package by providing the docker-compose.yml file.
Following the example from https://docs.docker.com/engine/api/sdk/examples/
I know how to create and run individual containers using Golang, but is there a way to run docker-compose projects from the Golang docker library?
I know that I can do something like this
import "os/exec"
exec.Command("docker-compose","up")
but I would like this to happen from the docker package instead.
I'm a complete Go noob, so take this with a grain of salt, but I was looking through the Docker Compose implementation of their up command. They use Cobra for their CLI tooling (which seems quite common) and Viper for argument parsing. The link provided is to the Cobra command implementation, which is separate from the up internals that the command actually calls.
If you want to add a command that invokes docker compose "up" as part of your own Go command (which is what I think you're going for; it's what I was trying to do), I think you'd have to accept that you'd basically be implementing your own versions of the Cobra commands they have there, replicating their existing logic. That was my take.
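If you do go the wrapper route instead, a rough sketch of a Cobra command that simply delegates to the Compose CLI might look something like this (the mytool/up names are made up, it assumes docker compose is available on the PATH, and it is not a re-implementation of the up internals):

package main

import (
	"os"
	"os/exec"

	"github.com/spf13/cobra"
)

func main() {
	// Hypothetical "up" subcommand that shells out to the Compose CLI
	// instead of re-implementing docker/compose's own Cobra command logic.
	upCmd := &cobra.Command{
		Use:   "up",
		Short: "Bring the compose project up (delegates to docker compose)",
		RunE: func(cmd *cobra.Command, args []string) error {
			c := exec.Command("docker", append([]string{"compose", "up", "-d"}, args...)...)
			c.Stdout = os.Stdout
			c.Stderr = os.Stderr
			return c.Run()
		},
	}

	rootCmd := &cobra.Command{Use: "mytool"}
	rootCmd.AddCommand(upCmd)
	if err := rootCmd.Execute(); err != nil {
		os.Exit(1)
	}
}

That keeps your own CLI structure in Cobra while leaving the actual compose logic to the official binary.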
It seems that docker compose * mirrors many docker-compose * commands.
What is the difference between these two?
Edit: docker compose is not considered a tech preview anymore. The rest of the answer still stands as-is, including the not-yet-implemented command.
docker compose is currently a tech preview, but it is meant to be a drop-in replacement for docker-compose. It is being built into the docker binary to allow for new features. There is one command it hasn't implemented yet, and it has deprecated a few; these are quite rarely used though, in my experience.
The goal is that docker compose will eventually replace docker-compose, but there is no timeline for that yet, and until that day you still need docker-compose for production.
Why did they do that?
docker-compose is written in Python, while most Docker development is done in Go. They decided to recreate the project in Go, with the same features and more, for better integration.
I have a service that I originally had configured in my environment. I felt that the configuration was not very well documented and the service not easily deployable, so I decided to adopt Docker to solve that.
The service uses a Python script that has its own dependencies and can be called with a number of different arguments. Originally, the script's dependencies were installed system-wide, so I just hard-coded the path to the script in the service's code and it worked.
However, now that I'm trying to move to Docker I'm not sure how to deal with that. Some ideas:
bind-mount the script directory - but then how to make sure all its dependencies are available within the container environment?
dockerize the Python script and add it as a service to the docker compose YAML. Really unsure about this one as it's not really a service, just a utility script that exits as soon as it's done processing.
The Python script is also called with a combination of arguments. Not sure if it's relevant, but I noticed that when I create a new container from an image, I can't start the container again with different arguments; I have to re-create the container with the new arguments. I really don't understand the idea behind this behaviour and would appreciate it if somebody could explain the logic behind it.
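For what it's worth, the behaviour in that last paragraph is by design: a container's command and arguments are fixed in its configuration when the container is created, and docker start just re-runs that stored configuration. In the Go SDK this is visible in ContainerCreate; a rough sketch (the image name and arguments are placeholders, and some option types have moved between SDK versions):

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()

	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// The arguments are part of the container's configuration at creation
	// time; starting the container later always re-runs this exact Cmd.
	// "my-script-image" and the arguments below are placeholders.
	resp, err := cli.ContainerCreate(ctx, &container.Config{
		Image: "my-script-image",
		Cmd:   []string{"python", "script.py", "--mode", "foo"},
	}, nil, nil, nil, "")
	if err != nil {
		panic(err)
	}

	// Different arguments require a different container created from the
	// same image; there is no "start with new args".
	// Note: in older SDK versions this options type lives in the types package.
	if err := cli.ContainerStart(ctx, resp.ID, container.StartOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("started", resp.ID)
}

This is also why one-off utility scripts are typically invoked with docker run --rm image args (or docker compose run) for each call, rather than by restarting a long-lived container.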
Currently, I'm using a bash script to execute docker commands.
For example, docker-compose up -d is wrapped as a simple start command in the terminal, and so on.
I have a lot of commands like this to start/stop/restart containers, execute commands inside a container, and so on.
Everything is working fine, but the problem is that the bash file has too many if/else branches for all the commands. I'm trying to find another scripting language to do this, but I'm not sure which would be cleaner (better to write). Does someone else use something similar?
I was thinking of using Python, but it needs to be something that requires minimal work on the user's side. The idea is to just download the docker repo and start using the commands, which is why I'm using bash for now.
I would say that Python is an extremely good choice. The syntax is very easy to pick up, and there are many modules/libraries for infrastructure work.
Python is one of the most popular languages in the Ops/Devops world, so there will also be plenty of help online.
Let's say I make a container with some flags. For instance,
docker run -v my_volume:/data my_cool_image
Now, let's say my_cool_image is updated to a new version. Is there a nice way to make a new container with the same -v flag as the old one? The container has been properly configured so that the data does not get stored in the container, so deleting the old container is not a concern.
The best solution I can find is to use docker-compose, but that seems a bit silly for single-container systems.
I'd use a shell script or a Docker Compose YAML file. (Compose isn't really overkill; if you add some error handling and write out one option per line for readability, the shell script and the YAML file wind up being about the same length.)
There's nothing built in to Docker that can extract the docker run options from an existing container.
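If you're scripting this in Go rather than the shell, one possible approach (not a built-in feature) is to read the old container's configuration with ContainerInspect and reuse its HostConfig, which carries the volume binds from -v, when creating the replacement container from the updated image. A rough sketch, with placeholder names and the usual caveat that option types vary a bit between SDK versions:

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()

	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// "old_container" and "my_cool_image:new" are placeholders.
	old, err := cli.ContainerInspect(ctx, "old_container")
	if err != nil {
		panic(err)
	}

	// Reuse the old container's HostConfig (which includes the -v volume
	// binds) and point the new container at the updated image.
	resp, err := cli.ContainerCreate(ctx, &container.Config{
		Image: "my_cool_image:new",
		Cmd:   old.Config.Cmd,
		Env:   old.Config.Env,
	}, old.HostConfig, nil, nil, "")
	if err != nil {
		panic(err)
	}

	if err := cli.ContainerStart(ctx, resp.ID, container.StartOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("replacement container:", resp.ID)
}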
I can't get my head around the idea of connecting the parts of a webapp via Dockerfiles.
Say I need a Postgres server, a Golang compiler, an nginx instance, and something else.
I want to have a Dockerfile that describes all these dependencies and which I can deploy somewhere, then create an image and run a container from it.
Is it correct that I can put everything in one Dockerfile or should I create a separate Dockerfile for each dependency?
If I need to create a Dockerfile for each dependency, what's the correct way to create a merged image from them all and make all the parts work inside one container?
The current best practice is to have a single container perform one function. This means that you would have one container for nginx and another for your app. Each could be defined by its own Dockerfile. Then, to tie them all together, you would use docker-compose to define the dependencies between them.
A Dockerfile is what your Docker image is built from. You have one Dockerfile for each image you build and push to a Docker registry. There are no rules as to how many images you manage, but it does take effort to manage an image.
You shouldn't need to build your own Docker images for things like Postgres, Nginx, Golang, etc., as there are many official images already published. They are configurable, easy to consume, and can often be run with just a CLI command.
Go to the page for a Docker image and read the documentation. It often explains what mounts it supports, what ports it exposes, and what you need to do to get it running.
Here's nginx:
https://hub.docker.com/_/nginx/
You use docker-compose to connect together multiple docker images. It makes it easy to docker-compose up an entire server stack with one command.
Explaining how to use docker-compose is like trying to explain how to use docker. It's a big topic, but I'll address the key point of your question.
Say I need a Postgres server, a Golang compiler, an nginx instance, and something else. I want to have a Dockerfile that describes all these dependencies and which I can deploy somewhere, then create an image and run a container from it.
No, you don't describe those things with a Dockerfile. Here's the problem with trying to answer your question: you might not need a Dockerfile at all!
Without knowing the specific details of what you're trying to build we can't tell you if you need your own docker images or how many.
You can, for example, deploy a running LAMP server using nothing but published Docker images from Docker Hub. You would just mount the folder with your PHP source code and you're done.
So the key here is that you need to learn how to use docker-compose. Only after learning what it cannot do will you know what work is left for you to do to fill in the gaps.
It's better to come back to Stack Overflow with specific questions like "How do I run the Golang compiler on the command line via Docker?"