I'm running a container with MSSQL and want to know whether the DB is healthy after a set number of seconds. I can't figure out how to return the value of the step that runs this docker inspect command:
docker inspect --format={{.State.Health.Status}} test-db
so that I can use it in a next step, e.g. if the output == 'healthy', then go forward with the build steps down the line.
I don't know the format and my head is starting to explode.
Could anyone help me understand how to do this?
EDIT: I know I'm close:
EDIT: The solution was to use those damned single quotes, together with $(...):
$(docker inspect --format='{{.State.Health.Status}}' test-db)
Try surrounding the {{ }} with single quotes, like:
docker inspect --format='{{.State.Health.Status}}' test-db
and execute it in the if condition like:
if [[ $(docker inspect --format='{{.State.Health.Status}}' test-db) == "healthy" ]]
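Building on that condition, here is a minimal sketch of a polling helper that waits until the container reports healthy or a timeout expires. The helper name, the retry count, and the 2-second interval are assumptions; the container name test-db comes from the question.

```shell
# Hypothetical helper: poll a container's health status until it is
# "healthy" or we run out of retries (2-second polls).
wait_for_healthy() {
  container="$1"
  retries="${2:-30}"   # assumed default: 30 polls = ~60 seconds
  while [ "$retries" -gt 0 ]; do
    status=$(docker inspect --format='{{.State.Health.Status}}' "$container")
    if [ "$status" = "healthy" ]; then
      echo "healthy"
      return 0
    fi
    retries=$((retries - 1))
    sleep 2
  done
  echo "timeout"
  return 1
}

# Usage (uncomment in a real pipeline):
# wait_for_healthy test-db 30 && echo "DB is up, continuing with the build"
```

The `return` codes let you chain it with `&&` in a script step, so the build simply stops if the DB never becomes healthy.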
I am just trying to see an echo in my updated docker-compose build, but the output is being hidden. Is there an option to disable that for debugging purposes? I also tried:
docker-compose --verbose up
docker-compose --ansi "always" up
BUILDKIT_PROGRESS=plain docker-compose up
Any help is welcome. I have been stuck on this for two days now; I can't see the echo, and I really need to debug this machine.
Cheers!
I think I found the solution for this. I am using Linux, so the way to change how docker or docker-compose reports build progress is to export a variable in the terminal, so Docker knows what it should be doing. I ran: export BUILDKIT_PROGRESS=plain
With this, the echoes from the Dockerfile lines show up again, so you can debug properly while building Docker images.
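The export above can be sketched as a short session; the `--no-cache` flag is an assumption added because cached layers do not re-run their RUN/echo steps, so without it you may still see nothing:

```shell
# Enable plain BuildKit progress output for this shell session.
export BUILDKIT_PROGRESS=plain

# Purely illustrative check that the variable is set before building:
echo "progress mode: ${BUILDKIT_PROGRESS}"

# Commented out so this sketch stays runnable without Docker installed:
# docker-compose build --no-cache   # re-run cached layers so echoes appear
# docker-compose up
```

Exporting it in `~/.bashrc` (or similar) would make plain output the default for every future build.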
I have been reading here and there online, but the answer never comes thoroughly explained. I hope this question, once answered, can provide an updated and thorough explanation of the matter.
Why would someone define a container with the following parameters:
stdin: true
tty: true
Also, if
`docker run -it`
binds the executed container process to the calling client's stdin and tty, what would setting those flags on a container bind its executed process to?
I can only envision one scenario: if the command is, say, bash, then you can attach to it (i.e. that running bash instance) later, after the container is running.
But then again, one could just run docker run -it when necessary, i.e. launch a new bash and do whatever needs to be done. There is no need to attach to a running one.
So the first part of the question is:
a) What is happening under the hood?
b) Why and when should it be used? What difference does it make, and what is the added value?
AFAIK, setting stdin: true in the container spec will simply keep the container process's stdin open, waiting for somebody to attach to it with kubectl attach.
As for tty: true - this simply tells Kubernetes that stdin should also be a terminal. Some applications may change their behavior based on the fact that stdin is a terminal, e.g. add some interactivity, command completion, colored output and so on. But in most cases you generally don't need it.
Btw, kubectl exec -it POD bash also uses the -it flags, but in that case they really are needed, because you're spawning a shell process in the container's namespace which expects both stdin and a terminal from the user.
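Putting the two fields in context, here is a minimal pod sketch; the pod name and busybox image are illustrative assumptions, only the stdin/tty lines come from the discussion above:

```
apiVersion: v1
kind: Pod
metadata:
  name: shell-demo        # hypothetical name
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sh"]
    stdin: true           # keep stdin open so `kubectl attach` has something to write to
    tty: true             # allocate a pseudo-terminal so sh behaves interactively
```

With this spec applied, `kubectl attach -it shell-demo` connects you to the already-running sh, whereas without stdin: true the shell would see EOF on startup and exit immediately.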
I'm trying to pull and set up a container with a database (MySQL) with the following command:
docker-compose up
But for some reason the container is failing to start; it shows me this:
some_db exited with code 0
But that doesn't give me any information about why it happened, so how can I see the details of that error?
Instead of looking for how to find an error in your composition, you should change your process. With Docker it is easy to fall into using all of the lovely tools to make things work seamlessly together, but this sounds like an individual container issue, not an issue with docker compose. You should try building a container from the base image you want, then use docker exec -it <somecontainername> sh to go into it and run the commands from your entrypoint manually, to find the specific point where it breaks.
Try at least docker logs <exited_container_name/id> (or more recently: docker container logs <exited_container_name/id>)
That way, you can check if there was any error message.
An exited container means the main process stopped: you need to make sure that main process remains up in order for the container to not exit immediately.
Try also docker inspect <exited_container_name/id> (or, again, docker container inspect <exited_container_name/id>)
If you see for instance "OOMKilled": true, that would mean the container memory was exceeded. You can also look for the "ExitCode" for clues.
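A small sketch of that tip: instead of scanning the full docker inspect JSON, a Go template can pull out just the two fields mentioned above. The helper name is an assumption; the container name some_db is the one from the question.

```shell
# Hypothetical helper: print only the exit code and the OOM flag
# of a (possibly exited) container.
inspect_exit() {
  docker inspect --format='exit={{.State.ExitCode}} oom={{.State.OOMKilled}}' "$1"
}

# Usage:
# inspect_exit some_db
```

An `exit=137 oom=true` result would point at a memory limit; `exit=0` (as in the question) usually means the main process simply finished and returned cleanly.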
See more at "Ten tips for debugging Docker containers" from Mark Betz.
Right now, for my workflow (a personal project, but I would like to simplify it for sharing with friends), I have these steps:
1. docker run -it -p 3456:3456 -p 39499:39499 maccam912/tahoe-node /bin/bash
2. In Docker, run the script tahoe-run.sh, which will either do a first-time configuration or just run the tahoe service if it is not the first run.
My question is how I would simplify this. A few specific points below:
I would like to tell Docker to just run the container and let it do its thing in the background. I understand that replacing -it with -d would get me closer, but then nothing seems to be left running when I do a docker ps, unlike with the workflow above, where I detach with CTRL+P, CTRL+Q.
Do I need to -p those ports? I EXPOSE them in the Dockerfile. Also, since they are being mapped directly to the same port in the container, do I need to write each twice? Those ports are what the container is using AND what the outside world expects to be able to use to connect to the tahoe "server".
How could I get this working so that instead of running /bin/bash I run /tahoe-run.sh right away?
My perfect command would look something like docker run -d maccam912/tahoe-node /tahoe-run.sh if the port mapping stuff is not necessary. The end goal of this would be to let the container run in the background, with tahoe-run.sh kicking off a server that just runs "forever".
Let me know if I can clarify anything or if there is a simpler way to do this. Feel free to take a look at my Dockerfile and make any suggestions if it would simplify things there, but this project is intentionally following ease of use (hopefully getting things down to 1 command) over separation of duties, as is recommended.
I do believe I figured it out:
The problem with having just -d specified is that as soon as the given command returns/exits, the container stops running. My solution was to end the last line of the script with an &, leaving the script hanging and not exiting.
As for the ports, EXPOSE in the Dockerfile does nothing more than declare those ports; it does not publish them to remote machines the way a server needs, so -p is still necessary. The doubling of the ports (host:container) is needed because specifying only the container port would publish it on a random high-numbered host port.
Please correct me if I'm wrong on any of this, but in this case the best solution was to modify my script; the command will be docker run -d -p 3456:3456 -p 39499:39499 maccam912/tahoe-node /tahoe-run.sh, and the new script will leave it hanging and the container running.
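The final command from this answer can be sketched as a small wrapper; the `--name tahoe` flag is an assumption added so the container is easy to find and clean up later:

```shell
# Hypothetical wrapper around the final command above: run detached,
# publish both ports explicitly, and name the container for convenience.
start_tahoe() {
  docker run -d \
    --name tahoe \
    -p 3456:3456 \
    -p 39499:39499 \
    maccam912/tahoe-node /tahoe-run.sh
}

# Usage:
# start_tahoe                      # prints the new container ID
# docker ps --filter name=tahoe    # verify the container stayed Up
```

Note that keeping the script's main process in the foreground (rather than backgrounding it with &) is what actually keeps a -d container alive, since the container exits when its PID 1 does.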
When I'm debugging my Dockerfile, I constantly need to run these two commands:
$ docker build -t myuser/myapp:mytag - < myapp.docker # to create the container
$ docker run -i -t myuser/myapp:mytag /bin/bash # to enter the container and see what's going on when something went wrong
("mytag" is usually something like "production", "testing", or "development". Not sure if I'm supposed to use tags this way)
But now the second command doesn't seem to work anymore: it starts an old container. If I list everything with $ docker images, I see my tagged container in 3rd place, with other untagged containers before it. If I use the ID of the 1st container it works fine, but it is annoying to do it that way; I have to search for its ID every time.
What am I doing wrong?
You just have to start and attach a container using:
docker start -i #ContainerID
It's important to be clear about containers vs images. It sounds like your tagged image is 3rd in the list of images, and that you believe the first image, which only has an ID, really should be tagged but isn't. This probably means that there's a problem building the image. The docker build output is verbose by default and should show you the problem.
As an aside, I'm not entirely sure about your use case but the idea of having different containers for development, testing and production is an anti-pattern. The entire point is to minimize differences between execution environments. In most cases you should use the same image but provide different environment variables to configure the application as desired for each environment. But again, I'm sure there are reasons to do this and you may have a legitimate one.
Here's what I use:
run_most_recent_container() { docker exec -it "$(docker ps -a --no-trunc -q | head -n 1)" bash; }
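A variant of that function, a sketch assuming what you want is the most recently *created* container: `docker ps -l -q` returns exactly that ID, so the head pipeline is unnecessary. The function name is hypothetical.

```shell
# Exec a bash shell in the most recently created container.
# Note: like the original, this fails if that container is not running,
# since `docker exec` needs a live container.
enter_latest_container() {
  docker exec -it "$(docker ps -l -q)" bash
}

# Usage:
# enter_latest_container
```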