Docker Compose v2.12.2 isn't showing echoes

I am just trying to see an echo in my updated Docker Compose build, but it is being hidden. Is there an option to show it again for debug purposes? I also tried:
docker-compose --verbose up
docker-compose --ansi "always" up
BUILDKIT_PROGRESS=plain docker-compose up
Any help will be welcome. I have been stuck on this for 2 days now and I can't see the echo, and I do need to debug this machine.
Cheers!

I think I found the solution for this. I am using Linux, so the way to change how docker or docker-compose reports build progress is to export a variable in the terminal, so Docker knows what it should be doing. I did: export BUILDKIT_PROGRESS=plain
With this we get the echoes from the Dockerfile lines back, so you can debug properly while building Docker images.
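As a sketch (the Dockerfile step is hypothetical), with a line like RUN echo "copying configs..." in the Dockerfile, its output shows up in the build log again after:
export BUILDKIT_PROGRESS=plain
docker-compose build --no-cache
The --no-cache matters here because cached steps are not re-executed, so their echo output would never appear.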

Related

docker-compose start/up without running command

Is there any way to make docker-compose start a service without running the declared command?
Not sure if any such option exists; there is nothing obvious in the flags for docker-compose up. It would be useful for debugging, as presently I have to comment out the command in order to enter a container that otherwise exits on startup.
In this case, there's no command in the Dockerfile, but there's a command in docker-compose.yml.
Based on jonrsharpe's comment, the answer is to use run instead, since it starts the container with whatever command you pass it:
docker-compose run service bash
This makes it possible to enter the container and debug the problem so the real command can run.
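For illustration (service name and command are hypothetical, modern compose file format assumed), given a compose file like:
services:
  app:
    build: .
    command: ./start.sh
the declared command can be bypassed with:
docker-compose run app bash
which drops you into a shell in a fresh container for that service instead of running ./start.sh.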

how to configure docker containers proxy?

First of all,
I tried the approach of setting '/etc/systemd/system/docker.service.d/http-proxy.conf' (https://docs.docker.com/config/daemon/systemd/#httphttps-proxy). It really works for the Docker daemon, but it does not work for Docker containers; it seems to only take effect for commands like 'docker pull'.
Secondly,
I have a lot of Docker containers, and I don't want to pass 'docker run -e http_proxy=xxx...' every time I start a container.
So I wondered whether there is a way to automatically load a global configuration file when a container starts. I googled it and found the suggestion to set '~/.docker/config.json' (How to configure docker container proxy?), but this way still does not work for me.
(My host machine is CentOS 7; here is my docker -v:
Docker version 1.13.1, build 6e3bb8e/1.13.1)
I feel that it may be related to my Docker version, or to the Docker daemon being started by the systemd service, which is why ~/.docker/config.json does not take effect.
Finally,
I just hope that modifying a configuration file will let all my containers configure the environment variables automatically when they start (that is, automatically set 'http_proxy=http://HostIP:8118 https_proxy=http://HostIP:8118' when a container starts, like the Dockerfile ENV instruction). I want to know whether such a way exists. If it can be done, I can make the containers use the host's proxy, since the host's proxy is working properly.
But I was wrong. I ran a container, set http_proxy=http://HostIP:8118 and https_proxy=http://HostIP:8118, and then ran 'wget facebook.com', which failed with 'Connecting to HostIP:8118... failed: No route to host.'. The host machine (CentOS 7) can run the wget successfully, and from inside the container I can ping the host. I don't know why; it might be related to the firewall and port 8118.
That is where I am stuck.
I have no other ideas; can anyone help me?
==============================
PS:
As you can see from the screenshots below, I was actually trying to install goa and goagen but got an error, probably because of network issues, so I wanted to enable the proxy and try again; that is how I ran into the problem above.
1. My Go Docker container:
[screenshot: wget from inside the Go container]
2. My host:
[screenshot: wget from the host]
You need version 17.07 or more recent to automatically pass the proxy to containers you start using the config.json file. The 1.13 releases are long out of support.
This is well documented by Docker:
https://docs.docker.com/network/proxy/
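As a sketch of what that looks like (the proxy address is taken from the question; adjust it to your setup), ~/.docker/config.json on the machine where you run the docker commands would contain:
{
  "proxies": {
    "default": {
      "httpProxy": "http://HostIP:8118",
      "httpsProxy": "http://HostIP:8118",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}
With a client at 17.07 or newer, these values are injected into newly created containers as the http_proxy, https_proxy and no_proxy environment variables.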

How to rebuild and update a container without downtime with docker-compose?

I enjoy a lot using docker-compose.
E.g. on my server, when I want to update my app with minor changes, I only need to run git pull origin master && docker-compose restart, which works perfectly.
But sometimes, I need to rebuild (eg. I added an npm dependency, need to run npm install again).
In this case, I do docker-compose build --no-cache && docker-compose restart.
I would expect this to:
create a new instance of my container
stop the existing container (after the newer has finished building)
start the new one
optionally remove the old one, but this could be done manually
But in practice it seems to restart the former one again.
Is it the expected behavior?
How can I handle a rebuild and start the new one after it is built?
Maybe I missed a specific command? Or would it make sense to have it?
From the manual for docker-compose restart:
If you make changes to your docker-compose.yml configuration these
changes will not be reflected after running this command.
You should be able to do:
$ docker-compose up -d --no-deps --build <service_name>
The --no-deps will not start linked services.
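For example, with a hypothetical service named web defined in the compose file:
docker-compose up -d --no-deps --build web
This rebuilds only the web image and recreates only the web container, leaving the other services untouched.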
The problem is that restart will restart your current containers, which is not what you want.
As an example, I just did this
change the docker file for one of the images
call docker-compose build to build the images
call docker-compose down [1] and docker-compose up
docker-compose restart will NOT work here
using docker-compose start instead also does not work
To be honest, I'm not completely sure you need to do a down first, but that should be easy to check. [1] The bottom line is that you need to call up. You will see the containers of unchanged images restarting, but for the changed image you'll see recreating.
The advantage of this over just calling up --build is that you can see the building-process first before you restart.
[1]: From the comments; down is not needed, you can just call up --build. Down has some "down"-sides, including possibly being destructive to your (volume) data.
Use the --build flag to the up command, along with the -d flag to run your containers in the background:
docker-compose up -d --build
This will rebuild all images defined in your compose file, then restart any containers whose images have changed.
-d assumes that you don't want to keep everything running in your shell foreground. This makes it act more like restart, but it's not required.
Don't manage your application environment directly. Use a deployment tool like Rancher or Kubernetes. With one of those you will be able to upgrade your dockerized application without any downtime, and even downgrade it should you need to.
Running Rancher is as easy as running another Docker container, as the tool is available on Docker Hub.
You can use Swarm. Initialize it first with the docker swarm init command and add a healthcheck to docker-compose.yml.
Then run below command:
docker stack deploy -c docker-compose.yml project_name
instead of
docker-compose up -d.
When the docker-compose.yml file is updated, just run this command again:
docker stack deploy -c docker-compose.yml project_name
Docker Swarm will create a new version of the services and stop the old version afterwards.
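A minimal sketch of such a compose file (image name, healthcheck command and replica count are assumptions; this particular healthcheck requires curl inside the image):
version: "3.8"
services:
  web:
    image: myapp:latest           # assumed image name
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/"]
      interval: 10s
      timeout: 5s
      retries: 3
    deploy:
      replicas: 2
      update_config:
        order: start-first        # start the new task before stopping the old one
With order: start-first, Swarm starts the replacement task and waits for it to become healthy before stopping the old one, which is what avoids the downtime.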
Though the accepted answer works for rebuilding the container before starting the new one as a replacement, and is fine for simple use cases, the container will still be down while the new container initializes. If that takes a while, it can be an issue.
I managed to achieve rolling updates with docker-compose (along with a nginx reverse proxy), and detailed how I built that in this github issue: https://github.com/docker/compose/issues/1786#issuecomment-579794865
Hope it can help!
Run the following commands:
docker-compose pull
docker-compose up -d --no-deps --build <service_name>
As the top-rated answer mentioned,
docker-compose up -d --no-deps --build <service_name>
will rebuild and restart a single service without taking down the whole compose stack.
I just wanted to add to the top answer in case anyone is unsure how to update one image without restarting the other containers.
Another way:
docker-compose restart in your case could be replaced with docker-compose up -d --force-recreate, see https://docs.docker.com/compose/reference/up/
Running docker-compose up while the stack is already running will recreate any containers whose configuration has changed.
That's the easiest way, and it will only affect containers whose configuration changed.
root@docker:~# docker-compose up
traefik is up-to-date
nginx is up-to-date
Recreating php ... done

For dummies approach to build image and run own code on Docker

I am trying to work with Docker. I want to run a super simple program on Docker (to get acquainted with it).
I have gone through most of Docker's own tutorials, but none of them worked with my own code anywhere, so I am left puzzled. When searching online there are a lot of hits (which I have attempted to understand), but most of them either involve more unknown tools (Maven, Spring Boot, Django) or are far too complicated.
Say I have a helloworld.py (or helloworld.java). How do I go about running it on Docker? (By running I mean upload and execute.)
Do I need to download Java on Docker? What sequence of steps is needed?
I know this is a "stupid" question, which is why I specified a dummies approach.
Any help will be greatly appreciated, even links that cover this (which I have not succeeded in finding).
This is a basic image for running a "hello world" example in Python:
You have to create these two files in a folder.
Dockerfile:
FROM python:2
COPY ./helloworld.py /
CMD ["/usr/bin/python", "/helloworld.py"]
helloworld.py:
print "hello world"
Look for the Dockerfile reference to understand what FROM, COPY and CMD do.
First you build the image:
docker build -t hellopython <path-of-image-folder>
Verify that the image is listed:
docker images
Run a new container:
docker run hellopython
Use ps to list the containers:
docker ps -a
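For the Java variant of the question, a comparable sketch (file and class names are assumptions, as is the base image) would be:
Dockerfile:
FROM openjdk:8                      # any image with a JDK works
COPY ./HelloWorld.java /
RUN javac /HelloWorld.java          # compiles HelloWorld.class into /
CMD ["java", "-cp", "/", "HelloWorld"]
HelloWorld.java:
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("hello world");
    }
}
Then build and run it the same way:
docker build -t hellojava .
docker run hellojava
So yes, you need a base image that contains Java (here openjdk:8); you don't install anything on the host beyond Docker itself.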

How to get docker-compose to always re-create containers from fresh images?

My docker images are built on a Jenkins CI server and are pushed to our private Docker Registry. My goal is to provision environments with docker-compose which always start the originally built state of the images.
I am currently using docker-compose 1.3.2 as well as 1.4.0 on different machines but we also used older versions previously.
I always used the docker-compose pull && docker-compose up -d commands to fetch the fresh images from the registry and start them up. I believe my preferred behaviour was working as expected up to a certain point in time, but since then docker-compose up started to re-run previously stopped containers instead of starting the originally built images every time.
Is there a way to get rid of this behaviour? Ideally one that could be wired into the docker-compose.yml configuration file, so it doesn't depend on not forgetting something on the command line with every invocation?
PS: Besides finding a way to achieve my goal, I would also love to know a bit more about the background of this behaviour. I think the basic idea of Docker is to build an immutable infrastructure. The current behaviour of docker-compose just seems to plainly clash with this approach... or am I missing some points here?
docker-compose up --force-recreate is one option, but if you're using it for CI, I would start the build with docker-compose rm -f to stop and remove the containers and volumes (then follow it with pull and up).
This is what I use:
docker-compose rm -f
docker-compose pull
docker-compose up --build -d
# Run some tests
./tests
docker-compose stop -t 1
The reason containers are re-used is to preserve any data volumes that might be in use (it also happens to make up a lot faster).
If you're doing CI you don't want that, so just removing everything should get you what you want.
Update: use up --build, which was added in docker-compose 1.7.
The only solution that worked for me was the --no-cache flag:
docker-compose build --no-cache
This rebuilds every layer instead of reusing cached layers built with whatever parameters you used before. Note that on its own it does not pull newer base images from the registry.
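If you also want fresh base images pulled from the registry while rebuilding, the flags can be combined (a sketch, not required in every setup):
docker-compose build --no-cache --pull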
According to the current official documentation, there is a shortcut that stops and removes the containers and networks created by up (and, with additional flags, the volumes and images too); it also copes with containers that are already stopped or partially removed, so it will do the trick as well:
docker-compose down
Then if you have new changes on your images or Dockerfiles use:
docker-compose build --no-cache
Finally: docker-compose up
In one command: docker-compose down && docker-compose build --no-cache && docker-compose up
docker-compose up --build          # still uses the image cache
OR
docker-compose build --no-cache    # never uses the cache
You can pass --force-recreate to docker compose up, which should use fresh containers.
I think the reasoning behind reusing containers is to preserve any changes during development. Note that Compose does something similar with volumes, which will also persist between container recreation (a recreated container will attach to its predecessor's volumes). This can be helpful, for example, if you have a Redis container used as a cache and you don't want to lose the cache each time you make a small change. At other times it's just confusing.
I don't believe there is any way you can force this from the Compose file.
Arguably it does clash with immutable infrastructure principles. The counter-argument is probably that you don't use Compose in production (yet). Also, I'm not sure I agree that immutable infra is the basic idea of Docker, although it's certainly a good use case/selling point.
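To illustrate the volume point above, a sketch with hypothetical names: a Redis service whose data lives in a named volume keeps its cache across recreations, because the volume outlives the container and is re-attached to the new one.
version: "3"
services:
  redis:
    image: redis:6        # assumed image tag
    volumes:
      - cache-data:/data  # Redis persists its data under /data
volumes:
  cache-data: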
docker-compose up --build --force-recreate
I reclaimed 3.5 GB of space on Ubuntu (AWS) with this.
Clean Docker:
docker stop $(docker ps -qa) && docker system prune -af --volumes
Build again:
docker build .
docker-compose build
docker-compose up
Also, if the compose file has several services and we only want to force a rebuild of one of them:
docker-compose build --no-cache <service>
Together with --force-recreate, you might want to consider using this flag too:
-V, --renew-anon-volumes    Recreate anonymous volumes instead of retrieving data from the previous containers.
I'm not sure from which version this flag is available, so check docker-compose up --help to see whether you have it.
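If your version has it, a combined invocation (as a sketch) looks like:
docker-compose up -d --build --force-recreate --renew-anon-volumes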
$ docker-compose build
If there is something new, it will be rebuilt.
