How to configure Docker max-concurrent-uploads on QNAP

How do I configure Docker on my QNAP TS-131P so that it only uploads one layer at a time?
I have a problem pushing an image: Docker tries to push multiple layers concurrently and they keep failing because of a poor internet connection.
According to "How to push single docker image layers at time?" I need to configure the daemon with max-concurrent-uploads, but I don't understand how to do this within the context of QNAP.
[~] # docker -v
Docker version 17.09.1-ce, build a9fd393
[~] # which docker
/share/CACHEDEV1_DATA/.qpkg/container-station/bin/docker

After much digging:
It looks like Container Station uses the same location as other Linux systems for the dockerd config file. It should work by adding a file /etc/docker/daemon.json with:
{
  "max-concurrent-uploads": 1
}
from how-to-push-single-docker-image-layers-at-time
Alternatively, if the script Container Station uses to start Docker (/share/CACHEDEV1_DATA/.qpkg/container-station/script/run-docker.sh) has a line invoking dockerd, you could add the command-line argument --max-concurrent-uploads=1 to that line, as sketched below.
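A rough sketch of what that edit could look like inside run-docker.sh; the existing dockerd flags shown here are only placeholders, since the actual contents of the QNAP script may differ:

# inside /share/CACHEDEV1_DATA/.qpkg/container-station/script/run-docker.sh
# hypothetical original line -- your script's flags will differ:
#   dockerd -H unix:///var/run/docker.sock --storage-driver=overlay2 &
# same line with the upload limit added:
dockerd -H unix:///var/run/docker.sock --storage-driver=overlay2 \
    --max-concurrent-uploads=1 &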

Related

How to Recreate a Docker Container Without Docker Compose

TLDR: When using docker compose, I can simply recreate a container by changing its configuration and/or image in the docker-compose.yml file along with running docker-compose up. Is there any generic equivalent for recreating a container (to apply changes) which was created by a bare docker create/run command?
Elaborating a bit:
The associated docker compose documentation states:
If there are existing containers for a service, and the service’s configuration or image was changed after the container’s creation, docker-compose up picks up the changes by stopping and recreating the containers (preserving mounted volumes).
I'm having trouble understanding which underlying steps are actually performed during this recreation, as the docker (without compose) documentation doesn't really seem to use the term "recreate" at all.
Is it safe to simply run docker container rm xy and then docker container create/run (along with passing the full and modified configuration)? Or is docker compose actually doing more under the hood?
I already found answers about applying specific configuration changes like e.g. this one about port mappings, but I'm still wondering whether there is a more general answer to this.
I'm having trouble understanding which underlying steps are actually performed during this recreation, as the docker (without compose) documentation doesn't really seem to use the term "recreate" at all.
docker-compose is a high-level tool; it performs in a single operation what would require multiple commands using the docker CLI. When the docker-compose documentation says "docker-compose up picks up the changes by stopping and recreating the containers", it means it is doing the equivalent of:
docker stop <somecontainer>
docker rm <somecontainer>
docker run ...
(Where ... represents whatever configuration is implied by the service definition in your docker-compose.yaml).
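As a concrete, made-up illustration: for a hypothetical service whose definition boils down to the nginx:1.25 image with port 8080 published on the default project network, that recreation would amount to roughly:

# stop and remove the existing container (names here are invented for the example)
docker stop myproject_web_1
docker rm myproject_web_1

# run a new container with the full configuration from the service definition
docker run -d --name myproject_web_1 \
    --network myproject_default \
    -p 8080:80 \
    nginx:1.25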
Let's say it recognizes a change in container1; it then does something like the following (not literally these commands, since it works via the API):
docker compose rm -fs container1
docker compose create (--build) container1
docker compose start container1
Which is roughly equivalent to (depending on your compose config):
docker rm -f projectname_container1
(docker build --flags)
docker create --allDozensOfAttributes --name projectname_container1 <image>
docker start projectname_container1
docker network connect (--flags) projectname_networkname projectname_container1
and maybe more.
So I would advise using the docker compose commands for single services instead of the docker CLI where suitable.
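For instance, a sketch of recreating just one service (container1, the service name used above) after changing its configuration or image, without touching its dependencies:

# rebuild the image (if the service has a build section) and recreate only this service
docker compose up -d --no-deps --build container1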
The issue is that the variables and settings are not exposed through any docker apis. It may be possible by way of connecting directly to the docker socket, parsing the variables, and then stopping/removing the container and recreating it.
This would be prone to all kinds of errors and would require lots of debugging to get these values.
What I do is simply store my docker commands in a shell script. Save the command you need to run into a text file, give it a .sh extension, make it executable (chmod +x), then run it. Then, when you stop/delete the container, you can just rerun the shell script.
Another thing you could do is replace the docker command with a shell function (in something like your ~/.bashrc) that stores the arguments to a text file and replays that file when given a certain argument (like "recreate" followed by a name). However, I'm more a fan of keeping docker containers in their own shell scripts, as it's more portable; a sketch follows.
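A minimal sketch of that shell-script approach; the container name, image, and flags below are placeholders, not anything from the question:

#!/bin/sh
# recreate-web.sh -- remove the old container (if it exists) and start a fresh one
docker rm -f web 2>/dev/null || true
docker run -d --name web \
    -p 8080:80 \
    -e SOME_SETTING=value \
    nginx:1.25

Editing the image tag or flags and rerunning the script gives roughly the same "recreate" behaviour that compose provides for a single service.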

What has to be done to deliver on Docker and avoid accumulating images?

I use Docker to run a website I develop.
When a release has to be delivered, I have to build a new Docker image and start a new container from it.
The problem is that images and containers accumulate and take up a huge amount of space.
Alongside the delivery, I need to stop the running container, delete it, and delete the source image too.
I don't need the Docker command lines, just a checklist or process so that I don't forget anything.
For instance:
- Stop running container
- Delete stopped container
- Delete old image
- Build new image
- Start new container
Am I missing something?
I'm not used to Docker; maybe there are best practices for this fairly classic use case?
The local workflow that works for me is:
Do core development locally, without Docker. Things like interactive debuggers and live reloading work just fine in a non-Docker environment without weird hacks or root access, and installing the tools I need usually involves a single brew or apt-get step. Make all of my pytest/junit/rspec/jest/... tests pass.
docker build a new image.
docker stop && docker rm the old container.
docker run a new container.
When the number of old images starts to bother me, docker system prune.
If you're using Docker Compose, you might be able to replace the middle set of steps with docker-compose up --build.
In a production environment, the sequence is slightly different:
When your CI system sees a new commit, after running the repository's local tests, it does a docker build && docker push of a new image. The image has a unique tag, which could be a timestamp, a source-control commit ID, or a version tag.
Your deployment system (could be the CI system or a separate CD system) tells whatever cluster manager you're using (Kubernetes, a Compose file with Docker Swarm, Nomad, an Ansible playbook, ...) about the new version tag. The deployment system takes care of stopping, starting, and removing containers.
If your cluster manager doesn't handle this already, run a cron job to docker system prune.
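A hedged sketch of what that can look like; the registry address, image name, and schedule are placeholders, and GIT_COMMIT stands for whatever commit identifier your CI system exposes:

# CI step: build and push an image tagged with the commit ID
docker build -t registry.example.com/mysite:${GIT_COMMIT} .
docker push registry.example.com/mysite:${GIT_COMMIT}

# cron entry on each host (if nothing else cleans up): prune unused images nightly
0 3 * * * docker system prune -af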
You should use:
docker system df
to investigate the space used by docker.
After that you can use
docker system prune -a --volumes
to remove unused components. You should stop containers yourself before doing this, but this way you are sure to cover everything.
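Putting the checklist from the question together as one sketch; the image name mysite, container name mysite-app, and port mapping are invented for illustration:

#!/bin/sh
# deliver.sh -- rough version of the checklist above
docker stop mysite-app || true                            # stop the running container
docker rm mysite-app || true                              # delete the stopped container
docker build -t mysite:latest .                           # build the new image
docker run -d --name mysite-app -p 80:80 mysite:latest    # start the new container
docker image prune -f                                     # delete old, now-dangling images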

How to configure a proxy for docker containers?

How do I configure a proxy for docker containers?
First of all,
I tried the approach that sets '/etc/systemd/system/docker.service.d/http-proxy.conf' (https://docs.docker.com/config/daemon/systemd/#httphttps-proxy), and it does work for the docker daemon, but it doesn't work for docker containers; it seems this only takes effect for commands like 'docker pull'.
Secondly,
I have a lot of docker containers and I don't want to use the 'docker run -e http_proxy=xxx...' command every time I start a container.
So I guessed there might be a way to automatically load a global configuration file when a container starts. I googled it and found the suggestion to set the file '~/.docker/config.json' (How to configure docker container proxy?), but this way still does not work for me.
(My host machine runs CentOS 7; here is my docker -v:
Docker version 1.13.1, build 6e3bb8e/1.13.1)
I feel it may be related to my docker version, or to docker being started by the systemd service, and that is why ~/.docker/config.json does not take effect.
Finally,
I just hope that modifying a configuration file will let all my containers automatically get the proxy environment variables when they start (that is, automatically set 'http_proxy=http://HostIP:8118 https_proxy=http://HostIP:8118' when a container starts, like the Dockerfile ENV instruction). I want to know if there is such a way. If it can be done, I can make the containers use the host's proxy; after all, the proxy on my host is working properly.
But it didn't work: I ran a container, set http_proxy=http://HostIP:8118 and https_proxy=http://HostIP:8118, and when I ran 'wget facebook.com' I got 'Connecting to HostIP:8118... failed: No route to host.'. However, the host machine (CentOS 7) can run the same wget successfully, and I can ping the host from inside the container. I don't know why; it might be related to the firewall and the 8118 port.
That's it; I have no other ideas. Can anyone help me?
==============================
PS: As the screenshots showed (not reproduced here: one of wget failing inside my Go docker container, and one of the same wget succeeding on my host), I actually want to install goa and goagen but get an error, probably for network reasons, so I want to enable the proxy and try again; that is the only problem described above.
You need version 17.07 or more recent to automatically pass the proxy to containers you start using the config.json file. The 1.13 releases are long out of support.
This is well documented by Docker:
https://docs.docker.com/network/proxy/
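For reference, the ~/.docker/config.json entry described there looks roughly like this on a current Docker version (HostIP:8118 is just the proxy address from the question):

{
  "proxies": {
    "default": {
      "httpProxy": "http://HostIP:8118",
      "httpsProxy": "http://HostIP:8118",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}

With that file in place on the client, newly started containers get http_proxy, https_proxy, and no_proxy set automatically.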

Docker : How to avoid Operation not permitted in Docker Container?

I created a docker image of a SLES 12 machine by taking a backup of all the necessary file systems and creating one tar file. To create the docker image I ran the following command:
cat fullbackup.tar | docker import - sles_image
After that, I ran the image in a container using the command below:
docker run --net network1 -i -t sles_image /bin/bash
Note: I have already set up networking in this docker container (with the IP address I want).
Now, in my docker container, some applications are already configured, because those applications were present on the SLES 12 machine from which I created this docker image. These custom applications internally run some low-level kernel commands like modprobe.
But when I start my application, I'm facing this error:
Operation not permitted
How can I give the correct permissions so that it will not give me this error?
You might try running the Docker container with runtime privileges and Linux capabilities, with
docker run --privileged
If you are on a Mac, resolve the issue by giving Docker permission to the files and folders, or, as another workaround, manually copy the files into the container instead of mounting them.
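Going back to --privileged: since modprobe loads kernel modules, a narrower alternative might be to grant only that capability. This is a sketch reusing the run command from the question, and whether it is sufficient depends on what the applications actually do:

# broadest option: full runtime privileges
docker run --privileged --net network1 -i -t sles_image /bin/bash

# narrower option: only the capability needed to load kernel modules
docker run --cap-add SYS_MODULE --net network1 -i -t sles_image /bin/bash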

What are the best practices to manage and move Docker containers?

We are evaluating Docker for our application, so we would really like to know the following:
What are the best practices for moving docker images and containers between different machines?
Also, how do we manage containers and images in a production environment across different regions?
First of all, the Docker architecture has a push/pull mechanism using a registry (which may be private, or public like Docker Hub).
1) Answer to your first question: moving Docker images and containers between machines
You can create a tar file of an image or container and then move the tar file between your machines.
Check with docker ps -a, then based on your requirement use one of the following:
$ docker ps -a
CONTAINER ID   IMAGE          COMMAND       CREATED          STATUS
68d9619a7a91   ubuntu:14.04   "/bin/bash"   10 seconds ago   Exited
For a container move, use docker export and import:
$ docker export 68d9619a7a91 > ubuntu-container.tar
$ docker import - update < ubuntu-container.tar
For an image move, use docker save and load:
$ docker images
$ docker save -o image.tar ubuntu:14.04
$ docker load < image.tar
2) Second question: managing containers in a production environment
a) It is better to have your own private registry managing all the images that you need for your containers. Suppose you have a dedicated node acting as a Docker registry where all your docker images will live. You can then push your image changes or updates to the registry, and pull those images from the registry onto the machines that will run containers from them (see the sketch after this answer).
b) Another great way of managing images/containers across a cluster and different cloud providers is to use Kubernetes (open-sourced by Google). Although we have not implemented Kubernetes and have only just started looking into its documentation, it looks very promising if you are using docker containers and the cloud.
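A rough sketch of the registry workflow from (a); registry.example.com and the image name myapp are placeholders:

# on the build machine: tag the image and push it to the private registry
docker tag myapp:1.2.0 registry.example.com/myapp:1.2.0
docker push registry.example.com/myapp:1.2.0

# on each machine that should run it: pull the image and start a container
docker pull registry.example.com/myapp:1.2.0
docker run -d --name myapp registry.example.com/myapp:1.2.0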
