I have a Dockerfile that starts from the official nginx image.
FROM nginx
And they set the maintainer label.
LABEL maintainer="NGINX Docker Maintainers <docker-maint#nginx.com>"
So, now my image appears to also be maintained by them.
$ docker image inspect example-nginx
...
"Labels": {
"maintainer": "NGINX Docker Maintainers <docker-maint#nginx.com>"
},
The documentation mentions how to overwrite the label. But, so far, the best I can do is set it to an empty value.
LABEL maintainer=
$ docker image inspect example-nginx
...
"Labels": {
"maintainer": ""
},
How do I completely remove or unset a label set by a parent image?
Great question. I did some research and, as far as I know, it's not possible with the current Docker/Moby implementation. It's a problem for other properties as well, as you can see in this issue (open since 2014!):
https://github.com/moby/moby/issues/3465
I know it's really annoying, but if you really want to remove the label you can try this workaround:
https://github.com/moby/moby/issues/3465#issuecomment-383416201
The author automated the process with a Python script that seems to do what you want:
https://github.com/gdraheim/docker-copyedit
It appears to have a Remove Label operation (https://github.com/gdraheim/docker-copyedit/blob/92091ed4d7a91fda2de39eb3ded8dd280fe61a35/docker-copyedit.py#L304), which is what you want.
I don't know if it works (I haven't had time to test that), but I think it's worth trying.
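For what it's worth, here is a sketch of how that tool is invoked, wrapped in a function. The command syntax is taken from the docker-copyedit README and the image names are hypothetical, so treat this as untested:

```shell
# Hypothetical sketch: rebuild example-nginx without the inherited
# "maintainer" label, using docker-copyedit's "remove label" operation.
remove_label() {
    # $1 = source image, $2 = target image, $3 = label to drop
    ./docker-copyedit.py FROM "$1" INTO "$2" remove label "$3"
}

# Usage (needs docker running and docker-copyedit.py in the current dir):
# remove_label example-nginx example-nginx:unlabeled maintainer
```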
I'm using ECSOperator in airflow and I need to pass flags to the docker run. I searched the internet but I couldn't find a way to give an ECSOperator flags such as: -D, --cpus and more.
Is there a way to pass these flags to a docker run (if a certain condition is true) using the ECSOperator (same way we can pass tags, and network configuration), or they can only be defined in the ECS container running the docker image?
I'm not familiar with ECSOperator, but if I understand correctly it is a Python library, and you can create a new task using Python.
As far as I can see in this example, it is possible to set task_definition and overrides:
...
ecs_operator_task = ECSOperator(
    task_id="ecs_operator_task",
    dag=dag,
    cluster=CLUSTER_NAME,
    task_definition=service['services'][0]['taskDefinition'],
    launch_type=LAUNCH_TYPE,
    overrides={
        "containerOverrides": [
            {
                "name": CONTAINER_NAME,
                "command": ["ls", "-l", "/"],
            },
        ],
    },
    network_configuration=service['services'][0]['networkConfiguration'],
    awslogs_group="mwaa-ecs-zero",
    awslogs_stream_prefix=f"ecs/{CONTAINER_NAME}",
...
So if you want to set CPU and memory specs for the whole task, you have to update the task_definition dictionary parameters (something like service['services'][0]['taskDefinition']['cpu'] = 2048).
If you want to specify parameters for an exact container, overrides should be the proper way:
overrides={
    "containerOverrides": [
        {
            "cpu": 2048,
            ...
        },
    ],
},
Or, in theory, an edited containerDefinitions may be set directly inside task_definition...
Anyway, most Docker parameters should be passed inside the containerDefinitions section.
So about your question:
Is there a way to pass these flags to a docker run
If I understand correctly, you have a JSON TaskDefinition file and want to run it locally using Docker?
Then try to check these tools. They allow you to convert a docker-compose.yml into an ECS definition, which is the opposite of what you're looking for, but maybe some of them are able to convert in the other direction too?
Otherwise you will have to parse the TaskDefinition's JSON manually and convert it to docker command arguments.
I have a debian package I am deploying that comes with a docker image. On upgrading the package, the prerm script stops and removes the docker image. As a fail safe, I have the preinst script do it as well to ensure the old image is removed before the installation of the new image. If there is no image, the following error reports to the screen: (for stop) No such image: <tag> and (for rmi): No such container: <tag>.
This really isn't a problem, as the errors are ignored by dpkg, but they are reported to the screen, and I get constant questions from the users is that error ok? Did the install fail? etc.
I cannot seem to find the correct set of docker commands to check if a container is running in order to stop it, and to check whether an image exists in order to remove it, so that those errors are no longer generated. All I have to work with is the docker image tag.
I think you could go one of two ways:
Knowing the image you could check whether there is any container based on that image. If yes, find out whether that container is running. If yes, stop it. If not running, remove the image. This would prevent error messages showing up but other messages regarding the container and image handling may be visible.
Redirect the output of the docker commands in question, e.g. 2>/dev/null (the errors go to stderr).
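A sketch of the first approach as a function you could drop into the prerm/preinst scripts. The image tag is whatever your package already knows; docker ps --filter ancestor=... matches containers created from that image:

```shell
# Stop any container based on the image and remove the image,
# but only when they actually exist, so nothing is printed otherwise.
cleanup_image() {
    tag="$1"
    # IDs of running containers created from this image
    running=$(docker ps -q --filter "ancestor=$tag")
    if [ -n "$running" ]; then
        docker stop $running >/dev/null
    fi
    # Remove the image only if it is present
    if docker image inspect "$tag" >/dev/null 2>&1; then
        docker rmi "$tag" >/dev/null
    fi
}

# In prerm/preinst:
# cleanup_image my-package/image:1.2.3
```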
You're not limited to the docker CLI, you know? You can always combine docker CLI commands with shell commands, and you can also write your own .sh scripts. If you don't want to see the errors, you can either redirect them or store them in a file, such as:
to redirect: {operation} 2>/dev/null
to store: {operation} 2>>/var/log/xxx.log
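A runnable illustration of both forms, using ls on a missing path as a stand-in for the docker command (the log path is just an example):

```shell
# Discard the error entirely:
ls /no/such/path 2>/dev/null || true

# Or keep it for later debugging:
ls /no/such/path 2>>/tmp/pkg-install.log || true
```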
I'm trying to add a mirror to my Docker daemon in order to use my own server to cache images from Docker Hub, using the following syntax:
/etc/docker/daemon.json
{
"registry-mirrors": ["https://myserver.com"]
}
I have seen the above config even in Docker's official documentation, but my Ubuntu 20.04 does not read that file at all, even if I restart the Docker service.
You should rewrite the configuration file as follows:
{
"registry-mirrors": ["myserver.com"]
}
Remove the protocol!
Intro
I added a directive to daemon.json that was simply ignored when I restarted Docker. Docker restarted without error; it just ignored my change.
Problem
I was attempting to change the default log target to syslog from json-file by APPENDING the log-driver directive to the end of /etc/docker/daemon.json (I was scripting my Docker install and so was building this file incrementally).
But no matter WHAT I did, I could not get the change read. The output of docker info --format '{{.LoggingDriver}}' was always json-file.
Troubleshooting
I investigated the possibility of a formatting error, as in the accepted answer, but this bore no fruit. Reading and re-reading the Docker docs. Googling. Nothing cleared the error.
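One more check worth adding to that list when building daemon.json incrementally: validate that the file is still syntactically correct JSON before each restart. This assumes python3 is on the machine; any JSON linter works:

```shell
# json.tool exits non-zero and reports the position of any syntax
# error, e.g. a trailing comma left over from an append.
if python3 -m json.tool /etc/docker/daemon.json >/dev/null; then
    echo "daemon.json is valid JSON"
else
    echo "daemon.json is NOT valid JSON"
fi
```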
Solution
The problem? It looks like Docker was really finicky about the ORDER in which the logging directive "log-driver" appeared. After wasting hours and beating my brains in, I changed the order the directive appeared in the file by PREPENDING it to the top of daemon.json like so:
{
"log-driver": "syslog",
"default-address-pools":
[
{"base":"192.168.X.X/24","size":28}
]
}
With the directive at the TOP, the change was recognized after restarting Docker and the output of docker info --format '{{.LoggingDriver}}' was now as expected: syslog. Go figure...
Conclusion
It was a silly problem, but wow did it waste some cycles figuring out how things were broken. Hope this gets folks like myself out of a hole who couldn't find this solution by Googling.
Say there is an image A described by the following Dockerfile:
FROM bash
RUN mkdir "/data" && echo "FOO" > "/data/test"
VOLUME "/data"
I want to specify an image B that inherits from A and modifies /data/test. I don't want to mount the volume; I want it to have some default data that I specify in B:
FROM A
RUN echo "BAR" > "/data/test"
The thing is that the test file will keep the content it had at the moment of the VOLUME instruction in A's Dockerfile. The test file in image B will contain FOO instead of the BAR I would expect.
The following Dockerfile demonstrates the behavior:
FROM bash
# overwriting volume file
RUN mkdir "/volume-data" && echo "FOO" > "/volume-data/test"
VOLUME "/volume-data"
RUN echo "BAR" > "/volume-data/test"
RUN cat "/volume-data/test" # prints "FOO"
# overwriting non-volume file
RUN mkdir "/regular-data" && echo "FOO" > "/regular-data/test"
RUN echo "BAR" > "/regular-data/test"
RUN cat "/regular-data/test" # prints "BAR"
Building the Dockerfile will print FOO and BAR.
Is it possible to modify file /data/test in B Dockerfile?
It seems that this is intended behavior.
Changing the volume from within the Dockerfile: If any build steps change the data within the volume after it has been declared, those changes will be discarded.
VOLUMEs are not part of your IMAGE, so what is the use case for seeding data into them? When you run the image in another location, it starts with an empty volume. The Dockerfile behaviour reminds you of that.
So basically, if you want to keep the data along with the app code, you should not use VOLUMEs. If the volume declaration exists in the parent image, then you need to remove the volume before starting your own image build (docker-copyedit can do that).
There are a few non-obvious ways to do this, and all of them have their obvious flaws.
Hijack the parent docker file
Perhaps the simplest, but least reusable, way is to simply take the parent Dockerfile and modify that. A quick Google for docker <image-name:version> source should find the GitHub repository hosting the parent image's Dockerfile. This is good for optimizing the final image, but it destroys the point of using layers.
Use an on start script
While a Dockerfile can't make further modifications to a volume, a running container can. Add a script to the image, and change the Entrypoint to call that (and have that script call the original entry point). This is what you will HAVE to do if you are using a singleton-type container, and need to partially 'reset' a volume on start up. Of course, since volumes are persisted outside the container, just remember that 1) your changes may already be made, and 2) Another container started at the same time may already be making those changes.
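A minimal sketch of such a wrapper, assuming the image bakes its default data into a /seed directory outside the volume and the volume is mounted at /data (all paths and the marker file are hypothetical):

```shell
#!/bin/sh
# entrypoint-wrapper.sh: seed the volume on first start, then hand
# off to the image's original entrypoint.
seed_volume() {
    src="$1"; dst="$2"
    # Copy only if the marker is absent, so restarts don't clobber
    # data the user has since changed.
    if [ ! -f "$dst/.seeded" ]; then
        cp -R "$src/." "$dst/"
        touch "$dst/.seeded"
    fi
}

# seed_volume /seed /data
# exec /original-entrypoint.sh "$@"
```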
Since volumes are (virtually) forever, I just use one time setup scripts after starting the containers for the first time. That way I easily control when the default data is setup/reset. (You can use docker inspect <volume-name> to get the host location of the volume if you need to)
The common middle ground on this one seems to be to have a one-off image whose only purpose is to run once to do the volume configurations, and then clean it up.
Bind to a new volume
Copy the contents of the old volume to a new one, and configure everything to use the new volume instead.
And finally... reconsider if Docker is right for you
You probably already wasted more time on this than it was worth. (In my experience, the maintenance pain of Docker has always far outweighed the benefits. However, Docker is a tool, and with any tool, you need to sometimes take a moment to reflect if you are using it right, and if there are better tools for the job.)
I have a testlink docker image running (named 'otechlabs/testlink').
Question 1: How do I get the original URL from which I downloaded it (I can't remember)? I would like to see the instructions on how to run the container.
It's running so fine that I saved a commit of it to run in another machine.
Question 2: Should I remember the run parameters (I can't remember them)?
The container was created around 3 months ago.
Question 3: Instead of save/load, should I export/import?
Since I don't remember how to run the image, I guess I should skip this step and perhaps (somehow) copy the image to just start it on another host.
Q1:
You can try looking up the image in docker hub. The name otechlabs/testlink suggests that the user otechlabs in dockerhub has an image called testlink.
Now, I tried looking up the user's profile here, but it appears that they don't have anything uploaded yet; maybe it is a private image.
If you're lucky you might be able to find something useful from other people's testlink image page.
Example:
https://hub.docker.com/r/rodrigodirk/testlink/
Q2:
Not quite sure what you meant here. Well, if you have a running container of it, you can always do docker inspect [CONTAINER_ID] to revisit the parameters used to start it.
Example:
"Config": {
    "ExposedPorts": {
        "5432/tcp": {},
        "9001/tcp": {}
    },
    "Env": [
        "affinity:container==47eea8a078ad47583b4f5343302e7939a6d5f04ad929a079d8d9ae7cbee96d6a",
        "POSTGRES_USER=bigCat01"
    ]
}
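If you only need specific parameters rather than the full blob, docker inspect also takes a Go-template --format flag. A small sketch (the container name is hypothetical):

```shell
# Print the settings most useful for reconstructing a "docker run"
# line: environment, port mappings, and mounted volumes.
show_run_params() {
    name="$1"
    docker inspect --format '{{json .Config.Env}}' "$name"
    docker inspect --format '{{json .HostConfig.PortBindings}}' "$name"
    docker inspect --format '{{json .Mounts}}' "$name"
}

# Usage:
# show_run_params testlink
```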
Q1: If you did a 'docker pull', the image name contains the registry URL; if the name does not contain a domain, then it came from the default Docker Hub registry.
Q2: (as mentioned by Samual) if you still have the container, run 'docker inspect' to display the startup parameters.
Q3: if you modified the container you can 'commit' the changes, and you can also change the tag: 'docker tag old_tag new_tag'
To help move it around, you could change the tag to gcr.io/project-id/new_tag:version and push it to Google's Container Registry (free 30-day trial; may be free beyond that if you keep resource usage low) in your project 'project-id'.
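The retag-and-push step from Q3 as commands, wrapped in a function. The project id, image names, and tag are placeholders, and gcloud authentication is assumed to be set up already:

```shell
# Give the local image a registry-qualified tag, then push it.
publish_image() {
    local_ref="$1"; remote_ref="$2"
    docker tag "$local_ref" "$remote_ref"
    docker push "$remote_ref"
}

# Usage:
# publish_image testlink:latest gcr.io/project-id/testlink:1.0
```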