Is it possible to modify only the FROM value while executing docker commit?
Say my active container is based on Ubuntu 16.04 and I want to create an image off it, but the Ubuntu version should be 18.04, with the rest remaining the same.
Does Docker support this scenario?
Expecting something like: docker commit --change=FROM ubuntu:18.04
The answer is no. You can't modify the base image with the docker commit --change command.
The FROM instruction is not supported by the --change option.
Here is the excerpt from the docs:
The --change option will apply Dockerfile instructions to the image
that is created. Supported Dockerfile instructions:
CMD|ENTRYPOINT|ENV|EXPOSE|LABEL|ONBUILD|USER|VOLUME|WORKDIR
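For example, a supported instruction such as ENV or CMD can be applied at commit time; a minimal sketch, with placeholder container and image names:
docker commit --change 'ENV DEBUG=true' --change 'CMD ["python", "app.py"]' mycontainer myimage:v2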
If you don't have a Dockerfile for your container, I would suggest using either:
the docker history command to generate a Dockerfile, as mentioned here,
OR
the dfimage utility, as mentioned here,
and then changing the FROM instruction in your newly generated Dockerfile.
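As a rough sketch of the docker history approach, you can dump the command that created each layer and reassemble a Dockerfile by hand (my-image is a placeholder name):
docker history --no-trunc --format '{{.CreatedBy}}' my-image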
This is a strong reason to never use docker commit.
If you have a commit-based workflow, you need to docker run a container from some base image, perform some steps, and commit the result. Once you've done this, though, Docker has no idea what happened in between; it just knows that there's an image, and some opaque set of filesystem changes, and it's being asked to create an image from that.
Say you're using an old version of Ubuntu, and you want to upgrade to something newer. In a commit-based workflow, it's up to you to do all of the steps by hand. To keep track of this, you might write down a text file of the steps you want to perform:
# `docker run` a container using this base image
FROM ubuntu:18.04
# `docker cp` this file into the image
COPY package.deb /
# Run this command in the container shell
RUN dpkg -i /package.deb
# After committing the image, `docker run` the new image with this command
CMD some_command
That specific format is exactly the Dockerfile format, though: you can check it into source control, run docker build, and get the image back. Your coworker can do that too even if they don't have the exact setup you do, and even if they don't type the commands exactly the same way. And when you do need to upgrade the base image, you can just change the first line to FROM ubuntu:20.04 and docker build it again.
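For instance, assuming the Dockerfile above with only its first line changed, the rebuild is a single command (the image tag here is illustrative):
docker build -t my-app:ubuntu20.04 .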
Related
I have a Dockerfile which makes an image.
FROM public.ecr.aws/lambda/python:3.9
RUN export LANG=en_US.UTF-8
RUN export PYTHONUNBUFFERED=1
docker build -f dockers/mydocker -t python .
Then I would like to make images from this image.
The built image is listed by the docker images command.
Then, in another Dockerfile:
FROM python
ADD ./Pipfile* ./
RUN pipenv install --ignore-pipfile
When I try to build this Dockerfile, the output shows a line like this:
FROM docker.io/library/python
Does this mean I can use a local image to build the next image?
Do I need to make a local repository for this purpose?
Or is there any other way to do this?
This is probably working fine, but you should be careful to pick names that don't conflict with standard Docker Hub image names.
A Docker image name has the form registry.example.com/path/name. If you don't explicitly specify a registry, it always defaults to docker.io (this can't be changed), and if you don't specify a path either, it defaults to docker.io/library/name. This is true in all contexts – the docker build -t option, the docker run image name, Dockerfile FROM lines, and anywhere else an image name appears.
That means that, when you run docker build -t python, you're creating a local image that has the same name as the Docker Hub python image. The docker build diagnostics are showing you the expanded name. It should actually be based on your local image, though; Docker won't contact a remote registry unless the image is missing locally or you explicitly tell it to.
I'd recommend choosing some unambiguous name here. You don't specifically need a Docker Hub account, but try to avoid bare names that will conflict with standard images.
# this will work even if you don't "own" this Docker Hub name
docker build -f Dockerfile.base -t whitebear/python .
FROM whitebear/python
...
(You may have some trouble seeing the effects of your base image since RUN export doesn't do anything; change those lines in the base image to ENV instead.)
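For example, the base Dockerfile would then read:
FROM public.ecr.aws/lambda/python:3.9
ENV LANG=en_US.UTF-8
ENV PYTHONUNBUFFERED=1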
I am new to docker, and have downloaded the tfx image using
docker pull tensorflow/tfx
However, I am unable to find anywhere how to successfully launch a container for the same.
You can use docker image ls to get a list of locally-available Docker images. Note that an "image" is a template for a container.
To start a container and shell into it, you would use a command like docker run -it --entrypoint bash tensorflow/tfx. This spins up a temporary container based on the tensorflow/tfx image.
By default, Docker assumes you want the latest version of that image stored on your local machine, i.e. tensorflow/tfx:latest in the list. If you want to change it, you can reference a specific image version by name or hash, e.g. docker run -it --entrypoint bash tensorflow/tfx:1.0.0 or docker run -it --entrypoint bash fe507176d0e6. I typically run docker image ls first and cut & paste the hash, so my notes can be specific about which build I'm referencing even if I later edit the relevant Dockerfile.
Also note that changes you make inside that container will not be saved. Once you exit the bash shell, it goes away. The shell is useful for checking the state and file structure of a constructed image. If you want to edit the image itself, use a Dockerfile. Each instruction in a Dockerfile creates a new intermediate image when the Dockerfile is built. If you know that something went wrong between lines 5 and 10 of the Dockerfile, you can potentially shell into each of those intermediate images in turn (with the docker run command I gave above) to see what went wrong. Kinda tedious, but it works.
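As a sketch of that debugging loop (the layer ID here is hypothetical, and intermediate layer IDs may show as <missing> for pulled images or BuildKit builds):
docker history tensorflow/tfx                    # lists the layers and their IDs
docker run -it --entrypoint bash fe507176d0e6    # shell into one intermediate layer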
Also note that docker run is not equivalent to running a TFX pipeline. For the latter, you want to look into the TFX CLI commands or otherwise compile the pipeline - and probably upload it to an external Kubeflow server.
Also note that the Docker image is just a starting point for one piece of your TFX pipeline. A full pipeline will require you to specify the components you want, a more-complete Dockerfile, and more. That's a huge topic, and IMO, the existing documentation leaves a lot to be desired. The Dockerfile you create describes the image which will be distributed to each of the workers which process the full pipeline. It's the place to specify dependencies, necessary files, and other custom setup for the machine. Most ML-relevant concerns are handled in other files.
I am working on a Flask app running on an EC2 server inside a Docker container.
The old dev seems to have removed the original Dockerfile, and I can't seem to find any instructions on a way to push my changes into the Docker image without the original.
I can copy my changes manually using:
docker cp newChanges.py doc:/root/doc/server_python/
but I can't seem to find a way to restart flask. I know this is not the ideal solution but it's the only idea I have.
One way is to add newChanges.py to the existing image and commit that image with a new tag, so you have a fallback option if you face any issue.
Suppose you run the official alpine image and you don't have a Dockerfile.
Every time you restart the container you will not have your newChanges.py.
docker run -it --rm --name alpine alpine
Use ls inside the container to see the list of existing files that were created by the Dockerfile.
docker cp newChanges.py alpine:/
Run ls and verify your file was copied over.
Next Step
To commit these changes to your running container do the following:
docker ps
Get the container ID and run:
docker commit 4efdd58eea8a updated_alpine_image
Now run your alpine image and you will see the updated changes as expected:
docker run -it updated_alpine_image
This is what you will see in your updated_alpine_image, without having a Dockerfile.
This is how you can rebuild an image from an existing image. You can also try @uncletall's answer as well.
If you just want to restart after docker cp, you can just docker stop $your_container, then docker start $your_container.
If you want to update newChanges.py in the Docker image without the original Dockerfile, you can use docker export -o $your_tar_name.tar $your_container, then docker import $your_tar_name.tar $your_new_image:tag. Afterwards, keep the tar on a backup server for future use.
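A minimal sketch of that export/import round trip, using the container name doc from the question and placeholder names elsewhere (note that docker import preserves only the filesystem, not metadata like CMD):
docker export -o flask-backup.tar doc
docker import flask-backup.tar my-flask-image:v1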
If you want to continue development later, use a Dockerfile for further changes:
you can use docker commit to generate a new image, and use docker push to push it to dockerhub with the name something like my_docker_id/my_image_name:v1.0
Your new Dockerfile:
FROM my_docker_id/my_image_name:v1.0
# your new thing here
ADD another_new_change.py /root/
# others
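Putting the whole flow together, a sketch with placeholder names:
docker commit doc my_docker_id/my_image_name:v1.0
docker push my_docker_id/my_image_name:v1.0
docker build -t my_docker_id/my_image_name:v1.1 .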
You can try to examine the history of the image; from there you can probably re-create the Dockerfile. Try using docker history --no-trunc image-name.
See this answer for more details.
I just started using Docker. I created an image using a Dockerfile. How can I create a new image from that existing image?
Let's say you have a container bd91ca3ca3c8 running, and you want to create a new image after you made changes in the container. Generating another image will allow you to preserve your changes.
In that case you can run:
docker commit -p -a "author_here" -m "your_message" bd91ca3ca3c8 name_of_new_image
-p pauses the container while the commit command is building the new image.
-a allows you to supply author information for the new image.
-m allows you to add a commit message, just as in Git.
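You can then confirm the new image exists by filtering the image list for it:
docker images name_of_new_image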
You can create a new image by running docker build -f docker_filename . It will first read the Dockerfile, where the instructions are written, and automatically build the image. The instructions in the Dockerfile contain the necessary commands to assemble an image. Once the image is built, it will be assigned an image ID.
The image can be pushed to the docker registry hub. For this, the user must create an account in the docker registry hub.
An example of Dockerfile looks like this,
FROM docker/whalesay:latest
RUN apt-get -y update && apt-get install -y fortunes
CMD /usr/games/fortune -a | cowsay
Here, the first instruction says the new image will be based on the docker/whalesay:latest image.
The second instruction will run the two commands (apt-get -y update and apt-get install -y fortunes).
And the third instruction says that when a container starts from this image, the fortune -a | cowsay command should run.
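To try it out (docker-whale is an illustrative tag):
docker build -t docker-whale .
docker run docker-whale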
In order to create a new image from an existing image, all you have to do is specify it in FROM, e.g.:
FROM sergiu/ubuntu
MAINTAINER sergiu
I am also new to Docker, but what I found might be helpful.
1) Whenever you write FROM in a Dockerfile and build it, Docker looks in its local repository and loads that image first; so if you have a local image you want to use in FROM, it should already be loaded.
2) The parameter you give to FROM is important: if you give a wrong repo_name or tag, you get an error message. Run the docker images command to see your image's correct repo_name and tag.
3) Now you can start your new Dockerfile like this:
FROM REPOSITORY:TAG
and it will work.
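For example, if docker images lists a local image with repository myrepo/myimage and tag v1 (hypothetical names), your new Dockerfile can begin:
FROM myrepo/myimage:v1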
Docker commit: Creates a new image from a container’s changes.
It can be useful to commit a container’s file changes or settings into a new image. This allows you to debug a container by running an interactive shell, or to export a working dataset to another server. Generally, it is better to use Dockerfiles to manage your images in a documented and maintainable way.
You can follow the below command to create an image from an existing image:
docker tag jboss/wildfly myimage:v1
This creates a new tag myimage:v1 that points to the same image as jboss/wildfly:latest.
I've got a jenkins server monitoring a git repo and building a docker image on code change. The .git directory is ignored as part of the build, but I want to associate the git commit hash with the image so that I know exactly what version of the code was used to make it and check whether the image is up to date.
The obvious solution is to tag the image with something like "application-name-branch-name:commit-hash", but for many develop branches I only want to keep the last good build, and adding more tags will make cleaning up old builds harder (rather than using the jenkins build number as the image is built, then retagging to :latest and untagging the build number).
The other possibility is labels, but while this looked promising initially, they proved more complicated in practice.
The only way I can see to apply a label directly to an image is in the Dockerfile, which cannot use the build environment variables, so I'd need to use some kind of templating to produce a custom Dockerfile.
The other way to apply a label is to start up a container from the image with some simple command (e.g. bash) and passing in the labels as docker run arguments. The container can then be committed as the new image. This has the unfortunate side effect of making the image's default command whatever was used with the labelling container (so bash in this case) rather than whatever was in the original Dockerfile. For my application I cannot use the actual command, as it will start changing the application state.
None of these seem particularly ideal - has anyone else found a better way of doing this?
Support for this was added in docker v1.9.0, so updating your docker installation to that version would fix your problem if that is OK with you.
Usage is described in the pull-request below:
https://github.com/docker/docker/pull/15182
As an example, take the following Dockerfile file:
FROM busybox
ARG GIT_COMMIT=unknown
LABEL git-commit=$GIT_COMMIT
and build it into an image named test as anyone would do naïvely:
docker build -t test .
Then inspect the test image to check what value ended up for the git-commit label:
docker inspect -f '{{index .ContainerConfig.Labels "git-commit"}}' test
unknown
Now, build the image again, but this time using the --build-arg option:
docker build -t test --build-arg GIT_COMMIT=0123456789abcdef .
Then inspect the test image to check what value ended up for the git-commit label:
docker inspect -f '{{index .ContainerConfig.Labels "git-commit"}}' test
0123456789abcdef
References:
Docker build command documentation for the --build-arg option
Dockerfile reference for the ARG directive
Dockerfile reference for the LABEL directive
You can specify a label on the command line when creating your image. So you would write something like
docker build -t myproject --label "myproject.version=githash" .
Instead of hard-coding the version, you can also get it directly from git:
docker build -t myproject --label "myproject.version=`git describe`" .
To read out the label from your images you can use docker inspect with a format string:
docker inspect -f '{{index .Config.Labels "myproject.version"}}' myproject
If you are using docker-compose, you could add the following to the build section:
labels:
  git-commit-hash: ${COMMIT_HASH}
where COMMIT_HASH is your environment variable, which holds the commit hash.
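As a sketch of where that sits in a docker-compose.yml (the service name app is illustrative):
services:
  app:
    build:
      context: .
      labels:
        git-commit-hash: ${COMMIT_HASH}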