I made changes to the fabric code and would like to test them by running with docker-compose. I run make peer; make docker and see the following:
mkdir -p build/image/peer/payload
cp build/docker/bin/peer build/sampleconfig.tar.bz2 build/image/peer/payload
mkdir -p build/image/orderer/payload
cp build/docker/bin/orderer build/sampleconfig.tar.bz2 build/image/orderer/payload
mkdir -p build/image/testenv/payload
cp build/docker/bin/orderer build/docker/bin/peer build/sampleconfig.tar.bz2 images/testenv/install-softhsm2.sh build/image/testenv/payload
mkdir -p build/image/tools/payload
cp build/docker/bin/cryptogen build/docker/bin/configtxgen build/docker/bin/configtxlator build/docker/bin/peer build/sampleconfig.tar.bz2 build/image/tools/payload
When I run docker images, I still see the same set of images as before:
hyperledger/fabric-orderer latest 391b202306fa 3 weeks ago 180MB
hyperledger/fabric-orderer x86_64-1.1.0 391b202306fa 3 weeks ago 180MB
hyperledger/fabric-peer latest e0f3bdb4506f 3 weeks ago 187MB
hyperledger/fabric-peer x86_64-1.1.0 e0f3bdb4506f 3 weeks ago 187MB
How do I go from there to generate new Docker images? What am I missing?
I am using Ubuntu 16.04. Thanks for your time.
cd build/image/peer/
docker build -t hyperledger/fabric-peer:x86_64-1.1.x .
Then change the image name in the docker-compose.yaml file and you are good to go.
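Putting the pieces together, a minimal rebuild-and-retag sketch, assuming the standard build/image/<name> payload layout shown above and the x86_64-1.1.0 tag scheme (the ARCH/VERSION values are assumptions; match them to your checkout):

```shell
# Rebuild and retag the fabric images from their payload directories
ARCH=x86_64
VERSION=1.1.0
for img in peer orderer tools; do
  # each build/image/<name> directory is assumed to contain a Dockerfile
  docker build -t "hyperledger/fabric-$img:$ARCH-$VERSION" "build/image/$img"
  docker tag "hyperledger/fabric-$img:$ARCH-$VERSION" "hyperledger/fabric-$img:latest"
done
```

Retagging latest as well means an unmodified docker-compose.yaml picks up the fresh images.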
I have a Docker image that was created roughly a year ago. The Dockerfile contains:
FROM docker:stable
How can I determine the actual version of the docker image that stable was referring to back when the image was built?
Edit: What I want to do, in a nutshell, is replace FROM docker:stable with FROM docker:X.Y.Z where X.Y.Z is the version tag that "stable" was pointing to a year ago when the image was originally built.
As suggested by this answer, run:
docker inspect --format='{{index .RepoDigests 0}}' $IMAGE
This will give you the sha256 digest of the image.
Then you can use a service like MicroBadger to get more info about that specific build.
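With the digest in hand, one option is to pin the base image by digest rather than by tag; a minimal sketch (the digest below is a placeholder for the value reported by docker inspect):

```dockerfile
# Pin to the exact image "stable" resolved to at build time, bypassing tags
FROM docker@sha256:<digest-from-RepoDigests>
```

This reproduces the old build exactly even if you never recover the X.Y.Z tag name.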
If you want to recreate the Dockerfile you can use docker history to examine the layer history:
$ docker history docker
IMAGE CREATED CREATED BY SIZE COMMENT
3e23a5875458 8 days ago /bin/sh -c #(nop) ENV LC_ALL=C.UTF-8 0 B
8578938dd170 8 days ago /bin/sh -c dpkg-reconfigure locales && loc 1.245 MB
be51b77efb42 8 days ago /bin/sh -c apt-get update && apt-get install 338.3 MB
4b137612be55 6 weeks ago /bin/sh -c #(nop) ADD jessie.tar.xz in / 121 MB
750d58736b4b 6 weeks ago /bin/sh -c #(nop) MAINTAINER Tianon Gravi <ad 0 B
511136ea3c5a 9 months ago 0 B
Keep in mind that if the image has been manually tampered with, this output may not be reliable.
Finally if you want to go full hacker mode, this old thread on the Docker community forums has some info.
I'm not sure how you can get the tag, because I don't believe it is stored in the image itself, but in the repository. So you'd have to query the repository itself, or get the full image history and go detective on it.
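As a sketch of the "query the repository" route: Docker Hub exposes a v2 tags endpoint (the URL and JSON field names here are assumptions about the Hub API, not something from the answer), so you could list the tags and then compare each tag's digest against the RepoDigests value from docker inspect:

```shell
# Hypothetical: list tag names for library/docker from the Docker Hub API,
# then match each tag's digest against the image's RepoDigests value
IMAGE=library/docker
URL="https://hub.docker.com/v2/repositories/$IMAGE/tags/?page_size=100"
curl -s "$URL" | grep -o '"name": *"[^"]*"' || true  # tolerate offline runs
```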
I have a problem with a container that runs a cron job. The job executes curator to remove some Elasticsearch indices. I have read many similar posts on Stack Overflow, but I still don't get it: the job seems to invoke curator, yet the indices are not removed. The same command works if I run it manually.
This is my Dockerfile:
FROM ubuntu:xenial
RUN apt-get update && apt-get install python-pip rsyslog -y
RUN groupadd -r curator && useradd -r -g curator curator
RUN pip install elasticsearch-curator
RUN apt-get install -y cron
COPY delete_indices_cron /etc/cron.d/delete_indices_cron
COPY ./delete_indices.sh /opt/delete_indices.sh
COPY ./configs /opt/config
RUN ["crontab", "/etc/cron.d/delete_indices_cron"]
RUN chmod 644 /etc/cron.d/delete_indices_cron
RUN chmod 744 /opt/delete_indices.sh
RUN touch /var/log/cron.log
CMD ["rsyslogd"]
ENTRYPOINT ["cron","-f","&&", "tail","-f","/var/log/cron.log"]
I run the image afterward with
docker run -d --link elasticsearch:elasticsearch --name curator mycurator4
and the docker ps output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
eea96a48aa3a mycurator4 "cron -f && tail -f /" 15 minutes ago Up 15 minutes curator
e584c9b090c8 vagrant-registry.vm:5000/sslserver "python /sslServer/ss" 2 weeks ago Up 2 weeks 0.0.0.0:12121->12121/tcp sslserver
20eee9943664 kibana:4 "/docker-entrypoint.s" 2 weeks ago Up 2 weeks 0.0.0.0:5601->5601/tcp kibana
8c462586982e logstash:2 "/docker-entrypoint.s" 2 weeks ago Up 2 weeks 0.0.0.0:5044->5044/tcp, 0.0.0.0:12201->12201/tcp, 0.0.0.0:12201->12201/udp logstash
c971fa3e357b elasticsearch:2 "/docker-entrypoint.s" 2 weeks ago Up 2 weeks 0.0.0.0:9200->9200/tcp, 9300/tcp elasticsearch
4af9a78a4b1f jenkins "/bin/tini -- /usr/lo" 2 weeks ago Up 2 weeks 0.0.0.0:8080->8080/tcp, 50000/tcp jenkins
UPDATE: the problem was that curator could not be found as a command in the environment. When I changed it to the relative path, the problem was solved. Also, based on some suggestions, I removed the .sh from /opt/delete_indices.sh because Ansible "does not like this".
IMHO, this is a square-peg, round-hole situation.
Instead, I would put only curator and the files it needs into the image, and use the host system's cron to run the container. That ensures the right environment variables are set and avoids the other miscellaneous problems you can hit with cron inside a container.
To answer your question, this is the command you are effectively running inside the container:
cron -f && tail -f /var/log/cron.log rsyslogd
The first issue is the &&, which doesn't behave the way you want: the exec form of ENTRYPOINT doesn't invoke a shell, so the && is passed to cron as a literal argument instead of chaining commands, and tail -f never runs. At least, that's what I found when I tried the && locally as a test. Secondly, if you want to look at the output, run docker logs curator.
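If cron-in-the-container is still the goal, one way around the exec-form issue is to hand the chain to a shell explicitly; a sketch of a replacement for the CMD/ENTRYPOINT lines, under that assumption:

```dockerfile
# The exec form passes "&&" to cron as a literal argument; routing the whole
# chain through "sh -c" lets the shell do the chaining, and tail keeps the
# container's main process alive so output shows up in "docker logs"
CMD ["sh", "-c", "cron && tail -f /var/log/cron.log"]
```

Note that cron detaches into the background, so the && fires almost immediately and tail becomes the long-running foreground process.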
A. Here is how I created the image:
Got latest Ubuntu image
Ran it as a container and attached to it
Cloned the source code from git inside the container
Tagged the image and pushed it to my registry
B. And from a different machine, I pulled, changed, and pushed it by doing:
Docker pull from the registry
Start container with the pulled image and attach to it
Change something in the cloned git directory
Stop the container, tag it, and push it to the registry
Now the issue I'm seeing is that every time B is repeated, it tries to upload ~600MB (the public image layers) to the registry, which takes a long time in my case.
Is there any way to avoid uploading the whole 600MB and instead push only the directory that has changed?
What am I doing wrong? How do you guys use docker for frequent pushes?
Docker only pushes changed layers, so it looks as though something in your workflow is not quite right. It will be much clearer if you use a Dockerfile, since each instruction explicitly creates a layer, but even with docker commit the results should be the same.
Example: run a container from the ubuntu image, run apt-get update, and then commit the container to a new image. Now run docker history and you'll see the new image adds a layer on top of the ubuntu image, holding the additional state from running the APT update:
> docker history sixeyed/temp1
IMAGE CREATED CREATED BY SIZE COMMENT
2d98a4114b7c About a minute ago /bin/bash 22.2 MB
14b59d36bae0 7 months ago /bin/sh -c #(nop) CMD ["/bin/bash"] 0 B
<missing> 7 months ago /bin/sh -c sed -i 's/^#\s*\(deb.*universe\)$/ 1.895 kB
<missing> 7 months ago /bin/sh -c echo '#!/bin/sh' > /usr/sbin/polic 194.5 kB
<missing> 7 months ago /bin/sh -c #(nop) ADD file:620b1d9842ebe18eaa 187.8 MB
In this case, the diff between ubuntu and my temp1 image is the 22MB layer 2d98.
Now if I run a new container from temp1, create an empty file and run docker commit to create a new image, the new layer only has the changed file:
> docker history sixeyed/temp2
IMAGE CREATED CREATED BY SIZE COMMENT
e9ea4b4963e4 45 seconds ago /bin/bash 0 B
2d98a4114b7c About a minute ago /bin/bash 22.2 MB
14b59d36bae0 7 months ago /bin/sh -c #(nop) CMD ["/bin/bash"] 0 B
<missing> 7 months ago /bin/sh -c sed -i 's/^#\s*\(deb.*universe\)$/ 1.895 kB
<missing> 7 months ago /bin/sh -c echo '#!/bin/sh' > /usr/sbin/polic 194.5 kB
<missing> 7 months ago /bin/sh -c #(nop) ADD file:620b1d9842ebe18eaa 187.8 MB
When I push the first image, only the 22MB layer will get uploaded - the others are mounted from ubuntu, which is already in the Hub. If I push the second image, only the changed layer gets pushed - the temp1 layer is mounted from the first push:
> docker push sixeyed/temp2
The push refers to a repository [docker.io/sixeyed/temp2]
f741d3d3ee9e: Pushed
64f89772a568: Mounted from sixeyed/temp1
5f70bf18a086: Mounted from library/ubuntu
6f32b23ac95d: Mounted from library/ubuntu
14d918629d81: Mounted from library/ubuntu
fd0e26195ab2: Mounted from library/ubuntu
So if your pushes are uploading 600MB, you're either making 600MB changes to the image, or your workflow is preventing Docker using layers correctly.
Docker already uploads only the changed layers.
It is similar to how docker build only rebuilds the layers whose cache was invalidated. Of course it has to check with the registry which layers are already available (it reports them as Already pushed). And if you change the sequence of the operations in your Dockerfile, the layers are entirely new and all of them will obviously be re-uploaded. Consider:
FROM ubuntu
RUN echo "hello"
EXPOSE 80
and
FROM ubuntu
EXPOSE 80
RUN echo "hello"
These two images are miles apart as far as layers go, even though the behavioral end result is the same. So take care with such things.
Let's say there is a Docker image; someone makes changes to it and then pushes it up to a Docker repository, and I pull the image down. Is there a way to take that image and run a container from a previous layer, i.e. run the version from before the changes were made?
If I run docker history it will look something like this:
docker history imagename:tag
IMAGE CREATED CREATED BY SIZE COMMENT
3e23a5875458 8 days ago /bin/sh -c #(nop) ENV LC_ALL=C.UTF-8 0 B
<missing> 8 days ago /bin/sh -c dpkg-reconfigure locales && loc 1.245 MB
<missing> 8 days ago /bin/sh -c apt-get update && apt-get install 338.3 MB
<missing> 6 weeks ago /bin/sh -c #(nop) ADD jessie.tar.xz in / 121 MB
<missing> 6 weeks ago /bin/sh -c #(nop) MAINTAINER ssss <ad 0 B
<missing> 9 months ago 0 B
It seems as if I could run an earlier version if I figure out a way to somehow tag or identify previous layers of the image.
You can, by tagging the build layers of the image, if you have access to them, as described here.
What could be happening in your case is that from version v1.10.0 onward the Docker engine changed the way it handles content addressability. This is being heavily discussed here.
What this means is that you won't have access to the build layers unless you built the image on the current machine, or exported and loaded it by combining:
docker save imagename build-layer1 build-layer2 build-layer3 > image-caching.tar
docker load -i image-caching.tar
A user has posted a handy way to save that cache in the discussion I've mentioned previously:
docker save imagename $(sudo docker history -q imagename | tail -n +2 | grep -v \<missing\> | tr '\n' ' ') > image-caching.tar
This should collect all the build layers of the given image and save them to a cache tar file.
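Once the tar is loaded back, a recovered layer ID can be tagged and run like any other image; a sketch using a placeholder layer ID taken from docker history output:

```shell
# Tag an intermediate layer reported by "docker history imagename:tag"
# (3e23a5875458 is a placeholder ID; use one from your own history output)
docker tag 3e23a5875458 imagename:previous
# Run a throwaway container from the pre-change version
docker run --rm -it imagename:previous /bin/bash
```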
I want to work on a GeoDjango application that will be easy for others to work on too. I would like to use Docker to package the application.
So, a very simple question.
I can see that there is an existing GeoDjango container on DockerHub: https://registry.hub.docker.com/u/jhonatasmartins/geodjango/
How can I view the Dockerfile for this container, so that I can use it as a basis for my own container?
There is (at the moment) no way to see the Dockerfile used to generate an image, unless the author has published the Dockerfile somewhere. Images built as automated builds will have links to a source repository with a Dockerfile, but for images that were manually built and pushed you're limited to whatever folks publish in the description.
You could try contacting the image maintainer.
There are also times when running docker history <IMAGE_ID> can give you some hints about how the image was built. In fact, if the image was built from a Dockerfile, it will give you a clear picture of the Dockerfile's steps. For images committed from containers you may not see some of the steps, but sometimes you can still get an idea from there. For example, for the image you mentioned:
$ docker history jhonatasmartins/geodjango:latest
IMAGE CREATED CREATED BY SIZE
0b7e890a4644 3 months ago /bin/bash 112.2 MB
35174145916a 3 months ago /bin/bash 449.9 MB
5506de2b643b 4 months ago /bin/sh -c #(nop) CMD [/bin/bash] 0 B
22093c35d77b 4 months ago /bin/sh -c apt-get update && apt-get dist-upg 6.558 MB
3680052c0f5c 4 months ago /bin/sh -c sed -i 's/^#\s*\(deb.*universe\)$/ 1.895 kB
e791be0477f2 4 months ago /bin/sh -c rm -rf /var/lib/apt/lists/* 0 B
ccb62158e970 4 months ago /bin/sh -c echo '#!/bin/sh' > /usr/sbin/polic 194.8 kB
d497ad3926c8 4 months ago /bin/sh -c #(nop) ADD file:3996e886f2aa934dda 192.5 MB
511136ea3c5a 20 months ago 0 B
Edit1: As jwodder recommended below, add the --no-trunc option to docker history to see the complete command for each layer. I won't include it here for this example because the output is too verbose, but you will definitely get more info using it.