In the context of a distributed API, I am handling a "Giga" service that consumes about 15 GB of memory and requires at least four CPUs. During bootstrapping, the service must load four files before it becomes available.
On my laptop, when I run the service outside Docker (directly from a shell), it takes about 9 seconds to become active. Once the service is active, 240 calls to it take around 7 seconds.
Now, when I run precisely the same service on the same laptop, but this time inside a Docker container, it takes about 6 minutes to load the files and become active, and the same 240 calls take around 5.5 minutes!
This is the first time I have run into a problem like this, and since I am no Docker guru, I wonder if someone could give me clues about what might be happening.
This is the content of the Dockerfile:
FROM alpine:3.16 as dag_build
RUN apk add g++ make protobuf protobuf-dev grpc-dev \
pkgconfig git gsl-dev tclap-dev
RUN mkdir -p /usr/src/dag_service
WORKDIR /usr/src/dag_service
COPY model_services/protos/dag.proto /usr/src/protos/dag.proto
COPY model_services/dag/*.H /usr/src/dag_service/
COPY model_services/dag/dag_service.cc /usr/src/dag_service
COPY model_services/dag/Makefile /usr/src/dag_service
RUN cd /usr/src/dag_service; make dag_service
COPY model_services/dag/nfl_graph_q[1234]_130.txt.bz2 /usr/src/dag_service/
RUN cd /usr/src/dag_service; bunzip2 nfl_graph_q[1234]_130.txt.bz2
COPY model_services/dag/q[1234].Tree /usr/src/dag_service/
##################################################
# Run the dag service
FROM alpine:3.16 AS dag_runtime
RUN apk add protobuf-dev grpc-dev
COPY --from=dag_build /usr/src/dag_service/nfl_graph_q[1234]_130.txt /bin/
COPY --from=dag_build /usr/src/dag_service/q[1234].Tree /bin/
COPY --from=dag_build /usr/src/dag_service/dag_service /bin/dag_service
WORKDIR /bin/
RUN mkdir -p /tmp
EXPOSE 6003
RUN chmod a+x dag_service
CMD ["./dag_service", "-s", "1 0 900 75 -1 3 3", "-s", "2 0 900 75 -1 3 3", "-s", "3 0 900 75 -1 3 3", "-s", "4 0 900 75 -1 3 3", "-d", "nfl_graph_q1_130.txt", "-d", "nfl_graph_q2_130.txt", "-d", "nfl_graph_q3_130.txt", "-d", "nfl_graph_q4_130.txt", "-p", "q1.Tree", "-p", "q2.Tree", "-p", "q3.Tree", "-p", "q4.Tree", "-m", "3e-8", "-l", "0.99"]
The service is written in C++.
My laptop runs Linux (Ubuntu 22.04).
Finally, I found the cause of the problem, or at least its symptom: the problem was Alpine. I replaced the Alpine image with grpc/cxx:latest from the Docker library, and everything started to work as expected; performance is now similar to the bare process execution.
It would be interesting to understand what Alpine does, or has, that causes such dramatic performance degradation. Also, why has no one, at least in this forum, reported a similar issue?
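For anyone who wants to test the libc hypothesis without jumping straight to grpc/cxx:latest: Alpine uses musl libc rather than glibc, and rebuilding both stages on a glibc-based image is one way to isolate that variable. This is only a sketch; the Debian package names below are assumptions and may need adjusting.

```dockerfile
# Hypothetical variant: both stages on Debian (glibc) instead of Alpine (musl)
FROM debian:bullseye AS dag_build
RUN apt-get update && apt-get install -y \
    g++ make protobuf-compiler libprotobuf-dev libgrpc++-dev \
    pkg-config git libgsl-dev libtclap-dev
# ... same COPY/RUN build steps as in the original Dockerfile ...

FROM debian:bullseye-slim AS dag_runtime
RUN apt-get update && apt-get install -y libprotobuf23 libgrpc++1 \
 && rm -rf /var/lib/apt/lists/*
# ... same COPY --from=dag_build steps as in the original Dockerfile ...
```

If the Debian build matches the bare-metal timings, that points the finger at musl (its allocator and threading behave differently under heavy multi-threaded load) rather than at Docker itself.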
Related
Okay, so first: I read some posts on this topic, and that is how I ended up with my current solution. Still, I can't find my mistake. Also, I am more of a beginner.
So this is my Dockerfile:
FROM conda/miniconda3
WORKDIR /app
RUN apt-get update -y
RUN apt-get install cron -y
RUN apt-get install curl -y
RUN conda update -n base -c defaults conda
RUN conda install mamba -n base -c conda-forge
COPY ./environment.yml ./environment.yml
RUN mamba env create -f environment.yml
# Make RUN commands use the new environment:
SHELL ["conda", "run", "--no-capture-output", "-n", "d2", "/bin/bash", "-c"]
#Setup cron
COPY ./cronjob /etc/cron.d/cronjob
RUN crontab /etc/cron.d/cronjob
RUN chmod 0600 /etc/cron.d/cronjob
RUN touch ./cron.log
COPY ./ ./
RUN ["chmod", "+x", "run.sh"]
ENTRYPOINT ["sh", "run.sh"]
CMD ["cron", "-f"]
What I want to do:
Run my run.sh (I managed to do that.)
Setup a cronjob inside my container which is defined in a file called cronjob (see content below)
My cronjob is not working. Why?
Note that cron.log is empty. It is never triggered.
Also the output of crontab -l (run inside of the container) is:
$ crontab -l
# Updates every 15 minutes.
*/15 * * * * /bin/sh /app/cron.sh >> /app/cron.log 2&>1
cronjob
# Updates every 15 minutes.
*/15 * * * * /bin/sh /app/cron.sh >> /app/cron.log 2&>1
As Saeed pointed out already, there is reason to believe you did not place your cron.sh script inside the container.
On top of that, cron is written such that it does not log failed invocations anywhere. You can try to turn on some debug logging (years ago I almost had to search cron's source to find the right settings). Finally, cron sends its debug output to syslog, but in your container only cron is running, so the log entries are probably lost at that stage as well.
That ultimately means you are in the dark and need to find the needle. But installing the script is a good first step.
As Saeed said in this comment
First of all, your cronjob command is wrong: you should have 2>&1 instead of 2&>1. Second, run ls -lh /app/cron.sh to see if your file was copied. Also make sure cron.sh is in the directory where your Dockerfile is.
2&>1 was the mistake that I had made.
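To see concretely why the character order matters, here is a throwaway demonstration (the log file name is borrowed from the question above):

```shell
#!/bin/sh
# With the correct 2>&1, stderr is duplicated onto stdout, so the
# error message actually lands in the log file:
ls /definitely-missing >> cron.log 2>&1 || true
grep 'No such file' cron.log
# With the typo 2&>1, a POSIX shell instead backgrounds the command
# (the & terminates it) and truncates a file literally named "1";
# nothing ever reaches the log.
```

Run under `sh`, the grep prints the captured `ls` error line; with the typo, the log stays empty, which matches the symptom of an apparently "silent" cron job.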
I had a similar issue with the crontab not being read
I was also using something like:
COPY ./cronjob /etc/cron.d/cronjob
Locally the cronjob file had permissions of 664 instead of 644. This was causing cron to log Sep 29 16:21:01 0f2c2e0ddbfd cron[389]: (*system*crontab) INSECURE MODE (group/other writable) (/etc/cron.d/crontab) (I actually had to install syslog-ng to see this happen).
Turns out cron will refuse to read cron configurations if they are writable by others. I guess it makes sense in hindsight, but I was completely oblivious to this.
Changing my cronjob file permissions to 644 fixed this for me (I did this on my local filesystem, the Dockerfile copies permissions over)
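Building on that: Docker's COPY preserves the mode of files in the build context, so the pre-build fix can be sketched like this (the `touch` is a stand-in for the real file from the snippet above):

```shell
#!/bin/sh
# cron skips /etc/cron.d entries that are group/other writable
# ("INSECURE MODE"); normalize the mode in the build context so
# COPY carries 644 into the image
touch cronjob            # stand-in for the real cronjob file
chmod 644 cronjob
stat -c '%a %n' cronjob  # prints: 644 cronjob
```

After this, the `COPY ./cronjob /etc/cron.d/cronjob` line from the Dockerfile puts a 644 file into the image with no extra `RUN chmod` layer needed.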
You also need to specify the user (root) in the /etc/cron.d entry; then it will solve the issue:
*/15 * * * * root /bin/sh /app/cron.sh >> /app/cron.log 2>&1
My goal is to have /ssc/bin/put-and-submit.sh be executable. I looked at another question, but I do not think it applies.
FROM perl:5.20
ENV PERL_MM_USE_DEFAULT 1
RUN cpan install Net::SSL inc:latest
RUN mkdir /ssc
COPY /ssc /ssc
RUN chmod a+rx /ssc/bin/*.sh
ENTRYPOINT ["/ssc/bin/put-and-submit.sh"]
stat /ssc/bin/put-and-submit.sh
File: '/ssc/bin/put-and-submit.sh'
Size: 1892 Blocks: 8 IO Block: 4096 regular file
Device: 7ah/122d Inode: 293302 Links: 1
Access: (0600/-rw-------) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2021-01-27 04:14:43.000000000 +0000
Modify: 2021-01-27 04:14:43.000000000 +0000
Change: 2021-01-27 04:52:44.700000000 +0000
Birth: -
I read the question below, and believe the circumstance there is that when another layer is added, it overwrites the previous one. In my case, I start with a Perl image, add a few CPAN libraries, copy a few files, and then ask it to change permissions.
Dockerfile "RUN chmod" not taking effect
I remember I had this problem too, and it basically only worked when I just replaced the default /usr/local/bin/docker-php-entrypoint WITHOUT issuing the ENTRYPOINT command (to use a custom entrypoint script).
So in your case you have to find out what default entrypoint file perl is using (it must also be in /usr/local/bin) and maybe replace that.
Sorry it's not the exact "right" solution, but in my case it worked out fine and well enough.
So what I'm doing for example for my PHP-FPM containers is the following (note that ENTRYPOINT is commented out):
COPY docker-entrypoint.sh /usr/local/bin/docker-php-entrypoint
RUN chmod +x /usr/local/bin/docker-php-entrypoint
# ENTRYPOINT ["/usr/local/bin/docker-php-entrypoint"]
Just in case, my sh script looks like this (only starts supervisor):
#!/bin/sh
set -e
echo "Starting supervisor service"
exec supervisord -c /etc/supervisor/supervisord.conf
I hope this gets you somewhere mate, cheers
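In the same spirit, since Docker's COPY preserves the source file's mode, another thing worth trying for the question above is making the scripts executable in the build context before building. The paths mirror the question; the mkdir/touch lines are stand-ins for the real tree:

```shell
#!/bin/sh
# Make the scripts executable on the host; COPY keeps this mode,
# so the RUN chmod layer becomes unnecessary
mkdir -p ssc/bin && touch ssc/bin/put-and-submit.sh  # stand-ins for the real tree
chmod 755 ssc/bin/*.sh
stat -c '%a %n' ssc/bin/put-and-submit.sh            # prints: 755 ssc/bin/put-and-submit.sh
# then: docker build .
```

If `stat` inside the container still shows 0600 after this, the build is likely reusing a cached layer; `docker build --no-cache` would rule that out.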
I am trying to run a Docker container to automatically set up a Sphinx documentation site, but for some reason I get the following error when I try to build:
Step 9/11 : RUN make html
---> Running in abd76075d0a0
make: *** No rule to make target 'html'. Stop.
When I run the container and open a console in it, I see that sphinx-quickstart does not seem to have been run, since there are no files present at all in /sphinx. Not sure what I have done wrong. The Dockerfile is below.
1 # Run this with
2 # docker build .
3 # docker run -dit -p 8000:8000 <image_id>
4 FROM ubuntu:latest
5
6 WORKDIR /sphinx
7 VOLUME /sphinx
8
9 RUN apt-get update -y
10 RUN apt-get install python3 python3-pip vim git -y
11
12 RUN pip3 install -U pip
13 RUN pip3 install sphinx
14
15 RUN sphinx-quickstart . --quiet --project devops --author 'Timothy Pulliam' -v '0.1' --language 'en' --makefile
16 RUN make html
17
18 EXPOSE 8000/tcp
19
20
21 CMD ["python3", "-m", "http.server"]
EDIT:
Using LinPy's suggestion I was able to get it to work. It is still strange that it would not work the other way.
The Dockerfile VOLUME directive mostly only has confusing side effects. Unless you’re 100% clear on what it does and why you want it, you should just delete it.
In particular, one of those confusing side effects is that RUN commands that write into the volume directory just get lost. So when on line 7 you say VOLUME /sphinx, the RUN sphinx-quickstart on line 15 tries to write its output into the current directory, which is a declared volume directory, so the output content isn’t persisted into the image.
(Storing your code in a volume isn’t generally appropriate; build it into the image so it’s reusable later. You can use docker run -v to bind-mount content over any container-side directory regardless of whether or not it’s declared as a VOLUME.)
so you need to run both steps in one line:
RUN sphinx-quickstart . --quiet --project devops --author 'Timothy Pulliam' -v '0.1' --language 'en' --makefile && make html
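Putting the two points together, here is an untested sketch of the corrected Dockerfile, with the VOLUME line dropped and the quickstart and build merged into one RUN:

```dockerfile
# docker build .  /  docker run -dit -p 8000:8000 <image_id>
FROM ubuntu:latest
WORKDIR /sphinx
# no VOLUME /sphinx -- it made the RUN output vanish from the image
RUN apt-get update -y && apt-get install -y python3 python3-pip vim git
RUN pip3 install -U pip && pip3 install sphinx
RUN sphinx-quickstart . --quiet --project devops --author 'Timothy Pulliam' \
      -v '0.1' --language 'en' --makefile \
 && make html
EXPOSE 8000/tcp
CMD ["python3", "-m", "http.server"]
```

With VOLUME gone, merging the two commands is no longer strictly required, but it is still a reasonable way to keep the generated files and the build in one layer.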
I think you can see it in the logs: the intermediate container is removed, and that is why the html rule is not there anymore.
You've already resolved the issue with LinPy's helpful comment, but just to add a bit more: a quick Google search with your error message comes up with this StackOverflow post...
gcc makefile error: "No rule to make target ..."
Perhaps you were accidentally invoking a different command (in this case a GCC command) rather than the .bat file provided by Sphinx.
Hopefully this might shed a bit more light on WHY it was happening. I assume the Ubuntu parent image you're using has GCC pre-installed.
My ENTRYPOINT script doesn't execute and throws standard_init_linux.go:175: exec user process caused "no such file or directory". Why so?
Doesn't Work
$ docker build -t gilani/trollo . && docker run gilani/trollo
Sending build context to Docker daemon 126 kB
Step 1 : FROM vault:latest
---> 1f127f53f8b5
Step 2 : MAINTAINER Amin Shah Gilani <gilani@payload.tech>
---> Using cache
---> 86b885ca1c81
Step 3 : COPY vaultConfig.json /vault/config
---> Using cache
---> 1a2be2fa3acd
Step 4 : COPY ./docker-entrypoint.sh /
---> Using cache
---> 0eb7c1c992f1
Step 5 : RUN chmod +x /docker-entrypoint.sh
---> Running in 251395c4790f
---> 46aa0fbc9637
Removing intermediate container 251395c4790f
Step 6 : ENTRYPOINT /docker-entrypoint.sh
---> Running in 7434f052178f
---> eca040859bfe
Removing intermediate container 7434f052178f
Successfully built eca040859bfe
standard_init_linux.go:175: exec user process caused "no such file or directory"
Dockerfile:
FROM vault:latest
MAINTAINER Amin Shah Gilani <gilani@payload.tech>
COPY vaultConfig.json /vault/config
COPY ./docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
docker-entrypoint.sh:
#!/bin/bash
echo 'Hello World!'
Works
$ docker build -t gilani/trollo . && docker run gilani/trollo
Sending build context to Docker daemon 126 kB
Step 1 : FROM vault:latest
---> 1f127f53f8b5
Step 2 : MAINTAINER Amin Shah Gilani <gilani@payload.tech>
---> Using cache
---> 86b885ca1c81
Step 3 : COPY vaultConfig.json /vault/config
---> Using cache
---> 1a2be2fa3acd
Step 4 : ENTRYPOINT echo 'hello world'
---> Using cache
---> ef5792a1f252
Successfully built ef5792a1f252
'hello world'
Dockerfile:
FROM vault:latest
MAINTAINER Amin Shah Gilani <gilani@payload.tech>
COPY vaultConfig.json /vault/config
ENTRYPOINT ["echo", "'hello world'"]
I was tearing my hair out with an issue very similar to this. In my case /bin/bash DID exist. But actually the problem was Windows line endings.
In my case the git repository had an entry point script with Unix line endings (\n). But when the repository was checked out on a windows machine, git decided to try and be clever and replace the line endings in the files with windows line endings (\r\n).
This meant that the shebang didn't work because instead of looking for /bin/bash, it was looking for /bin/bash\r.
The solution for me was to disable git's automatic conversion:
git config --global core.autocrlf input
Then check out the repository again and rebuild.
Some more helpful info here:
How to change line-ending settings
and here
http://willi.am/blog/2016/08/11/docker-for-windows-dealing-with-windows-line-endings/
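Here is a quick local way to confirm and fix the CRLF problem without rebuilding anything; the file name is just an example, and the first printf fabricates a broken script for the demo:

```shell
#!/bin/sh
# Create a script with Windows line endings to demonstrate the check
printf '#!/bin/bash\r\necho hello\r\n' > docker-entrypoint.sh
# Detect the carriage returns that break the shebang lookup
if grep -q "$(printf '\r')" docker-entrypoint.sh; then
  echo "CRLF line endings detected"
fi
# Strip them (tr works even where dos2unix is not installed)
tr -d '\r' < docker-entrypoint.sh > fixed.sh && mv fixed.sh docker-entrypoint.sh
grep -q "$(printf '\r')" docker-entrypoint.sh || echo "clean"
```

After stripping, the shebang is a clean `#!/bin/bash` again, so the kernel can actually find the interpreter.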
the vault:latest image does not contain /bin/bash, which you try to call with your shebang #!/bin/bash. You should either change that to #!/bin/sh or remove the shebang from your script entirely.
Another possibility:
Check that the file is not saved with Windows line endings (CRLF). If it is, save it with Unix line endings (LF) and it will be found.
Without seeing your image, my initial idea is that you don't have /bin/bash in your image. Changing the first line of your docker-entrypoint.sh to:
#!/bin/sh
will likely resolve it.
I struggled for hours because I hadn't seen it explained anywhere that you need to copy the file to a location where the container can access it, preferably somewhere global, like so:
COPY docker-entrypoint.sh /usr/local/bin/
(I had thought it should just be automatically accessible since it's part of the Dockerfile context.)
Gosh I struggled for 2–3 hours!!
Thanks to @Ryan Allen
In my case it was a CRLF problem. I am working on Puppet manifests in Atom for a Jenkins setup.
If you are using Atom or any other IDE on Windows, make sure that when you take your file (especially a .sh file) to Unix, you convert it to Unix format. It worked like magic once converted.
Here is what I added in my puppet file:
exec {'dos2unix':
path => ['/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/opt/puppetlabs/bin'],
command => 'dos2unix /dockerwork/puppet/jenkins/files/*',
subscribe => File['/dockerwork/puppet/jenkins/files/init.sh'],
}
I came here with a similar issue while troubleshooting my attempt to build a Dockerfile "entry point" bash script (entrypoint.sh, to be executed within the .NET Core SDK 2.2 image). The script started with the line #!/bin/bash, and during docker-compose up (after a successful docker-compose build), the logging reported web_1 | ./entrypoint.sh: line 1: #!/bin/bash: No such file or directory.
Checking the file with VS Code, I noticed it was reporting the following encoding:
UTF-8 with BOM
Clicking on this gave me the option to Save with encoding. I chose to save as UTF-8 (utf8), which resolved the issue.
NOTE: I also found this SO article What's the difference between UTF-8 and UTF-8 without BOM?
My case was that the Alpine image I was using didn't come with bash at all...
RUN apk add bash did the trick, obviously.
Another reason this error comes up is if your Windows user password changes.
In my case my entrypoint.sh line endings were LF, but I was still getting the error. Our admin enforces a mandatory password reset about every month or so, and whenever this happens I would run into the error. If this is your situation, you may need to reset your credentials in Docker settings under "Shared Drives".
Unselect the drive and apply. Then reselect the drive and apply. It will prompt you for your password.
This problem is to do with line endings, and I solved it with the solution below.
Convert the DOS file to Unix format; this removes any weird line endings.
dos2unix is available in Alpine as well as other Linux distributions.
I used it like so: RUN apk add dos2unix && dos2unix /entry.sh
Sorry for hijacking -- this is not a response to the question, but a description of a different problem, and its solution, that has the same symptoms.
I had
ENTRYPOINT ["/usr/bin/curl", "-X", "POST", "http://myservice:8000", \
"-H", "Content-Type: application/json", \
"-d", '{"id": "test"}' \
]
I was getting the error:
/bin/bash: [/usr/bin/curl,: No such file or directory
It turns out it's the single quotes that messed it up. Docker documentation has a note:
The exec form is parsed as a JSON array, which means that you must use double-quotes (“) around words not single-quotes (‘).
Solution -- use double quotes instead of single and escape nested double quotes:
ENTRYPOINT ["/usr/bin/curl", "-X", "POST", "http://myservice:8000", \
"-H", "Content-Type: application/json", \
"-d", "{\"id\": \"test\"}" \
]
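Since the exec form is parsed as a JSON array, you can lint the array locally before putting it in a Dockerfile. A throwaway check, assuming python3 is available:

```shell
#!/bin/sh
# Valid: double quotes only, inner quotes escaped -- json.tool accepts it
printf '%s' '["curl", "-d", "{\"id\": \"test\"}"]' | python3 -m json.tool
# Invalid: single-quoted element -- json.tool rejects it, just as Docker
# silently falls back to the shell form and produces the confusing error above
printf '%s' "[\"curl\", '{}']" | python3 -m json.tool || echo "not valid JSON"
```

Any JSON validator works here; the point is that the ENTRYPOINT line must survive a JSON parse, or Docker treats it as a plain shell command string.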
None of the solutions worked for me, but I was able to solve the error by setting WORKDIR to the directory containing the entrypoint script. No amount of cd'ing would work, but somehow WORKDIR solved it.
TL;DR
Running COPY . /app on top of an image with slightly outdated source code creates a new layer as large as the whole source tree, even when there are only a few bytes' worth of changes.
Is there a way to add only changed files to this docker image as a new layer - without resorting to docker commit?
Long version:
When deploying our application to production, we need to add the source code to the image. A very simple Dockerfile is used for this:
FROM neam/dna-project-base-debian-php:0.6.0
COPY . /app
Since the source code is huge (1.2 GB), this makes for quite a hefty push upon each deploy:
$ docker build -f .stack.php.Dockerfile -t project/project-web-src-php:git-commit-17c279b .
Sending build context to Docker daemon 1.254 GB
Step 0 : FROM neam/dna-project-base-debian-php:0.6.0
---> 299c10c416fc
Step 1 : COPY . /app
---> 78a30802804a
Removing intermediate container 13b49c323bb6
Successfully built 78a30802804a
$ docker tag -f project/project-web-src-php:git-commit-17c279b tutum.co/project/project-web-src-php:git-commit-17c279b
$ docker login --email=tutum-project@project.com --username=project --password=******** https://tutum.co/v1
WARNING: login credentials saved in /home/dokku/.docker/config.json
Login Succeeded
$ docker push tutum.co/project/project-web-src-php:git-commit-17c279b
The push refers to a repository [tutum.co/project/project-web-src-php] (len: 1)
Sending image list
Pushing repository tutum.co/project/project-web-src-php (1 tags)
Image a604b236bcde already pushed, skipping
Image 1565e86129b8 already pushed, skipping
...
Image 71156b357f2f already pushed, skipping
Image 299c10c416fc already pushed, skipping
78a30802804a: Pushing [=========> ] 234.2 MB/1.254 GB
Upon the next deploy, we only want to add the changed files to the image, but behold: running COPY . /app on top of the previously built image actually requires us to push 1.2 GB worth of source code AGAIN, even when we change only a few bytes of it:
New Dockerfile (.stack.php.git-commit-17c279b.Dockerfile):
FROM project/project-web-src-php:git-commit-17c279b
COPY . /app
After changing a few files (adding some text and code), then building and pushing:
$ docker build -f .stack.php.git-commit-17c279b.Dockerfile -t project/project-web-src-php:git-commit-17c279b-with-a-few-changes .
Sending build context to Docker daemon 1.225 GB
Step 0 : FROM project/project-web-src-php:git-commit-17c279b
---> 4dc643a45de3
Step 1 : COPY . /app
---> ecc7adc194c4
Removing intermediate container cb3e87c6cb7a
Successfully built ecc7adc194c4
$ docker tag -f project/project-web-src-php:git-commit-17c279b-with-a-few-changes tutum.co/project/project-web-src-php:git-commit-17c279b-with-a-few-changes
$ docker push tutum.co/project/project-web-src-php:git-commit-17c279b-with-a-few-changes
The push refers to a repository [tutum.co/project/project-web-src-php] (len: 1)
Sending image list
Pushing repository tutum.co/project/project-web-src-php (1 tags)
Image 1565e86129b8 already pushed, skipping
Image a604b236bcde already pushed, skipping
...
Image fe64bff23cf8 already pushed, skipping
Image 71156b357f2f already pushed, skipping
ecc7adc194c4: Pushing [==> ] 68.21 MB/1.225 GB
There is a workaround to achieve small layers, described in Updating docker images with small changes using commits, which involves launching an rsync process within the image and then using docker commit to save the new contents as a new layer. However (as mentioned in that thread), this is unorthodox, since the image is no longer built from a Dockerfile, and we would prefer an orthodox solution that does not rely on docker commit.
Is there a way to add only changed files to this docker image as a new layer - without resorting to docker commit?
Docker version 1.8.3
Actually, the solution IS to use COPY . /app as the OP is doing; there is, however, an open bug causing this not to work as expected on most systems.
The only currently feasible workaround seems to be to use rsync to analyze the differences between the old and new images prior to pushing the new one, then use the changelog output to generate a tar file containing the relevant changes, which is subsequently COPYed into a new image layer.
This way, the layer size becomes a few bytes or kilobytes for smaller changes, instead of 1.2 GB every time.
I put together documentation and scripts to help out with this over at https://github.com/neam/docker-diff-based-layers.
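The core trick can be sketched locally without Docker: compare the old and new trees, collect only the files that are new or different, and tar just those for a small layer. The directory names below are made up for the demo:

```shell
#!/bin/sh
# Simulate an "old image" tree and a "new source" tree
mkdir -p old/app new/app
echo v1 > old/app/a.txt
cp -r old/app/. new/app/
echo v2 > new/app/a.txt        # changed file
echo extra > new/app/b.txt     # added file
# Collect files in new/ that are missing from or different in old/
: > changed.list
( cd new/app && find . -type f ) | while read -r f; do
  cmp -s "new/app/$f" "old/app/$f" || echo "$f" >> changed.list
done
# Pack only the changes; ADD-ing this tar in a Dockerfile auto-extracts
# it, so the new layer contains just these files
( cd new/app && tar -cf ../../changes.tar -T ../../changed.list )
tar -tf changes.tar
```

The real scripts in the linked repository also handle deletions (hence the files-to-remove step visible in the docker history output below), which this sketch leaves out.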
The end results are shown below:
Verify that basing the project images on the revision 1 image tag contents does not lead to the desired outcome.
Verify that subsequent COPY . /app commands re-add all files in every layer instead of only the files that have changed:
docker history sample-project:revision-2
Output:
IMAGE CREATED CREATED BY SIZE COMMENT
4a3115eaf267 3 seconds ago /bin/sh -c #(nop) COPY dir:61d102421e6692b677 16.78 MB
d4b30af167f4 25 seconds ago /bin/sh -c #(nop) COPY dir:68b8f374d8731b8ad8 16.78 MB
c898fe1daa44 2 minutes ago /bin/sh -c apt-get update && apt-get install 10.77 MB
39a8a358844a 4 months ago /bin/sh -c #(nop) CMD ["/bin/bash"] 0 B
b1dacad9c5c9 4 months ago /bin/sh -c #(nop) ADD file:5afd8eec1dc1e7666d 125.1 MB
Even though we added/changed only a few bytes, all files are re-added and 16.78 MB is added to the total image size.
Also, the file(s) that we removed did not get removed.
Create an image with an optimized layer
export RESTRICT_DIFF_TO_PATH=/app
export OLD_IMAGE=sample-project:revision-1
export NEW_IMAGE=sample-project:revision-2
docker-compose -f rsync-image-diff.docker-compose.yml up
docker-compose -f shell.docker-compose.yml -f process-image-diff.docker-compose.yml up
cd output; docker build -t sample-project:revision-2-processed .; cd ..
Verify that the processed new image has smaller sized layers with the changes:
docker history sample-project:revision-2-processed
Output:
IMAGE CREATED CREATED BY SIZE COMMENT
1920e750d362 24 seconds ago /bin/sh -c if [ -s /.files-to-remove.list ]; 0 B
1267bf926729 2 minutes ago /bin/sh -c #(nop) ADD file:5021c627243e841a45 19 B
d04a2181b62a 2 minutes ago /bin/sh -c #(nop) ADD file:14780990c926e673f2 264 B
d4b30af167f4 7 minutes ago /bin/sh -c #(nop) COPY dir:68b8f374d8731b8ad8 16.78 MB
c898fe1daa44 9 minutes ago /bin/sh -c apt-get update && apt-get install 10.77 MB
39a8a358844a 4 months ago /bin/sh -c #(nop) CMD ["/bin/bash"] 0 B
b1dacad9c5c9 4 months ago /bin/sh -c #(nop) ADD file:5afd8eec1dc1e7666d 125.1 MB
Verify that the processed new image contains the same contents as the original:
export RESTRICT_DIFF_TO_PATH=/app
export OLD_IMAGE=sample-project:revision-2
export NEW_IMAGE=sample-project:revision-2-processed
docker-compose -f rsync-image-diff.docker-compose.yml up
The output should indicate that there are no differences between the images/tags. Thus, the sample-project:revision-2-processed tag can now be pushed and deployed, leading to the same end result but without having to push an unnecessary 16.78 MB over the wire, making for faster deploy cycles.
Docker caching works per layer / instruction in the Dockerfile. In this case the files used in that layer (everything in the build context (.)) are modified, so the layer needs to be rebuilt.
If there are specific parts of the code that don't change often, you could consider adding those in a separate layer, or even moving them to a "base image":
FROM mybaseimage
COPY ./directories-that-dont-change-often /somewhere
COPY ./directories-that-change-often /somewhere
It may take some planning or restructuring for this to work, depending on your project, but may be worth doing.
My solution (idea from https://github.com/neam/docker-diff-based-layers!):
docker rm -f uniquename 2> /dev/null
docker run --name uniquename -v ~/repo/mycode:/src ${REPO}/${IMAGE}:${BASE} rsync -ar --exclude-from '/src/.dockerignore' --delete /src/ /app/
docker commit uniquename ${REPO}/${IMAGE}:${NEW_TAG}