Upload a 2.5 GB file

In Google Cloud I have a storage bucket into which I copy a 2.5 GB file, but the upload takes 15 hours.
What configuration should the bucket have so that the copy is fast?
To copy I use: gsutil cp filex gs://namesegment

I created a 3 GB file:
dd if=/dev/zero of=./gentoo_root.img bs=4k iflag=fullblock,count_bytes count=3G
For uploading to GCS you have some options:
You can use gsutil with the -m flag to enable a multi-threaded (parallel) copy:
gsutil -m cp gentoo_root.img gs://my-bucket/file.img
You can use gsutil with parallel composite uploads:
gsutil -o GSUtil:parallel_composite_upload_threshold=150M cp gentoo_root.img gs://my-bucket/file.img
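You can also combine both of the above; a sketch reusing the same file and bucket names as the examples here:
gsutil -m -o GSUtil:parallel_composite_upload_threshold=150M cp gentoo_root.img gs://my-bucket/file.img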
Or you can use the Storage Transfer Service.
You can find more information in Strategies for transferring big data sets.

Related

Can't load docker from saved tar image

I committed my container as an image, then used "docker save" to save the image as a tar. Now I'm trying to load the tar on a GCE CentOS 7 instance. I packaged it locally on my Ubuntu machine.
I've tried: docker load < image.tar and sudo docker load < image.tar
I also tried chmod 777 image.tar to see if the issue was permissions related.
Each time I try to load the image I get a variation of this error (the xxxx bit is a different number every time):
open /var/lib/docker/tmp/docker-import-xxxxxxxxx/repositories: no such file or directory
I think it might have something to do with permissions, because when I try to cd into /var/lib/docker/ I run into permissions issues.
Are there any other things I can try? Or is it likely to be a corrupted image?
There was a simple answer to this problem
I ran md5 checksums on the images before and after I moved them across systems and they were different. I re-transferred and all is working now.
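A sketch of that check, assuming the archive is named image.tar on both machines (run it on each side and compare the output):
md5sum image.tar
If the hashes differ, the transfer corrupted the file and it should be copied again.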
For me the problem was that I passed a .tgz as input. Once I extracted the tarball, there was a .tar file inside; using that file as input was successful.
The sequence of the build/save/load commands is important; try this sequence:
### 1/4: Build it:
# docker build -f MYDOCKERFILE -t MYCNTNR .
### 2/4: Save it:
# docker save -o ./mycontainer.tar MYCNTNR
### 3/4: Copy it to the target machine:
# rsync/scp/... mycontainer.tar someone@target:.
### 4/4: On the target, load it:
# docker load -i mycontainer.tar
<snip>
Loaded image: MYCNTNR
I had the same issue and the following command fixed it for me:
cat <file_name>.tar | docker import - <image_name>:<image_version/tag>
Ref: https://webkul.com/blog/error-open-var-lib-docker-tmp-docker-import/

How to get files generated by docker run to host

I have run docker run to generate a file
sudo docker run -i --mount type=bind,src=/home/mathed/Simulation/custom_desman/1/Strains/Simulation2/Assembly,target=/home/mathed/Simulation/custom_desman/1/Strains/Simulation2/Assembly 990210oliver/mycc.docker:v1 MyCC.py /home/mathed/Simulation/custom_desman/1/Strains/Simulation2/Assembly/final_contigs_c10K.fa
This is the message I've got after executing.
20181029_0753
4mer
1_rename.py /home/mathed/Simulation/custom_desman/1/Strains/Simulation2/Assembly/final_contigs_c10K.fa 1000
Seqs >= 1000 : 32551
Minimum contig lengh for first stage clustering: 1236
run Prodigal.
/opt/prodigal.linux -i My.fa -a gene.aa -d gene.nuc -f gbk -o output -s potential_genes.txt
run fetchMG.
run UCLUST.
Get Feature.
2_GetFeatures_4mer.py for fisrt stage clustering
2_GetFeatures_4mer.py for second stage clustering
3_GetMatrix.py 1236 for fisrt stage clustering
22896 contigs entering first stage clustering
Clustering...
1_bhsne.py 20
2_ap.py /opt/ap 500 0
Cluster Correction.
to Split and Merge.
1_ClusterCorrection_Split.py 40 2
2_ClusterCorrection_Merge.py 40
Get contig by cluster.
20181029_0811
I now want to get the files generated by MyCC.py onto the host.
After reading Copying files from Docker container to host, I tried,
sudo docker cp 642ef90103be:/opt /home/mathed/data
But I got an error message
mkdir /home/mathed/data/opt: permission denied
Is there a way to get the files generated to a directory /home/mathed/data?
Thank you.
I assume your dest path does not exist.
The docker cp documentation states that in that case:
SRC_PATH specifies a directory
DEST_PATH does not exist
DEST_PATH is created as a directory and the contents of the source directory are copied into this directory
Thus it is trying to create a directory for DEST_PATH... and docker must have the rights to do so.
Depending on the owner of the closest existing parent directory of DEST_PATH, you may have to either:
create the directory first so that it will not be created by docker, and give it the correct rights (it looks like docker has no rights to do so there) using sudo chown {user}:{group} {folder} and chmod +x {folder}, as sketched below,
change the rights on the existing parent directory (chown and chmod again), or
switch to a path where docker is allowed to write.
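A minimal sketch of the first option, reusing the container ID and destination path from the question (adjust to your own values):
# create the destination yourself and make sure your user owns it
mkdir -p /home/mathed/data
sudo chown $(id -u):$(id -g) /home/mathed/data
# docker cp now only has to write into an existing, writable directory
sudo docker cp 642ef90103be:/opt /home/mathed/data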

How to verify if the content of two Docker images is exactly the same?

How can we determine that two Docker images have exactly the same file system structure, and that the content of corresponding files is the same, irrespective of file timestamps?
I tried the image IDs but they differ when building from the same Dockerfile and a clean local repository. I did this test by building one image, cleaning the local repository, then touching one of the files to change its modification date, then building the second image, and their image IDs do not match. I used Docker 17.06 (the latest version I believe).
If you want to compare the content of images you can use the docker inspect <imageName> command and look at the RootFS section:
docker inspect redis
"RootFS": {
"Type": "layers",
"Layers": [
"sha256:eda7136a91b7b4ba57aee64509b42bda59e630afcb2b63482d1b3341bf6e2bbb",
"sha256:c4c228cb4e20c84a0e268dda4ba36eea3c3b1e34c239126b6ee63de430720635",
"sha256:e7ec07c2297f9507eeaccc02b0148dae0a3a473adec4ab8ec1cbaacde62928d9",
"sha256:38e87cc81b6bed0c57f650d88ed8939aa71140b289a183ae158f1fa8e0de3ca8",
"sha256:d0f537e75fa6bdad0df5f844c7854dc8f6631ff292eb53dc41e897bc453c3f11",
"sha256:28caa9731d5da4265bad76fc67e6be12dfb2f5598c95a0c0d284a9a2443932bc"
]
}
If all layers are identical then the images contain identical content.
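For a quick side-by-side check you can print only the layer digests; a sketch with two placeholder image names (image1, image2):
docker inspect --format '{{json .RootFS.Layers}}' image1
docker inspect --format '{{json .RootFS.Layers}}' image2
If the two digest lists are identical, the images are built from the same layers.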
After some research I came up with a solution which is fast and clean per my tests.
The overall solution is this:
Create a container for your image via docker create ...
Export its entire file system to a tar archive via docker export ...
Pipe the archive directory names, symlink names, symlink targets, file names, and file contents to a hash function (e.g., MD5)
Compare the hashes of different images to verify if their contents are equal or not
And that's it.
Technically, this can be done as follows:
1) Create file md5docker, and give it execution rights, e.g., chmod +x md5docker:
#!/bin/sh
dir=$(dirname "$0")
docker create $1 | { read cid; docker export $cid | $dir/tarcat | md5; docker rm $cid > /dev/null; }
2) Create file tarcat, and give it execution rights, e.g., chmod +x tarcat:
#!/usr/bin/env python3
# coding=utf-8
if __name__ == '__main__':
    import sys
    import tarfile
    with tarfile.open(fileobj=sys.stdin.buffer, mode="r|*") as tar:
        for tarinfo in tar:
            if tarinfo.isfile():
                print(tarinfo.name, flush=True)
                with tar.extractfile(tarinfo) as file:
                    sys.stdout.buffer.write(file.read())
            elif tarinfo.isdir():
                print(tarinfo.name, flush=True)
            elif tarinfo.issym() or tarinfo.islnk():
                print(tarinfo.name, flush=True)
                print(tarinfo.linkname, flush=True)
            else:
                print("\33[0;31mIGNORING:\33[0m ", tarinfo.name, file=sys.stderr)
3) Now invoke ./md5docker <image>, where <image> is your image name or id, to compute an MD5 hash of the entire file system of your image.
To verify if two images have the same contents just check that their hashes are equal as computed in step 3).
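A usage sketch, with both scripts in the current directory and two placeholder image names (imageA and imageB):
./md5docker imageA
./md5docker imageB
If the two printed hashes match, the images have the same content per the criteria above.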
Note that this solution only considers directory structure, regular file contents, and links (symbolic and hard) as content. If you need more, just change the tarcat script by adding more elif clauses testing for the content you wish to include (see Python's tarfile module, and look for the TarInfo.isXXX() methods corresponding to the content you need).
The only limitation I see in this solution is its dependency on Python (I am using Python 3, but it should be very easy to adapt to Python 2). A better solution without any dependency, and probably faster (hey, this is already very fast), is to write the tarcat script in a language supporting static linking so that a standalone executable file would be enough (i.e., one not requiring anything but the OS itself). I leave this as a future exercise in C, Rust, OCaml, Haskell, you choose.
Note, if MD5 does not suit your needs, just replace md5 inside the first script with your hash utility.
Hope this helps anyone reading.
Amazes me that docker doesn't do this sort of thing out of the box. Here's a variant on @mljrg's technique:
#!/bin/sh
docker create $1 | {
read cid
docker export $cid | tar Oxv 2>&1 | shasum -a 256
docker rm $cid > /dev/null
}
It's shorter and doesn't need a Python dependency or a second script at all. I'm sure there are downsides, but it seems to work for me with the few tests I've done.
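A usage sketch, assuming the script above is saved under the hypothetical name imagehash.sh:
chmod +x imagehash.sh
./imagehash.sh imageA
./imagehash.sh imageB
Matching SHA-256 sums mean the exported filesystems are identical.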
There doesn't seem to be a standard way for doing this. The best way that I can think of is using the Docker multistage build feature.
For example, here I am comparing the alpine and debian images. In your case, set the image names to the ones you want to compare.
I basically copy all the files from each image into a git repository and commit after each copy.
FROM alpine as image1
FROM debian as image2
FROM ubuntu
RUN apt-get update && apt-get install -y git
RUN git config --global user.email "you@example.com" &&\
git config --global user.name "Your Name"
RUN mkdir images
WORKDIR images
RUN git init
COPY --from=image1 / .
RUN git add . && git commit -m "image1"
COPY --from=image2 / .
RUN git add . && git commit -m "image2"
CMD tail > /dev/null
This will give you an image with a git repository that records the differences between the two images.
docker build -t compare .
docker run -it compare bash
Now if you do a git log you can see the logs and you can compare the two commits using git diff <commit1> <commit2>
Note: If the image building fails at the second commit, this means that the images are identical, since a git commit will fail if there are no changes to commit.
If we rebuild the Dockerfile it is almost certainly going to produce a new hash.
The only way to create an image with the same hash is to use docker save and docker load. See https://docs.docker.com/engine/reference/commandline/save/
We could then use Bukharov Sergey's answer (i.e. docker inspect) to inspect the layers, looking at the section with key 'RootFS'.

Copy a directory from container to host

I have saved a folder called "ec2-user" in an image. And I can easily extract one of the files...
docker run -it shantanuo/bkup sh
docker exec -it e7611fc860a6 sh -c " cat /tmp/ec2-user/t1.txt" > t1.txt
This works as expected. But I do not know how to copy the entire directory "ec2-user", which is around 8 GB.
In other words, I am using docker as a backup device. This is different from application deployment, but I would like to know if this is OK.
Looking at the name of your file and its size (8GB) I guess you're doing a database backup?
Since you want to copy the entire directory and it is relatively big, why not consider compressing the backup directory to a single file and then just do
docker cp my_container:/tmp/bkup/abc.xz .
To compress your backup you can use the xz utility.
Example (run inside the container; see the full flow sketched below):
tar -C /tmp -cf - ec2-user | xz -8 > /tmp/bkup/abc.xz
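A sketch of the end-to-end flow from the host, assuming a running container named my_container (hypothetical name) and the paths used above:
# create the compressed archive inside the container
docker exec my_container sh -c 'mkdir -p /tmp/bkup && tar -C /tmp -cf - ec2-user | xz -8 > /tmp/bkup/abc.xz'
# copy the single file to the host
docker cp my_container:/tmp/bkup/abc.xz .
# unpack it on the host
xz -dc abc.xz | tar -xf -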

Docker how to ADD a file without committing it to an image?

I have a ~300 MB zipped local file that I ADD to a docker image. The next step then extracts the archive.
The problem is that the ADD statement results in a commit that creates a new file system layer, making the image ~300 MB larger than it needs to be.
ADD /files/apache-stratos.zip /opt/apache-stratos.zip
RUN unzip -q apache-stratos.zip && \
rm apache-stratos.zip && \
mv apache-stratos-* apache-stratos
Question: Is there a work-around to ADD local files without causing a commit?
One option is to run a simple web server (e.g. python -m SimpleHTTPServer) before starting the docker build, and then use wget to retrieve the file, but that seems a bit messy:
RUN wget http://localhost:8000/apache-stratos.zip && \
unzip -q apache-stratos.zip && \
rm apache-stratos.zip && \
mv apache-stratos-* apache-stratos
Another option is to extract the zipped file at container start up instead of build time, but I would prefer to keep the start up as quick as possible.
According to the documentation, if you pass an archive file from the local filesystem (not a URL) to ADD in the Dockerfile (with a destination path, not a path + filename), it will uncompress the file into the directory given.
If <src> is a local tar archive in a recognized compression format
(identity, gzip, bzip2 or xz) then it is unpacked as a directory.
Resources from remote URLs are not decompressed. When a directory is
copied or unpacked, it has the same behavior as tar -x: the result is
the union of:
1) Whatever existed at the destination path and 2) The contents of the
source tree, with conflicts resolved in favor of "2." on a file-by-file basis.
try:
ADD /files/apache-stratos.zip /opt/
and see if the files are there, without further decompression.
With Docker 17.05+ you can use a multi-stage build to avoid creating extra layers.
FROM ... as stage1
# No need to clean up here, these layers will be discarded
ADD /files/apache-stratos.zip /opt/apache-stratos.zip
RUN unzip -q apache-stratos.zip && \
    mv apache-stratos-* apache-stratos
FROM ...
COPY --from=stage1 apache-stratos/ apache-stratos/
You can use docker-squash to squash newly created layers. That should reduce the image size significantly.
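A usage sketch, assuming the goldmann/docker-squash tool installed from PyPI and a hypothetical image name my-image:
pip install docker-squash
docker-squash -t my-image:squashed my-image:latest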
Unfortunately the mentioned workarounds (RUN curl ... && unzip ... && rm ..., unpack on container start) are the only options at the moment (docker 1.11).
There are currently 3 options I can think of.
Option 1: you can switch to a tar or compressed tar format from the zip file and then allow ADD to decompress the file for you.
ADD /files/apache-stratos.tgz /opt/
The only downside is that any other change, like a directory rename, will trigger the copy-on-write again, so you need to make sure your tar file has the contents in the final directory structure.
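A sketch of that repackaging step on the host before the build, reusing the file name from the question (the stratos-tmp scratch directory is just an example name):
# extract the zip and rename to the final directory name, so no RUN mv is needed later
unzip -q files/apache-stratos.zip -d stratos-tmp
mv stratos-tmp/apache-stratos-* stratos-tmp/apache-stratos
# repackage as a gzipped tarball that ADD will auto-extract into /opt/
tar -czf files/apache-stratos.tgz -C stratos-tmp apache-stratos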
Option 2: Use a multi-stage build. Extract the file in an early stage, perform any changes, and then copy the resulting directory to your final stage. This is a good option for any build engines that cannot use BuildKit. augurar's answer covers this so I won't repeat the same Dockerfile he already has.
Option 3: BuildKit (available in 18.09 and newer) allows you to mount files from other locations, including your build context, within a RUN command. This currently requires the experimental syntax. The resulting Dockerfile looks like:
# syntax=docker/dockerfile:experimental
FROM ...
...
RUN --mount=type=bind,source=/files/apache-stratos.zip,target=/opt/apache-stratos.zip \
unzip -q apache-stratos.zip && \
rm apache-stratos.zip && \
mv apache-stratos-* apache-stratos
Then to build that, you export a variable before running your build (you could also export it in your .bashrc or equivalent):
DOCKER_BUILDKIT=1 docker build -t your_image .
More details on BuildKit's experimental features are available here: https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md
