Google Cloud Platform - Viewing downloaded files after wget - machine-learning

I am completing this tutorial and am at the part where you download the code for the tutorial. The request we send to GitHub is:
wget https://github.com/GoogleCloudPlatform/cloudml-samples/archive/master.zip
I understand that this downloads the archive to GCP, and I can see the files in the Cloud Shell, but is there a way to see the files through the Google Cloud Console GUI? I would like to browse the files I have downloaded to understand their structure better.

Clicking the pencil icon in the top right corner will open the Cloud Shell code editor.
Quoting the documentation:
"The built-in code editor is based on Orion. You can use the code
editor to browse file directories as well as view and edit files, with
continued access to the Cloud Shell. The code editor is available by
default with every Cloud Shell instance."
You can find more info here: https://cloud.google.com/shell/docs/features#code_editor
If you prefer to use the command line to view files, you can install the tree Unix CLI command and run it in Cloud Shell to list the contents of directories in a tree-like format.
install tree => $ sudo apt-get install tree
run it => $ tree ./ -h --filelimit 4
-h will show human-readable sizes of files/directories,
and --filelimit N tells tree not to descend into directories that contain more than N entries.
Use $ man tree to see the available parameters for the command, or check the online man page here: https://linux.die.net/man/1/tree
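For example, to unpack the archive downloaded above and browse it with tree (a sketch; cloudml-samples-master is the directory name a GitHub master.zip archive typically unpacks to):
$ unzip master.zip
$ tree cloudml-samples-master -h --filelimit 4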

Related

Is there an alternative link instead of the one in the docker book?

I am new to Docker and was working through The Docker Book; I just got to the part about building a Sinatra web application. But the link provided to download the source code doesn't exist. I tried using the GitHub link for the code, but that isn't working either. I need to make a binary executable as the next step and I am unable to do so.
$ cd sinatra
$ wget --cut-dirs=3 -nH -r -e robots=off --reject="index.html","Dockerfile" --no-parent http://dockerbook.com/code/5/sinatra/webapp/
This is what I am supposed to run, but if you copy-paste the link into your browser, it doesn't work. I also tried making each folder I needed and creating the files by hand, but the next step, chmod +x webapp/bin/webapp, says that the directory does not exist even though it does.
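For reference, one way to recreate the expected layout by hand so that the chmod step has an actual file to act on might look like this (a sketch; webapp/bin/webapp is the path from the book, and the file contents would still have to come from the book's code listing):
$ mkdir -p sinatra/webapp/bin
$ cd sinatra
$ touch webapp/bin/webapp
$ chmod +x webapp/bin/webapp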

Deploying cgal docker

I'm trying to deploy the official CGAL Docker image. From reading the README I understand that, after downloading the specific image (e.g. I want a container with Ubuntu 16.04, CGAL and all of its dependencies) using the following command:
docker pull cgal/testsuite-docker:ubuntu # get a specific image by replacing TAG with some tag
I need to install the CGAL library itself using:
./test_cgal.py --user **** --passwd **** --images cgal-testsuite/ubuntu
The thing is that eventually I want to start the container with an interactive shell, i.e.:
docker run --rm -it -v $(pwd):/source somedocker
And I couldn't understand where the generated image ends up after the CGAL installation script runs.
Those images are not for running CGAL. They are only images we use to define an environment for our testsuite, and run tests in it, including compiling CGAL.
test_cgal.py will download the integration branch, which rarely works, as it is the branch into which we merge our PRs to test them nightly. Don't use this to get a working CGAL. To my knowledge, there is no such image as the one you are looking for. No official one anyway.
Furthermore, installing CGAL at runtime in this image will not modify the image; once you close the container, your installation will be lost. You need to specify how to install CGAL in the Dockerfile of your image and then build it if you want a "ready to use" image.
You can use the Dockerfile of the image you found as a starting point for your own, as all the dependencies should be specified in it, but you need to edit it to download CGAL and maybe build it if you don't want the header-only version. This is not done in test_cgal.py or anywhere in this Docker repository.
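A minimal sketch of that approach, assuming an Ubuntu 16.04 base and that the packaged libcgal-dev is recent enough for your needs (otherwise download and build CGAL yourself in the same Dockerfile):
cat > Dockerfile <<'EOF'
FROM ubuntu:16.04
# install build tools plus the packaged CGAL headers/libraries
RUN apt-get update && \
    apt-get install -y build-essential cmake libcgal-dev && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /source
EOF
docker build -t my-cgal .
# then start it interactively, as in the question
docker run --rm -it -v $(pwd):/source my-cgal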

Alternative to using --squash when building docker images on Windows using local files

We have some local installers and zip files that we use to build our docker images. It is easy to get this to work in a Dockerfile:
FROM mcr.microsoft.com/windows/nanoserver
COPY myinstaller.exe .
RUN myinstaller.exe; \
del myinstaller.exe
The problem here is that it produces a layer for the COPY line, which increases the size of the image. A common work-around for this is to have one RUN line that downloads the file from the Internet, runs it, and then deletes the installation file. The problem, as written above, is that our installers are on the local filesystem.
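For a file that is reachable over HTTP, that work-around typically looks something like this (a sketch; the download URL is made up, and it is shown with a Server Core base because recent Windows Server images ship curl.exe):
# escape=`
FROM mcr.microsoft.com/windows/servercore:ltsc2019
RUN curl.exe -o myinstaller.exe http://fileserver.example/myinstaller.exe && `
    myinstaller.exe && `
    del myinstaller.exe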
I found that there is a --squash flag for docker build:
docker build --squash -t mytestimage .
This does exactly what I want: it gives me an image without the extra installer file that is not necessary. To use this flag, you need to enable experimental features, though. There is also an open issue to simply remove this feature:
https://github.com/moby/moby/issues/34565
Is there some alternative way of using local installers in a Dockerfile when running on Windows, that doesn't involve setting up a server to provide the files?
We ended up setting up nginx to provide the files during the build. The machine building our Docker images and the server that holds the installer files have a very good connection between them, so downloading huge files is not a real problem.
When it comes to --squash, it is bugged for Docker on Windows. Here is the relevant issue for it:
https://github.com/moby/moby/issues/31468
There is an issue to move --squash out of experimental, but it doesn't seem to have a lot of support:
https://github.com/moby/moby/issues/38657
The alternative that some people propose instead of --squash is a multi-stage build; discussion here:
https://github.com/moby/moby/issues/34565
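A multi-stage build applied to the installer case could look roughly like this (a sketch; the /S and /D installer flags and the C:\app output directory are made-up details of a hypothetical installer):
# escape=`
FROM mcr.microsoft.com/windows/nanoserver AS install
COPY myinstaller.exe .
# hypothetical silent install into C:\app
RUN myinstaller.exe /S /D=C:\app
FROM mcr.microsoft.com/windows/nanoserver
# only the installed output is copied; the installer and its layer stay in the first stage
COPY --from=install C:\app C:\app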
There is an alternative to --squash if you have local installer files, don't want to set up a web server, would like your Docker image to be small, and are running Windows: use mapped drives.
In Windows, you can share folders with other users on your network. A Docker container is like another computer running on your physical machine, and it can access these network shares.
First set up a new user, for example with username share and password password1. Create a folder somewhere on your computer. Then right-click it, click Properties, go to the Sharing tab and click "Share". Find the user that you have just created, using the little dropdown menu and Find people ..., and share the folder with this user.
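If you prefer to script those steps instead of clicking through the GUI, creating the user and the share can also be done from an elevated command prompt (a sketch; C:\vms is a made-up folder path, and vms matches the share name used in the batch file below):
mkdir C:\vms
net user share password1 /add
net share vms=C:\vms /GRANT:share,READ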
Create a folder somewhere for your test project. Create a batch file setupshare.bat that looks like this:
@echo off
for /f "tokens=2 delims=:" %%i in ('ipconfig ^| findstr "Default Gateway"') do (
set hostip=%%i
goto :end
)
:end
set hostip=%hostip: =%
net use O: \\%hostip%\vms /USER:share password1
The first part of this file is only there to find the IP address that the Docker container can use to access its host computer. It is not the prettiest thing I've ever put together, so let me know if there's a better way!
It uses a for-loop, as that is the way to save the output of a command to a variable in batch files. The command is ipconfig, and we pipe it to findstr to search for Default Gateway. We need to use ^| instead of just | because it is inside a for-loop. The tokens/delims part of the for-loop splits each output line on the delimiter, which is : in this case, and we only take the second token. The for-loop only handles the first line if there are multiple entries with a Default Gateway, so this script doesn't work if there are multiple entries and the first one is not the correct one.
The line set hostip=%hostip: =% is to remove a space at the start of the string.
We then have the IP address that we want to use stored in hostip. We use this in the net use command, which maps O: to the shared folder vms on the machine with IP hostip, using the username share and the password password1. Note that this is a very bad way of handling passwords, as they kind of should be secret!
With a batch file like this, we can set up a Dockerfile in this way:
# escape=`
FROM mcr.microsoft.com/dotnet/core/sdk:3.0
COPY setupshare.bat .
RUN setupshare.bat && `
copy O:\file.txt file.txt
The RUN command will first call setupshare.bat that sets up the network share properly. We can then use any file that we shared, for example a huge installer, and install the things that we want. In this case I have only shared a test file file.txt to see that it works, so just change that line.
I would still advise everyone to just set up a little web server, for example nginx, and use the standard way of writing Dockerfiles, downloading files and running them in the same RUN command. That's what people expect when they see a Dockerfile, and it should be a more robust solution.
We can also hope that the Docker people either make a COPY command that can copy, run, and delete installers in the same layer, or implement --squash properly.

Move file downloaded in Dockerfile to harddrive

First off, I really lack a lot of knowledge regarding Docker itself and its structure. I know that it'd be way more beneficial to learn the basics first, but I do require this to work in order to move on to other things for now.
So within a Dockerfile I installed wget and used it to download a file from a website; authentication and download are successful. However, when I later try to move said file it can't be found, and it doesn't show up using e.g. Explorer either (the path was specified).
I thought it might have something to do with RUN and how it executes the wget command; I read that the container ID can be used to copy files to the hard drive, but how would I do that within a Dockerfile?
RUN wget -P ./path/to/somewhere http://something.com/file.txt --user xyz --password bluh
ADD ./path/to/somewhere/file.txt /mainDirectory
The download is shown and the log-in is successful, but as I mentioned I am having trouble using that file later on, as it is nowhere to be found on the hard drive. Probably a basic error, but I'd really appreciate some input that might lead to a solution.
Obviously the error is produced when trying to execute ADD as there is no file to move. I am trying to find a way to mount a volume in order to store it, but so far in vain.
Edit:
Though the question is similar to the "move to harddrive" one, I am searching for ways to get the id of the container created within the Dockerfile in order to move it; while the thread provides such answers, I haven't had any luck using them within the Dockerfile itself.
The short answer is that it's not possible.
The Dockerfile builds an image, which you can run as a short-lived container. During the build, you don't have (write) access to the host and its file system. Which kinda makes sense, since you want to build an immutable image from which to run ephemeral containers.
What you can do is run a container and mount a path from your host as a volume into the container. This is the only way you can share files between the host and a container.
Here is an example how you could do this with the sherylynn/wget image:
docker run -v /path/on/host:/path/in/container sherylynn/wget wget -O /path/in/container/file http://my.url.com
The -v HOST:CONTAINER parameter allows you to specify a path on the host that is mounted inside the container at a specified location.
For wget, I would prefer -O over -P when downloading a single file, since it makes it really explicit where your download ends up. When you point -O to the location of the volume, the downloaded file ends up on the host system (in the folder you mounted).
Since I have no idea what your image or your environment looks like, you might need to tweak one or two things to work well with your own image. As a general recommendation: for basic commands like wget or curl, you can find pre-made images on Docker Hub. This can be quite useful when you need to set up a Continuous Integration pipeline or similar, where you want to use wget or curl but can't execute it directly.
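Applied to the setup from the question, that might look something like this (a sketch; myimage and the host path are placeholders, and it assumes wget is installed in your image as described in the question):
docker build -t myimage .
docker run -v /path/on/host:/mainDirectory myimage \
    wget -O /mainDirectory/file.txt --user xyz --password bluh http://something.com/file.txt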
Use wget -O instead of -P to download to a specific file.
For example:
RUN wget -O /tmp/new_file.txt --user xyz --password bluh http://something.com/new_file.txt
Thanks

iOS project code coverage source file is relative path

I have generated .gcno and .gcda files after running my iPhone app. Then I use CoverStory to view the coverage rate. However, CoverStory could not open the source file, and I found that the source path is a relative path, not a full path. All I can see on the screen is /EOF/.
The strange thing to me is that only some of the files cannot be opened due to this path issue. Most of them have full paths and CoverStory can open them successfully. (I am unable to attach a screenshot.)
How can I show the correct path names in CoverStory?
I suggest generating an HTML report with lcov, which allows you to normalize the directory names.
Another benefit of using an HTML report is that the coverage information is available on desktop machines as well as from a Continuous Integration build server.
To install lcov
Use Homebrew or MacPorts
Example:
brew install lcov
First, generate the data file:
#!/bin/bash
set -e # fail script if any commands fail
# ${gen.info}, ${lcov}, ${temp.dir} and ${module.name} are placeholders filled in by the build system
${gen.info} ${temp.dir}/coverage-data/*.gcno --no-recursion --output-filename \
${temp.dir}/${module.name}-temp.info
# Remove symbols we're not interested in.
${lcov} -r ${temp.dir}/${module.name}-temp.info > ${temp.dir}/${module.name}.info
Now generate the HTML report:
#!/bin/bash
set -e # fail script if any commands fail
${gen.html} --no-function-coverage --no-branch-coverage -o ${coverage.reports.dir} \
--prefix ${basedir} ${temp.dir}/${module.name}.info
If you're interested, I have a build script that produces HTML reports here. An example report: http://jasperblues.github.io/Typhoon/coverage/index.html
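For orientation, with the placeholders filled in, the two steps above might look something like this (a sketch; the paths and the exclusion pattern are examples only):
#!/bin/bash
set -e # fail script if any commands fail
# capture coverage data (geninfo ships with lcov)
geninfo build/coverage-data/*.gcno --no-recursion \
    --output-filename build/MyApp-temp.info
# remove symbols we're not interested in (e.g. SDK headers)
lcov -r build/MyApp-temp.info "*/Developer/SDKs/*" > build/MyApp.info
# generate the HTML report
genhtml --no-function-coverage --no-branch-coverage \
    -o coverage-report --prefix "$(pwd)" build/MyApp.info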
