I'm wondering how I can use this CSV file saved via Python:
csvfile = open("/tmp/" + ip + "-report.csv", "w")
csvfile.write(report.text)
csvfile.close()
so that it can then be used in the next shell step, which runs cURL:
curl -u $PLATFORM_CREDS -XPUT https://domain.url.test/directory/savetohere -T $filename
How can I make csvfile a global variable to be used in other shell executables?
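One approach, as a rough sketch (assuming the Python step and the shell step run on the same machine and $ip holds the same address in both, which the question implies but does not state): rebuild the same path in the shell step and hand it to cURL.
# Hypothetical shell step; $ip must hold the same address the Python code used.
filename="/tmp/${ip}-report.csv"
curl -u "$PLATFORM_CREDS" -XPUT https://domain.url.test/directory/savetohere -T "$filename"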
Running
docker run -t -i -w=[absolute_work_dir] [docker_image] [executable]
is fine. However, when I set the workdir using a variable in a PowerShell script (.ps1):
$WORKDIR = [absolute_work_dir]
docker run -t -i -w=$WORKDIR [docker_image] [executable]
it gave the error:
docker: Error response from daemon: the working directory '$WORKDIR' is invalid, it needs to be an absolute path.
What could be wrong?
You could assemble a string containing your variable, then execute the string as a command.
$DOCKRUNSTR = "docker run -t -i -w=" + $WORKDIR + " [docker_image] [executable]"
& $DOCKRUNSTR
The & tells PowerShell to run that string as a command.
Edit: inside a .ps1 file, try Invoke-Expression instead of &.
Maybe a PowerShell power user has a better solution, but this should get the job done for you.
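For example, a minimal sketch inside a .ps1 (the image and executable names here are placeholders, not from the question):
$WORKDIR = "C:\absolute\work\dir"
# $WORKDIR expands inside the double-quoted string, so the assembled
# command already contains the absolute path.
$DOCKRUNSTR = "docker run -t -i -w=$WORKDIR my_image my_executable"
Invoke-Expression $DOCKRUNSTR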
I am trying to run this command and getting an error:
docker exec 19eca917c3e2 cat "Hi there" > /usr/share/ngnix/html/system.txt
/usr/share/ngnix/html/system.txt: No such file or directory
It is a very simple command to create a file and write to it; I tried echo too, and that didn't work either.
The cat command only works on files, so cat "Hi there" is incorrect.
Try echo "Hi there" to output this to standard out.
You are then redirecting the output to /usr/share/ngnix/html/system.txt. Make sure the directory /usr/share/ngnix/html/ exists. If not, create it using
mkdir -p /usr/share/ngnix/html
I presume you are trying to create the file in the container.
You have several problems going on, one of which @Yatharth Ranjan has addressed: you want echo, not cat, for that use.
The other is that your call is being parsed by the local shell, which breaks it up into docker ... "hello world" and a > ... system.txt on your host system.
To get the redirection into the file to execute in the container, you need to explicitly invoke a shell in the container and then pass it the command:
docker exec 12345 /bin/sh -c "echo \"hello world\" > /usr/somefile.txt"
So, here you would call /bin/sh in the container, pass it -c to tell it a shell command follows, and then the command to parse and execute is your echo "hello world" > the_file.txt.
Of course, a far easier way to copy files into a container is to have them on your host system and then copy them in using docker cp (where 0123abc is your container name or ID):
docker cp ./some-file.txt 0123abc:/path/to/file/in/container.txt
I have a Dockerfile in which files in a directory are downloaded:
RUN wget https://www.classe.cornell.edu/~cesrulib/downloads/tarballs/ -r -l1 --no-parent -A tgz \
    --cut-dirs=99 -nH -nv --show-progress --progress=bar:force:noscroll
I know that there is exactly one file here of the form "bmad_dist_YYYY_MMDD.tgz", where "YYYY_MMDD" is a date. For example, the file might be named "bmad_dist_2020_0707.tgz". I want to set a bash variable to the file name without the ".tgz" extension. If this were outside of Docker, I could use:
FULLNAME=$(ls -1 bmad_dist_*.tgz)
BMADDIST="${FULLNAME%.*}"
So I tried in the dockerfile:
ENV FULLNAME $(ls -1 bmad_dist_*.tgz)
ENV BMADDIST "${FULLNAME%.*}"
But this does not work. Is it possible to do what I want?
Shell expansion does not happen in a Dockerfile ENV instruction. The workaround you can try is to pass the name in at build time:
grab the filename during the build and discard the file, or use wget's --spider option to get just the filename.
ARG FULLNAME
ENV FULLNAME=${FULLNAME}
Then pass the full name dynamically during build time.
For example
docker build --build-arg FULLNAME=$(wget -nv https://upload.wikimedia.org/wikipedia/commons/5/54/Golden_Gate_Bridge_0002.jpg 2>&1 |cut -d\" -f2) -t my_image .
The ENV ... ... syntax is mainly for plain-text content, Docker build arguments, or other environment variables. It does not support command substitution like in your example.
It is also not possible to use RUN export ... and have that variable defined in downstream image layers.
The best route may be to write the name to a file in the filesystem and read from that file instead of an environment variable. Or, if an environment variable is crucial, you could set an environment variable from the contents of that file in an ENTRYPOINT script.
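A rough sketch of the file-based approach (the path /etc/bmaddist and the entrypoint script are assumptions for illustration, not from the question):
RUN FULLNAME=$(ls -1 bmad_dist_*.tgz) && \
    echo "${FULLNAME%.*}" > /etc/bmaddist
An ENTRYPOINT script can then load it into the environment at runtime:
#!/bin/sh
# Read the name captured at build time and export it for the main process.
export BMADDIST="$(cat /etc/bmaddist)"
exec "$@"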
I have to list the Docker container images published in a certain project, but I cannot find an appropriate API using the gcloud CLI tool. Is this possible?
Is there any other solution to list the container images from this private container registry in my Google project?
You can use "gcloud docker search <hostname>/<your-project-id>" to list the images. The hostname should be "gcr.io", "us.gcr.io", or whichever host your images are created under. Note that you have to iterate through all possible hosts to find all images under the project. However, this method only lists the repositories; it will not list tags or manifests.
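For example, following the syntax above, for images hosted under gcr.io in a project called your-project-id:
gcloud docker search gcr.io/your-project-id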
You can also use the registry API directly, which will return more information. Use the script below as a starting guide:
#!/bin/bash
HOSTS="gcr.io us.gcr.io eu.gcr.io asia.gcr.io"
PROJECT=your-project-id

function search_gcr() {
    local fullpath=""
    local host=$1
    local project=$2
    if [[ -n $3 ]]; then
        fullpath=${3}
    fi
    # Query the registry's tags/list endpoint for this repository.
    local result
    result=$(curl -u _token:"$(gcloud auth print-access-token)" \
        --fail --silent --show-error \
        "https://${host}/v2/${project}${fullpath}/tags/list")
    printf '%s' "$result"
}

function recursive_search_gcr() {
    local host=$1
    local project=$2
    local repository=$3
    local result
    result=$(search_gcr "$host" "$project" "$repository")
    if [[ -z $result ]]; then
        echo "Not able to curl: https://${host}/v2/${project}${repository}/tags/list"
        return
    fi
    # Recurse into child repositories, if any.
    local children="$(python - <<EOF
import json
obj = json.loads('$result')
print(' '.join(obj.get('child', [])))
EOF
)"
    for child in $children; do
        recursive_search_gcr "$host" "$project" "${repository}/${child}"
    done
    local manifests="$(python - <<EOF
import json
obj = json.loads('$result')
print(' '.join(obj.get('manifest', [])))
EOF
)"
    echo "Repository ${host}/${project}${repository}:"
    echo "  manifests:"
    for manifest in $manifests; do
        echo "    $manifest"
    done
    echo
    local tags="$(python - <<EOF
import json
obj = json.loads('$result')
print(' '.join(obj.get('tags', [])))
EOF
)"
    echo "  tags:"
    for tag in $tags; do
        echo "    $tag"
    done
    echo
}

for HOST in $HOSTS; do
    recursive_search_gcr "$HOST" "$PROJECT"
done
Use the "gcloud container images" command to find and interact with images in Google Container Registry. For example, this would list all containers in a project called "my-project":
gcloud container images list --repository=gcr.io/my-project
Full documentation is at https://cloud.google.com/container-registry/docs/managing
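To drill into a single image, there is also list-tags (my-image is a placeholder image name):
gcloud container images list-tags gcr.io/my-project/my-image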
On CentOS, I'm trying to pass an environment variable to a PHP script.
I've created this file, test.php:
<?php print_r($_ENV);
When I run this command:
DB=mysql php test.php
I get the following output:
Array
(
)
What did I miss?
Check the variables_order setting in php.ini. It has to contain E for $_ENV to be populated. You can also do:
$ DB=whatever php -d variables_order=E -r 'echo $_ENV["DB"];'
whatever
Alternatively, you can use getenv() which will work regardless of the value of variables_order.
Use getenv function:
$ cat test.php
<?php
print_r(getenv('DB'));
?>
$ DB=mysql php test.php
mysql