I dockerized a TorchServe application which takes two images as input. I was able to get predictions with a single command:
curl -X POST http://localhost:8080/predictions/app -F "file1=@/home/app/frame1.jpg" -F "file2=@/home/app/frame2.jpg"
but I am not able to do the same against the running Docker container.
Traceback (most recent call last):
  ...
  File "/usr/local/lib/python3.8/dist-packages/ts/torch_handler/request_envelope/json.py", line 41, in _from_json
    rows = (data.get("data") or data.get("body") or data)["instances"]
KeyError: 'instances'
In the Google Cloud documentation (https://cloud.google.com/ai-platform/prediction/docs/getting-started-pytorch-container) I found a note that it might be helpful (or necessary?) to send the file as JSON, but I don't know how to send two of them. Any ideas?
I tried to build a JSON payload similar to the Google example, but it didn't work. I also tried to modify the curl command, but without effect.
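Something along these lines is what I mean by a JSON payload; the field names file1/file2 and the {"b64": ...} wrapping are assumptions based on the linked guide, not anything TorchServe itself defines:

# Sketch of a two-image "instances" request following the AI Platform
# convention from the linked guide. Field names and base64 wrapping are
# guesses at what the custom handler expects.
B64_1=$(base64 -w0 /home/app/frame1.jpg)
B64_2=$(base64 -w0 /home/app/frame2.jpg)

cat > /tmp/request.json <<EOF
{"instances": [{"file1": {"b64": "$B64_1"}, "file2": {"b64": "$B64_2"}}]}
EOF

curl -X POST http://localhost:8080/predictions/app \
     -H "Content-Type: application/json" \
     -d @/tmp/request.json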
My job executes ansible-playbook in debug mode (ansible-playbook -vvv), which generates a lot of output.
After the job finishes, it's very difficult to search the log in the browser because the page is very slow and keeps freezing.
I tried to download it with curl/wget, but the file is incomplete (I guess only about 10% was downloaded):
curl http://j:8080/job/my-job/5/consoleText -O
wget http://j:8080/job/my-job/5/consoleText
curl returns with the error:
curl: (18) transfer closed with outstanding read data remaining
Try the following.
curl -u "admin":"admin" "http://localhost:8080/job/my-job/5/logText/progressiveText?start=0"
If that doesn't work, I think your best option is to get the log from the server itself. The log can be found at ${JENKINS_HOME}/jobs/${JOB_NAME}/builds/${BUILD_NUMBER}/log.
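If you have shell access to the Jenkins master, a rough sketch of pulling that file directly (the host name and JENKINS_HOME path below are placeholders, adjust them for your installation):

# Copy the raw build log straight off the Jenkins master over SSH.
JENKINS_HOME=/var/lib/jenkins
JOB_NAME=my-job
BUILD_NUMBER=5
scp "jenkins-host:${JENKINS_HOME}/jobs/${JOB_NAME}/builds/${BUILD_NUMBER}/log" ./consoleText.log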
I have a specific task to accomplish which involves downloading a file from Google Sheets. I need to always have just one file downloaded, so the new file will overwrite any previous one (if it exists).
I have tried the following command but I can't quite get it to work. Not sure what's missing.
/usr/local/bin/php -q https://docs.google.com/spreadsheets/d/11rFK_fQPgIcMdOTj6KNLrl7pNrwAnYhjp3nIrctPosg/ -o /usr/local/bin/php /home/username/public_html/wp-content/uploads/wpallimport/files.csv
Managed to solve it with the following:
curl --user-agent cPanel-Cron https://docs.google.com/spreadsheets/d/[...]/edit?usp=sharing --output /home/username/public_html/wp-content/uploads/wpallimport/files/file.csv
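If you want to be sure you get CSV rather than the sheet's HTML page, the export endpoint is an alternative; this assumes the sheet is shared so anyone with the link can view it, and it uses the sheet ID from the question:

# Fetch the sheet as CSV via the export endpoint and overwrite the previous
# copy; --output always overwrites an existing file and -L follows the
# redirect Google issues.
curl -L --user-agent cPanel-Cron \
     "https://docs.google.com/spreadsheets/d/11rFK_fQPgIcMdOTj6KNLrl7pNrwAnYhjp3nIrctPosg/export?format=csv" \
     --output /home/username/public_html/wp-content/uploads/wpallimport/files/file.csv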
How can I resume a pull when disconnected? The pull process always starts from the beginning every time I run docker pull some-image again after a disconnect. My connection is so unstable that even downloading a 100 MB image takes very long and almost always fails, so it is almost impossible for me to pull a bigger image. How can I resume the pull process?
Update:
The pull process will now automatically resume based on which layers have already been downloaded. This was implemented with https://github.com/moby/moby/pull/18353.
Old:
There is no resume feature yet. However, there are discussions around this feature being implemented with Docker's download manager.
Docker's released code isn't as up to date as the moby development repository on GitHub. People have been having issues relating to this for several years. I tried to manually apply several patches which aren't upstream yet, and none worked decently.
The GitHub repository for moby (Docker's development repo) has a script called download-frozen-image-v2.sh. This script uses bash, curl, and other command-line tools such as jq for JSON parsing. It retrieves a Docker token and then downloads all of the layers to a local directory. You can then use 'docker load' to import them into your local Docker installation.
It does not handle resume well, though. There is a comment in the script noting that 'curl -C' isn't working. I tracked down and fixed this problem. I made a modification which first retrieves a ".headers" file; that initial request has always returned a 302 redirect while I've been monitoring it. The script then retrieves the final URL with curl (with resume support) into the layer tar file. It also has to loop in the calling function which retrieves a valid token, because the token unfortunately only lasts about 30 minutes.
It loops this process until it receives a 416, which indicates that no resume is possible because the requested ranges have already been fulfilled. It also verifies the size against a curl header retrieval. I have been able to retrieve all the images I needed using this modified script. Docker has many more layers relating to retrieval, and has remote control processes (the Docker client) which make it more difficult to control, and they viewed this issue as only affecting some people on bad connections.
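The two-step fetch described above looks roughly like this; a sketch only, reusing the variable names from the script, though the exact wiring in the modified script may differ:

# Dump the registry's 302 response headers to a ".headers" file, pull the
# Location out of it, then resume-download the blob from that final URL.
curl -fsS -D "$targetFile.headers" -o /dev/null \
     -H "Authorization: Bearer $token" \
     "https://registry-1.docker.io/v2/$image/blobs/$layerDigest"
blobRedirect=$(grep -i '^location:' "$targetFile.headers" | tr -d '\r' | cut -d' ' -f2)
curl -C - -fL --progress-bar "$blobRedirect" -o "$targetFile"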
I hope this script can help you as much as it has helped me:
Changes:
The fetch_blob function uses a temporary file for its first connection, and retrieves the 30x HTTP redirect from it. It then attempts a header retrieval on the final URL and checks whether the local copy already has the full file; otherwise, it begins a resumed curl operation. The calling function, which passes it a valid token, has a loop around retrieving a token and calling fetch_blob, which ensures the full file is obtained.
The only other variation is a bandwidth-limit variable which can be set at the top of the script or via a "BW:10" command-line parameter. I needed this to keep my connection usable for other operations.
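A minimal sketch of how such a limit could be wired in, assuming the "BW:" value maps onto curl's --limit-rate flag (which takes a bytes-per-second value with an optional k/m/g suffix); the modified script may do this differently:

# Parse an optional "BW:10" argument and translate it into a curl rate limit;
# every later transfer picks it up through the shared curlArgs array.
for arg in "$@"; do
    case "$arg" in
        BW:*) curlArgs+=( --limit-rate "${arg#BW:}m" ) ;;
    esac
done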
Click here for the modified script.
In the future it would be nice if Docker's internal client performed resuming properly. Increasing the token's validity period would also help tremendously.
Brief view of the changed code:
# loop until FULL_FILE is set in fetch_blob.. this is for bad/slow connections
while [ "$FULL_FILE" != "1" ]; do
    local token="$(curl -fsSL "$authBase/token?service=$authService&scope=repository:$image:pull" | jq --raw-output '.token')"
    fetch_blob "$token" "$image" "$layerDigest" "$dir/$layerTar" --progress
    sleep 1
done
Another section from fetch_blob:
while :; do
    # if the file already exists.. we will be resuming..
    if [ -f "$targetFile" ]; then
        # getting current size of file we are resuming
        CUR=$(stat --printf="%s" "$targetFile")
        # use curl to get headers to find content-length of the full file
        LEN=$(curl -I -fL "${curlArgs[@]}" "$blobRedirect" | grep content-length | cut -d" " -f2)
        # if we already have the entire file... lets stop curl from erroring with 416
        if [ "$CUR" == "${LEN//[!0-9]/}" ]; then
            FULL_FILE=1
            break
        fi
    fi
    HTTP_CODE=$(curl -w '%{http_code}' -C - --tr-encoding --compressed --progress-bar -fL "${curlArgs[@]}" "$blobRedirect" -o "$targetFile")
    if [ "$HTTP_CODE" == "403" ]; then
        # token expired so the server stopped allowing us to resume; return without
        # setting FULL_FILE and it'll restart this function with a new token
        FULL_FILE=0
        break
    fi
    if [ "$HTTP_CODE" == "416" ]; then
        FULL_FILE=1
        break
    fi
    sleep 1
done
Try this
ps -ef | grep docker
Get the PIDs of all the docker pull commands and do a kill -9 on them. Once they are killed, re-issue the docker pull <image>:<tag> command.
This worked for me!
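For reference, the same idea as a one-liner (the pkill pattern is my shorthand; double-check what it matches before sending -9):

# Kill every process whose command line contains "docker pull", then retry.
pkill -9 -f 'docker pull'
docker pull <image>:<tag>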
I have tried:
curl -v --http1.0 --data "mac=00:00:00" -F "userfile=@/tmp/02-02-02-02-02-22" http://url_address/getfile.php
but it fails with the following message:
Warning: You can only select one HTTP request!
How can I send a mix of data and a file with curl? Is it possible or not?
Thank you
Read up on how -F actually works! You can add any number of data parts and file parts to the multipart formpost that -F makes. -d, however, makes a "standard" clean post, and you cannot mix -d with -F.
You need to first figure out which kind of post you want, then pick either -d or -F depending on your answer.
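For the command in the question, that means sending the mac value as a second -F part instead of with --data (same URL and file as above):

# One multipart POST carrying both a plain field and a file upload.
curl -v --http1.0 \
     -F "mac=00:00:00" \
     -F "userfile=@/tmp/02-02-02-02-02-22" \
     http://url_address/getfile.php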
I installed Nagios on CentOS to monitor some servers, one of which is a TSM server.
I downloaded a plugin written in bash; when I execute it on the command line it works:
/usr/lib64/nagios/plugins/check_tsm db -v6
db - database utilization 42%, OK
and the return code of the script is 0 (from the command echo $?).
So the script works fine and returns 0, which should mean an OK status in Nagios, but the status is still UNKNOWN, and I really don't know why.
I checked the logs in Nagios, etc. It's not a problem with the command definition in commands.cfg or the service declaration, because I copied the command that Nagios runs automatically every 5 minutes and it works fine on the command line, but the status is still UNKNOWN.
Definition of command:
define command{
    command_name    check_tsm_v6
    command_line    /usr/lib64/nagios/plugins/check_tsm $ARG1$ -v6 $ARG2$ $ARG3$
}
Service declaration:
define service{
    use                     generic-service
    host_name               tsm-test
    service_description     database utilization
    check_command           check_tsm_v6!db!85!90
}
And here's the bash script.
One thing that's caught me out in the past with Nagios scripts is user rights. When testing your script directly on the command line be sure to precede it with:
sudo -u nagios
So yours would be:
sudo -u nagios /usr/lib64/nagios/plugins/check_tsm db -v6
This assumes that your nagios instance is being run by the nagios user, which is a fairly safe bet.
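It is also worth checking the exit code under that user, since Nagios derives the status from the exit code rather than from the text output:

# Run the plugin exactly as Nagios would and print its exit code.
# 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN
sudo -u nagios /usr/lib64/nagios/plugins/check_tsm db -v6
echo $?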
Good luck
Brad
Try using the yum install sysstat -y command to install the package.
If it works, great. If you are still facing the same issue, please post the complete error shown in the browser.