curl --upload-file does not work with alpine - docker

I am trying to upload files to Sonatype Nexus, using their API:
curl -v -u admin:admin123 \
--upload-file pom.xml \
http://localhost:8081/repository/maven-releases/org/foo/1.0/foo-1.0.pom
This works like a charm when curl is run from an Ubuntu-based container.
However, it does not work with an Alpine-based container: the file URL is created, but the uploaded file is empty (0 bytes).
Is curl syntax different from one platform to another?
If so, what is the right syntax for calling curl with the --upload-file option on Alpine?
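For what it's worth, the --upload-file (alias -T) syntax is the same on every platform curl runs on; a 0-byte upload usually means curl read an empty or missing file at the given path. A minimal sanity sketch (the pom.xml content below is a stand-in created just for the illustration):

```shell
# Confirm the file curl is asked to upload exists and is non-empty inside
# the Alpine container, since --upload-file simply PUTs whatever it can
# read at that path. The file content is a stand-in for this sketch.
printf '<project/>' > pom.xml
bytes=$(wc -c < pom.xml)
bytes=$((bytes))                # strip any padding wc may emit
echo "pom.xml is ${bytes} bytes"
```

If the file is non-empty but the upload still arrives as 0 bytes, it is also worth checking with curl --version that the binary inside the Alpine image is the curl you expect rather than a minimal replacement.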

Docker openapi client generator can't find "spec file"

I have generated an OpenAPI JSON file, and I wish to create a TypeScript client using Docker.
I have tried to do something similar to what is shown on the OpenAPI Generator site (https://openapi-generator.tech/ - scroll down to the Docker part), but it doesn't work.
Command from site:
docker run --rm \
-v $PWD:/local openapitools/openapi-generator-cli generate \
-i /local/petstore.yaml \
-g go \
-o /local/out/go
What I have tried:
docker run --rm -v \
$PWD:/local openapitools/openapi-generator-cli generate -i ./openapi.json \
-g typescript-axios
No matter what I do, there is always a problem with the ./openapi.json file. The error which occurs:
[error] The spec file is not found: ./openapi.json
[error] Check the path of the OpenAPI spec and try again.
I have tried the things below:
-i ~/compass_backend/openapi.json
-i openapi.json
-i ./openapi.json
-i $PWD:/openapi.json
cat openapi.json | docker run .... (error, -i is required)
I am out of ideas. The error is always the same. What am I doing wrong?
I was able to solve the problem by switching from Bash to PowerShell. Docker on Windows expects Windows path notation and I was trying to use Bash notation. If you type pwd in Bash you get this:
/c/Users/aniemirka/compass_backend
And if you type pwd in PowerShell you get this:
C:\Users\aniemirka\compass_backend
So Docker was trying to mount a volume at /c/Users/aniemirka/compass_backend\local, which it could not read because it is not Windows notation, so the volume didn't exist.
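The gap between the two notations can be illustrated mechanically. This is a rough sketch of the textual difference only, not what Docker Desktop does internally:

```shell
# Convert the Git-Bash style path from the question into the Windows
# style that Docker Desktop expects for -v host paths (illustration only;
# the drive-letter case is left as-is).
bash_path='/c/Users/aniemirka/compass_backend'
win_path=$(printf '%s' "$bash_path" | sed -E 's#^/([a-zA-Z])/#\1:/#; s#/#\\#g')
printf '%s\n' "$win_path"
```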

TensorFlow model server - Could not resolve host POST

I am new to TensorFlow (using 1.13) and trying to build a TF model and serve it on a Docker TensorFlow Model Server.
I have exported my model, installed docker and started my docker container with the command:
docker run -p 8501:8501 --name SN_TFlow \
--mount type=bind,source=/tmp/service_model/export,target=/models \
-e MODEL_NAME=1596551653 -t tensorflow/serving &
I can see my container running, and the line "I tensorflow_serving/model_servers/server.cc:375] Exporting HTTP/REST API at:localhost:8501 ..." in the logs, which seems to indicate all is up and running according to the doc.
However when I try to test my model with the curl command:
$ curl -d '{"test_value": [40]}' \ -X POST http://localhost:8501/1/models/1596551653:predict
I get a message saying:
URL bad/illegal format or missing url
Could not resolve host POST
and I get a 404 message.
I also tried simply curl http://localhost:8501/models/1/1596551653 but also get Not found.
Any idea what I am missing? Thanks
The problem I observe in your code is the \ in the middle of the curl command.
A backslash (\) should occur at the end of a line when a command spans more than one line.
So, the below command,
$ curl -d '{"test_value": [40]}' \ -X POST
http://localhost:8501/1/models/1596551653:predict
should be changed as shown below (move the backslash to the end):
$ curl -d '{"test_value": [40]}' -X POST \
http://localhost:8501/1/models/1596551653:predict
Also, we suggest you upgrade to the latest 1.x version of TensorFlow, which is 1.15, or to 2.3.0.
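The continuation rule the answer describes can be demonstrated with any command; nothing here is TensorFlow-specific:

```shell
# A backslash at the end of a line continues the command, so both lines
# form one argument list. A backslash in the middle of a line merely
# escapes the next character: in the question's command it escaped a
# space, so curl received " -X" and then "POST" as a stray URL argument,
# which is exactly why it reported "Could not resolve host POST".
result=$(echo one \
two)
printf '%s\n' "$result"
```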

Version error when launching hyperledger composer rest server docker

A little background: I have a business network running on IBM Cloud Hyperledger Starter edition. It's built with Composer v0.19.14, and as far as I can tell everything is v0.19.14 and should work with Fabric v1.1. I can deploy my BNA and view it with Composer Playground, and even launch composer-rest-server from my machine locally, and everything looks good. But when I try to launch my Docker composer rest server, I get a version compatibility error. I've searched everywhere and tried all the recommendations out there, but to no avail.
Here is the error when launching the docker in -it mode:
Error: Error trying to ping. Error: Composer runtime (0.19.14) is not compatible with client (0.19.12)
Here's my Dockerfile:
FROM hyperledger/composer-rest-server:0.19.14
Here's my build script:
docker build -t hyperledger/composer-rest-server:0.19.14 .
source envvars_simple.txt
docker run \
-it \
-e COMPOSER_CARD=${COMPOSER_CARD} \
-e COMPOSER_NAMESPACES=${COMPOSER_NAMESPACES} \
-e COMPOSER_AUTHENTICATION=${COMPOSER_AUTHENTICATION} \
-e COMPOSER_MULTIUSER=${COMPOSER_MULTIUSER} \
-e COMPOSER_APIKEY=${COMPOSER_APIKEY} \
-v ~/.composer:/home/composer/.composer \
--name rest \
-p 3001:3000 \
sample/sample-hyperledger-rest-server
I think the error is in the first line
docker build -t hyperledger/composer-rest-server:0.19.12 .
You are pulling down a composer-rest-server based Docker image at v0.19.12, while the rest of your components are at 0.19.14. Pull the same version of the container and it should be OK.
I'm a dummy. So the problem was that docker run kept trying to pull an old image from Docker Hub called sample/sample-hyperledger-rest-server that I created but didn't bother to update. This is a simple case of my bad.

Docker invalid characters for volume when using relative paths

I've been given a Docker container which is run via a bash script. The container should set up a PHP web app; it then goes on to call other scripts and containers. It seems to work fine for others, but for me it's throwing an error.
This is the code
sudo docker run -d \
--name eluci \
-v ./config/eluci.settings:/mnt/eluci.settings \
-v ./config/elucid.log4j.settings.xml:/mnt/eluci.log4j.settings.xml \
--link eluci-database:eluci-database \
/opt/eluci/run_eluci.sh
This is the error
docker: Error response from daemon: create ./config/eluci.settings:
"./config/eluci.settings" includes invalid characters for a local
volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to
pass a host directory, use absolute path.
I'm running Docker on a CentOS VM using VirtualBox on a Windows 7 host.
From googling, it seems to be something to do with the mount; however, I don't want to change it in case it breaks a setting or is relied upon in another Docker container. I still have a few more bash scripts to run, which should orchestrate the rest of the build process. As a complete newb to Docker, this has got me stumped.
The command docker run -v /path/to/dir does not accept relative paths; you should provide an absolute path. The command can be re-written as:
sudo docker run -d \
--name eluci \
-v "/$(pwd)/config/eluci.settings:/mnt/eluci.settings" \
-v "/$(pwd)/config/elucid.log4j.settings.xml:/mnt/eluci.log4j.settings.xml" \
--link eluci-database:eluci-database \
/opt/eluci/run_eluci.sh
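The reason the rewrite works: $(pwd) expands to an absolute path, so the -v source starts with a slash and Docker treats it as a host directory. A source that does not start with a slash is parsed as a named volume, and volume names may only contain the characters listed in the error message. A small sketch of the expansion:

```shell
# $(pwd) turns the relative ./config path into an absolute one, which is
# what docker run -v requires for host directories.
abs="$(pwd)/config/eluci.settings"
printf '%s\n' "$abs"
case "$abs" in
  /*) echo "absolute: docker treats this as a host path" ;;
  *)  echo "relative: docker would treat this as a volume name" ;;
esac
```

The extra leading slash in the rewritten command ("/$(pwd)/...") is harmless on Linux and is a common workaround for path mangling when the command is run from Git Bash on Windows.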

Using Volumerize to back up my Docker volumes with scp?

I have a couple of Docker volumes I want to back up onto another server using scp/sftp. I didn't know how to deal with that, so I decided to have a look at the blacklabelops/volumerize GitHub project.
This tool is based on the command-line tool Duplicity, dockerized and parameterized for easier use and configuration. The tutorial deals with a Jenkins Docker container, but I don't understand how to specify that I want to use a PEM file.
I've tried different solutions (adding the -i option to the scp command line) without any success so far.
The Duplicity man page mentions the use of CA certificate PEM files (the --ssl-cacert-file option), but I suppose I have to create an env variable when running the Docker container (with the -e option), and I don't know which name to use.
Here is what I have so far; can someone please point me in the right direction?
docker run -d --name volumerize \
-v jenkins_volume:/source:ro \
-v backup_volume:/backup \
-e "VOLUMERIZE_SOURCE=/source" \
-e "VOLUMERIZE_TARGET=scp://me@serverip/home/backup" \
blacklabelops/volumerize
The option --ssl-cacert-file is only for host verification, not for authentication.
I have found this example of how to add PEM files to an scp command:
scp -i /path/to/your/.pemkey -r /copy/from/path user@server:/copy/to/path
The parameter -i /path/to/your/.pemkey can be passed to blacklabelops/volumerize
with the env variable VOLUMERIZE_DUPLICITY_OPTIONS.
Example:
$ docker run -d \
--name volumerize \
-v jenkins_volume:/source:ro \
-v backup_volume:/backup \
-e "VOLUMERIZE_SOURCE=/source" \
-e "VOLUMERIZE_TARGET=scp:///backup" \
-e 'VOLUMERIZE_DUPLICITY_OPTIONS=--ssh-options "-i /path/to/your/.pemkey"' \
blacklabelops/volumerize
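One quoting detail in the example above is worth spelling out: the outer single quotes make the whole option string, inner double quotes included, a single value for -e, so the image can hand the -i option and the key path on to duplicity unchanged. A quick check of what the variable value actually contains (the key path is the placeholder from the answer):

```shell
# The single-quoted string keeps the embedded double quotes intact, so
# the container receives exactly this value.
opts='--ssh-options "-i /path/to/your/.pemkey"'
printf '%s\n' "$opts"
```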
