I need to integrate a 3rd-party module into our containerized Docker solution. On my local dev machine it works fine: I download the image 3rdParty_file_name.tar to disk and load it:
docker load --input .\3rdParty_file_name.tar
The problem appears when I have to do the same in Azure DevOps. How can I integrate the image 3rdParty_file_name.tar into the container build pipeline? I can't upload the image to Azure DevOps > Library > Secure files because that feature has a 10 MB size limit.
An Azure Storage account is a good way to get around the 10 MB file size limitation.
1. Upload your .tar file to a blob container (see the upload sketch after the script sample below).
2. In your Azure Pipeline, use the Azure CLI task to execute the az storage blob download command.
Script sample:
mkdir $(Build.SourcesDirectory)\BlobFile
az storage blob download --container-name $(containername) --file "$(Build.SourcesDirectory)\BlobFile\sample.tar" --name "sample.tar" --account-key $(accountkey) --account-name $(accountname)
cd $(Build.SourcesDirectory)\BlobFile
ls
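Step 1 can be done once from your own machine (or another pipeline) with the Azure CLI; a minimal sketch, reusing the same placeholder variable names as the download script above:
az storage blob upload --container-name $(containername) --file .\3rdParty_file_name.tar --name "sample.tar" --account-key $(accountkey) --account-name $(accountname)
After the download step, you can docker load the file from $(Build.SourcesDirectory)\BlobFile exactly as you do locally.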
The other option, with zero cost, is to upload the .tar file to an internal server, expose it via a public IP, and use the Invoke-WebRequest command in the pipeline:
- task: PowerShell@2
  displayName: Downloading {{imageName}} from {{serverName}}
  inputs:
    targetType: 'inline'
    script: Invoke-WebRequest -Uri 'https://websitename.ch/path/to/the/file/3rdParty_file_name.tar' -OutFile '.\3rdParty_file_name.tar' -UseBasicParsing -Credential (New-Object PSCredential('$(serverUsername)', (ConvertTo-SecureString -AsPlainText -Force -String '$(serverPassword)')))
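Once the file is downloaded, a follow-up step can load it into the agent's local Docker daemon before the image build; a sketch, assuming the agent has Docker available:
- task: PowerShell@2
  displayName: Load 3rdParty_file_name.tar into Docker
  inputs:
    targetType: 'inline'
    script: docker load --input .\3rdParty_file_name.tar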
In Amazon SageMaker, I'm trying to deploy a custom created Docker container with a Scikit-Learn model, but deploying keeps giving errors.
These are my steps:
On my local machine I created a script (script.py) and split the training and test data. The script contains a main section, accepts the parameters 'output-train-dir', 'model-dir', 'train' and 'test', and contains the functions model_fn, input_fn, output_fn and predict_fn.
Tested the script locally, which worked
python script.py --train . --test . --model-dir .
Created a Docker image based on the default Python image (Python 3.9) and pushed it to Amazon ECR; below are the commands I used:
> docker pull python
Created a Dockerfile containing:
FROM python:3.9
RUN pip3 install --no-cache scikit-learn numpy pandas joblib sagemaker-training
> docker build -t mymodel .
> aws ecr create-repository --repository-name mymodel
> docker tag 123456789012 123456789123.dkr.ecr.eu-central-1.amazonaws.com/mymodel
> docker push 123456789123.dkr.ecr.eu-central-1.amazonaws.com/mymodel
Uploaded the training and test data to s3 (mybucket)
Trained the script in local mode:
aws_sklearn = SKLearn(entry_point='script.py',
framework_version='0.23-1',
image_uri='123456789123.dkr.ecr.eu-central-1.amazonaws.com/mymodel',
instance_type='local',
role=role)
aws_sklearn.fit({'train': mybucket_train_path, 'test': mybucket_test_path, 'model-dir': mybucket_model_path})
which was successful
Next I trained on AWS
aws_sklearn = SKLearn(entry_point='script.py',
framework_version='0.23-1',
image_uri='123456789123.dkr.ecr.eu-central-1.amazonaws.com/mymodel',
instance_type='ml.m4.xlarge',
role=role)
aws_sklearn.fit({'train': mybucket_train_path, 'test': mybucket_test_path})
which was also successful (however, providing the model-dir parameter gave errors, so I omitted it).
Deploying, however, gave an error:
aws_sklearn_predictor = aws_sklearn.deploy(instance_type='ml.t2.medium',
initial_instance_count=1)
Error message:
UnexpectedStatusException: Error hosting endpoint
mymodel-2021-01-24-12-52-02-790: Failed. Reason: The primary
container for production variant AllTraffic did not pass the ping
health check. Please check CloudWatch logs for this endpoint..
And CloudWatch said:
AWS sagemaker exec: "serve": executable file not found in $PATH
I read somewhere that I should add RUN chmod +x /opt/program/serve to the Dockerfile, but in my local image there is no serve file present; this is something that SageMaker creates, right?
How or where should I add serve to the $PATH environment variable or grant execute rights to the serve script?
The serve file isn't something SageMaker creates automatically; it has to be part of your Docker container. This is technically true for the Estimator (training) job too: there should be a similar train file as well, but you override it by manually specifying an entry_point.
This page should help explain what SageMaker is actually trying to run when you run training and batch_transform jobs. That page references this repo which you can use as an example.
In short, if you want to continue to use your custom Docker container, you'll have to build in functionality for the serve command (see the additional scripts in the repo for launching the gunicorn server, which runs multiple instances of the Flask app) and add those files to your Dockerfile.
The RUN chmod +x /opt/program/serve command will also make more sense after you've added that serve command functionality.
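As a rough illustration (not the exact setup from the repo), the Dockerfile additions could look like the following, assuming you copy a serve script (plus the supporting nginx/gunicorn/Flask files from the example repo) into /opt/program:
COPY serve /opt/program/serve
RUN chmod +x /opt/program/serve
ENV PATH="/opt/program:${PATH}"
With /opt/program on the PATH, the docker run <image> serve invocation that SageMaker hosting issues can find and execute the script, which is what resolves the "executable file not found in $PATH" error.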
A team working on a project deployed a Docker image to a Cloud Run service. I do not have the Docker image, but I do have access to the Cloud Run service: I can see the logs and details. I would like to find the files that were in that Docker image. How can I access them? For example, the image contained a main.py file, and I now want to access it.
Thanks
Since Cloud Run uses Docker images, you can use GCP Cloud Shell (Cloud Shell already has Docker installed).
You also need to get the Container Registry image used by your Cloud Run service. To get it, follow these steps:
1.- Choose your service from the Cloud Run services list.
2.- On the service page, go to the Revisions tab.
3.- Click on the image URL.
4.- On the image details page, click on Show Pull Command; the image used will have the following format:
gcr.io/[image-name], for example gcr.io/cloudrun/hello:latest
In Cloud Shell, run the following command:
docker run -it --entrypoint sh {image-name}
For example:
docker run -it --entrypoint sh gcr.io/cloudrun/hello
This command will open a new shell within your Docker container (to exit, hit Ctrl+D). Run ls -lah to see the files within your Docker image; to see the content of any file, use the cat command.
*Your Google account used in the Google Cloud Console must have access to the Container Registry image.
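If you want to copy a file (such as the main.py mentioned above) out of the image rather than just view it, one alternative is docker create plus docker cp; a sketch, where the in-container path /app/main.py is only an assumption:
docker create --name tmp gcr.io/cloudrun/hello
docker cp tmp:/app/main.py ./main.py
docker rm tmp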
I need to deploy selenium/standalone-chrome image to docker.
The problem is that I use a corporate OpenShift with a private registry. There is no way to upload the image to the registry or load it through Docker (the Docker service is not exposed).
I managed to export a tar file from my local machine using 'docker save -o'. I uploaded this image to Artifactory as an artifact and can now download it.
Question: how can I create or import an image based on a binary archive with layers?
Thanks in advance.
Even though you're using OpenShift, you can still do a docker push, since the registry is exposed by default: you need your username (oc whoami) along with the token (oc whoami --show-token).
Before proceeding, make sure you have an Image Stream since it's mandatory in order to push images.
Once you have these, log in from your host:
docker login -u `oc whoami` -p `oc whoami --show-token` registry.your.openshift.fqdn.tld:443
Now, you just need to build your image
docker build . -t registry.your.openshift.fqdn.tld:443/your-image-stream/image-name:version
Finally, push it!
docker push registry.your.openshift.fqdn.tld:443/your-image-stream/image-name:version
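If you'd rather start from the .tar you already saved and uploaded to Artifactory instead of rebuilding, a sketch could look like this (the Artifactory URL, credentials, image-stream and image names are placeholders):
curl -u user:apikey -O https://artifactory.example.com/generic-repo/standalone-chrome.tar
docker load -i standalone-chrome.tar
docker tag selenium/standalone-chrome:latest registry.your.openshift.fqdn.tld:443/your-image-stream/standalone-chrome:latest
docker push registry.your.openshift.fqdn.tld:443/your-image-stream/standalone-chrome:latest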
I have a Java application in VSTS for which a build definition has been created to generate a number of build artifacts which include an ear file and a server configuration file. All of these build artifacts are zipped up in a final build definition task.
We now wish to create a Dockerfile which encapsulates the above build artifacts in another VSTS Docker build task. This will be done via a build definition command-line task, and it is worth pointing out that our target Docker registry is a corporate registry, not Azure.
The challenge I am now facing is how to generate the required docker image from the zipped artifact (or its contents if possible). Any ideas on how this could be achieved?
To generate a Docker image from a zip/tar file, you can use the docker load command:
$ docker load < test.tar.gz
Loaded image: test:latest
$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
test         latest   769b9341d937   7 weeks ago   2.489 MB
After that you can push the image to your private registry:
docker login <REGISTRY_HOST>:<REGISTRY_PORT>
docker tag <IMAGE_ID> <REGISTRY_HOST>:<REGISTRY_PORT>/<APPNAME>:<APPVERSION>
docker push <REGISTRY_HOST>:<REGISTRY_PORT>/<APPNAME>:<APPVERSION>
Example :
docker login repo.company.com:3456
docker tag 769b9341d937 repo.company.com:3456/test:0.1
docker push repo.company.com:3456/test:0.1
So, at the end of your build pipeline, add a Command Line task and run the above commands (changing the values to your zip file location, username, password, registry URL, etc.), as in the sketch below.
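A minimal inline script for that Command Line task might look like this, assuming the image archive ends up in $(Build.ArtifactStagingDirectory) and that the registry credentials are stored as pipeline variables (all names here are placeholders):
docker load < $(Build.ArtifactStagingDirectory)/test.tar.gz
docker login repo.company.com:3456 -u $(registryUser) -p $(registryPassword)
docker tag test:latest repo.company.com:3456/test:0.1
docker push repo.company.com:3456/test:0.1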
I'm currently faced with the task of signing our internally created docker images stored in an Artifactory docker repository on premise.
We have a target environment which (currently) has no access to the internet nor to our internal docker registry.
I've learned so far that enabling Docker Content Trust with
export DOCKER_CONTENT_TRUST=1
on the machine building the images is mandatory. As far as I understand the documentation, the procedure is:
Enable Docker Content Trust on the build client
Use docker push, which will generate the root and targets keys
Store the key(s) in a safe location
Upload the image to Artifactory
Is it correct that with step 2 the official Notary server is/must be used to verify that the image is indeed signed by our company?
I'm just wondering if our current deployment scenario can use docker content trust:
Store the image as myDockerImage.tar.gz (i.e. docker save <IMAGE_NAME>)
Copy the tar.gz file to the target machine
Use docker load -i <FILENAME>.tar.gz to import the image into the local image store on the target machine
docker run (<- must fail if the image is not signed by our key)
As already stated, the target machine has access neither to our infrastructure nor to the internet. Is it advisable to use Docker Content Trust for this "offline" scenario? Is there a key file that can be put on the target machine instead of having a connection to the Notary server?
After digging for a while I came to the conclusion that using
sha256sum <FILE(S)> > deployment-files.sha256
in combination with an openssl signature over deployment-files.sha256 is my best option, as sketched below.
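A sketch of that approach with openssl (key and file names are placeholders; the private key stays on the build side and only the public key is placed on the target machine):
# build side: hash the deployment file(s) and sign the hash file
sha256sum myDockerImage.tar.gz > deployment-files.sha256
openssl dgst -sha256 -sign signing-key.pem -out deployment-files.sha256.sig deployment-files.sha256
# target machine: verify the signature, then the hashes, then load
openssl dgst -sha256 -verify signing-key.pub -signature deployment-files.sha256.sig deployment-files.sha256
sha256sum -c deployment-files.sha256
docker load -i myDockerImage.tar.gz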
For a totally offline scenario you can't rely on Docker Content Trust.
You can associate the image repo digest (the sha256 you can use during docker pull) with the digest of a saved image file:
#!/bin/sh
set -e
IMAGE_TAG=alpine:3.14
IMAGE_HASH=sha256:a573d30bfc94d672abd141b3bf320b356e731e3b1a7d79a8ab46ba65c11d79e1
docker pull ${IMAGE_TAG}@${IMAGE_HASH}
# save the image id so the recipient can find the image after docker load
IMAGE_ID=$(docker image inspect ${IMAGE_TAG}@${IMAGE_HASH} -f {{.Id}})
echo ${IMAGE_ID} > ${IMAGE_TAG}.id
docker save -o ${IMAGE_TAG} ${IMAGE_TAG}@${IMAGE_HASH}
sha256sum ${IMAGE_TAG} ${IMAGE_TAG}.id > ${IMAGE_TAG}.sha256
You now have an image that can be used with docker load, and a digest for that image that you can verify at the receiver (assuming you trust how you convey the digest file to them, i.e. you can sign it).
The recipient of the image files can then do something like:
sha256sum -c alpine:3.14.sha256
docker load -i alpine:3.14
docker inspect $(cat alpine:3.14.id)