Serverless deploy error - unable to find image locally - serverless

I am following this tutorial on using Python packages with Serverless. Everything was working until I ran serverless deploy, at which point I got the error below. Does anyone know how to remedy this?
Other info:
OS: Windows 10
WSL Distro: Ubuntu (WSL 2)
Serverless: Generated requirements from C:\Users\path-to-project\numpy-test\requirements.txt in C:\Users\path-to-project\numpy-test\.serverless\requirements.txt...
Serverless: Installing requirements from C:\Users\myname\AppData\Local\UnitedIncome\serverless-python-requirements\Cache\e1e710a0b480eb4e7e39fca7f1ff66fff4b6f5d572ded1d71d5082f9afe1de06_slspyc\requirements.txt ...
Serverless: Docker Image: lambci/lambda:build-python3.6
Serverless: Using download cache directory C:\Users\myname\AppData\Local\UnitedIncome\serverless-python-requirements\Cache\downloadCacheslspyc
Serverless: Running docker run --rm -v C\:/Users/myname/AppData/Local/UnitedIncome/serverless-python-requirements/Cache/e1e710a0b480eb4e7e39fca7f1ff66fff4b6f5d572ded1d71d5082f9afe1de06_slspyc\:/var/task\:z -v C\:/Users/myname/AppData/Local/UnitedIncome/serverless-python-requirements/Cache/downloadCacheslspyc\:/var/useDownloadCache\:z -u 0 lambci/lambda\:build-python3.6 python -m pip install -t /var/task/ -r /var/task/requirements.txt --cache-dir /var/useDownloadCache...
Error ---------------------------------------------------
Error: STDOUT:
STDERR: Unable to find image 'lambci/lambda:build-python3.6' locally
build-python3.6: Pulling from lambci/lambda
832a9fa6947e: Pulling fs layer
fe6bfd165af8: Pulling fs layer
c61e272c0488: Pulling fs layer
7022c87fa044: Pulling fs layer
ab51495a619c: Pulling fs layer
28e9e78ca9d3: Pulling fs layer
6b8ec334143c: Pulling fs layer
7022c87fa044: Waiting
6b8ec334143c: Waiting
ab51495a619c: Waiting
28e9e78ca9d3: Waiting
c61e272c0488: Verifying Checksum
c61e272c0488: Download complete
7022c87fa044: Verifying Checksum
7022c87fa044: Download complete
ab51495a619c: Verifying Checksum
ab51495a619c: Download complete
28e9e78ca9d3: Download complete
6b8ec334143c: Download complete
832a9fa6947e: Verifying Checksum
832a9fa6947e: Download complete
fe6bfd165af8: Download complete
832a9fa6947e: Pull complete
fe6bfd165af8: Pull complete
c61e272c0488: Pull complete
7022c87fa044: Pull complete
ab51495a619c: Pull complete
28e9e78ca9d3: Pull complete
6b8ec334143c: Pull complete
Digest: sha256:9b1cea555bfed62d1fc9e9130efa9842ee144ef02e2a6a266f1c9e6adeb0866f
Status: Downloaded newer image for lambci/lambda:build-python3.6
ERROR: Could not find a version that satisfies the requirement numpy==1.21.2
ERROR: No matching distribution found for numpy==1.21.2
WARNING: You are using pip version 21.0; however, version 21.2.4 is available.
You should consider upgrading via the '/var/lang/bin/python -m pip install --upgrade pip' command.

From PyPI, numpy==1.21.2 requires the Python version to be >=3.7, <3.11.
You should either upgrade your Lambda build image to lambci/lambda:build-python3.8 or downgrade numpy to numpy==1.19.5, which supports Python 3.6.
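If you take the upgrade route, this is a one-line change in serverless.yml, since serverless-python-requirements picks its build image from the declared runtime. A minimal sketch (the service name here is a placeholder):

```yaml
# Hypothetical serverless.yml fragment: raising the runtime makes the
# plugin build with lambci/lambda:build-python3.8, where pip can
# resolve numpy==1.21.2 (Requires-Python >=3.7).
service: numpy-test

provider:
  name: aws
  runtime: python3.8

plugins:
  - serverless-python-requirements
```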


Environment variable $ydb_dist (/opt/yottadb/current) could not be verified against the executables path (/opt/yottadb/current/yottadb)

I'm getting this error when pulling and running the yottadb/yottadb-debian:latest-master Docker image. I'm using the one-liner for Docker from the vendor's site, with no success.
Is this result expected based on my warning message? Is there something I need to do differently?
% docker run -it --rm -v $(pwd)/ydb-data:/data yottadb/yottadb-debian:latest-master
Unable to find image 'yottadb/yottadb-debian:latest-master' locally
latest-master: Pulling from yottadb/yottadb-debian
e756f3fdd6a3: Pull complete
46aff8aeff03: Pull complete
85c3e3e2f9eb: Pull complete
148d9d91d050: Pull complete
696701bd209c: Pull complete
650e51801ed7: Pull complete
e152d63a4881: Pull complete
Digest: sha256:2455efef59cf561bb1b97e8ede571a0b4533390754c4fa74b51e27b41a0a18b8
Status: Downloaded newer image for yottadb/yottadb-debian:latest-master
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
Error file is at /tmp/ydb_env_1_u6fTQt/err
%YDB-E-YDBDISTUNVERIF, Environment variable $ydb_dist (/opt/yottadb/current) could not be verified against the executables path (/opt/yottadb/current/yottadb)
Sourcing /opt/yottadb/current/ydb_env_set returned status 253
YottaDB Docker images are all x86_64 and won't run on ARM64.
However, YottaDB supports ARM64 (aka AArch64) on Debian. You have to install it manually using the ydbinstall script. Post back if you need more help.
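A rough sketch of that manual install, assuming the ydbinstall.sh download URL from YottaDB's getting-started docs (verify against the current instructions before running):

```shell
# On ARM64 Debian: fetch and run the YottaDB installer script
wget https://download.yottadb.com/ydbinstall.sh
chmod +x ydbinstall.sh
sudo ./ydbinstall.sh --utf8   # --utf8 is optional; see ./ydbinstall.sh --help
```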

Trouble Building and Deploying a Docker Container to Cloud Run

I've had a couple of Cloud Run services live for several months. However, when I attempted to make some updates to a service yesterday, the scripts I have been using since the beginning suddenly stopped working.
gcloud builds submit
I've been using the following command to build my node/npm project via the remote docker container:
gcloud builds submit --tag gcr.io/PROJECT_ID/generator
I have a Dockerfile and .dockerignore in the same directory as my package.json, from where I run this script. However, yesterday I suddenly started getting an error saying that a Dockerfile is required when using the --tag parameter, and the image would not build.
Tentative Solution
After some research, I tried moving my build config into a gcloudbuild-staging.json, which looks like this:
{
  "steps": [
    {
      "name": "gcr.io/cloud-builders/docker",
      "args": [
        "build",
        "-t",
        "gcr.io/PROJECT_ID/generator",
        "."
      ]
    }
  ]
}
And I've changed my build script to:
gcloud builds submit --config=./gcloudbuild-staging.json
After doing this, the container builds, as far as I can tell. The console output looks like this:
------------------------------------------------- REMOTE BUILD OUTPUT --------------------------------------------------
starting build "8ca1af4c-d337-4349-959f-0000577e4528"
FETCHSOURCE
Fetching storage object: gs://PROJECT_ID/source/1650660913.623365-8a689bcf007749b7befa6e21ab9086dd.tgz#1650660991205773
Copying gs://PROJECT_ID/source/1650660913.623365-8a689bcf007749b7befa6e21ab9086dd.tgz#1650660991205773...
/ [0 files][ 0.0 B/ 22.2 MiB]
/ [1 files][ 22.2 MiB/ 22.2 MiB]
-
Operation completed over 1 objects/22.2 MiB.
BUILD
Already have image (with digest): gcr.io/cloud-builders/docker
Sending build context to Docker daemon 785.4kB
Step 1/6 : FROM node:14-slim
14-slim: Pulling from library/node
8bd3f5a20b90: Pulling fs layer
3a665e454db5: Pulling fs layer
11fcaa1377c4: Pulling fs layer
bf0a7233d366: Pulling fs layer
0d4d73621610: Pulling fs layer
bf0a7233d366: Waiting
0d4d73621610: Waiting
3a665e454db5: Verifying Checksum
3a665e454db5: Download complete
bf0a7233d366: Verifying Checksum
bf0a7233d366: Download complete
8bd3f5a20b90: Verifying Checksum
8bd3f5a20b90: Download complete
0d4d73621610: Verifying Checksum
0d4d73621610: Download complete
11fcaa1377c4: Verifying Checksum
11fcaa1377c4: Download complete
8bd3f5a20b90: Pull complete
3a665e454db5: Pull complete
11fcaa1377c4: Pull complete
bf0a7233d366: Pull complete
0d4d73621610: Pull complete
Digest: sha256:9ea3dfdff723469a060d1fa80577a090e14ed28157334d649518ef7ef8ba5b9b
Status: Downloaded newer image for node:14-slim
---> 913d072dc4d9
Step 2/6 : WORKDIR /usr/src/app
---> Running in 96bc104b9501
Removing intermediate container 96bc104b9501
---> 3b1b05ea0470
Step 3/6 : COPY package*.json ./
---> a6eca4a75ddd
Step 4/6 : RUN npm ci --only=production
---> Running in 7e870db13a9b
> protobufjs#6.11.2 postinstall /usr/src/app/node_modules/protobufjs
> node scripts/postinstall
added 237 packages in 7.889s
Removing intermediate container 7e870db13a9b
---> 6a86cc961a09
Step 5/6 : COPY . ./
---> 9e1f0f7a69a9
Step 6/6 : CMD [ "node", "index.js" ]
---> Running in d1b4d054a974
Removing intermediate container d1b4d054a974
---> 672075ef5897
Successfully built 672075ef5897
Successfully tagged gcr.io/PROJECT_ID/generator:latest
PUSH
DONE
------------------------------------------------------------------------------------------------------------------------
ID CREATE_TIME DURATION SOURCE IMAGES STATUS
8ca1af4c-d337-4349-959f-0000577e4528 2022-04-22T20:56:31+00:00 31S gs://PROJECT_ID/source/1650660913.623365-8a689bcf007749b7befa6e21ab9086dd.tgz - SUCCESS
There are no errors in the online logs.
gcloud run deploy
Here is the code I use to deploy the container:
gcloud run deploy generator --image gcr.io/PROJECT_ID/generator --region=us-central1 --set-env-vars ENVIRONMENT=DEV
The console output for this is:
Deploying container to Cloud Run service [generator] in project [PROJECT_ID] region [us-central1]
✓ Deploying... Done.
✓ Creating Revision...
✓ Routing traffic...
Done.
Service [generator] revision [generator-00082-kax] has been deployed and is serving 100 percent of traffic.
Service URL: https://generator-SERVICE_ID-uc.a.run.app
No errors in the run console, either. It shows the deployment as if everything is fine.
The Problem
Nothing is changing. Locally, running this service with the front-end app which accesses it produces successful results. However, my staging version of the app hosted on Firebase is still acting as if the old version of the code is active.
What I've Tried
I've made sure I'm testing and deploying on the same git branch
I've done multiple builds and deployments in case there was some kind of fluke.
I've tried using the gcloud command to update a service's traffic to its latest revision
I've made sure my client app is using the correct service URL. It doesn't appear to have changed but I copy/pasted it anyway just in case
My last successful deployment was on March 19, 2022. Since then, the only thing I've done is update all my WSL Linux apps, which would include gcloud. I couldn't tell what version I was on before, but I'm now on 38.0.0 of the Google Cloud CLI.
I've tried searching for my issue but nothing relevant is coming up. I'm totally stumped as to why all of this has stopped working and I'm receiving no errors whatsoever. Any suggestions or further info I can provide?
gcloud builds submit should (!?) continue to work with --tag as long as there is a Dockerfile in the folder from which you're running the command or you explicitly specify a source folder.
I'm not disputing that you received an error but it would be helpful to see the command you used and the error that resulted. You shouldn't have needed to switch to a build config file. Although that isn't the problem.
Using latest as a tag value is challenging. The term suggests that the latest version of a container image will be used but this is often not what happens. It is particularly challenging when a service like Cloud Run is running an image tagged latest and a developer asks the service to run -- what the developer knows (!) is a different image -- but also tagged latest.
As far as most services are concerned, same tag means same image and so it's possible (!) either that Cloud Run is not finding a different image or you're not providing it with a different image. I'm unclear which alternative is occurring but I'm confident that your use of latest is causing some of your issues.
So, for starters, please consider using a system in which, every time you create a new container image, you tag it with a unique identifier. A common way to do this is to use a commit hash (as these change with every commit). Alternatively, you can use the container's digest (instead of a tag) to reference an image version. This requires image references of the form {IMG}@sha256:{HASH}.
Lastly, gcloud run now supports (and may always have supported) deploying from a source folder to a running service: it runs the Cloud Build process for you and deploys the result to Cloud Run. It may be worth using this flow to reduce your steps and thereby the possibility of error.
See: Deploying from source code
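Putting the unique-tag suggestion together with the commands from the question, a hypothetical build-and-deploy sequence might look like this (PROJECT_ID, service name, and region are taken from the question):

```shell
# Tag each build with the current commit hash so every deployment
# references a genuinely new image, avoiding ':latest' ambiguity.
TAG="$(git rev-parse --short HEAD)"
gcloud builds submit --tag "gcr.io/PROJECT_ID/generator:${TAG}"
gcloud run deploy generator \
  --image "gcr.io/PROJECT_ID/generator:${TAG}" \
  --region=us-central1 \
  --set-env-vars ENVIRONMENT=DEV
```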

How to fix this annoying docker error? (failed to register layer)

This error appears when I try to pull any Docker image.
This is a fresh installation of Docker on kernel 5.0.21-rt14-MANJARO.
Unable to find image 'ubuntu:16.04' locally
16.04: Pulling from library/ubuntu
35b42117c431: Extracting [==================================================>] 43.84MB/43.84MB
ad9c569a8d98: Download complete
293b44f45162: Download complete
0c175077525d: Download complete
docker: failed to register layer: Error processing tar file(exit status 1): Error cleaning up after pivot: remove /.pivot_root336598748: device or resource busy.
See 'docker run --help'.
I had the same error with the 5.0.x kernel. Switching back to 4.19.59-1-MANJARO solved the problem...
EDIT:
you might try:
sudo tee /etc/modules-load.d/loop.conf <<< "loop"
sudo modprobe loop
then reboot and try again.
I'm now on 5.2.4-1-MANJARO
and everything works.
I followed these instructions here:
https://linuxhint.com/docker_arch_linux/
Yes, the problem is with your kernel version. I installed the version 5.2.4 and works very well.
Version with problem: 5.0.21

Suse Linux docker file

I have a SUSE Linux 12 EC2 instance. I have activated an image, sles11sp3-docker-image, using sledocker. In the Dockerfile, when I try to install IBM Java 1.6 using
RUN zypper in java-1_6_0-ibm, I get the following error.
Refreshing service 'container-suseconnect'.
Problem retrieving the repository index file for service 'container-suseconnect':
[|]
Skipping service 'container-suseconnect' because of the above error.
Warning: No repositories defined. Operating only with the installed resolvables. Nothing can be installed.
Loading repository data...
Reading installed packages...
'java-1_6_0-ibm' not found in package names. Trying capabilities.
Resolving package dependencies...
No provider of 'java-1_6_0-ibm' found.
Nothing to do.
The command '/bin/sh -c zypper in java-1_6_0-ibm' returned a non-zero code: 104
Please help
According to the docs (https://www.suse.com/documentation/sles-12/singlehtml/dockerquick/dockerquick.html), running zypper ref -s only gets you repo URLs with 12-hour tokens. Moreover, this command only appears to work while running in Docker on a SLES12 host.
Once I push the image to a repo and run it on another host, zypper ref -s no longer works (same error as yours). I'm basically stuck pre-installing all the base stuff before I publish the image.
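In other words, the workable pattern is to run the build on a registered SLES12 host, where container-suseconnect can supply repository credentials, and bake everything in at build time. A rough Dockerfile sketch (the base image name is an assumption; the package name comes from the question):

```dockerfile
# Must be built on a registered SLES12 host so that the
# container-suseconnect service can provide repository tokens.
FROM suse/sles12
RUN zypper --non-interactive refresh --services && \
    zypper --non-interactive install java-1_6_0-ibm
```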

Hashes in `docker pull wordpress`

Locally, I just ran docker pull wordpress:
$docker pull wordpress
Using default tag: latest
latest: Pulling from library/wordpress
7268d8f794c4: Already exists
a3ed95caeb02: Download complete
38331772e700: Pull complete
74507bbf90f9: Downloading [=========> ] 13.47 MB/69.26 MB
c6734ca38ed8: Download complete
616f76e75b9d: Download complete
763f79680cbb: Download complete
e70b2d142af2: Download complete
62012af41161: Download complete
33a120b6dfa1: Download complete
ea474957253d: Download complete
757eabb832b4: Downloading [=============> ] 8.518 MB/31.61 MB
286426d94368: Download complete
cde52c0a5f98: Download complete
7c925ca09be1: Download complete
7c4e1930593c: Downloading [============> ] 1.127 MB/4.443 MB
9c4eeb87aed8: Waiting
e13c8ae5c7d1: Waiting
730edfa5d07f: Waiting
The Using default tag: latest is self-explanatory. But, it's not clear to me what all of those hashes, e.g. c6734ca38ed8 and a3ed95caeb02, represent.
Could you please explain?
Those are (truncated) sha256 hashes of the layers the Docker image depends on.
Docker images are based on "layers", just like aufs or OverlayFS.
So, when you pull something, Docker needs all the layers the image depends on, which in a nutshell are just the differences between "commits".
You can inspect them using docker images -a to print all available layers.
(Example image: the layers printed by docker pull mongo.)
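To see those digests in full for an image you've pulled, two standard Docker commands help (shown here for the wordpress image from the question):

```shell
# Full sha256 digests of the layers in the image's filesystem
docker inspect --format '{{json .RootFS.Layers}}' wordpress

# The build steps behind each layer, with their sizes
docker history wordpress
```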
