Ansible Container unable to load libsudo_util.so.0 for privileged module execution - docker

I am trying to use Ansible Container for a basic example which is supposed to install Node within the image. When I use the ansible-container build command, after successfully building the conductor image, it fails the first task with a sudo related error. The task in question requires root privileges to be executed.
I am running Debian GNU/Linux 9.2 (stretch) with Docker 17.09.0-ce installed through the Docker APT repository. I tried Ansible both from Debian Stretch (2.2.1.0-2) and from PyPI (2.4.1.0). I tried Ansible Container from PyPI (0.9.3rc0) and from the latest Git source. I always get the exact same error output.
The Ansible module complains about the following:
sudo: error while loading shared libraries: libsudo_util.so.0: cannot open shared object file: No such file or directory
The task being run looks like the following:
- name: Add the Node Source repository signing certificate to APT
  apt_key:
    id: 9FD3B784BC1C6FC31A8A0A1C1655A0AB68576280
    keyserver: hkps://hkps.pool.sks-keyservers.net
  become: yes
Both the conductor and the service I try to create use the debian:stretch base image.
I am running the ansible-container build command with sudo prepended, because only root may access the Docker socket on my system.
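For reference, the exact invocation is:
sudo ansible-container build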
Here is the content of my container.yml:
version: "2"
settings:
  conductor:
    base: debian:stretch
  project_name: container_test
services:
  nodejs:
    from: debian:stretch
    roles:
      - nodejs
registries: {}
Here is the full error output:
Building Docker Engine context...
Starting Docker build of Ansible Container Conductor image (please be patient)...
Parsing conductor CLI args.
Docker™ daemon integration engine loaded. Build starting. project=container_test
Building service... project=container_test service=nodejs
PLAY [nodejs] ******************************************************************
TASK [Gathering Facts] *********************************************************
ok: [nodejs]
TASK [nodejs : Add the Node Source repository signing certificate to APT] ******
fatal: [nodejs]: FAILED! => {"changed": false, "module_stderr": "sudo: error while loading shared libraries: libsudo_util.so.0: cannot open shared object file: No such file or directory\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 127}
to retry, use: --limit @/tmp/tmpTRBQDe/playbook.retry
PLAY RECAP *********************************************************************
nodejs : ok=1 changed=0 unreachable=0 failed=1
ERROR Error applying role! engine=<container.docker.engine.Engine object at 0x7f84da0c5ed0> exit_code=2 playbook=[{'hosts': u'nodejs', 'roles': ['nodejs'], 'vars': {}}]
Traceback (most recent call last):
File "/usr/local/bin/conductor", line 11, in <module>
load_entry_point('ansible-container', 'console_scripts', 'conductor')()
File "/_ansible/container/__init__.py", line 19, in __wrapped__
return fn(*args, **kwargs)
File "/_ansible/container/cli.py", line 408, in conductor_commandline
**params)
File "/_ansible/container/__init__.py", line 19, in __wrapped__
return fn(*args, **kwargs)
File "/_ansible/container/core.py", line 843, in conductorcmd_build
raise RuntimeError('Build failed.')
RuntimeError: Build failed.
Conductor terminated. Cleaning up. command_rc=1 conductor_id=e8899239ad1017a89acc97396d38ab805d937a7b3d74e5a7d2741d7b1124bb0c save_container=False
ERROR Conductor exited with status 1

I found the cause for this:
The base image debian:stretch does not include sudo. Therefore a solution was to set the become_method to su instead. Normally, this can be done per task, host, or playbook. Because it is not obvious where the equivalent of a playbook lies in Ansible Container and how the concept of hosts applies, the task level was really my only option.
I decided to add a role which installs sudo in the image and to set the become_method to su only for the tasks within this role. After the role is applied, no further change is needed and the original tasks work.
A follow-up problem was that GnuPG was not installed either, which the apt_key module needs to do its job. The following solution solves both problems:
- become: yes
  become_method: su
  block:
    - name: Update package cache
      apt:
        update_cache: yes
        cache_valid_time: 86400
    - name: Install GnuPG
      package:
        name: gnupg2
        state: present
    - name: Install sudo
      package:
        name: sudo
        state: present
Presumably a cleaner option would be to build a base image that already includes these dependencies, allowing for a more streamlined Ansible Container image build.
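As a sketch of that cleaner option (not part of the original project, just illustrating the idea), a minimal Dockerfile for such a base image could look like this:

FROM debian:stretch
# Pre-install the tools the roles above otherwise have to bootstrap themselves
RUN apt-get update \
    && apt-get install -y --no-install-recommends sudo gnupg2 \
    && rm -rf /var/lib/apt/lists/*

An image built from this (tagged, say, debian-stretch-ansible, a name picked here purely for illustration) could then replace debian:stretch in both the conductor base and the service from fields of container.yml.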

Related

GCP deploy error: Creating Revision interrupted

While deploying to GCP using Terraform, we have the following section in our YAML file:
# Deploy to Cloud Run
- id: 'deploy'
  name: 'gcr.io/cloud-builders/gcloud'
  waitFor: ['build-image','push-image']
  entrypoint: bash
  args:
    - '-c'
    - |
      gcloud beta run services update $_SERVICE_NAME \
        '--platform=$_PLATFORM' \
        '--image=$_IMAGE_NAME' \
        --labels=managed-by=gcp-cloud-build-deploy-cloud-run,commit-sha=$COMMIT_SHA,gcb-build-id=$BUILD_ID \
        '--region=$_GCP_REGION' \
        --tag '$_REVISION' \
We set waitFor on all previous dependencies for testing. We are getting this error:
Starting Step #3 - "deploy"
Step #3 - "deploy": Already have image (with digest): gcr.io/cloud-builders/gcloud
Step #3 - "deploy": Deploying...
Step #3 - "deploy": Creating Revision......interrupted
Step #3 - "deploy": Deployment failed
Step #3 - "deploy": ERROR: (gcloud.beta.run.services.update) Revision my-service-ui-00007-doq is not ready.
Finished Step #3 - "deploy"
ERROR
ERROR: build step 3 "gcr.io/cloud-builders/gcloud" failed: step exited with non-zero status: 1
We have tried playing around with the command line and that is showing the same error. We tried several times, so this is not a random error.
Any idea what could cause this? Or any clue how we could investigate it?
I've experienced that if a non-ready revision has a traffic tag assigned (as can be the case when tagging the revision during e.g. deploy), subsequent deploys might fail. So it's best to go through the UI and remove all traffic tags from revisions which are not green. Maybe that fixes the problem; it's just a guess that this could be the cause.
Remove that revision and tag from the YAML file and try again.
Removing them from the YAML will help because, when updating traffic, 0% traffic was being assigned to every revision specified in the traffic field. Since this revision is not ready, any operation that assigns traffic to it (even 0%) causes this error.
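If you prefer the command line over the UI for the cleanup, something along these lines should drop a traffic tag from the service (a sketch reusing the substitution variables from the build step above; exact flag support may depend on your gcloud version):

gcloud run services update-traffic $_SERVICE_NAME \
  --remove-tags=$_REVISION \
  --region=$_GCP_REGION \
  --platform=managed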

Gradle Docker Fail with ERROR: lstat /var/lib/docker/tmp/buildkit-mount145682111/build/libs: no such file or directory

I am creating a simple Spring Boot application named channeling and trying to build a Docker image using Gradle.
Here is my build.gradle script
plugins {
    id 'org.springframework.boot' version '2.2.2.RELEASE'
    id 'com.palantir.docker' version '0.25.0'
}

docker {
    name "${project.name}:${project.version}"
    files 'channeling.jar'
    tag 'DockerHub', "test-usr/test:${project.version}"
}

version = '0.0.1-SNAPSHOT'
And this is my Dockerfile
FROM openjdk:8-jdk-alpine
ARG JAR_FILE=build/libs/*.jar
COPY ${JAR_FILE} channeling.jar
ENTRYPOINT ["java","-jar","/channeling.jar"]
When I build the image using the command below, it works fine.
docker build -t springio/test .
But when I run gradle docker, it gives the error message below.
#6 [2/2] COPY build/libs/*.jar channeling.jar
#6 sha256:73a6a8447f65c5bb42b12cceabb3dfa40d4f67e73c569b617331cfdcb9a6a963
#6 ERROR: lstat /var/lib/docker/tmp/buildkit-mount145682111/build/libs: no such file or directory
------
> [2/2] COPY build/libs/*.jar channeling.jar:
------
lstat /var/lib/docker/tmp/buildkit-mount145682111/build/libs: no such file or directory
> Task :channel:docker FAILED
FAILURE: Build failed with an exception
The project structure is as below.
I am using the following Gradle and Docker versions:
gradle : 6.8
docker : 20.10.2
I searched for this issue online but didn't find a solution. Please help me resolve this.
I had a similar issue. I fixed it by avoiding the 'docker' function in the Gradle build. Instead, I added the following to build.gradle.kts:
configure<com.palantir.gradle.docker.DockerExtension> {
    dependsOn(tasks.findByPath("build"))
    name = "${project.name}:${version}"
    files("build/libs/${project.name}-${version}.jar")
    buildArgs(mapOf("JAR_FILE" to "${project.name}-${version}.jar"))
}
Also, make sure you include the dependsOn call, as the Docker task has to execute after build.
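For the Groovy build.gradle shown in the question, a rough equivalent of that Kotlin snippet might look like the following (an untested sketch; it assumes the Spring Boot plugin produces build/libs/<project.name>-<version>.jar):

docker {
    // Make sure the jar exists before the Docker build context is staged
    dependsOn tasks.findByPath('build')
    name "${project.name}:${project.version}"
    files "build/libs/${project.name}-${project.version}.jar"
    buildArgs(['JAR_FILE': "${project.name}-${project.version}.jar"])
}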

Ansible - Docker's repository key is missing after using apt module

I'm not sure this issue is related to Ansible or Docker, but here it goes:
I can list Docker's repository key with apt-key:
/etc/apt/trusted.gpg.d/docker-key.gpg
-------------------------------------
pub rsa4096 2017-02-22 [SCEA]
9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88
uid [ unknown] Docker Release (CE deb) <docker@docker.com>
sub rsa4096 2017-02-22 [S]
I can connect to the Docker repo with it, download everything and so on.
However after I run the following Ansible code, this key just vanishes and I have to set it up again or else I'm unable to connect to the repository.
- name: Update apt cache
  apt:
    update_cache: yes
    cache_valid_time: 3600

- name: Update packages
  apt:
    upgrade: safe
I'm sure the key disappears after the code snippet above. I would like to know why or if I'm overlooking anything.
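One way to narrow this down (a diagnostic sketch, not a fix) would be to record the key list right before and right after the upgrade and compare the two:

- name: Record APT keys before the upgrade
  command: apt-key list
  register: keys_before
  changed_when: false

- name: Update packages
  apt:
    upgrade: safe

- name: Record APT keys after the upgrade
  command: apt-key list
  register: keys_after
  changed_when: false

- name: Show keys that disappeared during the upgrade
  debug:
    msg: "{{ keys_before.stdout_lines | difference(keys_after.stdout_lines) }}"

If the difference is non-empty, the upgrade task itself is removing the key; otherwise something else on the host is.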

How could we debug a Docker container whose status is unhealthy?

We are following this tutorial to be able to use the Data Platform IRIS:
https://github.com/es-comunidad-intersystems/webinar-gestion-apis
We have found an issue: it looks like the IRIS version requested in the tutorial is no longer available on the download page.
We have downloaded the closest version which is:
InterSystems IRIS
2019.4
Then we have tried to follow the steps:
docker load -i iris-2019.4.0.383.0-docker.tar.gz
It outputs:
Loaded image: intersystems/iris:2019.4.0.383.0
Then we have downloaded the webinar code:
git clone https://github.com/es-comunidad-intersystems/webinar-gestion-apis.git
After that we tried to build the Docker image as follows:
docker build . --tag webinar-gestion-apis:stable --no-cache
And we have seen the output:
Sending build context to Docker daemon 754.2kB
Step 1/9 : FROM intersystems/iris:2019.3.0.302.0
pull access denied for intersystems/iris, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
So then, we thought this issue was related to the Dockerfile, because it had the following line:
# building from the InterSystems IRIS
FROM intersystems/iris:2019.3.0.302.0
To adjust it to the version we have downloaded, we wrote:
# building from the InterSystems IRIS
FROM intersystems/iris:2019.4.0.383.0
After that, we ran:
docker build . --tag webinar-gestion-apis:stable --no-cache
And it built the image correctly; the output was:
Sending build context to Docker daemon 758.8kB
Step 1/9 : FROM intersystems/iris:2019.4.0.383.0
---> 46e2532c2583
Step 2/9 : USER root
---> Running in 3de765837aa5
Removing intermediate container 3de765837aa5
---> 35a7d04b1f5a
Step 3/9 : RUN mkdir -p /opt/webinar/install
---> Running in 1b1690dff84f
Removing intermediate container 1b1690dff84f
---> 64d42f352bb9
Step 4/9 : COPY install /opt/webinar/install
---> 2710ae3d8265
Step 5/9 : RUN mkdir -p /opt/webinar/src
---> Running in 12ccd30d880b
Removing intermediate container 12ccd30d880b
---> c2e5d7dff819
Step 6/9 : COPY src /opt/webinar/src/
---> 943d888243a9
Step 7/9 : RUN chown -R ${ISC_PACKAGE_MGRUSER}:${ISC_PACKAGE_IRISGROUP} /opt/webinar
---> Running in 57a5b34bbf70
Removing intermediate container 57a5b34bbf70
---> a8629b4948a0
Step 8/9 : USER irisowner
---> Running in 93d6814d7452
Removing intermediate container 93d6814d7452
---> 4e9faf862ebe
Step 9/9 : RUN iris start iris && printf 'zn "USER" \n do $system.OBJ.Load("/opt/webinar/src/Webinar/Installer.cls","c")\n do ##class(Webinar.Installer).Run()\n zn "%%SYS"\n do ##class(SYS.Container).QuiesceForBundling()\n h\n' | irissession IRIS && iris stop iris quietly
---> Running in 2e28a60b29b4
Using 'iris.cpf' configuration file
This copy of InterSystems IRIS has been licensed for use exclusively by:
Local license key file not found.
Copyright (c) 1986-2019 by InterSystems Corporation
Any other use is a violation of your license agreement
1 alert(s) during startup. See messages.log for details.
Starting IRIS
Node: 2e28a60b29b4, Instance: IRIS
USER>
USER>
Load started on 06/13/2020 08:18:52
Loading file /opt/webinar/src/Webinar/Installer.cls as udl
Compiling class Webinar.Installer
Compiling routine Webinar.Installer.1
Load finished successfully.
USER>
START INSTALLER
2020-06-13 08:18:58 0 Webinar.Installer: Installation starting at 2020-06-13 08:18:58, LogLevel=0
2020-06-13 08:18:58 0 : Creating namespace WEBINAR
Load of directory started on 06/13/2020 08:19:08
Loading file /opt/webinar/src/Webinar/Installer.cls as udl
Loading file /opt/webinar/src/Webinar/API/Leaderboard/v1/impl.cls as udl
Loading file /opt/webinar/src/Webinar/API/Leaderboard/v1/spec.cls as udl
Loading file /opt/webinar/src/Webinar/Data/Player.cls as udl
Loading file /opt/webinar/src/Webinar/Data/Team.cls as udl
Compilation started on 06/13/2020 08:19:08 with qualifiers 'cuk'
Compiling 5 classes, using 2 worker jobs
Compiling class Webinar.API.Leaderboard.v1.impl
Compiling class Webinar.API.Leaderboard.v1.spec
Compiling class Webinar.Data.Player
Compiling class Webinar.Installer
Compiling class Webinar.Data.Team
Compiling table Webinar_Data.Player
Compiling table Webinar_Data.Team
Compiling routine Webinar.API.Leaderboard.v1.impl.1
Compiling routine Webinar.Data.Team.1
Compiling routine Webinar.Installer.1
Compiling routine Webinar.Data.Player.1
Compiling class Webinar.API.Leaderboard.v1.impl
Compiling class Webinar.API.Leaderboard.v1.disp
Compiling routine Webinar.API.Leaderboard.v1.impl.1
Compiling routine Webinar.API.Leaderboard.v1.disp.1
Compilation finished successfully in 0.470s.
Load finished successfully.
Load started on 06/13/2020 08:19:09
Loading file /opt/webinar/install/WebTerminal-v4.9.0.xml as xml
Imported class: WebTerminal.Analytics
Imported class: WebTerminal.Autocomplete
Imported class: WebTerminal.Common
Imported class: WebTerminal.Core
Imported class: WebTerminal.Engine
Imported class: WebTerminal.ErrorDecomposer
Imported class: WebTerminal.Handlers
Imported class: WebTerminal.Installer
Imported class: WebTerminal.Router
Imported class: WebTerminal.StaticContent
Imported class: WebTerminal.Trace
Imported class: WebTerminal.Updater
Compiling 12 classes, using 2 worker jobs
Compiling class WebTerminal.Analytics
Compiling class WebTerminal.ErrorDecomposer
Compiling class WebTerminal.Common
Compiling class WebTerminal.StaticContent
Compiling class WebTerminal.Handlers
Compiling class WebTerminal.Updater
Compiling class WebTerminal.Autocomplete
Compiling class WebTerminal.Core
Compiling class WebTerminal.Trace
Compiling class WebTerminal.Router
Compiling class WebTerminal.Engine
Compiling routine WebTerminal.Analytics.1
Compiling routine WebTerminal.ErrorDecomposer.1
Compiling routine WebTerminal.Common.1
Compiling routine WebTerminal.StaticContent.1
Compiling routine WebTerminal.Updater.1
Compiling routine WebTerminal.Handlers.1
Compiling routine WebTerminal.Core.1
Compiling routine WebTerminal.Router.1
Compiling routine WebTerminal.Trace.1
Compiling routine WebTerminal.Autocomplete.1
Compiling routine WebTerminal.Engine.1
Compiling class WebTerminal.Installer
Compiling routine WebTerminal.Installer.1
Installing WebTerminal application to WEBINAR
Creating WEB application "/terminal"...
WEB application "/terminal" is created.
Assigning role %DB_IRISSYS to a web application; resulting roles: :%DB_IRISSYS:%DB_USER
Creating WEB application "/terminalsocket"...
WEB application "/terminalsocket" is created.
%All namespace is created.
Mapping %WebTerminal package into all namespaces: %All
WebTerminal package successfully mapped into all namespaces.
Load finished successfully.
2020-06-13 08:19:09 0 Webinar.Installer: Installation succeeded at 2020-06-13 08:19:09
2020-06-13 08:19:09 0 %Installer: Elapsed time 11.29037s
INSTALLER SUCCESS
USER>
%SYS>
%SYS>
Removing intermediate container 2e28a60b29b4
---> e23ab1a58cd2
Successfully built e23ab1a58cd2
Successfully tagged webinar-gestion-apis:stable
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
Now comes the difficulty: when we try to run the container, it says "unhealthy".
docker-compose up -d
The output is:
Starting iris-2019.4 ... done
This is the docker-compose.yml (we have kept the original GitHub repo file, just changing the container_name from iris-2019.3 to iris-2019.4):
version: '3.2'
services:
  iris:
    image: webinar-gestion-apis:stable
    container_name: iris-2019.4
    ports:
      - "51773:51773"
      - "52773:52773"
    volumes:
      - ./config/iris.key:/usr/irissys/mgr/iris.key
      - ./shared:/shared
When we try to use:
docker-compose ps
We observe:
Name Command State Ports
----------------------------------------------------------------------------------------------
iris-2019.4 /iris-main Up (unhealthy) 0.0.0.0:51773->51773/tcp, 0.0.0.0:52773->52773/tcp
And if we try to debug it and see the logs, we have:
docker inspect --format "{{json .State.Health }}" iris-2019.4
And it shows:
{"Status":"unhealthy","FailingStreak":4,"Log":[{"Start":"2020-06-13T08:30:56.232804406Z","End":"2020-06-13T08:30:56.328718067Z","ExitCode":1,"Output":""},{"Start":"2020-06-13T08:31:56.332937629Z","End":"2020-06-13T08:31:56.427169416Z","ExitCode":1,"Output":""},{"Start":"2020-06-13T08:32:56.43026636Z","End":"2020-06-13T08:32:56.5141952Z","ExitCode":1,"Output":""},{"Start":"2020-06-13T08:33:56.520060854Z","End":"2020-06-13T08:33:56.605017629Z","ExitCode":1,"Output":""}]}
The result is that we cannot connect to:
http://localhost:52773/csp/sys/UtilHome.csp
How could we debug a Docker container whose status is unhealthy?
Maybe a little late, but that error was probably caused by the IAM version and license file you were using.
You can try a newer version which runs on IRIS 2021 and IAM 2.3.3:
https://openexchange.intersystems.com/package/workshop-rest-iam
I've had several 'unhealthy' and 'warn' container startups, some of them without access to the management portal. The solution was to enter the container via the command line by clicking the console button (the first one)
command line icon
And then write
bash
cat /usr/irissys/mgr/messages.log
This will show the startup log, where you will hopefully be able to see the error.
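More generally, a few host-side commands help narrow down why a container is reported unhealthy; a sketch, using the iris-2019.4 container name from the compose file above:

# Show which command the HEALTHCHECK runs, plus its interval and retries
docker inspect --format '{{json .Config.Healthcheck}}' iris-2019.4
# The container's stdout/stderr (the iris-main output)
docker logs iris-2019.4
# Read the IRIS startup log directly from the host
docker exec iris-2019.4 cat /usr/irissys/mgr/messages.log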

sam build --use-container fails but sam build succeeds

The issue happens in the following setup:
Virtual Machine launched in OpenStack
OS is Ubuntu 16.04 LTS
Python version 3.7.6 with virtualenv installed
SAM CLI version 0.39.0
To replicate the issues, you may use the above setup and perform the following steps:
$ sam init
$ Choice: 1
$ Runtime 9 Select python3.7
$ Project name[sam-app]: sam-app
$ Template selection: 1 Select Hello World Example
Wait for the application to be generated.
$ cd sam-app
$ python3 -m virtualenv venv
$ source venv/bin/activate
$(venv) source venv/bin/activate
$(venv) sam build
The following output shall appear:
Building resource 'HelloWorldFunction'
Running PythonPipBuilder:ResolveDependencies
Running PythonPipBuilder:CopySource
Build Succeeded
Built Artifacts : .aws-sam/build
Built Template : .aws-sam/build/template.yaml
Commands you can use next
=========================
[*] Invoke Function: sam local invoke
[*] Deploy: sam deploy --guided
However, if the --use-container flag is used, the following error will appear
Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics
'build' command is called
Starting Build inside a container
No Parameters detected in the template
2 resources found in the template
Found Serverless function with name='HelloWorldFunction' and CodeUri='hello_world/'
Building resource 'HelloWorldFunction'
Fetching lambci/lambda:build-python3.7 Docker container image......
Mounting /home/ubuntu/test/sam-app/hello_world as /tmp/samcli/source:ro,delegated inside runtime container
Using the request object from command line argument
Loading workflow module 'aws_lambda_builders.workflows'
Registering workflow 'PythonPipBuilder' with capability 'Capability(language='python', dependency_manager='pip', application_framework=None)'
Registering workflow 'NodejsNpmBuilder' with capability 'Capability(language='nodejs', dependency_manager='npm', application_framework=None)'
Registering workflow 'RubyBundlerBuilder' with capability 'Capability(language='ruby', dependency_manager='bundler', application_framework=None)'
Registering workflow 'GoDepBuilder' with capability 'Capability(language='go', dependency_manager='dep', application_framework=None)'
Registering workflow 'GoModulesBuilder' with capability 'Capability(language='go', dependency_manager='modules', application_framework=None)'
Registering workflow 'JavaGradleWorkflow' with capability 'Capability(language='java', dependency_manager='gradle', application_framework=None)'
Registering workflow 'JavaMavenWorkflow' with capability 'Capability(language='java', dependency_manager='maven', application_framework=None)'
Registering workflow 'DotnetCliPackageBuilder' with capability 'Capability(language='dotnet', dependency_manager='cli-package', application_framework=None)'
Found workflow 'PythonPipBuilder' to support capabilities 'Capability(language='python', dependency_manager='pip', application_framework=None)'
Running workflow 'PythonPipBuilder'
Running PythonPipBuilder:ResolveDependencies
calling pip download -r /tmp/samcli/source/requirements.txt --dest /tmp/samcli/scratch
PythonPipBuilder:ResolveDependencies failed
Traceback (most recent call last):
File "/var/lang/lib/python3.7/site-packages/aws_lambda_builders/workflows/python_pip/actions.py", line 42, in execute
requirements_path=self.manifest_path,
File "/var/lang/lib/python3.7/site-packages/aws_lambda_builders/workflows/python_pip/packager.py", line 137, in build_dependencies
self._dependency_builder.build_site_packages(requirements_path, artifacts_dir_path, scratch_dir_path)
File "/var/lang/lib/python3.7/site-packages/aws_lambda_builders/workflows/python_pip/packager.py", line 198, in build_site_packages
wheels, packages_without_wheels = self._download_dependencies(scratch_directory, requirements_filepath)
File "/var/lang/lib/python3.7/site-packages/aws_lambda_builders/workflows/python_pip/packager.py", line 222, in _download_dependencies
deps = self._download_all_dependencies(requirements_filename, directory)
File "/var/lang/lib/python3.7/site-packages/aws_lambda_builders/workflows/python_pip/packager.py", line 305, in _download_all_dependencies
self._pip.download_all_dependencies(requirements_filename, directory)
File "/var/lang/lib/python3.7/site-packages/aws_lambda_builders/workflows/python_pip/packager.py", line 594, in download_all_dependencies
raise NoSuchPackageError(str(package_name))
aws_lambda_builders.workflows.python_pip.packager.NoSuchPackageError: Could not satisfy the requirement: requests
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lang/lib/python3.7/site-packages/aws_lambda_builders/workflow.py", line 269, in run
action.execute()
File "/var/lang/lib/python3.7/site-packages/aws_lambda_builders/workflows/python_pip/actions.py", line 45, in execute
raise ActionFailedError(str(ex))
aws_lambda_builders.actions.ActionFailedError: Could not satisfy the requirement: requests
Builder workflow failed
Traceback (most recent call last):
File "/var/lang/lib/python3.7/site-packages/aws_lambda_builders/workflows/python_pip/actions.py", line 42, in execute
requirements_path=self.manifest_path,
File "/var/lang/lib/python3.7/site-packages/aws_lambda_builders/workflows/python_pip/packager.py", line 137, in build_dependencies
self._dependency_builder.build_site_packages(requirements_path, artifacts_dir_path, scratch_dir_path)
File "/var/lang/lib/python3.7/site-packages/aws_lambda_builders/workflows/python_pip/packager.py", line 198, in build_site_packages
wheels, packages_without_wheels = self._download_dependencies(scratch_directory, requirements_filepath)
File "/var/lang/lib/python3.7/site-packages/aws_lambda_builders/workflows/python_pip/packager.py", line 222, in _download_dependencies
deps = self._download_all_dependencies(requirements_filename, directory)
File "/var/lang/lib/python3.7/site-packages/aws_lambda_builders/workflows/python_pip/packager.py", line 305, in _download_all_dependencies
self._pip.download_all_dependencies(requirements_filename, directory)
File "/var/lang/lib/python3.7/site-packages/aws_lambda_builders/workflows/python_pip/packager.py", line 594, in download_all_dependencies
raise NoSuchPackageError(str(package_name))
aws_lambda_builders.workflows.python_pip.packager.NoSuchPackageError: Could not satisfy the requirement: requests
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lang/lib/python3.7/site-packages/aws_lambda_builders/workflow.py", line 269, in run
action.execute()
File "/var/lang/lib/python3.7/site-packages/aws_lambda_builders/workflows/python_pip/actions.py", line 45, in execute
raise ActionFailedError(str(ex))
aws_lambda_builders.actions.ActionFailedError: Could not satisfy the requirement: requests
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lang/lib/python3.7/site-packages/aws_lambda_builders/__main__.py", line 126, in main
mode=params.get("mode", None),
File "/var/lang/lib/python3.7/site-packages/aws_lambda_builders/builder.py", line 125, in build
return workflow.run()
File "/var/lang/lib/python3.7/site-packages/aws_lambda_builders/workflow.py", line 76, in wrapper
func(self, *args, **kwargs)
File "/var/lang/lib/python3.7/site-packages/aws_lambda_builders/workflow.py", line 276, in run
raise WorkflowFailedError(workflow_name=self.NAME, action_name=action.NAME, reason=str(ex))
aws_lambda_builders.exceptions.WorkflowFailedError: PythonPipBuilder:ResolveDependencies - Could not satisfy the requirement: requests
Build inside container returned response {"jsonrpc": "2.0", "id": 1, "error": {"code": 400, "message": "PythonPipBuilder:ResolveDependencies - Could not satisfy the requirement: requests"}}
Build Failed
Sending Telemetry: {'metrics': [{'commandRun': {'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'commandName': 'sam build', 'duration': 107282, 'exitReason': 'BuildError', 'exitCode': 1, 'requestId': '62d1fc73-70e5-4592-8c78-8fa273684592', 'installationId': 'ce8ffa14-684f-4628-97fe-288848fcf73d', 'sessionId': '4ca5bb42-5cf1-4ae5-89ad-9ed00366fefb', 'executionEnvironment': 'CLI', 'pyversion': '3.7.6', 'samcliVersion': '0.39.0'}}]}
HTTPSConnectionPool(host='aws-serverless-tools-telemetry.us-west-2.amazonaws.com', port=443): Read timed out. (read timeout=0.1)
Error: PythonPipBuilder:ResolveDependencies - Could not satisfy the requirement: requests
The above issue arises because PythonPipBuilder was not able to resolve the dependencies. I have since logged into the container and installed the dependencies manually; from that, I suspected it was a Docker network issue.
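One rough way to test that suspicion (a sketch, assuming the lambci/lambda:build-python3.7 image ships a shell and pip, as the build log suggests) is to run the same pip download inside that image, once on the default bridge network and once on the host network:

docker run --rm --entrypoint sh lambci/lambda:build-python3.7 -c 'pip download requests -d /tmp/deps'
docker run --rm --network host --entrypoint sh lambci/lambda:build-python3.7 -c 'pip download requests -d /tmp/deps'

If the first command fails to reach PyPI and the second succeeds, the default Docker network is the culprit.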
Subsequently, the SAM application was built by adding an additional flag which lets Docker use the host network.
$(venv) sam build --use-container --docker-network host --debug
The output of the command:
Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics
'build' command is called
Starting Build inside a container
No Parameters detected in the template
2 resources found in the template
Found Serverless function with name='HelloWorldFunction' and CodeUri='hello_world/'
Building resource 'HelloWorldFunction'
Fetching lambci/lambda:build-python3.7 Docker container image......
Mounting /home/ubuntu/test/sam-app/hello_world as /tmp/samcli/source:ro,delegated inside runtime container
Using the request object from command line argument
Loading workflow module 'aws_lambda_builders.workflows'
Registering workflow 'PythonPipBuilder' with capability 'Capability(language='python', dependency_manager='pip', application_framework=None)'
Registering workflow 'NodejsNpmBuilder' with capability 'Capability(language='nodejs', dependency_manager='npm', application_framework=None)'
Registering workflow 'RubyBundlerBuilder' with capability 'Capability(language='ruby', dependency_manager='bundler', application_framework=None)'
Registering workflow 'GoDepBuilder' with capability 'Capability(language='go', dependency_manager='dep', application_framework=None)'
Registering workflow 'GoModulesBuilder' with capability 'Capability(language='go', dependency_manager='modules', application_framework=None)'
Registering workflow 'JavaGradleWorkflow' with capability 'Capability(language='java', dependency_manager='gradle', application_framework=None)'
Registering workflow 'JavaMavenWorkflow' with capability 'Capability(language='java', dependency_manager='maven', application_framework=None)'
Registering workflow 'DotnetCliPackageBuilder' with capability 'Capability(language='dotnet', dependency_manager='cli-package', application_framework=None)'
Found workflow 'PythonPipBuilder' to support capabilities 'Capability(language='python', dependency_manager='pip', application_framework=None)'
Running workflow 'PythonPipBuilder'
Running PythonPipBuilder:ResolveDependencies
calling pip download -r /tmp/samcli/source/requirements.txt --dest /tmp/samcli/scratch
Full dependency closure: {requests==2.23.0(wheel), certifi==2019.11.28(wheel), urllib3==1.25.8(wheel), chardet==3.0.4(wheel), idna==2.9(wheel)}
initial compatible: {requests==2.23.0(wheel), certifi==2019.11.28(wheel), urllib3==1.25.8(wheel), chardet==3.0.4(wheel), idna==2.9(wheel)}
initial incompatible: set()
Downloading missing wheels: set()
compatible wheels after second download pass: {requests==2.23.0(wheel), certifi==2019.11.28(wheel), urllib3==1.25.8(wheel), chardet==3.0.4(wheel), idna==2.9(wheel)}
Build missing wheels from sdists (C compiling True): set()
compatible after building wheels (no C compiling): {requests==2.23.0(wheel), certifi==2019.11.28(wheel), urllib3==1.25.8(wheel), chardet==3.0.4(wheel), idna==2.9(wheel)}
Build missing wheels from sdists (C compiling False): set()
compatible after building wheels (C compiling): {requests==2.23.0(wheel), certifi==2019.11.28(wheel), urllib3==1.25.8(wheel), chardet==3.0.4(wheel), idna==2.9(wheel)}
Final compatible: {chardet==3.0.4(wheel), requests==2.23.0(wheel), certifi==2019.11.28(wheel), urllib3==1.25.8(wheel), idna==2.9(wheel)}
Final incompatible: set()
Final missing wheels: set()
PythonPipBuilder:ResolveDependencies succeeded
Running PythonPipBuilder:CopySource
PythonPipBuilder:CopySource succeeded
Build inside container returned response {"jsonrpc": "2.0", "id": 1, "result": {"artifacts_dir": "/tmp/samcli/artifacts"}}
Build inside container was successful. Copying artifacts from container to host
Copying from container: /tmp/samcli/artifacts/. -> /home/ubuntu/test/sam-app/.aws-sam/build/HelloWorldFunction
Build inside container succeeded
Build Succeeded
Built Artifacts : .aws-sam/build
Built Template : .aws-sam/build/template.yaml
Commands you can use next
=========================
[*] Invoke Function: sam local invoke
[*] Deploy: sam deploy --guided
Sending Telemetry: {'metrics': [{'commandRun': {'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'commandName': 'sam build', 'duration': 5544, 'exitReason': 'success', 'exitCode': 0, 'requestId': 'bd9e7b0a-82ac-40a8-a172-a2d49f0633ff', 'installationId': 'ce8ffa14-684f-4628-97fe-288848fcf73d', 'sessionId': '48762175-e559-4759-b2fa-6a66822e381e', 'executionEnvironment': 'CLI', 'pyversion': '3.7.6', 'samcliVersion': '0.39.0'}}]}
Credit also goes to my colleague Dr. Phetsouvanh Silivanxay for troubleshooting the issue together with me.
