How to use AWS SAM CLI with volumes --docker-volume-basedir - aws-sam-cli

I have SAM initialized on my local machine, and below is the directory structure:
- application (python3 code)
- efs (got some files)
- template-local.yaml
Now, how do I tell sam local to start the API with a volume attached?
I am running the command below:
sam local start-api \
--template template-local.yaml \
--docker-volume-basedir "/localmachine/project-folder/efs:/mnt/efs" \
--profile otherAccount
The container runs successfully, but when I send a request to localhost:3000, it crashes with the error below:
...
File "/usr/local/Cellar/aws-sam-cli/0.45.0/libexec/lib/python3.7/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 500 Server Error: Internal Server Error ("invalid volume specification: '/localmachine/project-folder/efs/application:/var/task:ro,delegated'")
2020-06-22 20:52:04 127.0.0.1 - - [22/Jun/2020 20:52:04] "GET /status HTTP/1.1" 502 -
The AWS SAM CLI doc
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-local-start-api.html
doesn't give much detail on volume mounting.
Is there anything I am doing wrong?

Unfortunately there's no way to mount an external volume with SAM. I'm trying to use this repo to build a Docker image with SAM inside and mount an external volume into that.
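For what it's worth, --docker-volume-basedir takes a single host directory path, not a host:container mapping; SAM builds the <basedir>/<CodeUri>:/var/task mount string itself, which is likely why the extra colon above produces the invalid volume specification error. A minimal sketch under that assumption (the container-side path cannot be chosen, so the EFS files would need to live under the code directory instead):

# assumes the template's CodeUri directories sit under this base directory
sam local start-api \
  --template template-local.yaml \
  --docker-volume-basedir "/localmachine/project-folder" \
  --profile otherAccount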

Related

Can I run k8s master INSIDE a docker container? Getting errors about k8s looking for host's kernel details

I want to run k8s in a Docker container.
When I run kubeadm join ... or kubeadm init commands, I sometimes see errors like
\"modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could
not open moddep file
'/lib/modules/3.10.0-1062.1.2.el7.x86_64/modules.dep.bin'.
nmodprobe:
FATAL: Module configs not found in directory
/lib/modules/3.10.0-1062.1.2.el7.x86_64",
err: exit status 1
because (I think) my container does not have the expected kernel header files.
I realise that the container reports its kernel based on the host that is running the container, and looking at the k8s code I see:
// getKernelConfigReader search kernel config file in a predefined list. Once the kernel config
// file is found it will read the configurations into a byte buffer and return. If the kernel
// config file is not found, it will try to load kernel config module and retry again.
func (k *KernelValidator) getKernelConfigReader() (io.Reader, error) {
    possibePaths := []string{
        "/proc/config.gz",
        "/boot/config-" + k.kernelRelease,
        "/usr/src/linux-" + k.kernelRelease + "/.config",
        "/usr/src/linux/.config",
    }
so I am a bit confused about the simplest way to run k8s inside a container such that it consistently gets past this kernel-info check.
I note that running docker run -it solita/centos-systemd:7 /bin/bash on a macOS host I see:
# uname -r
4.9.184-linuxkit
# ls -l /proc/config.gz
-r--r--r-- 1 root root 23834 Nov 20 16:40 /proc/config.gz
but running the exact same thing on an Ubuntu VM I see:
# uname -r
4.4.0-142-generic
# ls -l /proc/config.gz
ls: cannot access /proc/config.gz
[Weirdly I don't see this FATAL: Module configs not found in directory error every time, but I guess that is a separate question!]
UPDATE 22/November/2019. I see now that k8s DOES run okay in a container. Real problem was weird/misleading logs. I have added an answer to clarify.
I do not believe that is possible given the nature of containers.
You should instead test your app in a Docker container, then deploy that image to k8s, either in the cloud or locally using minikube.
Another solution is to run it under kind, which uses the Docker driver instead of VirtualBox:
https://kind.sigs.k8s.io/docs/user/quick-start/
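As a quick illustration (assuming kind is already installed; the cluster name is arbitrary):

# create a single-node Kubernetes cluster whose node runs inside a Docker container
kind create cluster --name demo
# kind registers a kubeconfig context named kind-<cluster name>
kubectl cluster-info --context kind-demo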
It seems the FATAL error part was a bit misleading.
It was badly formatted by my test environment (all on one line).
When k8s was failing I saw the FATAL and assumed (incorrectly) that it was the root cause.
When I format the logs nicely I see ...
kubeadm join 172.17.0.2:6443 --token 21e8ab.1e1666a25fd37338 --discovery-token-unsafe-skip-ca-verification --experimental-control-plane --ignore-preflight-errors=all --node-name 172.17.0.3
[preflight] Running pre-flight checks
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.4.0-142-generic
DOCKER_VERSION: 18.09.3
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.3. Latest validated version: 18.06
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.4.0-142-generic/modules.dep.bin'\nmodprobe: FATAL: Module configs not found in directory /lib/modules/4.4.0-142-generic\n", err: exit status 1
[discovery] Trying to connect to API Server "172.17.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.2:6443"
[discovery] Failed to request cluster info, will try again: [the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps cluster-info)]
There are other errors later, which I originally thought were a side effect of the nasty-looking FATAL error, e.g. ... "[util/etcd] Attempt timed out", but I now think the root cause is that the etcd part sometimes times out.
Adding this answer in case someone else puzzled like I was.

Running 'docker-compose up' throws permission denied when trying an official Docker sample

I am using Docker 1.13 community edition on a CentOS 7 x64 machine. When I was following a Docker Compose sample from the official Docker tutorial, all things were OK until I added these lines to the docker-compose.yml file:
volumes:
- .:/code
After adding it, I faced the following error:
can't open file 'app.py': [Errno 13] Permission denied. It seems that the problem is due to an SELinux restriction. Following this post I ran the following command:
su -c "setenforce 0"
to solve the problem temporarily, but running this command:
chcon -Rt svirt_sandbox_file_t /path/to/volume
didn't help me.
Finally I found the correct rule to add to SELinux:
# ausearch -c 'python' --raw | audit2allow -M my-python
# semodule -i my-python.pp
I found it when I opened the SELinux Alert Browser and clicked on 'Details' button on the row related to this error. The more detailed information from SELinux:
SELinux is preventing /usr/local/bin/python3.4 from read access on the
file app.py.
***** Plugin catchall (100. confidence) suggests **************************
If you believe that python3.4 should be allowed read access on the
app.py file by default. Then you should report this as a bug. You can
generate a local policy module to allow this access. Do allow this
access for now by executing:
ausearch -c 'python' --raw | audit2allow -M my-python
semodule -i my-python.pp
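Worth noting as an alternative on SELinux hosts like CentOS: Docker's z volume option relabels bind-mounted content so containers can read it. A minimal sketch against this compose file:

volumes:
  # the :z suffix tells Docker to apply a shared SELinux label to the host directory
  - .:/code:z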

Docker: How to change php-fpm conf

I just started using Docker and am still not used to it yet. I added images from https://github.com/markoshust/docker-magento
However, when I tried to run the Magento setup page, I saw these errors:
phpfpm_1 | - - 12/Jun/2018:18:49:04 +0000 "GET /setup/index.php/navigation" 403
phpfpm_1 | [12-Jun-2018 18:49:04] WARNING: [pool www] child 8 said into stderr: "NOTICE: Access to the script '/var/www/html/setup/index.php/navigation' has been denied (see security.limit_extensions)"
app_1 | 2018/06/12 18:49:04 [error] 7#7: *1 FastCGI sent in stderr: "Access to the script '/var/www/html/setup/index.php/navigation' has been denied (see security.limit_extensions)" while reading response header from upstream, client: 172.18.0.1, server: localhost, request: "GET /setup/index.php/navigation HTTP/1.1", upstream: "fastcgi://unix:/sock/docker.sock:", host: "magento2.test", referrer: "http://magento2.test/setup/"
A little searching online suggests changing the config in www.conf, but I'm not sure how to do that with Docker. I tried editing the image's template, deleting the containers, and creating the containers again, but I'm still getting these errors. How can I verify the configs are correct, and if I'm doing it wrong, how do I correctly change the config files? Thanks
You may access the container using docker exec and do whatever you want.
Example:
docker container exec -it <container_id> bash
If you still don't have permission to change folders or files, try using -u root.
Example:
docker container exec -u root -it <container_id> bash
You may also do a bind mount to the container folder.
Example:
docker container run -v /host/folder:/container/folder <image>
This will map all the contents onto your host for easy file updating.
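Applied to this question, a hedged sketch for overriding the pool config with a bind mount (the container-side path is an assumption: official php-fpm images read pool config from /usr/local/etc/php-fpm.d/, but check where your image loads www.conf from):

# replace the pool config inside the container with an edited copy from the host
docker container run -v /host/php/www.conf:/usr/local/etc/php-fpm.d/www.conf <image>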

startNodeManager.sh not found

I have been trying to run Oracle WebLogic in Docker containers and am facing trouble starting the NodeManager. I ran the following command:
docker run -d --name MS1 --link wlsadmin:wlsadmin -p 8001:8001 -e ADMIN_PASSWORD=#123 \
-e MS_NAME=MS1 --volumes-from wlsadmin a5e55 createServer.sh
Under normal circumstances this is expected to start the NodeManager.
I am able to access the WebLogic console and start the Managed Server, which then returns the error:
-- Warning For server MS1, the Node Manager associated with machine Machine_MS1 is not reachable
This is the part of the log file returned on executing the above docker run command:
Domain Home: /u01/oracle/user_projects/domains/base_domain
Managed Server Name: MS1
NodeManager Name:
----> 'weblogic' admin password: ctebs#123
Waiting for WebLogic Admin Server on wlsadmin:7001 to become available...
WebLogic Admin Server is now available. Proceeding...
Setting NodeManager
----> No NodeManager Name set
Node Manager Name: Machine_MS1
Node Manager Home for Container: /u01/oracle/user_projects/domains/base_domain/Machine_MS1
cp: cannot stat '/u01/oracle/user_projects/domains/base_domain /bin/startNodeManager.sh': No such file or directory
cp: cannot stat '/u01/oracle/user_projects/domains/base_domain/nodemanager/*': No such file or directory
NODEMGR_HOME_STR: NODEMGR_HOME="/u01/oracle/user_projects/domains/base_domain/Machine_MS1"
NODEMGRHOME_STR: NodeManagerHome=/u01/oracle/user_projects/domains/base_domain/Machine_MS1
DOMAINSFILE_STR: DomainsFile=/u01/oracle/user_projects/domains/base_domain/Machine_MS1/nodemanager.domains
LOGFILE_STR: LogFile=/u01/oracle/user_projects/domains/base_domain/Machine_MS1/nodemanager.log
sed: can't read /u01/oracle/user_projects/domains/base_domain/Machine_MS1/startNodeManager.sh: No such file or directory
sed: can't read /u01/oracle/user_projects/domains/base_domain/Machine_MS1/nodemanager.properties: No such file or directory
sed: can't read /u01/oracle/user_projects/domains/base_domain/Machine_MS1/nodemanager.properties: No such file or directory
sed: can't read /u01/oracle/user_projects/domains/base_domain/Machine_MS1/nodemanager.properties: No such file or directory
Starting NodeManager in background...
NodeManager started.
Connection refused (Connection refused). Could not connect to NodeManager. Check that it is running at /172.17.0.3:5556.
Starting server MS1 ...No stack trace available.
This Exception occurred at Tue Dec 12 03:38:06 GMT 2017.
weblogic.management.scripting.ScriptException: Error occurred while performing start : Server with name MS1 failed to be started
No stack trace available.
How can I get past this error message?
You can try to follow this OracleWebLogic workshop intro, which points out:
The ~/docker-images/OracleWebLogic/samples/1221-domain/container-scripts has useful Bash and WLST scripts that provide three possible modes to run WebLogic Managed Servers on a Docker container. Make sure you have an AdminServer container running before starting a ManagedServer container.
The sample scripts will, by default, attempt to find the AdminServer running at t3://wlsadmin:8001. You can change this.
But most importantly, the AdminServer container has to be linked with Docker's --link parameter.
Below are the three suggestions for running a ManagedServer container within the sample 12c-domain:
Start NodeManager (Manually):
docker run -d --link wlsadmin:wlsadmin startNodeManager.sh
Start NodeManager and Create a Machine Automatically:
docker run -d --link wlsadmin:wlsadmin createMachine.sh
Start NodeManager, Create a Machine, and Create a ManagedServer Automatically
docker run -d --link wlsadmin:wlsadmin createServer.sh
See more at "Example of Image with WLS Domain", removed in commit e49bb4d in Apr. 2019, two years later, since Oracle no longer supports those WebLogic versions.
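For illustration, the first mode with an explicit image might look like this (the image name is an assumption based on the sample 1221-domain; 5556 is the NodeManager port seen in the log above):

# start only the NodeManager, linked to the running AdminServer container
docker run -d --link wlsadmin:wlsadmin -p 5556:5556 --name nodemanager 1221-domain startNodeManager.sh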

couchbase - [Errno 113] No route to host

I am trying to set up a Couchbase container using Docker on a CentOS machine. I installed the Couchbase server, but when I init a cluster via the following command I get [Errno 113] No route to host
couchbase-cli cluster-init -c 1.2.3.4:8091 --cluster-init-username=admin \
  --cluster-init-password=couchbase --cluster-init-ramsize=512
ERROR: command: cluster-init: 1.2.3.4:8091, [Errno 113] No route to host
But the Couchbase server URL -
http://1.2.3.4:8091
gives me the Couchbase home page with the setup button at the bottom right.
It was a small issue. In the Dockerfile, if I try to access the Couchbase host by IP I get the "No route to host" error, but when I try "localhost" it works fine. It's a temporary fix I found for now.
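In other words, the workaround is the same cluster-init command pointed at localhost instead of the IP (run where the server is reachable as localhost, with the remaining flags unchanged):

couchbase-cli cluster-init -c localhost:8091 --cluster-init-username=admin \
  --cluster-init-password=couchbase --cluster-init-ramsize=512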
