Problem when creating an AWS Lambda function using a Docker container: Lambda does not have permission to access the ECR image - docker

I'm trying to create an AWS Lambda function using a Docker container, following this guide.
This is my code in lambda_function.py:
def lambda_handler(event, context):
    print('dummy calling analyze_file')
    print('DONE')
and this is my Dockerfile:
FROM public.ecr.aws/lambda/python:3.8 as build-image
ARG FUNCTION_DIR="./"
RUN yum update -y
RUN yum install -y g++ \
make \
cmake \
unzip \
libcurl4-openssl-dev
RUN pip install --upgrade pip
RUN yum install -y git cmake libmad-devel libsndfile-devel gd-devel boost-devel
RUN yum install -y apt-utils gcc libpq-dev libsndfile-dev
RUN python -m pip install boto3
COPY requirements.txt ${FUNCTION_DIR}
RUN python -m pip install -r requirements.txt
COPY ${FUNCTION_DIR} ./
RUN python -m pip install \
--target ${FUNCTION_DIR} \
awslambdaric
CMD [ "lambda_function.lambda_handler" ]
When doing the following:
docker run -d -p 9000:8080 image-name:latest
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
docker logs 1111111111
I'm getting the expected prints. The image is successfully uploaded to ECR.
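For context, the standard push sequence looks roughly like this (a minimal sketch; the region, account ID, and repository name are placeholders):
# Authenticate Docker to the ECR registry
aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin 111111111.dkr.ecr.us-east-2.amazonaws.com
# Tag and push the local image to the repository
docker tag image-name:latest 111111111.dkr.ecr.us-east-2.amazonaws.com/image-name:latest
docker push 111111111.dkr.ecr.us-east-2.amazonaws.com/image-name:latest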
The problem starts when I try to use this image as a Lambda function. I'm getting:
Lambda does not have permission to access the ECR image. Check the ECR permissions.
Even though the settings are the default:
Execution role: Create a new role with basic Lambda permissions
Architecture: x86_64
No Container image overrides
I also tried adding full permissions to the created IAM role, and still got the same message. Why would this error happen if not because of permissions? Does anyone have a lead for me?
Edit:
Comments asked for the role definition, so I started with this one:
(AWSLambdaBasicExecutionRole-aaaaa-aaaaaaa-aaaaaa)
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "logs:CreateLogGroup",
"Resource": "arn:aws:logs:us-east-2:111111111:*"
},
{
"Effect": "Allow",
"Action": [
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": [
"arn:aws:logs:us-east-2:111111111:log-group:/aws/lambda/my-lambda-name:*"
]
}
]
}
When it did not work, I tried adding this statement (and still, it didn't work):
{
"Sid": "LambdaECRImageRetrievalPolicy",
"Effect": "Allow",
"Action": [
"ecr:*"
],
"Resource": "*"
}

ECR repositories have their own permissions (and they are kind of hidden):
Choose the ECR repository -> Permissions (left navigation bar) -> Edit policy JSON
I entered this policy and the problem was solved:
{
"Version" : "2008-10-17",
"Statement" : [ {
"Sid" : "LambdaECRImageRetrievalPolicy",
"Effect" : "Allow",
"Principal" : {
"Service" : "lambda.amazonaws.com"
},
"Action" : [ "ecr:BatchGetImage", "ecr:GetDownloadUrlForLayer", "ecr:SetRepositoryPolicy", "ecr:DeleteRepositoryPolicy", "ecr:GetRepositoryPolicy" ],
"Condition" : {
"StringLike" : {
"aws:sourceArn" : " arn:aws:lambda:us-east-2:1111111111:function:*"
}
}
} ]
}
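The same repository policy can also be applied from the CLI instead of the console (a minimal sketch, assuming the JSON above is saved as lambda-ecr-policy.json and the repository is named image-name):
# Attach the policy to the ECR repository (names are placeholders)
aws ecr set-repository-policy \
  --repository-name image-name \
  --region us-east-2 \
  --policy-text file://lambda-ecr-policy.json
# Verify what is currently attached
aws ecr get-repository-policy --repository-name image-name --region us-east-2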

Related

AWS ECS EC2 ECR not updating files after deployment with docker volume nginx

I am stuck on an issue with my volume and ECS.
I would like to attach a volume so I can store .env files etc. there and don't have to recreate them manually after every deployment.
The problem is that, the way I have it set up, it does not update (or overwrite) the files that are pushed to ECR. So if I make a code change and push it to git, the following happens:
It creates a new image and pushes it to ECR
It creates new containers from the image pushed to ECR (it dynamically assigns a tag to the image)
When I do docker ps on EC2 I see the new containers, and the container with the code changes is built from the correct image that has just been pushed to ECR. So it seems everything is working fine up to this point.
But the code changes don't appear when I refresh the browser, not even after clearing caches.
I am attaching a volume to the folder /var/www/html where my app sits, so from my understanding this code should get replaced during deployment. But the problem is, it does not replace the code.
When I remove the volume, I can see the code changes every time a deployment finishes, but then I always have to create the .env file manually and run a couple of commands.
PS: I have another container (mysql) which sets up a volume in exactly the same way, and the changes I make in the database are persistent even after a new container is created.
Please see my Dockerfile and taskDefinition.json to see how I deal with volumes.
Dockerfile:
FROM alpine:${ALPINE_VERSION}
# Setup document root
WORKDIR /var/www/html
# Install packages and remove default server definition
RUN apk add --no-cache \
curl \
nginx \
php8 \
php8-ctype \
php8-curl \
php8-dom \
php8-fpm \
php8-gd \
php8-intl \
php8-json \
php8-mbstring \
php8-mysqli \
php8-pdo \
php8-opcache \
php8-openssl \
php8-phar \
php8-session \
php8-xml \
php8-xmlreader \
php8-zlib \
php8-tokenizer \
php8-fileinfo \
php8-json \
php8-xml \
php8-xmlwriter \
php8-simplexml \
php8-dom \
php8-pdo_mysql \
php8-pdo_sqlite \
php8-tokenizer \
php8-pecl-redis \
php8-bcmath \
php8-exif \
supervisor \
nano \
sudo
# Create symlink so programs depending on `php` still function
RUN ln -s /usr/bin/php8 /usr/bin/php
# Configure nginx
COPY tools/docker/config/nginx.conf /etc/nginx/nginx.conf
# Configure PHP-FPM
COPY tools/docker/config/fpm-pool.conf /etc/php8/php-fpm.d/www.conf
COPY tools/docker/config/php.ini /etc/php8/conf.d/custom.ini
# Configure supervisord
COPY tools/docker/config/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# Make sure files/folders needed by the processes are accessible when they run under the nobody user
RUN chown -R nobody.nobody /var/www/html /run /var/lib/nginx /var/log/nginx
# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN apk update && apk add bash
# Install node npm
RUN apk add --update nodejs npm \
&& npm config set --global loglevel warn \
&& npm install --global marked \
&& npm install --global node-gyp \
&& npm install --global yarn \
# Install node-sass's linux bindings
&& npm rebuild node-sass
# Switch to use a non-root user from here on
USER nobody
# Add application
COPY --chown=nobody ./ /var/www/html/
RUN cat /var/www/html/resources/js/Components/Sections/About.vue
RUN composer install --optimize-autoloader --no-interaction --no-progress --ignore-platform-req=ext-zip --ignore-platform-req=ext-zip
USER root
RUN yarn && yarn run production
USER nobody
VOLUME /var/www/html
# Expose the port nginx is reachable on
EXPOSE 8080
# Let supervisord start nginx & php-fpm
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
# Configure a healthcheck to validate that everything is up&running
HEALTHCHECK --timeout=10s CMD curl --silent --fail http://127.0.0.1:8080/fpm-ping
taskDefinition.json
{
"containerDefinitions": [
{
"name": "fooweb-nginx-php",
"cpu": 100,
"memory": 512,
"links": [
"mysql"
],
"portMappings": [
{
"containerPort": 8080,
"hostPort": 80,
"protocol": "tcp"
}
],
"essential": true,
"environment": [],
"mountPoints": [
{
"sourceVolume": "fooweb-storage-web",
"containerPath": "/var/www/html"
}
]
},
{
"name": "mysql",
"image": "mysql",
"cpu": 50,
"memory": 512,
"portMappings": [
{
"containerPort": 3306,
"hostPort": 4306,
"protocol": "tcp"
}
],
"essential": true,
"environment": [
{
"name": "MYSQL_DATABASE",
"value": "123"
},
{
"name": "MYSQL_PASSWORD",
"value": "123"
},
{
"name": "MYSQL_USER",
"value": "123"
},
{
"name": "MYSQL_ROOT_PASSWORD",
"value": "123"
}
],
"mountPoints": [
{
"sourceVolume": "fooweb-storage-mysql",
"containerPath": "/var/lib/mysql"
}
]
}
],
"family": "art_web_task_definition",
"taskRoleArn": "arn:aws:iam::123:role/ecs-task-execution-role",
"executionRoleArn": "arn:aws:iam::123:role/ecs-task-execution-role",
"networkMode": "bridge",
"volumes": [
{
"name": "fooweb-storage-mysql",
"dockerVolumeConfiguration": {
"scope": "shared",
"autoprovision": true,
"driver": "local"
}
},
{
"name": "fooweb-storage-web",
"dockerVolumeConfiguration": {
"scope": "shared",
"autoprovision": true,
"driver": "local"
}
}
],
"placementConstraints": [],
"requiresCompatibilities": [
"EC2"
],
"cpu": "1536",
"memory": "1536",
"tags": []
}
So I believe there is some problem with the way I have set up the volume, or maybe there could be some permission issue?
Many thanks!
"I am attaching volume to the folder /var/www/html where sits my app,
so from my understanding this code should get replaced during
deployment."
That's the opposite of how docker volumes work.
It is going to ignore anything in /var/www/html inside the docker image, and instead reuse whatever you have in the mounted volume. Mounted volumes are primarily for persisting files between container restarts and image changes. If there is updated code in /var/www/html inside the image you are building, and you want that updated code to be active when your application is deployed, then you can't mount that as a volume.
If you are specifying a VOLUME instruction in your Dockerfile, then the very first time you ran your container it would have "initialized" the volume with the files that were inside the docker image at that point, as part of the process of creating the volume. After that, the files in the volume on the host server are persisted across container restarts/deployments, and any new updates to that path inside new docker images are ignored.
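A quick way to see this behaviour from the command line (a minimal sketch; the image and volume names are placeholders):
# First run: the named volume is created and seeded from the image's /var/www/html
docker run -d --name web1 -v fooweb-storage-web:/var/www/html my-app:v1
# Deploy a newer image: /var/www/html still shows the volume's old files,
# because the mounted volume hides whatever the new image ships at that path
docker rm -f web1
docker run -d --name web2 -v fooweb-storage-web:/var/www/html my-app:v2
docker exec web2 ls /var/www/html
If only the .env file (or a small config/storage directory) needs to survive deployments, one option is to mount a volume only at that path, for example /var/www/html/storage, and leave the application code baked into the image, or to inject the values as environment variables in the task definition instead.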

It seems you are running Vue CLI inside a container

I'm trying to run my Vue.js app using VS Code remote containers. It's deployed and I can access it via the URL localhost:8080/, but if I update some JS file it's not re-compiling and not even hot-reloading.
devcontainer.json
{
"name": "Aquawebvue",
"dockerFile": "Dockerfile",
"appPort": [3000],
"runArgs": ["-u", "node"],
"settings": {
"workbench.colorTheme": "Cobalt2",
"terminal.integrated.automationShell.linux": "/bin/bash"
},
"postCreateCommand": "yarn",
"extensions": [
"esbenp.prettier-vscode",
"wesbos.theme-cobalt2",
]
}
Dockerfile
FROM node:12.13.0
RUN npm install -g prettier
After opening the container and running 'yarn serve' in the terminal, it builds and deploys successfully, but I get this warning:
It seems you are running Vue CLI inside a container.
Since you are using a non-root publicPath, the hot-reload socket
will not be able to infer the correct URL to connect. You should
explicitly specify the URL via devServer.public.
VSCode has a pre-defined .devcontainer directory for Vue projects. It can be found on GitHub. You can install it automatically by running the command Add Development Container Configuration Files... > Show All Definitions > Vue.
Dockerfile
# [Choice] Node.js version: 14, 12, 10
ARG VARIANT=14
FROM mcr.microsoft.com/vscode/devcontainers/javascript-node:0-${VARIANT}
RUN su node -c "umask 0002 && npm install -g http-server @vue/cli @vue/cli-service-global"
WORKDIR /app
EXPOSE 8080
# [Optional] Uncomment this section to install additional OS packages.
# RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
# && apt-get -y install --no-install-recommends <your-package-list-here>
devcontainer.json
{
"name": "Vue (Community)",
"build": {
"dockerfile": "Dockerfile",
"context": "..",
// Update 'VARIANT' to pick a Node version: 10, 12, 14
"args": { "VARIANT": "14" }
},
// Set *default* container specific settings.json values on container create.
"settings": {
"terminal.integrated.shell.linux": "/bin/zsh"
},
// Add the IDs of extensions you want installed when the container is created.
"extensions": [
"dbaeumer.vscode-eslint",
"octref.vetur"
],
// Use 'forwardPorts' to make a list of ports inside the container available locally.
"forwardPorts": [
8080
],
// Use 'postCreateCommand' to run commands after the container is created.
// "postCreateCommand": "uname -a",
// Comment out connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
"remoteUser": "node"
}

Docker socket.io

I am trying to run the client from Docker so that it connects to my server, which is outside the containers, but it does not connect. It only works if I run it locally or if I pass the --network host parameter; the latter is not an option for me since I have to launch multiple containers.
This is my code
var client = require('socket.io-client');
var options = {
secure:true,
reconnect: true,
rejectUnauthorized : false,
forceNew : true
};
var socket = client.connect('wss://192.168.1.15:8443',options);
var channel = process.env.SESSION;
var canal_1 = channel+'-1';
var canal_2 = channel+'-2';
socket.on('connect', function(){});
socket.on(canal_1, function(data){
console.log(data)
});
socket.on(canal_2, function(data){
console.log(data)
});
socket.on('disconnect', function(){});
And this is my Dockerfile
FROM node:12.13-alpine
RUN apk update && apk add --update alpine-sdk wget libxtst-dev libpng-dev python2 xorg-server-dev
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY package*.json ./
RUN npm install robotjs
USER node
RUN npm install
COPY --chown=node:node . .
ENTRYPOINT [ "node"]
And this is my package.json
{
"name": "nodejs-socket",
"version": "1.0.0",
"description": "nodejs socket",
"author": "Ricardo Jimenez Hurtado <jimenezhurtadoricardo#gmail.com>",
"license": "MIT",
"main": "server.js",
"keywords": [
"nodejs",
"express",
"socket"
],
"dependencies": {
"express": "^4.16.4",
"fs": "0.0.1-security",
"https": "^1.0.0",
"socket.io-client": "^2.3.0"
}
}
And this is my docker run command
docker run -ti -e SESSION=1 node_server client_socket.js
Thanks for all
Best regards
You can achieve what you want here in a number of ways.
The best of these would be to also run your Node server in a Docker container, and use Docker networks to enable communication between the server and the clients.
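A minimal sketch of that approach (image names and the network name are placeholders; it assumes the server listens on 8443 inside its container):
# Create a user-defined bridge network
docker network create socket-net
# Run the server on that network; other containers can reach it by container name
docker run -d --name socket-server --network socket-net my-socket-server
# Run the client on the same network
docker run -ti --network socket-net -e SESSION=1 node_server client_socket.js
Inside the client, the connection URL would then be wss://socket-server:8443 instead of the host's LAN IP.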
I solved this problem in two steps: first, I improved the code with socket.io and socket.io-client so that the client joins a room; second, I revised all my network configuration so that my server name can be found by all the devices on my network.
thanks for all

VSCode: Display forwarding from docker container in Remote Development Extension

How do I set up remote display forwarding from a Docker container using the new Remote Development extension?
Currently, my .devcontainer contains:
devcontainer.json
{
"name": "kinetic_v5",
"context": "..",
"dockerFile": "Dockerfile",
"workspaceFolder": "/workspace",
"runArgs": [
"--net", "host",
"-e", "DISPLAY=${env:DISPLAY}",
"-e", "QT_GRAPHICSSYSTEM=native",
"-e", "CONTAINER_NAME=kinetic_v5",
"-v", "/tmp/.X11-unix:/tmp/.X11-unix",
"--device=/dev/dri:/dev/dri",
"--name=kinetic_v5",
],
"extensions": [
"ms-python.python"
]
}
Dockerfile
FROM docker.is.localnet:5000/amd/official:16.04
RUN apt-get update && \
apt-get install -y zsh \
fonts-powerline \
locales \
# set up locale
&& locale-gen en_US.UTF-8
RUN pip install Cython
# run the installation script
RUN wget https://github.com/robbyrussell/oh-my-zsh/raw/master/tools/install.sh -O - | zsh || true
CMD ["zsh"]
This doesn't seem to do the job.
Setup details:
OS: linux
Product: Visual Studio Code - Insiders
Product Version: 1.35.0-insider
Language: en
UPDATE: You can find a thread about this issue on the official Git repo here.
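For what it's worth, with a setup like the one above (DISPLAY passed through and /tmp/.X11-unix mounted), a common missing piece on a Linux host is that the X server rejects connections coming from the container. A minimal sketch of how to allow and test that, assuming an X11 host and an X client such as xeyes installed in the container:
# On the host: allow local (non-network) connections to the X server
xhost +local:
# Inside the container: confirm DISPLAY is set, then try a simple X client
echo $DISPLAY
xeyes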

Packer fails my docker build with error "sudo: not found" despite sudo being present

I'm trying to build a Packer image with Docker on it, and I want Docker to create a Docker image with a custom script. The relevant portion of my code is (note that the shell block at the top double-checks that sudo is installed):
{
"type": "shell",
"inline": [
"apt-get install sudo"
]
},
{
"type": "docker",
"image": "python:3",
"commit": true,
"changes": [
"RUN pip install Flask",
"CMD [\"python\", \"echo.py\"]"
]
}
The relevant portion of my screen output is:
==> docker: Provisioning with shell script: /var/folders/s8/g1_gobbldygook/T/packer-shell23453453245
docker: /tmp/script_1234.sh: 3: /tmp/script_1234.sh: sudo: not found
==> docker: Killing the container: 34234hashvomit234234
Build 'docker' errored: Script exited with non-zero exit status: 127
The script in question is not one of mine. It's some randomly generated script that has a different series of four numbers every time I build. I'm new to both packer and docker, so maybe it's obvious what the problem is, but it's not to me.
There seem to be a few problems with your packer input. Since you haven't included the complete input file it's hard to tell, but notice a couple of things:
You probably need to run apt-get update before calling apt-get install sudo. Without that, even if the image has cached package metadata it is probably stale. If I try to build an image using your input it fails with:
E: Unable to locate package sudo
While not a problem in this context, it's good to explicitly include -y on the apt-get command line when you're running it non-interactively:
apt-get -y install sudo
In situations where apt-get is attached to a terminal, this will prevent it from prompting for confirmation. This is not a necessary change to your input, but I figure it's good to be explicit.
Based on the docs and on my testing, you can't include a RUN statement in the changes block of a docker builder. That fails with:
Stderr: Error response from daemon: run is not a valid change command
Fortunately, we can move that pip install command into a shell provisioner.
With those changes, the following input successfully builds an image:
{
"builders": [{
"type": "docker",
"image": "python:3",
"commit": true
}],
"provisioners": [{
"type": "shell",
"inline": [
"apt-get update",
"apt-get -y install sudo",
"pip install Flask"
]
}],
"post-processors": [[ {
"type": "docker-tag",
"repository": "packer-test",
"tag": "latest"
} ]]
}
(NB: Tested using Packer v1.3.5)
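To try it out (a minimal sketch, assuming the template above is saved as docker-python.json):
# Build the image; the docker-tag post-processor tags it packer-test:latest
packer build docker-python.json
# Quick check that Flask ended up in the image
docker run --rm packer-test:latest python -c "import flask; print('flask OK')"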
