VSCode: Display forwarding from docker container in Remote Development Extension

How do I set up remote display forwarding from a Docker container using the new Remote Development extension?
Currently, my .devcontainer contains:
devcontainer.json
{
  "name": "kinetic_v5",
  "context": "..",
  "dockerFile": "Dockerfile",
  "workspaceFolder": "/workspace",
  "runArgs": [
    "--net", "host",
    "-e", "DISPLAY=${env:DISPLAY}",
    "-e", "QT_GRAPHICSSYSTEM=native",
    "-e", "CONTAINER_NAME=kinetic_v5",
    "-v", "/tmp/.X11-unix:/tmp/.X11-unix",
    "--device=/dev/dri:/dev/dri",
    "--name=kinetic_v5"
  ],
  "extensions": [
    "ms-python.python"
  ]
}
Dockerfile
FROM docker.is.localnet:5000/amd/official:16.04
RUN apt-get update && \
    apt-get install -y zsh \
        fonts-powerline \
        locales \
    # set up locale
    && locale-gen en_US.UTF-8
RUN pip install Cython
# run the installation script
RUN wget https://github.com/robbyrussell/oh-my-zsh/raw/master/tools/install.sh -O - | zsh || true
CMD ["zsh"]
This doesn't seem to do the job.
Setup details:
OS: linux
Product: Visual Studio Code - Insiders
Product Version: 1.35.0-insider
Language: en
UPDATE: You can find a thread about this issue on the official GitHub repo here.
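For what it's worth, with a setup like this (bind-mounting /tmp/.X11-unix and passing DISPLAY through), the step that is easy to miss is allowing processes in the container to talk to the host X server at all. A minimal sketch, assuming a Linux host running X11 (Wayland setups differ):
# on the host, before reopening the folder in the container
xhost +local:        # allow local (non-network) clients, such as container processes, to use the X server
# inside the container, check the environment came through
echo $DISPLAY        # should print the same value as on the host, e.g. :0
xeyes                # any X client works as a smoke test, if one is installed
Note that xhost +local: opens the X server to every local client, so treat it as a development convenience rather than a hardened setup.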

Related

AWS ECS EC2 ECR not updating files after deployment with docker volume nginx

I am stuck on an issue with my volume and ECS.
I would like to attach a volume so I can store .env files and the like there, so I don't have to recreate them manually after every deployment.
The problem is that, the way I have it set up, it does not update (or overwrite) the files that are pushed to ECR. So if I make a code change and push it to git, the following happens:
Creates a new image and pushes it to ECR
Creates new containers from the image pushed to ECR (it dynamically assigns a tag to the image)
When I run docker ps on EC2 I see the new containers, and the container with the code changes is built from the correct image that has just been pushed to ECR. So everything seems to be working fine up to this point.
But the code changes don't appear when I refresh the browser, nor after clearing caches.
I am attaching a volume to the folder /var/www/html where my app sits, so my understanding is that this code should be replaced during deployment. But the problem is, it does not replace the code.
When I remove the volume, I can see the code changes every time a deployment finishes, but then I always have to create the .env file manually and run a couple of commands.
PS: I have another container (mysql) whose volume is set up in exactly the same way, and the changes I make in the database are persistent even after a new container is created.
Please see my Dockerfile and taskDefinition.json to see how I deal with the volumes.
Dockerfile:
# the build arg must be declared before FROM for the substitution to work
ARG ALPINE_VERSION
FROM alpine:${ALPINE_VERSION}
# Setup document root
WORKDIR /var/www/html
# Install packages and remove default server definition
RUN apk add --no-cache \
curl \
nginx \
php8 \
php8-ctype \
php8-curl \
php8-dom \
php8-fpm \
php8-gd \
php8-intl \
php8-json \
php8-mbstring \
php8-mysqli \
php8-pdo \
php8-opcache \
php8-openssl \
php8-phar \
php8-session \
php8-xml \
php8-xmlreader \
php8-zlib \
php8-tokenizer \
php8-fileinfo \
php8-json \
php8-xml \
php8-xmlwriter \
php8-simplexml \
php8-dom \
php8-pdo_mysql \
php8-pdo_sqlite \
php8-tokenizer \
php8-pecl-redis \
php8-bcmath \
php8-exif \
supervisor \
nano \
sudo
# Create symlink so programs depending on `php` still function
RUN ln -s /usr/bin/php8 /usr/bin/php
# Configure nginx
COPY tools/docker/config/nginx.conf /etc/nginx/nginx.conf
# Configure PHP-FPM
COPY tools/docker/config/fpm-pool.conf /etc/php8/php-fpm.d/www.conf
COPY tools/docker/config/php.ini /etc/php8/conf.d/custom.ini
# Configure supervisord
COPY tools/docker/config/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# Make sure files/folders needed by the processes are accessible when they run under the nobody user
RUN chown -R nobody.nobody /var/www/html /run /var/lib/nginx /var/log/nginx
# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN apk update && apk add bash
# Install node npm
RUN apk add --update nodejs npm \
&& npm config set --global loglevel warn \
&& npm install --global marked \
&& npm install --global node-gyp \
&& npm install --global yarn \
# Install node-sass's linux bindings
&& npm rebuild node-sass
# Switch to use a non-root user from here on
USER nobody
# Add application
COPY --chown=nobody ./ /var/www/html/
RUN cat /var/www/html/resources/js/Components/Sections/About.vue
RUN composer install --optimize-autoloader --no-interaction --no-progress --ignore-platform-req=ext-zip
USER root
RUN yarn && yarn run production
USER nobody
VOLUME /var/www/html
# Expose the port nginx is reachable on
EXPOSE 8080
# Let supervisord start nginx & php-fpm
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
# Configure a healthcheck to validate that everything is up&running
HEALTHCHECK --timeout=10s CMD curl --silent --fail http://127.0.0.1:8080/fpm-ping
taskDefinition.json
{
  "containerDefinitions": [
    {
      "name": "fooweb-nginx-php",
      "cpu": 100,
      "memory": 512,
      "links": [
        "mysql"
      ],
      "portMappings": [
        {
          "containerPort": 8080,
          "hostPort": 80,
          "protocol": "tcp"
        }
      ],
      "essential": true,
      "environment": [],
      "mountPoints": [
        {
          "sourceVolume": "fooweb-storage-web",
          "containerPath": "/var/www/html"
        }
      ]
    },
    {
      "name": "mysql",
      "image": "mysql",
      "cpu": 50,
      "memory": 512,
      "portMappings": [
        {
          "containerPort": 3306,
          "hostPort": 4306,
          "protocol": "tcp"
        }
      ],
      "essential": true,
      "environment": [
        {
          "name": "MYSQL_DATABASE",
          "value": "123"
        },
        {
          "name": "MYSQL_PASSWORD",
          "value": "123"
        },
        {
          "name": "MYSQL_USER",
          "value": "123"
        },
        {
          "name": "MYSQL_ROOT_PASSWORD",
          "value": "123"
        }
      ],
      "mountPoints": [
        {
          "sourceVolume": "fooweb-storage-mysql",
          "containerPath": "/var/lib/mysql"
        }
      ]
    }
  ],
  "family": "art_web_task_definition",
  "taskRoleArn": "arn:aws:iam::123:role/ecs-task-execution-role",
  "executionRoleArn": "arn:aws:iam::123:role/ecs-task-execution-role",
  "networkMode": "bridge",
  "volumes": [
    {
      "name": "fooweb-storage-mysql",
      "dockerVolumeConfiguration": {
        "scope": "shared",
        "autoprovision": true,
        "driver": "local"
      }
    },
    {
      "name": "fooweb-storage-web",
      "dockerVolumeConfiguration": {
        "scope": "shared",
        "autoprovision": true,
        "driver": "local"
      }
    }
  ],
  "placementConstraints": [],
  "requiresCompatibilities": [
    "EC2"
  ],
  "cpu": "1536",
  "memory": "1536",
  "tags": []
}
So I believe there is some problem with the way I have set up the volume, or maybe there could be some permission issue?
Many thanks!
"I am attaching volume to the folder /var/www/html where sits my app,
so from my understanding this code should get replaced during
deployment."
That's the opposite of how Docker volumes work.
Docker will ignore anything in /var/www/html inside the image and instead reuse whatever you have in the mounted volume. Mounted volumes are primarily for persisting files between container restarts and image changes. If there is updated code in /var/www/html inside the image you are building, and you want that updated code to be active when your application is deployed, then you can't mount that path as a volume.
If you specify a VOLUME instruction in your Dockerfile, then the very first time you ran your container, it would have "initialized" the volume with the files from inside the image as part of creating the volume. After that, the files in the volume on the host server are persisted across container restarts and deployments, and any new updates to that path inside newer images are ignored.
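A quick way to see this behaviour is with a throwaway named volume; the image names below are hypothetical:
docker volume create demo-vol
# first run against an empty named volume: Docker copies the image's /var/www/html into the volume
docker run --rm -v demo-vol:/var/www/html myapp:v1 ls /var/www/html
# later runs with a newer image still see the old files, because the existing volume shadows the image's path
docker run --rm -v demo-vol:/var/www/html myapp:v2 ls /var/www/html
In your case that means dropping the fooweb-storage-web mount over /var/www/html (and the VOLUME /var/www/html instruction), and keeping volumes only for genuinely persistent data such as the MySQL data directory or a dedicated path for the .env file.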

Problem when creating a lambda function (AWS) using docker container: Lambda does not have permission to access the ECR image

I'm trying to create an AWS Lambda function using a Docker container; I followed this guide.
This is my code in lambda_function.py:
def lambda_handler(event, context):
print('dummy calling analyze_file')
print('DONE')
and this is my Dockerfile:
FROM public.ecr.aws/lambda/python:3.8 as build-image
ARG FUNCTION_DIR="./"
RUN yum update -y
RUN yum install -y g++ \
make \
cmake \
unzip \
libcurl4-openssl-dev
RUN pip install --upgrade pip
RUN yum install -y git cmake libmad-devel libsndfile-devel gd-devel boost-devel
RUN yum install -y apt-utils gcc libpq-dev libsndfile-dev
RUN python -m pip install boto3
COPY requirements.txt ${FUNCTION_DIR}
RUN python -m pip install -r requirements.txt
COPY ${FUNCTION_DIR} ./
RUN python -m pip install \
--target ${FUNCTION_DIR} \
awslambdaric
CMD [ "lambda_function.lambda_handler" ]
When doing the following:
docker run -d -p 9000:8080 image-name:latest
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
docker logs 1111111111
I'm getting the expected prints, and the image is successfully uploaded to ECR.
The problem starts when I try to use this image as a Lambda function. I'm getting:
Lambda does not have permission to access the ECR image. Check the ECR permissions.
Even though the settings are the default:
Execution role: Create a new role with basic Lambda permissions
Architecture: x86_64
No Container image overrides
I also tried adding full permissions to the created IAM role and still got the same message. Why would this error happen if not because of permissions? Does anyone have any leads for me?
Edit:
Comments asked for the role definition, so I started with this one:
(AWSLambdaBasicExecutionRole-aaaaa-aaaaaaa-aaaaaa)
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "logs:CreateLogGroup",
      "Resource": "arn:aws:logs:us-east-2:111111111:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": [
        "arn:aws:logs:us-east-2:111111111:log-group:/aws/lambda/my-lambda-name:*"
      ]
    }
  ]
}
When it did not work, I tried adding this statement (and still, it didn't work):
{
  "Sid": "LambdaECRImageRetrievalPolicy",
  "Effect": "Allow",
  "Action": [
    "ecr:*"
  ],
  "Resource": "*"
}
ECR repositories have their own permissions (and they are kind of hidden):
Choose the ECR repository -> Permissions (left navigation bar) -> Edit policy JSON
I entered this policy and the problem was solved:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "LambdaECRImageRetrievalPolicy",
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": [
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer",
        "ecr:SetRepositoryPolicy",
        "ecr:DeleteRepositoryPolicy",
        "ecr:GetRepositoryPolicy"
      ],
      "Condition": {
        "StringLike": {
          "aws:sourceArn": "arn:aws:lambda:us-east-2:1111111111:function:*"
        }
      }
    }
  ]
}
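If you prefer not to click through the console, the same repository policy can be applied with the AWS CLI; the repository name and file name here are placeholders:
# policy.json contains the JSON document above
aws ecr set-repository-policy \
    --repository-name my-lambda-repo \
    --policy-text file://policy.json \
    --region us-east-2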

Setting up XDebug with Docker and VSCode

I set up a Laravel dev environment using Docker - nginx:stable-alpine, php:8.0-fpm-alpine and mysql:5.7.32. I install Xdebug from my php.dockerfile:
RUN apk --no-cache add pcre-dev ${PHPIZE_DEPS} \
&& pecl install xdebug \
&& docker-php-ext-enable xdebug \
&& apk del pcre-dev ${PHPIZE_DEPS}
And include two volumes in docker-compose to point php to xdebug.ini and error_reporting.ini:
volumes:
  - .:/var/www/html
  - ../docker/php/conf.d/xdebug.ini:/usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini
  - ../docker/php/conf.d/error_reporting.ini:/usr/local/etc/php/conf.d/error_reporting.ini
My xdebug.ini looks like this:
zend_extension=xdebug
[xdebug]
xdebug.mode=develop,debug,trace,profile,coverage
xdebug.start_with_request = yes
xdebug.discover_client_host = 0
xdebug.remote_connect_back = 1
xdebug.client_port = 9003
xdebug.remote_host='host.docker.internal'
xdebug.idekey=VSCODE
When I return phpinfo() I can see that everything looks set up correctly, showing Xdebug version 3.0.4 is installed, but when I set a breakpoint in VSCode and run the debugger, it's not hitting it.
My launch.json looks like this:
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "XDebug Docker",
      "type": "php",
      "request": "launch",
      "port": 9003,
      "pathMappings": {
        "/var/www/html": "${workspaceFolder}/src"
      }
    }
  ]
}
My folder structure looks like this:
- Project
-- /docker
-- nginx.dockerfile
-- php.dockerfile
--- /nginx
--- /certs
--- default.conf
--- /php
---- /conf.d
---- error_reporting.ini
---- xdebug.ini
-- /src (the laravel app)
Xdebug 3 has changed the names of settings. Instead of xdebug.remote_host, you need to use xdebug.client_host, as per the upgrade guide: https://xdebug.org/docs/upgrade_guide#changed-xdebug.remote_host
xdebug.remote_connect_back has also been renamed in favour of xdebug.discover_client_host, but when using Xdebug with Docker, you should leave that set to 0.
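Putting the renamed settings together, the xdebug.ini above would look roughly like this for Xdebug 3 (a sketch based on the upgrade guide; the rest of the setup stays as is):
zend_extension=xdebug
[xdebug]
xdebug.mode = develop,debug,trace,profile,coverage
xdebug.start_with_request = yes
xdebug.discover_client_host = 0
xdebug.client_host = host.docker.internal
xdebug.client_port = 9003
xdebug.idekey = VSCODE
On a Linux host, host.docker.internal is not defined by default and may need to be mapped explicitly, for example via extra_hosts: "host.docker.internal:host-gateway" in docker-compose.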

Create a custom Docker dev environment

I like the Docker dev environment tool, but I'd also like to be able to have some tools preinstalled when a user clones the repository using the Docker Dev Environment tool.
I have a .devcontainer folder in the repository with a Dockerfile:
# [Choice] Alpine version: 3.13, 3.12, 3.11, 3.10
ARG VARIANT="3.13"
FROM mcr.microsoft.com/vscode/devcontainers/base:0-alpine-${VARIANT}
# Install Terraform CLI
# Install GCloud SDK
And a devcontainer.json file:
{
  "name": "Alpine",
  "build": {
    "dockerfile": "Dockerfile",
    // Update 'VARIANT' to pick an Alpine version: 3.10, 3.11, 3.12, 3.13
    "args": { "VARIANT": "3.13" }
  },
  // Set *default* container specific settings.json values on container create.
  "settings": {},
  // Add the IDs of extensions you want installed when the container is created.
  // Note that some extensions may not work in Alpine Linux. See https://aka.ms/vscode-remote/linux.
  "extensions": [],
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  // "forwardPorts": [],
  // Use 'postCreateCommand' to run commands after the container is created.
  // "postCreateCommand": "uname -a",
  // Uncomment when using a ptrace-based debugger like C++, Go, and Rust
  // "runArgs": [ "--cap-add=SYS_PTRACE", "--security-opt", "seccomp=unconfined" ],
  // Comment out to connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
  "remoteUser": "vscode"
}
I've tried to include curl and install commands in the Dockerfile, but the commands just don't seem to work. To clarify, once the container is built I can't seem to access the CLI tools, e.g. terraform --version says terraform not found.
Docker launches a VSCode window running in the container, and I am attempting to use the CLI tools from the VSCode terminal, if that makes a difference.
EDIT: So the issue is that creating an environment from the Docker dashboard doesn't read in your .devcontainer folder and files; it just creates a stock basic container. You need to clone the repository, open it in VSCode, and then Reopen in Container, and it will build your environment.
I swapped to Ubuntu as the base image instead of Alpine, and instead of creating the dev environment from the Docker dashboard I opened the project folder locally in VSCode and selected "Reopen in Container". It then seemed to install everything, and I have the CLI tools available now.
The below install commands come from the official documentation from each provider. I'm going to retest pulling the repository down through the Docker dashboard to see if it works.
# [Choice] Ubuntu version: bionic, focal
ARG VARIANT="focal"
FROM mcr.microsoft.com/vscode/devcontainers/base:0-${VARIANT}
# Installs Terragrunt + Terraform
ARG TERRAGRUNT_PATH=/bin/terragrunt
ARG TERRAGRUNT_VERSION=0.31.1
RUN wget https://github.com/gruntwork-io/terragrunt/releases/download/v${TERRAGRUNT_VERSION}/terragrunt_linux_amd64 -O ${TERRAGRUNT_PATH} \
&& chmod 755 ${TERRAGRUNT_PATH}
# Installs GCloud SDK
RUN echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] http://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key --keyring /usr/share/keyrings/cloud.google.gpg add - && apt-get update -y && apt-get install google-cloud-sdk -y

It seems you are running Vue CLI inside a container

I'm trying to run my Vue.js app using VS Code Remote - Containers. It's deployed and I can access it via the URL localhost:8080/, but if I update some JS file, it's not recompiling and not even hot-reloading.
devcontainer.json
{
  "name": "Aquawebvue",
  "dockerFile": "Dockerfile",
  "appPort": [3000],
  "runArgs": ["-u", "node"],
  "settings": {
    "workbench.colorTheme": "Cobalt2",
    "terminal.integrated.automationShell.linux": "/bin/bash"
  },
  "postCreateCommand": "yarn",
  "extensions": [
    "esbenp.prettier-vscode",
    "wesbos.theme-cobalt2"
  ]
}
Dockerfile
FROM node:12.13.0
RUN npm install -g prettier
After opening the container and running 'yarn serve' in the terminal, it builds and deploys successfully, but I get this warning:
It seems you are running Vue CLI inside a container.
Since you are using a non-root publicPath, the hot-reload socket
will not be able to infer the correct URL to connect. You should
explicitly specify the URL via devServer.public.
VSCode has a pre-defined .devcontainer directory for Vue projects. It can be found on GitHub. You can install it automatically by running the command Add Development Container Configuration Files... > Show All Definitions > Vue.
Dockerfile
# [Choice] Node.js version: 14, 12, 10
ARG VARIANT=14
FROM mcr.microsoft.com/vscode/devcontainers/javascript-node:0-${VARIANT}
RUN su node -c "umask 0002 && npm install -g http-server @vue/cli @vue/cli-service-global"
WORKDIR /app
EXPOSE 8080
# [Optional] Uncomment this section to install additional OS packages.
# RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
# && apt-get -y install --no-install-recommends <your-package-list-here>
devcontainer.json
{
  "name": "Vue (Community)",
  "build": {
    "dockerfile": "Dockerfile",
    "context": "..",
    // Update 'VARIANT' to pick a Node version: 10, 12, 14
    "args": { "VARIANT": "14" }
  },
  // Set *default* container specific settings.json values on container create.
  "settings": {
    "terminal.integrated.shell.linux": "/bin/zsh"
  },
  // Add the IDs of extensions you want installed when the container is created.
  "extensions": [
    "dbaeumer.vscode-eslint",
    "octref.vetur"
  ],
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  "forwardPorts": [
    8080
  ],
  // Use 'postCreateCommand' to run commands after the container is created.
  // "postCreateCommand": "uname -a",
  // Comment out to connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
  "remoteUser": "node"
}
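As an alternative to switching templates, the warning itself points at devServer.public; a minimal vue.config.js sketch (the URL is an assumption, use whatever address you reach the forwarded port on):
// vue.config.js at the project root
module.exports = {
  devServer: {
    // externally visible URL of the dev server, so the hot-reload socket knows where to connect
    public: 'localhost:8080'
  }
}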
