It seems you are running Vue CLI inside a container - docker

I'm trying to run my Vue.js app using VS Code Remote - Containers. It's deployed and I can access it via the URL localhost:8080/, but if I update a JS file it doesn't recompile and doesn't hot-reload.
devcontainer.json
{
  "name": "Aquawebvue",
  "dockerFile": "Dockerfile",
  "appPort": [3000],
  "runArgs": ["-u", "node"],
  "settings": {
    "workbench.colorTheme": "Cobalt2",
    "terminal.integrated.automationShell.linux": "/bin/bash"
  },
  "postCreateCommand": "yarn",
  "extensions": [
    "esbenp.prettier-vscode",
    "wesbos.theme-cobalt2"
  ]
}
Dockerfile
FROM node:12.13.0
RUN npm install -g prettier
After opening the container and running 'yarn serve' in the terminal, it builds and deploys successfully, but I get this warning:
It seems you are running Vue CLI inside a container.
Since you are using a non-root publicPath, the hot-reload socket
will not be able to infer the correct URL to connect. You should
explicitly specify the URL via devServer.public.

VS Code has a predefined .devcontainer configuration for Vue projects. It can be found on GitHub, and you can add it automatically by running the command Add Development Container Configuration Files... > Show All Definitions > Vue.
Dockerfile
# [Choice] Node.js version: 14, 12, 10
ARG VARIANT=14
FROM mcr.microsoft.com/vscode/devcontainers/javascript-node:0-${VARIANT}
RUN su node -c "umask 0002 && npm install -g http-server @vue/cli @vue/cli-service-global"
WORKDIR /app
EXPOSE 8080
# [Optional] Uncomment this section to install additional OS packages.
# RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
# && apt-get -y install --no-install-recommends <your-package-list-here>
devcontainer.json
{
  "name": "Vue (Community)",
  "build": {
    "dockerfile": "Dockerfile",
    "context": "..",
    // Update 'VARIANT' to pick a Node version: 10, 12, 14
    "args": { "VARIANT": "14" }
  },
  // Set *default* container specific settings.json values on container create.
  "settings": {
    "terminal.integrated.shell.linux": "/bin/zsh"
  },
  // Add the IDs of extensions you want installed when the container is created.
  "extensions": [
    "dbaeumer.vscode-eslint",
    "octref.vetur"
  ],
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  "forwardPorts": [8080],
  // Use 'postCreateCommand' to run commands after the container is created.
  // "postCreateCommand": "uname -a",
  // Comment out to connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
  "remoteUser": "node"
}
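Separately from the container definition, the warning itself points at devServer.public. A minimal vue.config.js sketch addressing it (the host/port and poll interval are example values to adapt; this assumes Vue CLI with webpack-dev-server 3, where polling is the usual fix when file-change events don't reach the container's watcher):
// vue.config.js - a minimal sketch, assuming Vue CLI / webpack-dev-server 3
module.exports = {
  devServer: {
    // URL the hot-reload socket should connect to, as seen from the host browser (example value)
    public: 'localhost:8080',
    watchOptions: {
      // poll for file changes every second; inotify events often don't propagate into containers
      poll: 1000
    }
  }
}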

Related

Docker image build fails: "protoc-gen-grpc-web: program not found or is not executable"

I inherited a project with several microservices running on Kubernetes. After cloning the repo and running the steps that the previous team outlined, I have an issue building one of the images that I need to deploy. The build script is:
cd graph_endpoint
cp ../../Protobufs/Graph_Endpoint/graph_endpoint.proto .
protoc -I. graph_endpoint.proto --js_out=import_style=commonjs:.
protoc -I. graph_endpoint.proto --grpc-web_out=import_style=commonjs,mode=grpcwebtext:.
export NODE_OPTIONS=--openssl-legacy-provider
npx webpack ./test.js --mode development
cp ./dist/graph_endpoint.js ../public/graph_endpoint.js
cd ..
docker build . -t $1/canvas-lti-frontend:v2
docker push $1/canvas-lti-frontend:v2
I'm getting an error from line 4:
protoc-gen-grpc-web: program not found or is not executable
--grpc-web_out: protoc-gen-grpc-web: Plugin failed with status code 1.
Any idea how to fix it? I have no prior experience with Docker.
Here's the Dockerfile:
FROM node:16
# Install app dependencies
COPY package*.json /frontend-app/
WORKDIR /frontend-app
RUN npm install
COPY server.js /frontend-app/
# Bundle app source
COPY public /frontend-app/public
COPY routes /frontend-app/routes
COPY controllers /frontend-app/controllers
WORKDIR /frontend-app
EXPOSE 3000
CMD [ "node", "server.js"]
and package.json:
{
  "name": "frontend",
  "version": "1.0.0",
  "description": "The user-facing application for the Canvas LTI Student Climate Dashboard",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "@okta/oidc-middleware": "^4.3.0",
    "@okta/okta-signin-widget": "^5.14.0",
    "express": "^4.18.2",
    "express-session": "^1.17.2",
    "vue": "^2.6.14"
  },
  "devDependencies": {
    "nodemon": "^2.0.20",
    "protoc-gen-grpc-web": "^1.4.1"
  }
}
You don't have protoc-gen-grpc-web installed on the machine on which you're running the build script.
You can download the plugins from the grpc-web repo's releases page.
protoc has a plugin mechanism.
protoc looks for its plugins on the PATH and expects these binaries to be named protoc-gen-{foo}.
However, when you reference the plugin from protoc, you simply use {foo}, generally suffixed with _out and sometimes _opt, i.e. protoc ... --{foo}_out --{foo}_opt.
The plugin protoc-gen-grpc-web (once installed and accessible on the host's PATH) is thus referenced with protoc ... --grpc-web_out=...
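As a sketch of what installing it can look like on Linux (the version and asset name below are examples; pick the current release and your platform from the grpc-web releases page):
# Download the plugin binary to a directory on the PATH (hypothetical version; check the releases page)
curl -sSL -o /usr/local/bin/protoc-gen-grpc-web \
  https://github.com/grpc/grpc-web/releases/download/1.4.2/protoc-gen-grpc-web-1.4.2-linux-x86_64
# protoc requires plugin binaries to be executable
chmod +x /usr/local/bin/protoc-gen-grpc-web
After that, the --grpc-web_out flag in your build script should resolve the plugin.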

AWS ECS EC2 ECR not updating files after deployment with docker volume nginx

I am stuck on an issue with my volume and ECS.
I would like to attach a volume so I can store .env files and the like there, so I don't have to recreate them manually after every deployment.
The problem is that, the way I have it set up, it does not update (or overwrite) the files pushed to ECR. So if I make a code change and push it to git, the following happens:
It creates a new image and pushes it to ECR
It creates new containers from the image just pushed to ECR (it dynamically assigns a tag to the image)
When I run docker ps on EC2 I see the new containers, and the container with the code changes is built from the correct image that has just been pushed to ECR. So it seems all is working fine up to this point.
But the code changes don't appear when I refresh the browser, not even after clearing caches.
I am attaching a volume to the folder /var/www/html where my app sits, so from my understanding this code should get replaced during deployment. The problem is that it does not replace the code.
When I remove the volume, I can see the code changes every time a deployment finishes, but then I always have to create the .env file manually and run a couple of commands.
PS: I have another container (mysql) that sets up its volume in exactly the same way, and changes I make in the database are persistent even after a new container is created.
Please see my Dockerfile and taskDefinition.json below to see how I deal with volumes.
Dockerfile:
FROM alpine:${ALPINE_VERSION}
# Setup document root
WORKDIR /var/www/html
# Install packages and remove default server definition
RUN apk add --no-cache \
curl \
nginx \
php8 \
php8-ctype \
php8-curl \
php8-dom \
php8-fpm \
php8-gd \
php8-intl \
php8-json \
php8-mbstring \
php8-mysqli \
php8-pdo \
php8-opcache \
php8-openssl \
php8-phar \
php8-session \
php8-xml \
php8-xmlreader \
php8-zlib \
php8-tokenizer \
php8-fileinfo \
php8-json \
php8-xml \
php8-xmlwriter \
php8-simplexml \
php8-dom \
php8-pdo_mysql \
php8-pdo_sqlite \
php8-tokenizer \
php8-pecl-redis \
php8-bcmath \
php8-exif \
supervisor \
nano \
sudo
# Create symlink so programs depending on `php` still function
RUN ln -s /usr/bin/php8 /usr/bin/php
# Configure nginx
COPY tools/docker/config/nginx.conf /etc/nginx/nginx.conf
# Configure PHP-FPM
COPY tools/docker/config/fpm-pool.conf /etc/php8/php-fpm.d/www.conf
COPY tools/docker/config/php.ini /etc/php8/conf.d/custom.ini
# Configure supervisord
COPY tools/docker/config/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# Make sure files/folders needed by the processes are accessible when they run under the nobody user
RUN chown -R nobody.nobody /var/www/html /run /var/lib/nginx /var/log/nginx
# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN apk update && apk add bash
# Install node npm
RUN apk add --update nodejs npm \
&& npm config set --global loglevel warn \
&& npm install --global marked \
&& npm install --global node-gyp \
&& npm install --global yarn \
# Install node-sass's linux bindings
&& npm rebuild node-sass
# Switch to use a non-root user from here on
USER nobody
# Add application
COPY --chown=nobody ./ /var/www/html/
RUN cat /var/www/html/resources/js/Components/Sections/About.vue
RUN composer install --optimize-autoloader --no-interaction --no-progress --ignore-platform-req=ext-zip --ignore-platform-req=ext-zip
USER root
RUN yarn && yarn run production
USER nobody
VOLUME /var/www/html
# Expose the port nginx is reachable on
EXPOSE 8080
# Let supervisord start nginx & php-fpm
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
# Configure a healthcheck to validate that everything is up&running
HEALTHCHECK --timeout=10s CMD curl --silent --fail http://127.0.0.1:8080/fpm-ping
taskDefinition.json
{
  "containerDefinitions": [
    {
      "name": "fooweb-nginx-php",
      "cpu": 100,
      "memory": 512,
      "links": ["mysql"],
      "portMappings": [
        {
          "containerPort": 8080,
          "hostPort": 80,
          "protocol": "tcp"
        }
      ],
      "essential": true,
      "environment": [],
      "mountPoints": [
        {
          "sourceVolume": "fooweb-storage-web",
          "containerPath": "/var/www/html"
        }
      ]
    },
    {
      "name": "mysql",
      "image": "mysql",
      "cpu": 50,
      "memory": 512,
      "portMappings": [
        {
          "containerPort": 3306,
          "hostPort": 4306,
          "protocol": "tcp"
        }
      ],
      "essential": true,
      "environment": [
        { "name": "MYSQL_DATABASE", "value": "123" },
        { "name": "MYSQL_PASSWORD", "value": "123" },
        { "name": "MYSQL_USER", "value": "123" },
        { "name": "MYSQL_ROOT_PASSWORD", "value": "123" }
      ],
      "mountPoints": [
        {
          "sourceVolume": "fooweb-storage-mysql",
          "containerPath": "/var/lib/mysql"
        }
      ]
    }
  ],
  "family": "art_web_task_definition",
  "taskRoleArn": "arn:aws:iam::123:role/ecs-task-execution-role",
  "executionRoleArn": "arn:aws:iam::123:role/ecs-task-execution-role",
  "networkMode": "bridge",
  "volumes": [
    {
      "name": "fooweb-storage-mysql",
      "dockerVolumeConfiguration": {
        "scope": "shared",
        "autoprovision": true,
        "driver": "local"
      }
    },
    {
      "name": "fooweb-storage-web",
      "dockerVolumeConfiguration": {
        "scope": "shared",
        "autoprovision": true,
        "driver": "local"
      }
    }
  ],
  "placementConstraints": [],
  "requiresCompatibilities": ["EC2"],
  "cpu": "1536",
  "memory": "1536",
  "tags": []
}
So I believe there is some problem with the way I have set up the volume, or maybe there is some permission issue?
Many thanks!
"I am attaching volume to the folder /var/www/html where sits my app,
so from my understanding this code should get replaced during
deployment."
That's the opposite of how Docker volumes work.
It is going to ignore anything in /var/www/html inside the Docker image and instead reuse whatever you have in the mounted volume. Mounted volumes are primarily for persisting files between container restarts and image changes. If there is updated code in /var/www/html inside the image you are building, and you want that updated code to be active when your application is deployed, then you can't mount a volume over that path.
Since you specify a VOLUME instruction in your Dockerfile, the very first time you ran your container it would have "initialized" the volume with the files inside the image, as part of creating the volume. After that, the files in the volume on the host server persist across container restarts and deployments, and any new updates to that path inside new Docker images are ignored.
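A quick way to see this behavior from a shell (a minimal sketch; myimage:v1 and myimage:v2 are hypothetical tags standing in for two successive ECR pushes):
# First use of the empty named volume: it is initialized from the image's /var/www/html
docker volume create demo-web
docker run --rm -v demo-web:/var/www/html myimage:v1 ls /var/www/html
# A later image with changed code does not win; the volume's existing files are reused
docker run --rm -v demo-web:/var/www/html myimage:v2 ls /var/www/html
This same mechanism is why your mysql container behaves as expected: for a database, keeping the existing files across deployments is exactly the point of the volume.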

Create a custom Docker dev environment

I like the Docker dev environment tool, but I'd also like to have some tools preinstalled when a user clones the repository using the Docker Dev Environment tool.
I have a .devcontainer folder in the repository with a Dockerfile:
# [Choice] Alpine version: 3.13, 3.12, 3.11, 3.10
ARG VARIANT="3.13"
FROM mcr.microsoft.com/vscode/devcontainers/base:0-alpine-${VARIANT}
# Install Terraform CLI
# Install GCloud SDK
And a devcontainer.json file:
{
  "name": "Alpine",
  "build": {
    "dockerfile": "Dockerfile",
    // Update 'VARIANT' to pick an Alpine version: 3.10, 3.11, 3.12, 3.13
    "args": { "VARIANT": "3.13" }
  },
  // Set *default* container specific settings.json values on container create.
  "settings": {},
  // Add the IDs of extensions you want installed when the container is created.
  // Note that some extensions may not work in Alpine Linux. See https://aka.ms/vscode-remote/linux.
  "extensions": [],
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  // "forwardPorts": [],
  // Use 'postCreateCommand' to run commands after the container is created.
  // "postCreateCommand": "uname -a",
  // Uncomment when using a ptrace-based debugger like C++, Go, and Rust
  // "runArgs": [ "--cap-add=SYS_PTRACE", "--security-opt", "seccomp=unconfined" ],
  // Comment out to connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
  "remoteUser": "vscode"
}
I've tried to include curl and install commands in the Dockerfile, but the commands just don't seem to work. To clarify: once the container is built, I can't access the CLI tools, e.g. terraform --version says terraform not found.
Docker launches a VS Code window running in the container, and I am attempting to use the CLI tools from the VS Code terminal, if that makes a difference.
EDIT: So the issue is that creating an environment from the Docker dashboard doesn't read your .devcontainer folder and files; it just creates a stock basic container. You need to clone the repository, open it in VS Code, and then Reopen in Container, and it will build your environment.
I swapped to Ubuntu as the base image instead of Alpine, and then, instead of creating the dev environment from the Docker dashboard, I opened the project folder locally in VS Code and selected "Reopen in Container". It then installed everything, and I have the CLI tools available now.
The install commands below come from the official documentation of each provider. I'm going to retest pulling the repository down through the Docker dashboard to see if it works.
# [Choice] Ubuntu version: bionic, focal
ARG VARIANT="focal"
FROM mcr.microsoft.com/vscode/devcontainers/base:0-${VARIANT}
# Installs Terragrunt + Terraform
ARG TERRAGRUNT_PATH=/bin/terragrunt
ARG TERRAGRUNT_VERSION=0.31.1
RUN wget https://github.com/gruntwork-io/terragrunt/releases/download/v${TERRAGRUNT_VERSION}/terragrunt_linux_amd64 -O ${TERRAGRUNT_PATH} \
&& chmod 755 ${TERRAGRUNT_PATH}
# Installs GCloud SDK
RUN echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] http://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key --keyring /usr/share/keyrings/cloud.google.gpg add - && apt-get update -y && apt-get install google-cloud-sdk -y

VSCode: Display forwarding from docker container in Remote Development Extension

How do I set up remote display forwarding from a Docker container using the new Remote Development extension?
Currently, my .devcontainer contains:
devcontainer.json
{
  "name": "kinetic_v5",
  "context": "..",
  "dockerFile": "Dockerfile",
  "workspaceFolder": "/workspace",
  "runArgs": [
    "--net", "host",
    "-e", "DISPLAY=${env:DISPLAY}",
    "-e", "QT_GRAPHICSSYSTEM=native",
    "-e", "CONTAINER_NAME=kinetic_v5",
    "-v", "/tmp/.X11-unix:/tmp/.X11-unix",
    "--device=/dev/dri:/dev/dri",
    "--name=kinetic_v5"
  ],
  "extensions": [
    "ms-python.python"
  ]
}
Dockerfile
FROM docker.is.localnet:5000/amd/official:16.04
RUN apt-get update && \
apt-get install -y zsh \
fonts-powerline \
locales \
# set up locale
&& locale-gen en_US.UTF-8
RUN pip install Cython
# run the installation script
RUN wget https://github.com/robbyrussell/oh-my-zsh/raw/master/tools/install.sh -O - | zsh || true
CMD ["zsh"]
This doesn't seem to do the job.
Setup details:
OS: linux
Product: Visual Studio Code - Insiders
Product Version: 1.35.0-insider
Language: en
UPDATE: You can find a thread about this issue on the official GitHub repo here.
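One step the runArgs above don't cover, and a common requirement for X11-in-Docker setups generally (an assumption, not something confirmed in that thread), is authorizing local clients against the host's X server before attaching:
# On the host: allow local (non-network) clients, such as the container, to use the X server
xhost +local: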

Packer fails my docker build with error "sudo: not found" despite sudo being present

I'm trying to build a Packer image with Docker on it, and I want Docker to create a Docker image with a custom script. The relevant portion of my code is (note that the first block double-checks that sudo is installed):
{
  "type": "shell",
  "inline": [
    "apt-get install sudo"
  ]
},
{
  "type": "docker",
  "image": "python:3",
  "commit": true,
  "changes": [
    "RUN pip install Flask",
    "CMD [\"python\", \"echo.py\"]"
  ]
}
The relevant portion of my screen output is:
==> docker: Provisioning with shell script: /var/folders/s8/g1_gobbldygook/T/packer-shell23453453245
docker: /tmp/script_1234.sh: 3: /tmp/script_1234.sh: sudo: not found
==> docker: Killing the container: 34234hashvomit234234
Build 'docker' errored: Script exited with non-zero exit status: 127
The script in question is not one of mine; it's some randomly generated script with a different series of four numbers every time I build. I'm new to both Packer and Docker, so maybe the problem is obvious, but it's not to me.
There seem to be a few problems with your Packer input. Since you haven't included the complete input file it's hard to tell, but I notice a couple of things:
You probably need to run apt-get update before calling apt-get install sudo. Without that, even if the image has cached package metadata, it is probably stale. If I try to build an image using your input, it fails with:
E: Unable to locate package sudo
While not a problem in this context, it's good to explicitly include -y on the apt-get command line when you're running it non-interactively:
apt-get -y install sudo
In situations where apt-get is attached to a terminal, this prevents it from prompting for confirmation. It is not a necessary change to your input, but I figure it's good to be explicit.
Based on the docs and on my testing, you can't include a RUN statement in the changes block of a docker builder. That fails with:
Stderr: Error response from daemon: run is not a valid change command
Fortunately, we can move that pip install command into a shell provisioner.
With those changes, the following input successfully builds an image:
{
  "builders": [
    {
      "type": "docker",
      "image": "python:3",
      "commit": true
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "apt-get update",
        "apt-get -y install sudo",
        "pip install Flask"
      ]
    }
  ],
  "post-processors": [
    [
      {
        "type": "docker-tag",
        "repository": "packer-test",
        "tag": "latest"
      }
    ]
  ]
}
(NB: Tested using Packer v1.3.5)
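To try it out, save the corrected input and run a build (template.json is a hypothetical filename for the file above):
# Validate the template, then build; the post-processor tags the result packer-test:latest
packer validate template.json
packer build template.json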
