AWS ECS EC2 ECR not updating files after deployment with docker volume nginx

I am stuck on an issue with my volume and ECS.
I would like to attach a volume so I can store .env files etc. in it and don't have to recreate them manually after every deployment.
The problem is that, the way I have it set up, it does not update (or overwrite) the files that are pushed to ECR. So if I make a code change and push it to git, the following happens:
A new image is created and pushed to ECR
New containers are created from the image pushed to ECR (a tag is dynamically assigned to the image)
When I do docker ps on EC2 I see the new containers, and the container with the code changes is built from the correct image that has just been pushed to ECR. So everything seems to be working fine up to this point.
But the code changes don't appear when I refresh the browser, not even after clearing caches.
I am attaching the volume to the folder /var/www/html where my app sits, so from my understanding this code should get replaced during deployment. But the problem is, it does not replace the code.
When I remove the volume, I can see the code changes every time a deployment finishes, but then I always have to create the .env file manually and run a couple of commands.
PS: I have another container (mysql) which sets up its volume in exactly the same way, and the changes I make in the database are persistent even after a new container is created.
Please see my Dockerfile and taskDefinition.json below to see how I deal with volumes.
Dockerfile:
ARG ALPINE_VERSION
FROM alpine:${ALPINE_VERSION}
# Setup document root
WORKDIR /var/www/html
# Install packages and remove default server definition
RUN apk add --no-cache \
    curl \
    nginx \
    php8 \
    php8-ctype \
    php8-curl \
    php8-dom \
    php8-fpm \
    php8-gd \
    php8-intl \
    php8-json \
    php8-mbstring \
    php8-mysqli \
    php8-pdo \
    php8-opcache \
    php8-openssl \
    php8-phar \
    php8-session \
    php8-xml \
    php8-xmlreader \
    php8-zlib \
    php8-tokenizer \
    php8-fileinfo \
    php8-xmlwriter \
    php8-simplexml \
    php8-pdo_mysql \
    php8-pdo_sqlite \
    php8-pecl-redis \
    php8-bcmath \
    php8-exif \
    supervisor \
    nano \
    sudo
# Create symlink so programs depending on `php` still function
RUN ln -s /usr/bin/php8 /usr/bin/php
# Configure nginx
COPY tools/docker/config/nginx.conf /etc/nginx/nginx.conf
# Configure PHP-FPM
COPY tools/docker/config/fpm-pool.conf /etc/php8/php-fpm.d/www.conf
COPY tools/docker/config/php.ini /etc/php8/conf.d/custom.ini
# Configure supervisord
COPY tools/docker/config/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# Make sure files/folders needed by the processes are accessible when they run under the nobody user
RUN chown -R nobody.nobody /var/www/html /run /var/lib/nginx /var/log/nginx
# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN apk update && apk add bash
# Install node npm
RUN apk add --update nodejs npm \
    && npm config set --global loglevel warn \
    && npm install --global marked \
    && npm install --global node-gyp \
    && npm install --global yarn \
    # Install node-sass's linux bindings
    && npm rebuild node-sass
# Switch to use a non-root user from here on
USER nobody
# Add application
COPY --chown=nobody ./ /var/www/html/
RUN cat /var/www/html/resources/js/Components/Sections/About.vue
RUN composer install --optimize-autoloader --no-interaction --no-progress --ignore-platform-req=ext-zip
USER root
RUN yarn && yarn run production
USER nobody
VOLUME /var/www/html
# Expose the port nginx is reachable on
EXPOSE 8080
# Let supervisord start nginx & php-fpm
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
# Configure a healthcheck to validate that everything is up&running
HEALTHCHECK --timeout=10s CMD curl --silent --fail http://127.0.0.1:8080/fpm-ping
taskDefinition.json:
{
    "containerDefinitions": [
        {
            "name": "fooweb-nginx-php",
            "cpu": 100,
            "memory": 512,
            "links": [
                "mysql"
            ],
            "portMappings": [
                {
                    "containerPort": 8080,
                    "hostPort": 80,
                    "protocol": "tcp"
                }
            ],
            "essential": true,
            "environment": [],
            "mountPoints": [
                {
                    "sourceVolume": "fooweb-storage-web",
                    "containerPath": "/var/www/html"
                }
            ]
        },
        {
            "name": "mysql",
            "image": "mysql",
            "cpu": 50,
            "memory": 512,
            "portMappings": [
                {
                    "containerPort": 3306,
                    "hostPort": 4306,
                    "protocol": "tcp"
                }
            ],
            "essential": true,
            "environment": [
                {
                    "name": "MYSQL_DATABASE",
                    "value": "123"
                },
                {
                    "name": "MYSQL_PASSWORD",
                    "value": "123"
                },
                {
                    "name": "MYSQL_USER",
                    "value": "123"
                },
                {
                    "name": "MYSQL_ROOT_PASSWORD",
                    "value": "123"
                }
            ],
            "mountPoints": [
                {
                    "sourceVolume": "fooweb-storage-mysql",
                    "containerPath": "/var/lib/mysql"
                }
            ]
        }
    ],
    "family": "art_web_task_definition",
    "taskRoleArn": "arn:aws:iam::123:role/ecs-task-execution-role",
    "executionRoleArn": "arn:aws:iam::123:role/ecs-task-execution-role",
    "networkMode": "bridge",
    "volumes": [
        {
            "name": "fooweb-storage-mysql",
            "dockerVolumeConfiguration": {
                "scope": "shared",
                "autoprovision": true,
                "driver": "local"
            }
        },
        {
            "name": "fooweb-storage-web",
            "dockerVolumeConfiguration": {
                "scope": "shared",
                "autoprovision": true,
                "driver": "local"
            }
        }
    ],
    "placementConstraints": [],
    "requiresCompatibilities": [
        "EC2"
    ],
    "cpu": "1536",
    "memory": "1536",
    "tags": []
}
So I believe there is some problem with the way I have set up the volume, or maybe there could be some permission issue?
Many thanks!

"I am attaching volume to the folder /var/www/html where sits my app,
so from my understanding this code should get replaced during
deployment."
That's the opposite of how docker volumes work.
It is going to ignore anything in /var/www/html inside the docker image, and instead reuse whatever you have in the mounted volume. Mounted volumes are primarily for persisting files between container restarts and image changes. If there is updated code in /var/www/html inside the image you are building, and you want that updated code to be active when your application is deployed, then you can't mount that as a volume.
If you are specifying a VOLUME instruction in your Dockerfile, then the very first time you ran your container it would have "initialized" the volume with the files that were inside the docker image, as part of the process of creating the volume. After that, the files in the volume on the host server are persisted across container restarts/deployments, and any new updates to that path inside new docker images are ignored.
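If the goal is just to keep the .env file and other runtime data between deployments, one way around this (a sketch, not taken from the original setup: the storage path is an assumption about where the app keeps its persistent files) is to mount the volume only over a sub-directory and let the code itself always come from the image:

"mountPoints": [
    {
        "sourceVolume": "fooweb-storage-web",
        "containerPath": "/var/www/html/storage"
    }
]

With that, /var/www/html is populated from the newly pushed image on every deployment, while only /var/www/html/storage persists. The .env values could instead be supplied through the container definition's "environment" array (or copied into place by an entrypoint script), so nothing has to be recreated by hand after a deployment.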

Related

Docker container exits (code 255) with error "task already exists" and does not restart automatically

I have a basic container that opens up an ssh tunnel to a machine.
Recently I noticed the container exited with error code 255, with an error message saying the task already exists:
"Id": "7eb92418992a1a1c3e44d6b47257dc503d4fa4d0f26050956533d617ac369479",
"Created": "2022-08-29T18:19:41.286843867Z",
"Path": "sh",
"Args": [
"-c",
"apk update && apk add openssh-client &&\n chmod 400 ~/.ssh/abc.pem\n while true; do \n exec ssh -o StrictHostKeyChecking=no -i ~/.ssh/abc.pem -nNT -L *:33333:localhost:5001 abc#192.168.1.1; \n done"
],
"State": {
"Status": "exited",
"Running": false,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 0,
"ExitCode": 255,
"Error": "task 7eb92418992a1a1c3e44d6b47257dc503d4fa4d0f26050956533d617ac369479: already exists",
"StartedAt": "2022-08-30T19:43:58.575463029Z",
"FinishedAt": "2022-08-30T19:51:23.511624168Z"
},
More importantly, even though the restart policy is always, the docker engine did not restart the container after it exited.
abc:
  container_name: abc
  image: alpine:latest
  restart: always
  command: >
    sh -c "apk update && apk add openssh-client &&
    chmod 400 ~/.ssh/${PEM_FILENAME}
    while true; do
    exec ssh -o StrictHostKeyChecking=no -i ~/.ssh/${PEM_FILENAME} -nNT -L *:33333:localhost:5001 abc@${IP};
    done"
  volumes:
    - ./ssh:/root/.ssh:rw
  expose:
    - 33333
Does anyone know under what circumstances the error task already exists can happen?
Also, any idea why the docker engine did not restart the container after it exited?
Update 1:
Also, any idea why the docker engine did not restart the container after it exited? [Answered by @Mihai]
According to Restart policy details:
A restart policy only takes effect after a container starts
successfully. In this case, starting successfully means that the
container is up for at least 10 seconds and Docker has started
monitoring it. This prevents a container which does not start at all from going into a restart loop.
Since we have:
"StartedAt": "2022-08-30T19:43:58.575463029Z",
"FinishedAt": "2022-08-30T19:51:23.511624168Z"
then FinishedAt - StartedAt ~ 8 seconds < 10 seconds, which is why the docker engine is not restarting the container. I don't think this is good logic; the docker engine should have a retry mechanism that retries, for instance, at least 3 times before giving up.
I would suggest this solution:
Create a Dockerfile in an empty folder:
FROM alpine:latest
RUN apk update && apk add openssh-client
build the image:
docker build -t alpinessh .
Run it with docker run:
docker run -d \
  --restart "always" \
  --name alpine_ssh \
  -u $(id -u):$(id -g) \
  -v $HOME/.ssh:/user/.ssh \
  -p 33333:33333 \
  alpinessh \
  ssh -o StrictHostKeyChecking=no -i /user/.ssh/${PEM_FILENAME} -nNT -L :33333:localhost:5001 abc@${IP}
(make sure to set the env variables that you need)
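For example, using the values that appear in the question (adjust to your own key file and host):

export PEM_FILENAME=abc.pem
export IP=192.168.1.1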
Running with docker-compose follows the same logic.
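A docker-compose equivalent might look roughly like this (a sketch under the same assumptions: the image built above is tagged alpinessh, and the user value matches your host UID:GID as in the docker run example):

abc:
  container_name: abc
  image: alpinessh              # image built from the Dockerfile above
  restart: always
  user: "1000:1000"             # replace with your host UID:GID
  command: ssh -o StrictHostKeyChecking=no -i /user/.ssh/${PEM_FILENAME} -nNT -L :33333:localhost:5001 abc@${IP}
  volumes:
    - ./ssh:/user/.ssh:rw
  ports:
    - "33333:33333"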
NOTE:
Mapping ~/.ssh inside the container is not the best of ideas. It would be better to copy the key to a different location and use it from there. Reason is: inside the container you are root and any files created in your ~/.ssh by the container would be created/accessed by root (uid=0). For example known_hosts - if you don't already have one, you will get a fresh new one owned by root.
For this reason I am running the container as the current UID:GID on the host.

It seems you are running Vue CLI inside a container

I'm trying to run my vuejs app using vs-code remote-containers. It's deployed and I can access it via the url localhost:8080/, but if I update some js file, it's not re-compiling and not even hot-reloading.
devcontainer.json
{
    "name": "Aquawebvue",
    "dockerFile": "Dockerfile",
    "appPort": [3000],
    "runArgs": ["-u", "node"],
    "settings": {
        "workbench.colorTheme": "Cobalt2",
        "terminal.integrated.automationShell.linux": "/bin/bash"
    },
    "postCreateCommand": "yarn",
    "extensions": [
        "esbenp.prettier-vscode",
        "wesbos.theme-cobalt2"
    ]
}
Dockerfile
FROM node:12.13.0
RUN npm install -g prettier
After opening the container and running 'yarn serve' in the terminal, it builds and deploys successfully, but I got this warning:
It seems you are running Vue CLI inside a container.
Since you are using a non-root publicPath, the hot-reload socket
will not be able to infer the correct URL to connect. You should
explicitly specify the URL via devServer.public.
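The warning itself points at the fix: tell the dev server which URL the browser should use for the hot-reload socket. A minimal vue.config.js sketch, assuming Vue CLI 3/4 with webpack-dev-server 3 and the forwarded port 8080 (this file is not part of the original project; adjust host/port to your setup):

// vue.config.js
module.exports = {
  devServer: {
    public: 'localhost:8080',   // URL the browser should use for the hot-reload websocket
    disableHostCheck: true      // accept requests arriving through the forwarded port
  }
};

After adding it, restart yarn serve so the dev server picks up the new config.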
VSCode has a pre-defined .devcontainer directory for Vue projects. It can be found on GitHub. You can install it automatically by running the command Add Development Container Configuration Files... > Show All Definitions > Vue.
Dockerfile
# [Choice] Node.js version: 14, 12, 10
ARG VARIANT=14
FROM mcr.microsoft.com/vscode/devcontainers/javascript-node:0-${VARIANT}
RUN su node -c "umask 0002 && npm install -g http-server @vue/cli @vue/cli-service-global"
WORKDIR /app
EXPOSE 8080
# [Optional] Uncomment this section to install additional OS packages.
# RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
# && apt-get -y install --no-install-recommends <your-package-list-here>
devcontainer.json
{
    "name": "Vue (Community)",
    "build": {
        "dockerfile": "Dockerfile",
        "context": "..",
        // Update 'VARIANT' to pick a Node version: 10, 12, 14
        "args": { "VARIANT": "14" }
    },
    // Set *default* container specific settings.json values on container create.
    "settings": {
        "terminal.integrated.shell.linux": "/bin/zsh"
    },
    // Add the IDs of extensions you want installed when the container is created.
    "extensions": [
        "dbaeumer.vscode-eslint",
        "octref.vetur"
    ],
    // Use 'forwardPorts' to make a list of ports inside the container available locally.
    "forwardPorts": [
        8080
    ],
    // Use 'postCreateCommand' to run commands after the container is created.
    // "postCreateCommand": "uname -a",
    // Comment out connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
    "remoteUser": "node"
}

VSCode: Display forwarding from docker container in Remote Development Extension

How to set up remote display forwarding from Docker container using the new Remote Development extension?
Currently, my .devcontainer contains:
devcontainer.json
{
    "name": "kinetic_v5",
    "context": "..",
    "dockerFile": "Dockerfile",
    "workspaceFolder": "/workspace",
    "runArgs": [
        "--net", "host",
        "-e", "DISPLAY=${env:DISPLAY}",
        "-e", "QT_GRAPHICSSYSTEM=native",
        "-e", "CONTAINER_NAME=kinetic_v5",
        "-v", "/tmp/.X11-unix:/tmp/.X11-unix",
        "--device=/dev/dri:/dev/dri",
        "--name=kinetic_v5"
    ],
    "extensions": [
        "ms-python.python"
    ]
}
Dockerfile
FROM docker.is.localnet:5000/amd/official:16.04
RUN apt-get update && \
apt-get install -y zsh \
fonts-powerline \
locales \
# set up locale
&& locale-gen en_US.UTF-8
RUN pip install Cython
# run the installation script
RUN wget https://github.com/robbyrussell/oh-my-zsh/raw/master/tools/install.sh -O - | zsh || true
CMD ["zsh"]
This doesn't seem to do the job.
Setup details:
OS: linux
Product: Visual Studio Code - Insiders
Product Version: 1.35.0-insider
Language: en
UPDATE: You can find a thread on official git repo about this issue here.
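For reference, one host-side prerequisite that setups like this usually need (an assumption here, not confirmed as the fix for this particular case) is allowing the container's X clients to talk to the host X server before opening the dev container:

# on the Linux host, permit local (non-network) connections to the X server
xhost +local:

Without that, GUI programs inside the container typically fail with "cannot open display" even though DISPLAY and /tmp/.X11-unix are passed through.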

Docker Image Not Loading on Localhost. Network, etc

I have been having a really hard time figuring out why my docker image is not loading on localhost. I am using Docker on Windows 7. Following is some information regarding my docker image, network, etc., in case it helps in finding a solution.
I have EXPOSE 80 in my Dockerfile and listen 8080 in my shinyserver.conf file. I run my image by typing docker run -p 80:8080 imagename. It goes back to the command line. Then I check http://localhost, http://localhost:8080, http://172.17.0.1, http://172.17.0.1:8080 and http://192.168.99.100, and the app does not show up.
When I run docker ps there is no running container (makes sense, since the app is not showing on localhost anyway). All exited containers show with docker ps -a. There are some containers that I haven't even created but they show up (I guess they come from creating images).
I ran docker inspect to find the IP addresses, and here is what I get (there is no IP on fitfarmz, which is my image name):
$ docker inspect -f '{{.Name}} - {{.NetworkSettings.IPAddress }}' $(docker ps -aq)
/mystifying_johnson -
/wonderful_newton -
/flamboyant_brown -
/gallant_austin -
/fitfarmz-2 -
/fitfarmz -
/amazing_easley -
/nervous_lewin -
/silly_wiles -
/lucid_hopper -
/quizzical_kirch -
/boring_gates -
/clever_booth -
/determined_mestorf -
/pedantic_wozniak -
/goofy_goldstine -
/sharp_ardinghelli -
/xenodochial_lamport -
/keen_panini -
/blissful_lamarr -
/suspicious_boyd -
/confident_hodgkin -
/vigorous_lewin - 172.17.0.2
/quirky_khorana -
/agitated_knuth -
I also got info on docker network:
$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "9105bd8d679ad2e7d814781b4fa99f375cff3a99a047e70ef63e463c35c5ae28",
        "Created": "2018-09-08T22:22:51.075519268Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
Here is the Dockerfile:
FROM r-base:3.5.0
# Install Ubuntu packages
RUN apt-get update && apt-get install -y \
sudo \
gdebi-core \
pandoc \
pandoc-citeproc \
libcurl4-gnutls-dev \
libcairo2-dev/unstable \
libxt-dev \
libssl-dev
# Add shiny user
RUN groupadd shiny \
&& useradd --gid shiny --shell /bin/bash --create-home shiny
# Download and install ShinyServer
RUN wget --no-verbose https://download3.rstudio.org/ubuntu-14.04/x86_64/shiny-server-1.5.7.907-amd64.deb && \
gdebi shiny-server-1.5.7.907-amd64.deb
# Install R packages that are required
RUN R -e "install.packages(c('Benchmarking', 'plotly', 'DT'), repos='http://cran.rstudio.com/')"
RUN R -e "install.packages('shiny', repos='https://cloud.r-project.org/')"
# Copy configuration files into the Docker image
COPY shiny-server.conf /etc/shiny-server/shiny-server.conf
COPY /app /srv/shiny-server/
# Make the ShinyApp available at port 80
EXPOSE 80
# Copy further configuration files into the Docker image
COPY shiny-server.sh /usr/bin/shiny-server.sh
CMD ["/usr/bin/shiny-server.sh"]
And shinyserver.conf:
# Define the user we should use when spawning R Shiny processes
run_as shiny;
# Define a top-level server which will listen on a port
server {
# Instruct this server to listen on port 80. The app at dokku-alt need expose PORT 80, or 500 e etc. See the docs
listen 8080;
# Define the location available at the base URL
location / {
# Run this location in 'site_dir' mode, which hosts the entire directory
# tree at '/srv/shiny-server'
site_dir /srv/shiny-server;
# Define where we should put the log files for this location
log_dir /var/log/shiny-server;
# Should we list the contents of a (non-Shiny-App) directory when the user
# visits the corresponding URL?
directory_index on;
}
}
Here is the shiny-server.sh file:
mkdir -p /var/log/shiny-server
chown shiny.shiny /var/log/shiny-server
exec shiny-server >> /var/log/shiny-server.log 2>&1
Any ideas what is going wrong? I use the command: docker run -p 80:8080 fitfarmz to run the image.
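Since docker ps -a shows the containers as exited, the container starts and then stops; a general first debugging step (not specific to this image) is to read its output, e.g.:

docker ps -a              # note the container ID of the latest fitfarmz run
docker logs <container-id>

Also note that -p takes host_port:container_port, so docker run -p 80:8080 fitfarmz publishes container port 8080 (the port shiny-server listens on per shinyserver.conf) on host port 80, i.e. the app would be reachable at http://localhost without a port suffix, provided the container stays running.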

ssh-agent does not remember identities when running inside a docker container in DC/OS

I am trying to run a service using DC/OS and Docker. I created my Stack using the template for my region from here. I also created the following Dockerfile:
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y expect openssh-client
WORKDIR "/root"
ENTRYPOINT eval "$(ssh-agent -s)" && \
mkdir -p .ssh && \
echo $PRIVATE_KEY > .ssh/id_rsa && \
chmod 600 /root/.ssh/id_rsa && \
expect -c "spawn ssh-add /root/.ssh/id_rsa; expect \"Enter passphrase for /root/.ssh/id_rsa:\" send \"\"; interact " && \
while true; do ssh-add -l; sleep 2; done
I have a private repository that I would like to clone/pull from when the docker container starts. This is why I am trying to add the private key to the ssh-agent.
If I run this image as a docker container locally and supply the private key using the PRIVATE_KEY environment variable, everything works fine. I see that the identity is added.
The problem that I have is that when I try to run a service on DC/OS using the docker image, the ssh-agent does not seem to remember the identity that was added using the private key.
I have checked the error log from DC/OS. There are no errors.
Does anyone know why running the docker container on DC/OS is any different compared to running it locally?
EDIT: I have added the description of the DC/OS service in case it helps:
{
    "id": "/SOME-ID",
    "instances": 1,
    "cpus": 1,
    "mem": 128,
    "disk": 0,
    "gpus": 0,
    "constraints": [],
    "fetch": [],
    "storeUrls": [],
    "backoffSeconds": 1,
    "backoffFactor": 1.15,
    "maxLaunchDelaySeconds": 3600,
    "container": {
        "type": "DOCKER",
        "volumes": [],
        "docker": {
            "image": "IMAGE NAME FROM DOCKERHUB",
            "network": "BRIDGE",
            "portMappings": [{
                "containerPort": SOME PORT NUMBER,
                "hostPort": SOME PORT NUMBER,
                "servicePort": SERVICE PORT NUMBER,
                "protocol": "tcp",
                "name": "default"
            }],
            "privileged": false,
            "parameters": [],
            "forcePullImage": true
        }
    },
    "healthChecks": [],
    "readinessChecks": [],
    "dependencies": [],
    "upgradeStrategy": {
        "minimumHealthCapacity": 1,
        "maximumOverCapacity": 1
    },
    "unreachableStrategy": {
        "inactiveAfterSeconds": 300,
        "expungeAfterSeconds": 600
    },
    "killSelection": "YOUNGEST_FIRST",
    "requirePorts": true,
    "env": {
        "PRIVATE_KEY": "ID_RSA PRIVATE_KEY WITH \n LINE BREAKS"
    }
}
Docker Version
Check that your local version of Docker matches the version installed on the DC/OS agents. By default, the DC/OS 1.9.3 AWS CloudFormation templates use CoreOS 1235.12.0, which comes with Docker 1.12.6. It's possible that the entrypoint behavior has changed since then.
Docker Command
Check the Mesos task logs for the Marathon app in question and see what docker run command was executed. You might be passing it slightly different arguments when testing locally.
Script Errors
As mentioned in another answer, the script you provided has several errors that may or may not be related to the failure.
echo $PRIVATE_KEY should be echo "$PRIVATE_KEY" to preserve line breaks. Otherwise key decryption will fail with Bad passphrase, try again for /root/.ssh/id_rsa:.
expect -c "spawn ssh-add /root/.ssh/id_rsa; expect \"Enter passphrase for /root/.ssh/id_rsa:\" send \"\"; interact " should be expect -c "spawn ssh-add /root/.ssh/id_rsa; expect \"Enter passphrase for /root/.ssh/id_rsa:\"; send \"\n\"; interact ". It's missing a semi-colon and a line break. Otherwise the expect command fails without executing.
File Based Secrets
Enterprise DC/OS 1.10 (1.10.0-rc1 out now) has a new feature named File Based Secrets which allows for injecting files (like id_rsa files) without including their contents in the Marathon app definition, storing them securely in Vault using DC/OS Secrets.
Creation: https://docs.mesosphere.com/1.10/security/secrets/create-secrets/
Usage: https://docs.mesosphere.com/1.10/security/secrets/use-secrets/
File based secrets won't do the ssh-add for you, but it should make it easier and more secure to get the file into the container.
Mesos Bug
Mesos 1.2.0 switched to using Docker's --env-file option instead of -e to pass in environment variables. This triggers a Docker --env-file bug: it doesn't support line breaks. A workaround was put into Mesos and DC/OS, but the fix may not be in the minor version you are using.
A manual workaround is to convert the id_rsa to base64 for the Marathon definition and decode it back in your entrypoint script.
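A rough sketch of that workaround (illustrative only; the PRIVATE_KEY_B64 variable name is an invented example, and -w0 assumes GNU base64 on a Linux workstation):

# on the workstation: encode the key without line wrapping, paste the output into the Marathon env
base64 -w0 ~/.ssh/id_rsa

# in the entrypoint, instead of echo "$PRIVATE_KEY" > .ssh/id_rsa:
echo "$PRIVATE_KEY_B64" | base64 -d > /root/.ssh/id_rsa

Because the base64 string contains no line breaks, it passes through the environment unchanged.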
The key file contents being passed via PRIVATE_KEY originally contain line breaks. After echoing the PRIVATE_KEY variable content to ~/.ssh/id_rsa the line breaks will be gone. You can fix that issue by wrapping the $PRIVATE_KEY variable with double quotes.
Another issue arises when the container is started without an attached TTY, which would normally be provided via the -i -t command line parameters to docker run. The password request will fail and the ssh key won't be added to the ssh-agent. For a container run in DC/OS, that interaction probably won't make sense anyway, so you should change your entrypoint script accordingly. That will require your ssh key to be passwordless.
This changed Dockerfile should work:
ENTRYPOINT eval "$(ssh-agent -s)" && \
mkdir -p .ssh && \
echo "$PRIVATE_KEY" > .ssh/id_rsa && \
chmod 600 /root/.ssh/id_rsa && \
ssh-add /root/.ssh/id_rsa && \
while true; do ssh-add -l; sleep 2; done
