Why does my docker image fail when running as task in AWS ECS (Fargate)?

I have a docker image in ECR, which is used for my ECS task. The task spins up and runs for a couple of minutes. Then it shuts down, after reporting the following error:
2021-11-07 00:00:58 npm ERR! A complete log of this run can be found in:
2021-11-07 00:00:58 npm ERR! /home/node/.npm/_logs/2021-11-07T00_00_58_665Z-debug.log
2021-11-07 00:00:58 npm ERR! signal SIGTERM
2021-11-07 00:00:58 npm ERR! command sh -c node bin/www
2021-11-07 00:00:58 npm ERR! command failed
2021-11-07 00:00:58 npm ERR! path /usr/src/app
2021-11-06 23:59:25 > my-app@0.0.0 start
2021-11-06 23:59:25 > node bin/www
My Dockerfile looks like:
LABEL maintainer="my-team"
LABEL description="App for AWS ECS"
EXPOSE 8080
WORKDIR /usr/src/app
RUN chown -R node:node /usr/src/app
RUN apk add bash
RUN apk add openssl
COPY --chown=node src/package*.json ./
USER node
ARG NODE_ENV=dev
ENV NODE_ENV ${NODE_ENV}
RUN npm ci
COPY --chown=node ./src/generate-cert.sh ./
RUN ./generate-cert.sh
COPY --chown=node src/ ./
ENTRYPOINT ["npm","start"]
My package.json contains:
{
  "name": "my-app",
  "version": "0.0.0",
  "private": true,
  "scripts": {
    "start": "node ./bin/www",
    "test": "jest --coverage"
  },
The app is provisioned using Terraform, with the following task definition:
resource "aws_ecs_task_definition" "task_definition" {
family = "dataviz_task"
network_mode = "awsvpc"
requires_compatibilities = ["FARGATE"]
cpu = "256"
memory = "512"
task_role_arn = aws_iam_role.dataviz_ecs_role.arn
execution_role_arn = aws_iam_role.dataviz_ecs_task_execution_role.arn
container_definitions = jsonencode([{
entryPoint : [
"npm",
"start"
],
environment : [
{ "name" : "ENV", "value" : local.container_environment }
]
essential : true,
image : "${var.account_id}${var.ecr_image_address}:latest",
lifecycle : {
ignore_changes : "image"
}
logConfiguration : {
"logDriver" : "awslogs",
"options" : {
"awslogs-group" : var.log_stream_name,
"awslogs-region" : var.region,
"awslogs-stream-prefix" : "ecs"
}
},
name : local.container_name,
portMappings : [
{
"containerPort" : local.container_port,
"hostPort" : local.host_port,
"protocol" : "tcp"
}
]
}])
}
My application runs locally in Docker, but not when using the same image in AWS ECS.
To run locally, I use the Make command make restart, which runs this from my Makefile:
build:
	@docker build \
		--build-arg NODE_ENV=local \
		--tag $(DEV_IMAGE_TAG) \
		. > /dev/null

.PHONY: package
package:
	@docker build \
		--tag $(PROD_IMAGE_TAG) \
		--build-arg NODE_ENV=production \
		. > /dev/null

.PHONY: start
start: build
	@docker run \
		--rm \
		--publish 8080:8080 \
		--name $(IMAGE_NAME) \
		--detach \
		--env ENV=local \
		$(DEV_IMAGE_TAG) > /dev/null

.PHONY: stop
stop:
	@docker stop $(IMAGE_NAME) > /dev/null

.PHONY: restart
restart:
ifeq ($(shell (docker ps | grep $(IMAGE_NAME))),)
	@make start > /dev/null
else
	@make stop > /dev/null
	@make start > /dev/null
endif
Why does my docker image fail when running as task in AWS ECS (Fargate)?
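When a Fargate task dies with a SIGTERM like this, ECS records why it stopped the task on the task itself. Not part of the original post, but a minimal sketch for pulling that reason with the AWS CLI (the cluster name and task ID below are placeholders):
# list recently stopped tasks in the cluster
aws ecs list-tasks --cluster my-cluster --desired-status STOPPED
# read why ECS ended one of them: stoppedReason plus each container's exit reason
aws ecs describe-tasks --cluster my-cluster --tasks <task-id> \
  --query 'tasks[0].{stoppedReason: stoppedReason, containers: containers[].reason}'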

Related

How to solve the problem with starting a devcontainer?

I tried to run a devcontainer. These are the setup files:
devcontainer.json
{
  "name": "C++",
  "build": {
    "dockerfile": "Dockerfile"
  },
  "features": {
    "ghcr.io/devcontainers/features/git:1": {}
  }
}
Dockerfile
FROM mcr.microsoft.com/devcontainers/cpp:0-debian-11
ARG REINSTALL_CMAKE_VERSION_FROM_SOURCE="3.22.2"
# Optionally install the cmake for vcpkg
COPY ./reinstall-cmake.sh /tmp/
RUN if [ "${REINSTALL_CMAKE_VERSION_FROM_SOURCE}" != "none" ]; then \
chmod +x /tmp/reinstall-cmake.sh && /tmp/reinstall-cmake.sh ${REINSTALL_CMAKE_VERSION_FROM_SOURCE}; \
fi \
&& rm -f /tmp/reinstall-cmake.sh
But when I try to run the devcontainer I get this error:
[2022-12-23T18:57:44.771Z] ERROR: invalid character '\x00' looking for beginning of value
[2022-12-23T18:57:44.863Z] Stop (969 ms): Run: docker buildx build --load --build-arg BUILDKIT_INLINE_CACHE=1 -f C:\Users\BOGUS_~1.NEW\AppData\Local\Temp\devcontainercli\container-features\0.25.2-1671821861765\Dockerfile-with-features -t vsc-test-9da7bcb89243449acfae569e26bf0e4b --target dev_containers_target_stage --build-context dev_containers_feature_content_source=C:\Users\BOGUS_~1.NEW\AppData\Local\Temp\devcontainercli\container-features\0.25.2-1671821861765 --build-arg _DEV_CONTAINERS_BASE_IMAGE=dev_container_auto_added_stage_label --build-arg _DEV_CONTAINERS_IMAGE_USER=root --build-arg _DEV_CONTAINERS_FEATURE_CONTENT_SOURCE=dev_container_feature_content_temp c:\Projects\docker_projects\Cpp\test\.devcontainer
[2022-12-23T18:57:44.865Z] Error: Command failed: docker buildx build --load --build-arg BUILDKIT_INLINE_CACHE=1 -f C:\Users\BOGUS_~1.NEW\AppData\Local\Temp\devcontainercli\container-features\0.25.2-1671821861765\Dockerfile-with-features -t vsc-test-9da7bcb89243449acfae569e26bf0e4b --target dev_containers_target_stage --build-context dev_containers_feature_content_source=C:\Users\BOGUS_~1.NEW\AppData\Local\Temp\devcontainercli\container-features\0.25.2-1671821861765 --build-arg _DEV_CONTAINERS_BASE_IMAGE=dev_container_auto_added_stage_label --build-arg _DEV_CONTAINERS_IMAGE_USER=root --build-arg _DEV_CONTAINERS_FEATURE_CONTENT_SOURCE=dev_container_feature_content_temp c:\Projects\docker_projects\Cpp\test\.devcontainer
[2022-12-23T18:57:44.866Z] at Doe (c:\Users\Bogus_Kladik.NEW-PC\.vscode\extensions\ms-vscode-remote.remote-containers-0.266.1\dist\spec-node\devContainersSpecCLI.js:1894:1669)
[2022-12-23T18:57:44.866Z] at process.processTicksAndRejections (node:internal/process/task_queues:96:5)
[2022-12-23T18:57:44.866Z] at async EF (c:\Users\Bogus_Kladik.NEW-PC\.vscode\extensions\ms-vscode-remote.remote-containers-0.266.1\dist\spec-node\devContainersSpecCLI.js:1893:1978)
[2022-12-23T18:57:44.866Z] at async uT (c:\Users\Bogus_Kladik.NEW-PC\.vscode\extensions\ms-vscode-remote.remote-containers-0.266.1\dist\spec-node\devContainersSpecCLI.js:1893:901)
[2022-12-23T18:57:44.866Z] at async Poe (c:\Users\Bogus_Kladik.NEW-PC\.vscode\extensions\ms-vscode-remote.remote-containers-0.266.1\dist\spec-node\devContainersSpecCLI.js:1899:2128)
[2022-12-23T18:57:44.867Z] at async Zf (c:\Users\Bogus_Kladik.NEW-PC\.vscode\extensions\ms-vscode-remote.remote-containers-0.266.1\dist\spec-node\devContainersSpecCLI.js:1899:3278)
[2022-12-23T18:57:44.867Z] at async aue (c:\Users\Bogus_Kladik.NEW-PC\.vscode\extensions\ms-vscode-remote.remote-containers-0.266.1\dist\spec-node\devContainersSpecCLI.js:2020:15276)
[2022-12-23T18:57:44.867Z] at async oue (c:\Users\Bogus_Kladik.NEW-PC\.vscode\extensions\ms-vscode-remote.remote-containers-0.266.1\dist\spec-node\devContainersSpecCLI.js:2020:15030)
[2022-12-23T18:57:44.882Z] Stop (5862 ms): Run: C:\Users\Bogus_Kladik.NEW-PC\AppData\Local\Programs\Microsoft VS Code\Code.exe --ms-enable-electron-run-as-node c:\Users\Bogus_Kladik.NEW-PC\.vscode\extensions\ms-vscode-remote.remote-containers-0.266.1\dist\spec-node\devContainersSpecCLI.js up --user-data-folder c:\Users\Bogus_Kladik.NEW-PC\AppData\Roaming\Code\User\globalStorage\ms-vscode-remote.remote-containers\data --workspace-folder c:\Projects\docker_projects\Cpp\test --workspace-mount-consistency cached --id-label devcontainer.local_folder=c:\Projects\docker_projects\Cpp\test --log-level debug --log-format json --config c:\Projects\docker_projects\Cpp\test\.devcontainer\devcontainer.json --default-user-env-probe loginInteractiveShell --mount type=volume,source=vscode,target=/vscode,external=true --skip-post-create --update-remote-user-uid-default on --mount-workspace-git-root true
[2022-12-23T18:57:44.882Z] Exit code 1
[2022-12-23T18:57:44.889Z] Command failed: C:\Users\Bogus_Kladik.NEW-PC\AppData\Local\Programs\Microsoft VS Code\Code.exe --ms-enable-electron-run-as-node c:\Users\Bogus_Kladik.NEW-PC\.vscode\extensions\ms-vscode-remote.remote-containers-0.266.1\dist\spec-node\devContainersSpecCLI.js up --user-data-folder c:\Users\Bogus_Kladik.NEW-PC\AppData\Roaming\Code\User\globalStorage\ms-vscode-remote.remote-containers\data --workspace-folder c:\Projects\docker_projects\Cpp\test --workspace-mount-consistency cached --id-label devcontainer.local_folder=c:\Projects\docker_projects\Cpp\test --log-level debug --log-format json --config c:\Projects\docker_projects\Cpp\test\.devcontainer\devcontainer.json --default-user-env-probe loginInteractiveShell --mount type=volume,source=vscode,target=/vscode,external=true --skip-post-create --update-remote-user-uid-default on --mount-workspace-git-root true
[2022-12-23T18:57:44.889Z] Exit code 1
How can I fix this problem?
Among the things I have already tried: updating Docker Desktop and updating WSL.
The problem seems to be related to the use of BuildKit and Inline Cache in Docker.
The work-around suggested here is either:
To disable BuildKit in Docker:
Under Linux:
# in /etc/bash.bashrc
export DOCKER_BUILDKIT=0
Under Docker Dashboard:
Go to Settings > Docker Engine and set 'buildkit' to 'false':
"features": {
    "buildkit": false
},
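If you only want to bypass BuildKit for a single build rather than globally, a common variant (not from the original answer) is to prefix the build command with the same variable:
DOCKER_BUILDKIT=0 docker build -t my-image .   # my-image is a placeholder tag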
To disable the Inline Cache, set the BUILDKIT_INLINE_CACHE build argument to "0", either:
in the devcontainer.json:
"build": {
    "dockerfile": "Dockerfile",
    "args": {
        "BUILDKIT_INLINE_CACHE": "0"
    }
},
in the docker-compose.yml:
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        BUILDKIT_INLINE_CACHE: 0

Instance deployment: The Docker container unexpectedly ended after it was started

Hi guys, not sure what I'm doing wrong here, but whenever I upload my project's Docker image to Elastic Beanstalk I get this error: Instance deployment: The Docker container unexpectedly ended after it was started. I am new to this and I am not sure why this happens; please help if you can.
Dockerfile:
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --force
COPY . .
ENV APP_PORT 8080
EXPOSE 8080
CMD [ "node", "app.js" ]
.gitlab-ci.yml file
stages:
  - build
  - run

variables:
  APP_NAME: ${CI_PROJECT_NAME}
  APP_VERSION: "1.0.0"
  S3_BUCKET: "${S3_BUCKET}"
  AWS_ID: ${MY_AWS_ID}
  AWS_ACCESS_KEY_ID: ${MY_AWS_ACCESS_KEY_ID}
  AWS_SECRET_ACCESS_KEY: ${MY_AWS_SECRET_ACCESS_KEY}
  AWS_REGION: us-east-1
  AWS_PLATFORM: Docker

create_eb_version:
  stage: build
  image: python:latest
  allow_failure: false
  script: |
    pip install awscli # Install awscli tools
    echo "Creating zip file ${APP_NAME}"
    python zip.py ${APP_NAME}
    echo "Creating AWS Version Label"
    AWS_VERSION_LABEL=${APP_NAME}-${APP_VERSION}-${CI_PIPELINE_ID}
    S3_KEY="$AWS_VERSION_LABEL.zip"
    echo "Uploading to S3"
    aws s3 cp ${APP_NAME}.zip s3://${S3_BUCKET}/${S3_KEY} --region ${AWS_REGION}
    echo "Creating app version"
    aws elasticbeanstalk create-application-version \
      --application-name ${APP_NAME} \
      --version-label $AWS_VERSION_LABEL \
      --region ${AWS_REGION} \
      --source-bundle S3Bucket=${S3_BUCKET},S3Key=${S3_KEY} \
      --description "${CI_COMMIT_DESCRIPTION}" \
      --auto-create-application
  only:
    refs:
      - main

deploy_aws_eb:
  stage: run
  image: coxauto/aws-ebcli
  when: manual
  script: |
    AWS_VERSION_LABEL=${APP_NAME}-${APP_VERSION}-${CI_PIPELINE_ID}
    echo "Deploying app to tf test"
    eb init -i ${APP_NAME} -p ${AWS_PLATFORM} -k ${AWS_ID} --region ${AWS_REGION}
    echo "Deploying to environment"
    eb deploy ${APP_ENVIROMENT_NAME} --version ${AWS_VERSION_LABEL}
    echo "done"
  only:
    refs:
      - main
I got it to work: the Node version in my Dockerfile was set to 10 instead of 16. I changed it to Node 16 and also replaced APP_PORT with PORT, because that's what I had named it.
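A minimal sketch of what the corrected Dockerfile could look like under the answer's assumptions; the base image tag and the PORT name come from the answer, not from the post itself (the original FROM line is not shown above):
# Node 16 base image, per the answer (the original used Node 10)
FROM node:16
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --force
COPY . .
# renamed from APP_PORT to PORT so it matches the variable the app actually reads
ENV PORT 8080
EXPOSE 8080
CMD [ "node", "app.js" ]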

Why is the Cypress service failing?

Below is my pipeline, in which I'm trying to get the Cypress job to run tests against the Nginx service (which points to the main app) that is built at the build stage.
It is based on the official template here: https://gitlab.com/cypress-io/cypress-example-docker-gitlab/-/blob/master/.gitlab-ci.yml
image: docker:stable
services:
  - docker:dind
stages:
  - build
  - test
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .npm
    - cache/Cypress
    - node_modules
job:
  stage: build
  script:
    - export REACT_APP_USERS_SERVICE_URL=http://127.0.0.1
    - apk add --update --no-cache gcc g++ make python2 python2-dev py-pip python3-dev docker-compose npm
    - docker-compose up -d --build
e2e:
  image: cypress/included:9.1.1
  stage: test
  script:
    - export CYPRESS_VIDEO=false
    - export CYPRESS_baseUrl=http://nginx:80
    - npm i randomstring
    - $(npm bin)/cypress run -t -v $PWD/e2e -w /e2e -e CYPRESS_VIDEO -e CYPRESS_baseUrl --network testdriven_default
    - docker-compose down
Error output:
Cypress encountered an error while parsing the argument config
You passed: if [ -x /usr/local/bin/bash ]; then
exec /usr/local/bin/bash
elif [ -x /usr/bin/bash ]; then
exec /usr/bin/bash
elif [ -x /bin/bash ]; then
exec /bin/bash
elif [ -x /usr/local/bin/sh ]; then
exec /usr/local/bin/sh
elif [ -x /usr/bin/sh ]; then
exec /usr/bin/sh
elif [ -x /bin/sh ]; then
exec /bin/sh
elif [ -x /busybox/sh ]; then
exec /busybox/sh
else
echo shell not found
exit 1
fi
The error was: Cannot read properties of undefined (reading 'split')
What is wrong with this setup?
From @jparkrr on GitHub: https://github.com/cypress-io/cypress-docker-images/issues/300#issuecomment-626324350
I had the same problem. You can specify entrypoint: [""] for the image in .gitlab-ci.yml.
Read more about it here: https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#overriding-the-entrypoint-of-an-image
In your case:
e2e:
  image:
    name: cypress/included:9.1.1
    entrypoint: [""]

How to dynamically set an environment variable to true within a Dockerfile

My app has feature flags that I would like to dynamically set to true for my npm build.
Essentially I'd like to do something like
COMPILE_ASSETS=true npm build or NEW_EMAILS=true npm build, only dynamically from CI.
I have a CI pipeline that will grab the flag, but I'm having trouble setting it to true and running npm in the Dockerfile.
My Dockerfile:
FROM ubuntu:bionic
ARG FEATURE_FLAG
RUN if [ "x$FEATURE_FLAG" = "x" ] ; \
then npm run build ; \
else $FEATURE_FLAG=true npm run build; \
fi
This gets run with:
docker build --no-cache --rm -t testing --build-arg FEATURE_FLAG=my_feature_flag . (I would like to keep this the way it is)
In CI I get:
/bin/sh: 1: my_feature_flag=true: not found
I've tried various forms of the else statement:
else export $FEATURE_FLAG=true npm run build; (this actually looks like it works on my Mac, but fails in CI with export: : bad variable name)
else ${FEATURE_FLAG:+$FEATURE_FLAG=true} npm build;
else eval(`$FEATURE_FLAG=true npm build`);
else env $FEATURE_FLAG=true bash -c 'npm build';
These all fail :(
I've tried reworking the Dockerfile completely and setting the flag to true as an ENV:
ARG FEATURE_FLAG
ENV FF_SET_TRUE=${FEATURE_FLAG:+$FEATURE_FLAG=true}
ENV FF_SET_TRUE=${FF_SET_TRUE:-null}
RUN if [ "$FF_SET_TRUE" = "null" ] ; \
    then npm build; \
    else $FF_SET_TRUE npm build; \
    fi
Nothing works! Is this simply a bash limitation? Is it not possible to expand a variable before running a command?
Or is this not possible with Docker?
Did you mean:
FROM ubuntu:bionic
ARG FEATURE_FLAG
RUN set -eux; \
    if [ "x$FEATURE_FLAG" == "x" ] ; then \
        npm run build ; \
    else \
        eval $($FEATURE_FLAG=true npm run build); \
    fi
You need to wrap your command in eval for the variable to expand based on the ARGS being passed.
This worked!
ARG FEATURE_FLAG
RUN if [ -z "$FEATURE_FLAG" ] ; \
    then npm run build ; \
    else \
        echo setting $FEATURE_FLAG to true; \
        env "$FEATURE_FLAG"=true sh -c 'npm run build'; \
    fi

How to copy folder from parent into current directory for Dockerfile using Makefile

I have a Makefile that looks like this:
push:
	docker build -t dataengineering/dataloader .
	docker tag dataengineering/dataloader:latest 127579856528.dkr.ecr.us-west-2.amazonaws.com/dataengineering/dataloader:latest
	docker push 127579856528.dkr.ecr.us-west-2.amazonaws.com/dataengineering/dataloader:latest

deploy:
	@if [ ! "$(environment)" ]; then echo "environment must be defined" && exit 1; fi
	@if [ ! "$(target)" ]; then echo "target must be defined" && exit 1; fi
	kubectl delete deploy dataloader-$(target) -n dataengineering || continue
	kubectl apply -f kube/$(environment)/deployment-$(target).yaml -n dataengineering
But I need a folder from the parent directory inside dataloader in order for my Dockerfile to actually work.
Does this work?
push:
	cd ..; cp -r datastore/ dataloader/
	docker build -t dataengineering/dataloader .
	docker tag dataengineering/dataloader:latest 1111111111.dkr.ecr.us-west-2.amazonaws.com/dataengineering/dataloader:latest
	docker push 11111111111.dkr.ecr.us-west-2.amazonaws.com/dataengineering/dataloader:latest

deploy:
	@if [ ! "$(environment)" ]; then echo "environment must be defined" && exit 1; fi
	@if [ ! "$(target)" ]; then echo "target must be defined" && exit 1; fi
	kubectl delete deploy dataloader-$(target) -n dataengineering || continue
	kubectl apply -f kube/$(environment)/deployment-$(target).yaml -n dataengineering
My Dockerfile:
FROM python:3.7
WORKDIR /var/dataloader
COPY assertions/ ./assertions/
...
COPY datastore/ ./datastore/
RUN pip3 install -r requirements.txt
ENTRYPOINT ["python", "dataloader.py"]
If all you need is to copy the directory into the current directory (which would serve as your Docker build context), you can use cp -r ../datastore/ dataloader/. If instead you want the dataloader directory to sit in the same directory as the datastore directory, you'd do cp -r ../datastore/ ../dataloader/.
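A minimal sketch of how that copy could be folded into the push target, assuming the Makefile runs from inside the dataloader directory (as the proposed target implies) and that datastore sits one level up; the image name and ECR address are reused from the question:
push:
	# refresh the datastore/ folder inside the build context before building
	rm -rf ./datastore && cp -r ../datastore .
	docker build -t dataengineering/dataloader .
	docker tag dataengineering/dataloader:latest 127579856528.dkr.ecr.us-west-2.amazonaws.com/dataengineering/dataloader:latest
	docker push 127579856528.dkr.ecr.us-west-2.amazonaws.com/dataengineering/dataloader:latest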
