Instance deployment: The Docker container unexpectedly ended after it was started - docker

Hi guys, not sure what I'm doing wrong here, but whenever I upload my project's Docker image to Elastic Beanstalk I get this error: Instance deployment: The Docker container unexpectedly ended after it was started. I am new to this and I'm not sure why it happens. Please help if you can.
Dockerfile:
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --force
COPY . .
ENV APP_PORT 8080
EXPOSE 8080
CMD [ "node", "app.js" ]
.gitlab-ci.yml file:
stages:
  - build
  - run

variables:
  APP_NAME: ${CI_PROJECT_NAME}
  APP_VERSION: "1.0.0"
  S3_BUCKET: "${S3_BUCKET}"
  AWS_ID: ${MY_AWS_ID}
  AWS_ACCESS_KEY_ID: ${MY_AWS_ACCESS_KEY_ID}
  AWS_SECRET_ACCESS_KEY: ${MY_AWS_SECRET_ACCESS_KEY}
  AWS_REGION: us-east-1
  AWS_PLATFORM: Docker

create_eb_version:
  stage: build
  image: python:latest
  allow_failure: false
  script: |
    pip install awscli # Install awscli tools
    echo "Creating zip file ${APP_NAME}"
    python zip.py ${APP_NAME}
    echo "Creating AWS Version Label"
    AWS_VERSION_LABEL=${APP_NAME}-${APP_VERSION}-${CI_PIPELINE_ID}
    S3_KEY="$AWS_VERSION_LABEL.zip"
    echo "Uploading to S3"
    aws s3 cp ${APP_NAME}.zip s3://${S3_BUCKET}/${S3_KEY} --region ${AWS_REGION}
    echo "Creating app version"
    aws elasticbeanstalk create-application-version \
      --application-name ${APP_NAME} \
      --version-label $AWS_VERSION_LABEL \
      --region ${AWS_REGION} \
      --source-bundle S3Bucket=${S3_BUCKET},S3Key=${S3_KEY} \
      --description "${CI_COMMIT_DESCRIPTION}" \
      --auto-create-application
  only:
    refs:
      - main

deploy_aws_eb:
  stage: run
  image: coxauto/aws-ebcli
  when: manual
  script: |
    AWS_VERSION_LABEL=${APP_NAME}-${APP_VERSION}-${CI_PIPELINE_ID}
    echo "Deploying app to tf test"
    eb init -i ${APP_NAME} -p ${AWS_PLATFORM} -k ${AWS_ID} --region ${AWS_REGION}
    echo "Deploying to environment"
    eb deploy ${APP_ENVIROMENT_NAME} --version ${AWS_VERSION_LABEL}
    echo "done"
  only:
    refs:
      - main

I got it to work: the Node version in my Dockerfile was set to 10 instead of 16. I changed it to 16, and also replaced APP_PORT with PORT, because that's what I named the variable in my app.
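For anyone hitting the same error, a quick way to find the reason is to build and run the image locally before uploading it to Elastic Beanstalk; the crash output then prints straight to your terminal. A minimal sketch (the image tag and port are placeholders):
docker build -t my-eb-app .
docker run --rm -e PORT=8080 -p 8080:8080 my-eb-app
# if the container exits immediately, the Node error (unsupported syntax on an old
# Node version, a missing environment variable, etc.) is shown right here;
# on the Beanstalk side, "eb logs" shows the same container output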

Related

Error response from daemon: failed to parse Dockerfile: Syntax error - can't find = in "RUN". Must be of the form: name=value

When I try to build the Dockerfile from AWS CodeBuild I get this error:
"Error response from daemon: failed to parse Dockerfile: Syntax error - can't find = in "RUN". Must be of the form: name=value"
In CodeBuild I'm using Ubuntu as the environment with the aws/codebuild/standard:5.0 runtime version.
Here are my Dockerfile and buildspec.yml:
FROM alpine:3.12
ARG SONARQUBE_VERSION=8.9.0.43852
COPY ./sonarqube-8.9.0.43852.zip /opt/'sonarqube-${SONARQUBE_VERSION}.zip'
RUN chmod 777 /opt/sonarqube-8.9.0.43852.zip
---------------
--------------
COPY --chown=sonarqube:sonarqube run.sh sonar.sh ${SONARQUBE_HOME}/bin/
USER sonarqube
RUN chmod 777 ${SONARQUBE_HOME}/bin/run.sh; \
RUN chmod 777 ${SONARQUBE_HOME}/bin/sonar.sh
VOLUME $SONARQUBE_HOME
WORKDIR ${SONARQUBE_HOME}
EXPOSE 9000
ENTRYPOINT ["/opt/sonarqube/bin/run.sh"]
CMD ["/opt/sonarqube/bin/sonar.sh"]
And below is my buildspec.yml:
version: 0.2
phases:
  install:
    runtime-versions:
      java: corretto11
  pre_build:
    commands:
      - echo Logging in to Amazon ECR.....
      - aws --version
      - aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin <MyAccountId>.dkr.ecr.eu-west-1.amazonaws.com/sonarqube
  build:
    commands:
      - aws s3 cp s3://ravitej/SonarQube/sonarqube-8.9.0.43852.zip .
      - chmod +x sonarqube-8.9.0.43852.zip
      - echo $(pwd)
      - echo Build started on `date`
      - echo Building the Docker image...
      - echo $(docker version)
      - REPOSITORY_URI=<MyAccountID>.dkr.ecr.eu-west-1.amazonaws.com/sonarqube
      - IMAGE_TAG=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - docker build -t $REPOSITORY_URI:$IMAGE_TAG .
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $REPOSITORY_URI:$IMAGE_TAG
      - echo Writing image definitions file...
      - printf '[{"name":"sonarqube","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json
      - echo imagedefinitions.json
artifacts:
  files: imagedefinitions.json
I've been banging my head against the table for the past two days.
Please help.
I just had a similar problem which led me here, and the issue was a trailing \ in my Dockerfile. In your case I guess it is this line:
RUN chmod 777 ${SONARQUBE_HOME}/bin/run.sh; \
which causes problems for the next line.
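A minimal sketch of the fix, assuming both chmod calls were meant to be one layer: keep the continuation inside a single RUN instruction instead of starting a second RUN after the backslash:
RUN chmod 777 ${SONARQUBE_HOME}/bin/run.sh \
    && chmod 777 ${SONARQUBE_HOME}/bin/sonar.sh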
This also happens when you try to add a comment with a hash at the end of a line, e.g. like the # OK here:
$ cat Dockerfile
[..]
ENV RSTUDIO_VER=1.3.1073 # OK
[..]
The docker build's parser won't like it!
$ docker build -t mirekphd/ml-cpu-r40-rs-cust .
Sending build context to Docker daemon 142.2MB
Error response from daemon: failed to parse Dockerfile: Syntax error - can't find = in "#". Must be of the form: name=value
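Moving the comment to its own line keeps the parser happy, for example:
# OK
ENV RSTUDIO_VER=1.3.1073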

Gitlab CI job with specific user

I am trying to run a GitLab CI job that uses anchore-engine to scan a Docker image. The command in the script section fails with a permission denied error. I found out that the command requires root permissions; sudo is not installed in the Docker image I'm using as the GitLab runner image, and only the non-sudo user anchore exists in the container.
Below is the CI job for container scanning.
container_scan:
  stage: scan
  image:
    name: anchore/anchore-engine:latest
    entrypoint: ['']
  services:
    - name: anchore/engine-db-preload:latest
      alias: anchore-db
  variables:
    GIT_STRATEGY: none
    ANCHORE_HOST_ID: "localhost"
    ANCHORE_ENDPOINT_HOSTNAME: "localhost"
    ANCHORE_CLI_USER: "admin"
    ANCHORE_CLI_PASS: "foobar"
    ANCHORE_CLI_SSL_VERIFY: "n"
    ANCHORE_FAIL_ON_POLICY: "true"
    ANCHORE_TIMEOUT: "500"
  script:
    - |
      curl -o /tmp/anchore_ci_tools.py https://raw.githubusercontent.com/anchore/ci-tools/master/scripts/anchore_ci_tools.py
      chmod +x /tmp/anchore_ci_tools.py
      ln -s /tmp/anchore_ci_tools.py /usr/local/bin/anchore_ci_tools
    - anchore_ci_tools --setup
    - anchore-cli registry add "$CI_REGISTRY" gitlab-ci-token "$CI_JOB_TOKEN" --skip-validate
    - anchore_ci_tools --analyze --report --image "$IMAGE_NAME" --timeout "$ANCHORE_TIMEOUT"
    - |
      if [ "$ANCHORE_FAIL_ON_POLICY" == "true" ]; then
        anchore-cli evaluate check "$IMAGE_NAME"
      else
        set +o pipefail
        anchore-cli evaluate check "$IMAGE_NAME" | tee /dev/null
      fi
  artifacts:
    name: ${CI_JOB_NAME}-${CI_COMMIT_REF_NAME}
    paths:
      - image-*-report.json
The CI job fails at ln -s /tmp/anchore_ci_tools.py /usr/local/bin/anchore_ci_tools in the script section.
I have tried to add a user in the entrypoint section:
    name: anchore/anchore-engine:latest
    entrypoint: ['bash', '-c', 'useradd myuser && exec su myuser -c bash']
but it did not allow me to create a user. I have tried running the container on Linux with docker run -it --user=root anchore/anchore-engine:latest /bin/bash and it ran without any problem. How can I do the same in a GitLab CI job?
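One workaround sketch (not from an accepted answer; it assumes /tmp is writable by the anchore user): skip the root-owned /usr/local/bin entirely and put the helper on PATH from a user-writable directory:
curl -o /tmp/anchore_ci_tools.py https://raw.githubusercontent.com/anchore/ci-tools/master/scripts/anchore_ci_tools.py
chmod +x /tmp/anchore_ci_tools.py
export PATH="/tmp:$PATH"            # /tmp is assumed writable by the non-root anchore user
/tmp/anchore_ci_tools.py --setup    # call the script directly instead of a /usr/local/bin symlink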

How do I set docker-credential-ecr-login in my PATH before anything else in GitLab CI

I'm using AWS ECR to host a private Docker image, and I would like to use it in GitLab CI.
According to the documentation I need to set up docker-credential-ecr-login to fetch the private image, but I have no idea how to do that before anything else. This is my .gitlab-ci.yml file:
image: 0222822883.dkr.ecr.us-east-1.amazonaws.com/api-build:latest

tests:
  stage: test
  before_script:
    - echo "before_script"
    - apt install amazon-ecr-credential-helper
    - apk add --no-cache curl jq python py-pip
    - pip install awscli
  script:
    - echo "script"
    - bundle install
    - bundle exec rspec
  allow_failure: true # for now as we do not have tests
Thank you.
I confirm the feature at stake is not yet available in GitLab CI; however, I've recently seen that it is possible to implement a generic workaround: run a dedicated CI script within a container taken from the private Docker image.
The .gitlab-ci.yml template below is adapted from the OP's example, using the Docker-in-Docker approach I suggested in this other SO answer, itself inspired by the GitLab CI documentation dealing with dind:
stages:
  - test

variables:
  IMAGE: "0222822883.dkr.ecr.us-east-1.amazonaws.com/api-build:latest"
  REGION: "ap-northeast-1"

tests:
  stage: test
  image: docker:latest
  services:
    - docker:dind
  variables:
    # GIT_STRATEGY: none # uncomment if "git clone" is unneeded for this job
  before_script:
    - ': before_script'
    - apt install amazon-ecr-credential-helper
    - apk add --no-cache curl jq python py-pip
    - pip install awscli
    - $(aws ecr get-login --no-include-email --region "$REGION")
    - docker pull "$IMAGE"
  script:
    - ': script'
    - |
      docker run --rm -v "$PWD:/build" -w /build "$IMAGE" /bin/bash -c "
      export PS4='+ \e[33;1m($CI_JOB_NAME # line \$LINENO) \$\e[0m ' # optional
      set -ex
      ## TODO insert your multi-line shell script here ##
      echo \"One comment\" # quotes must be escaped here
      : A better comment
      echo $PWD # interpolated outside the container
      echo \$PWD # interpolated inside the container
      bundle install
      bundle exec rspec
      ## (cont'd) ##
      "
    - ': done'
  allow_failure: true # for now as we do not have tests
This example assumes the Docker $IMAGE contains the /bin/bash binary, and relies on the so-called block style of YAML.
The above template already contains comments, but to be self-contained:
You need to escape double quotes if your Bash commands contain them, because the whole code is surrounded by docker run … " and ";
You also need to escape local Bash variables (cf. the \$PWD above), otherwise these variables will be resolved prior to running the docker run … "$IMAGE" /bin/bash -c "…" command itself (see also the small standalone demo after this list).
I replaced the echo "stuff" or so commands with their more effective colon counterpart:
set -x
: stuff
: note that these three shell commands do nothing
: but printing their args thanks to the -x option.
[Feedback is welcome as I can't directly test this config (I'm not an AWS ECR user), but I'm puzzled by the fact that the OP's example contained both apt and apk commands at the same time…]
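As a small standalone demo of these escaping rules (any image with /bin/sh works; alpine is only an assumption here):
IMAGE=alpine:3.18
docker run --rm "$IMAGE" /bin/sh -c "
  echo \"inner double quotes must be escaped\"
  echo $HOME     # expanded on the runner, before docker run is executed
  echo \$HOME    # expanded inside the container
"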
Related remark on a pitfall of set -e
Beware that the following script is buggy:
set -e
command1 && command2
command3
Instead, write:
set -e
command1 ; command2
command3
or:
set -e
( command1 && command2 )
command3
To be convinced about this, you can try running:
bash -e -c 'false && true; echo $?; echo this should not be run'
→ 1
→ this should not be run
bash -e -c 'false; true; echo $?; echo this should not be run'
bash -e -c '( false && true ); echo $?; echo this should not be run'
From the GitLab documentation: in order to interact with your AWS account, GitLab CI/CD pipelines require both AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to be defined in your GitLab settings under Settings > CI/CD > Variables. Then add the ECR login to your before_script:
image: 0222822883.dkr.ecr.us-east-1.amazonaws.com/api-build:latest

tests:
  stage: test
  before_script:
    - echo "before_script"
    - apt install amazon-ecr-credential-helper
    - apk add --no-cache curl jq python py-pip
    - pip install awscli
    - $( aws ecr get-login --no-include-email )
  script:
    - echo "script"
    - bundle install
    - bundle exec rspec
  allow_failure: true # for now as we do not have tests
Also, you had a typo: it is awscli, not awsclir. Then add the builds, tests and push accordingly.
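One caveat, hedged because it depends on which CLI version pip installs: AWS CLI v2 removed aws ecr get-login, so on a v2 installation the login line becomes the get-login-password form already used elsewhere on this page, roughly:
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 0222822883.dkr.ecr.us-east-1.amazonaws.com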
I think you have a logic error here: the image in the job configuration is the image the CI script runs in, not the image you build and deploy.
You normally don't need your application image there at all, since the runner image only needs the utilities and connections to GitLab CI; it shouldn't have any dependencies on your project.
Please check examples like this one https://gist.github.com/jlis/4bc528041b9661ae6594c63cd2ef673c to get a clearer idea of how to do it the correct way.
I faced the same problem using the Docker executor mode of the GitLab runner.
SSHing into the EC2 instance showed that docker-credential-ecr-login was present in /usr/bin/. To make it available to the job containers, I had to mount the binary into the GitLab runner container:
gitlab-runner register -n \
--url '${gitlab_url}' \
--registration-token '${registration_token}' \
--template-config /tmp/gitlab_runner.template.toml \
--executor docker \
--tag-list '${runner_name}' \
--description 'gitlab runner for ${runner_name}' \
--docker-privileged \
--docker-image "alpine" \
--docker-disable-cache=true \
--docker-volumes "/var/run/docker.sock:/var/run/docker.sock" \
--docker-volumes "/cache" \
--docker-volumes "/usr/bin/docker-credential-ecr-login:/usr/bin/docker-credential-ecr-login" \
--docker-volumes "/home/gitlab-runner/.docker:/root/.docker"
More information on this thread as well: https://gitlab.com/gitlab-org/gitlab-runner/-/issues/1583#note_375018948
We have a similar setup where we need to run CI jobs based off an image that is hosted on ECR.
Steps to follow:
Follow this guide: https://github.com/awslabs/amazon-ecr-credential-helper
The gist of that link, if you are on Amazon Linux 2:
sudo amazon-linux-extras enable docker
sudo yum install amazon-ecr-credential-helper
Then open ~/.docker/config.json on your GitLab runner in the vi editor and paste this into it:
{
  "credHelpers": {
    "aws_account_id.dkr.ecr.region.amazonaws.com": "ecr-login"
  }
}
source ~/.bashrc
systemctl restart docker
Also remove any references to DOCKER_AUTH_CONFIG from your GitLab Settings > CI/CD > Variables.
That's it.
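As a quick sanity check (account id, region and repository name are placeholders), a plain docker pull should now authenticate through ecr-login without any prior docker login:
docker pull aws_account_id.dkr.ecr.region.amazonaws.com/my-repo:latest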

CircleCI environmental variables for HEROKU not being set properly causing GIT to fail

I am a CircleCI user, and I am setting up an integration with Heroku.
I want to set up the security integrations with Docker Hub and with Heroku from the CircleCI portal page, using the config.yml file below.
The problem is that CircleCI doesn't seem to know what these variables should be set to, and instead just echoes:
${HEROKU_API_KEY} ${HEROKU_APP}
config.yml:
version: 2
jobs:
  build:
    working_directory: ~/springboot_swagger_example-master-cassandra
    docker:
      - image: circleci/openjdk:8-jdk-browsers
    steps:
      - checkout
      - restore_cache:
          key: springboot_swagger_example-master-cassandra-{{ checksum "pom.xml" }}
      - run: mvn dependency:go-offline
      - save_cache:
          paths:
            - ~/.m2
          key: springboot_swagger_example-master-cassandra-{{ checksum "pom.xml" }}
      - type: add-ssh-keys
      - type: deploy
        name: "Deploy to Heroku"
        command: |
          if [ "${CIRCLE_BRANCH}" == "master" ]; then
            # Install Heroku fingerprint (this is heroku's own key, NOT any of your private or public keys)
            echo 'heroku.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAu8erSx6jh+8ztsfHwkNeFr/SZaSOcvoa8AyMpaerGIPZDB2TKNgNkMSYTLYGDK2ivsqXopo2W7dpQRBIVF80q9mNXy5tbt1WE04gbOBB26Wn2hF4bk3Tu+BNMFbvMjPbkVlC2hcFuQJdH4T2i/dtauyTpJbD/6ExHR9XYVhdhdMs0JsjP/Q5FNoWh2ff9YbZVpDQSTPvusUp4liLjPfa/i0t+2LpNCeWy8Y+V9gUlDWiyYwrfMVI0UwNCZZKHs1Unpc11/4HLitQRtvuk0Ot5qwwBxbmtvCDKZvj1aFBid71/mYdGRPYZMIxq1zgP1acePC1zfTG/lvuQ7d0Pe0kaw==' >> ~/.ssh/known_hosts
            # git push git@heroku.com:yourproject.git $CIRCLE_SHA1:refs/heads/master
            # Optional post-deploy commands
            # heroku run python manage.py migrate --app=my-heroku-project
          fi
      - run: mvn package
      - run:
          name: Install Docker client
          command: |
            set -x
            VER="17.03.0-ce"
            curl -L -o /tmp/docker-$VER.tgz https://get.docker.com/builds/Linux/x86_64/docker-$VER.tgz
            tar -xz -C /tmp -f /tmp/docker-$VER.tgz
            mv /tmp/docker/* /usr/bin
      - run:
          name: Build Docker image
          command: docker build -t joethecoder2/spring-boot-web:$CIRCLE_SHA1 .
      - run:
          name: Push to DockerHub
          command: |
            docker login -u$DOCKERHUB_LOGIN -p$DOCKERHUB_PASSWORD
            docker push joethecoder2/spring-boot-web:$CIRCLE_SHA1
      - run:
          name: Setup Heroku
          command: |
            curl https://cli-assets.heroku.com/install-ubuntu.sh | sh
            chmod +x .circleci/setup-heroku.sh
            .circleci/setup-heroku.sh
      - run:
          name: Deploy to Heroku
          command: |
            mkdir app
            cd app/
            heroku create
            # git push https://heroku:$HEROKU_API_KEY@git.heroku.com/$HEROKU_APP.git master
            echo ${HEROKU_API_KEY}
            echo ${HEROKU_APP}
            git push https://heroku:${HEROKU_API_KEY}@git.heroku.com/${HEROKU_APP}.git master
      - store_test_results:
          path: target/surefire-reports
      - store_artifacts:
          path: target/spring-boot-web-0.0.1-SNAPSHOT.jar
The problem is that CircleCI doesn't seem to know what these variables should be set to, and instead just echoes:
${HEROKU_API_KEY}
${HEROKU_APP}
The question and problem is: why aren't these settings being detected automatically?
You need to set the values for the variables: https://circleci.com/docs/2.0/env-vars/
They are being echoed because you're echoing them:
echo ${HEROKU_API_KEY}
echo ${HEROKU_APP}
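If you want the job to fail fast when the variables are missing from the project settings, a small guard like this works (a sketch using the variable names from the question):
if [ -z "${HEROKU_API_KEY:-}" ] || [ -z "${HEROKU_APP:-}" ]; then
  echo "HEROKU_API_KEY and/or HEROKU_APP are not set in Project Settings > Environment Variables" >&2
  exit 1
fi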

Build FAILED but job status is SUCCESS in Gitlab

My Dockerfile:
FROM mm_php:7.1
ADD ./docker/test/source/entrypoint.sh /work/entrypoint.sh
ADD ./docker/wait-for-it.sh /work/wait-for-it.sh
RUN chmod 755 /work/entrypoint.sh \
&& chmod 755 /work/wait-for-it.sh
ENTRYPOINT ["/work/entrypoint.sh"]
entrypoint.sh:
#!/bin/bash -e
/work/wait-for-it.sh db:5432 -- echo "PostgreSQL started"
./vendor/bin/parallel-phpunit --pu-cmd="./vendor/bin/phpunit -c phpunit-docker.xml" tests
docker-compose.yml:
version: '2'
services:
  test:
    build:
      context: .
      args:
        ssh_prv_key: ${ssh_prv_key}
        application_env: ${application_env}
      dockerfile: docker/test/source/Dockerfile
    links:
      - db
  db:
    build:
      context: .
      dockerfile: docker/test/postgres/Dockerfile
    environment:
      PGDATA: /tmp
.gitlab-ci.yml:
image: docker:latest

services:
  - name: docker:dind
    command: ["--insecure-registry=my.domain:5000 --registry-mirror=http://my.domain"]

before_script:
  - apk add --no-cache py-pip
  - pip install docker-compose
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts
  - echo "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
  - chmod 600 ~/.ssh/id_rsa
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'

test:
  stage: test
  script:
    - export ssh_prv_key="$(cat ~/.ssh/id_rsa)" && export application_env="testing-docker" && docker-compose up --build test
Everything works, but if the tests fail, the job status in GitLab is SUCCESS instead of FAILED.
How do I get a FAILED status when the tests fail?
UPD
If I run docker-compose up locally, it returns no error code:
$ export ssh_prv_key="$(cat ~/.ssh/id_rsa)" && export application_env="testing-docker" && docker-compose up --build test
Building db
Step 1/2 : FROM mm_postgres:9.6
...
test_1 | FAILURES!
test_1 | Tests: 1, Assertions: 1, Failures: 1.
test_1 | Success: 2 Fail: 2 Error: 0 Skip: 2 Incomplete: 0
mmadmin_test_1 exited with code 1
$ echo $?
0
It looks to me like it's reporting the test failure without that failure necessarily showing up in the return value of the docker-compose call itself. Have you tried capturing the return value of docker-compose when the tests fail locally?
In order to get docker-compose to return the exit code from a specific service, try this:
docker-compose up --exit-code-from=service
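With the compose file from the question that would look roughly like this; --exit-code-from implies --abort-on-container-exit, so docker-compose stops once the test service finishes and inherits its exit code:
docker-compose up --build --exit-code-from test
echo $?   # now reflects the exit code of the test container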
When GitLab CI runs something, the build fails if the executed process returns a non-zero exit code.
In your case you are running docker-compose, and that program returns zero when the containers finish, which is correct from its point of view.
What you are trying to surface is PHPUnit's failure.
I think it is better to split your build into stages and not use docker-compose in this case:
gitlab.yml:
stages:
  - build
  - test

build:
  image: docker:latest
  stage: build
  script:
    - docker build -t ${NAME_OF_IMAGE} .
    - docker push ${NAME_OF_IMAGE}

test:
  image: ${NAME_OF_IMAGE}
  stage: test
  script:
    - ./execute_your.sh
