make a url accessible from one job to another on circleci - docker

I have two jobs, 'build' and 'test'. The 'build' job builds and runs the docker image. I'm able to access the URL http://0.0.0.0:3000 to run my tests from the same job. But when I run my tests from the 'test' job, it doesn't recognize http://0.0.0.0:3000.
jobs:
  build:
    .
    .<other code>
    .
    steps:
      - run: |
          docker build . -t company/project:${CIRCLE_SHA1}
          docker run --net=host -e DOPPLER_TOKEN=${DOPPLER_TOKEN} -d company/project:${CIRCLE_SHA1} "npm run dev"
          docker run --net=host -v ~/projects/project_name/tests/:/tests -v ~/projects/project_name/node_modules/:/node_modules -it testcafe/testcafe chromium:headless
The above code seems to run perfectly fine.
jobs:
  build:
    .
    .<other code>
    .
    steps:
      - run: |
          docker build . -t company/project:${CIRCLE_SHA1}
          docker run --net=host -e DOPPLER_TOKEN=${DOPPLER_TOKEN} -d company/project:${CIRCLE_SHA1} "npm run dev"
  test:
    working_directory: ~/projects/project_name
    machine: true
    steps:
      - checkout
      - run: docker run --net=host -v ~/projects/project_name/tests/:/tests -v ~/projects/project_name/node_modules/:/node_modules -it testcafe/testcafe chromium:headless
This doesn't work. It gives an error
A request to "http://0.0.0.0:3000" has failed. You can find troubleshooting information for this issue at "https://go.devexpress.com/TestCafe_FAQ_ARequestHasFailed.aspx". Error details: Failed to find a DNS-record for the resource at "http://0.0.0.0:3000"
I've also tried the curl command to access the URL and it gives an error:
curl -o - http://0.0.0.0:3000
curl: (7) Failed to connect to 0.0.0.0 port 3000: Connection refused
Any help regarding this would be very helpful.
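For context, each CircleCI job runs on its own executor, so a container started in the 'build' job is no longer running (and port 3000 no longer bound) by the time the 'test' job starts. A minimal sketch that keeps both containers in a single job, reusing the commands above (the machine executor and working_directory are carried over from the 'test' job as assumptions):
jobs:
  build_and_test:
    machine: true
    working_directory: ~/projects/project_name
    steps:
      - checkout
      - run: |
          # build the app image and start it detached on the host network
          docker build . -t company/project:${CIRCLE_SHA1}
          docker run --net=host -e DOPPLER_TOKEN=${DOPPLER_TOKEN} -d company/project:${CIRCLE_SHA1} "npm run dev"
          # run TestCafe on the same host network so http://0.0.0.0:3000 is reachable
          docker run --net=host -v ~/projects/project_name/tests/:/tests -v ~/projects/project_name/node_modules/:/node_modules -it testcafe/testcafe chromium:headless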

Related

How to run circleci locally on macOS with a job that uses Docker?

I have run into an issue when trying to run circleci build locally on macOS when trying to build a docker image.
Example config file:
version: 2
jobs:
  build:
    docker:
      - image: cimg/base:2020.01
    steps:
      - setup_remote_docker
      - run:
          name: Run Docker
          command: docker run cimg/base:2020.01 echo "hello"
After running circleci build I get the following error:
failed to dial gRPC: cannot connect to the Docker daemon. Is 'docker daemon' running on this host?: dial unix /var/run/docker.sock: connect: permission denied
I've looked at several similar questions such as this one but none of their solutions work.
I was able to adapt the solution here to work with circleci.
Simply add the command sudo chown circleci:circleci /var/run/docker.sock to your circle config.
So it will look like:
version: 2
jobs:
  build:
    docker:
      - image: cimg/base:2020.01
    steps:
      - setup_remote_docker
      - run: if [ -e /var/run/docker.sock ]; then sudo chown circleci:circleci /var/run/docker.sock; fi
      - run:
          name: Run Docker
          command: docker run cimg/base:2020.01 echo "hello"
And the result
====>> Run Docker
#!/bin/bash -eo pipefail
docker run cimg/base:2020.01 echo "hello"
hello
Success!

Authenticate sentry-cli inside docker in gitlab ci

I want to run sentry-cli inside my docker image like this:
sentry-frontend:
  stage: sentry
  services:
    - docker:18-dind
  before_script:
    - docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" registry.xxx.xx
  script:
    - export SENTRY_AUTH_TOKEN=xxxxxxxxxxxxxxxxxx
    - export IMAGE=$CI_REGISTRY_IMAGE/frontend-builder:$CI_COMMIT_REF_NAME
    - export RELEASE_VERSION=$CI_COMMIT_REF_NAME
    - docker pull getsentry/sentry-cli
    - docker run --rm -v $(pwd):/work getsentry/sentry-cli releases -o org -p frontend new $RELEASE_VERSION
  tags:
    - dind
However, the job fails because:
error: API request failed
caused by: sentry reported an error: Authentication credentials were not provided. (http status: 401)
I tried:
- docker run --rm -v $(pwd):/work getsentry/sentry-cli --auth-token xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
however after that I get the same message as I would if I ran
docker run --rm -v $(pwd):/work sentry-cli --help
and then it fails as if the command were not correct. I can't seem to find any information on how to do this correctly either. How do I provide credentials inside that image?
If you want to pass the SENTRY_AUTH_TOKEN environment variable to the container, you can adapt your docker run command like this:
docker run --rm -v "$PWD:/work" -e SENTRY_AUTH_TOKEN="$SENTRY_AUTH_TOKEN" getsentry/sentry-cli releases -o org -p frontend new $RELEASE_VERSION
or more concisely:
docker run --rm -v "$PWD:/work" -e SENTRY_AUTH_TOKEN getsentry/sentry-cli releases -o org -p frontend new $RELEASE_VERSION
(but note that the latter version won't work if docker is an alias of sudo docker)
The relevant documentation page is:
docs.docker.com/engine/reference/commandline/run/
As an aside, note that -v "$PWD:/work" is more efficient than -v "$(pwd):/work" (one less process to spawn).
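Putting that together, the script could look like this, assuming SENTRY_AUTH_TOKEN is set as a (masked) CI/CD variable in the GitLab project settings instead of being exported inline:
sentry-frontend:
  stage: sentry
  services:
    - docker:18-dind
  script:
    - export RELEASE_VERSION=$CI_COMMIT_REF_NAME
    - docker pull getsentry/sentry-cli
    # -e SENTRY_AUTH_TOKEN forwards the variable from the job environment into the container
    - docker run --rm -v "$PWD:/work" -e SENTRY_AUTH_TOKEN getsentry/sentry-cli releases -o org -p frontend new "$RELEASE_VERSION"
  tags:
    - dind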

Gitlab CI tries to find 'of:latest' image from public docker hub

I have the following .gitlab-ci.yml task for an elixir project:
image: docker:latest
services:
- docker:dind
stages:
- build
- test
- release
- deploy
variables:
TEMP_IMAGE: registry.gitlab.com/farmmix/homepage/farmmix_homepage:$CI_COMMIT_SHA
before_script:
- docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com
build:
stage: build
script:
- cd src
- docker build --pull -t $TEMP_IMAGE .
- docker push $TEMP_IMAGE
test:
stage: test
variables:
DB_DATABASE: test
DB_USERNAME: postgres
DB_PASSWORD: postgres
DB_URL: postgres
script:
- echo $TEMP_IMAGE
- docker pull $TEMP_IMAGE
- docker pull postgres:9.5-alpine
- docker run --name postgres -e POSTGRES_DB=$DB_DATABASE -e POSTGRES_USER=$DB_USERNAME -e POSTGRES_PASSWORD=$DB_PASSWORD -d postgres:9.5-alpine
- docker run --link postgres $TEMP_IMAGE ecto.create ecto.migrate test
The $TEMP_IMAGE is an existing image that gets created at a previous build task.
If I run it locally with gitlab-runner exec docker --docker-privileged test, it works fine.
However, gitlab runner gives me the following:
... AFTER INITIALIZATION ...
Skipping Git submodules setup
$ docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Login Succeeded
$ echo $TEMP_IMAGE
registry.gitlab.com/farmmix/homepage/farmmix_homepage:b0c30097a320933f7d5390d7037960e34d2ef7d
$ docker pull $TEMP_IMAGE
b0c30097a320933f7d5390d7037960e34d2ef7d0: Pulling from farmmix/homepage/farmmix_homepage
605ce1bd3f31: Pulling fs layer
...
e0c7f5df971a: Pull complete
Digest: sha256:ce7a1bf2378628902e171a22ee386af6c79e8d2340b6241ab70e83173e32ce28
Status: Downloaded newer image for registry.gitlab.com/farmmix/homepage/farmmix_homepage:b0c30097a320933f7d5390d7037960e34d2ef7d0
$ docker pull postgres:9.5-alpine
9.5-alpine: Pulling from library/postgres
550fe1bea624: Pulling fs layer
04bf519c70df: Pulling fs layer
...
0dca1c6b5036: Pull complete
Digest: sha256:fc3b8fcc8ba568492ce89fd8723a949f586e2919d7884b9b1d8064237ba105d7
Status: Downloaded newer image for postgres:9.5-alpine
$ docker run --name postgres -e POSTGRES_DB=$DB_DATABASE -e POSTGRES_USER=$DB_USERNAME -e POSTGRES_PASSWORD=$DB_PASSWORD -d postgres:9.5-alpine
Unable to find image 'of:latest' locally
docker: Error response from daemon: pull access denied for of, repository does not exist or may require 'docker login'.
See 'docker run --help'.
ERROR: Job failed: exit code 125
I cannot find anything on the internet about this 'of:latest' error. I tried running the docker run command without the -e arguments, but the same error appeared, so it's not the env vars causing any trouble.
I'm running out of ideas. Does anyone have an idea what the solution might be?
EDIT: Added complete .gitlab-ci.yml content
EDIT2: Added echo and output of job
$ docker run --name postgres -e POSTGRES_DB=$DB_DATABASE -e POSTGRES_USER=$DB_USERNAME -e POSTGRES_PASSWORD=$DB_PASSWORD -d postgres:9.5-alpine
Unable to find image 'of:latest' locally
docker: Error response from daemon: pull access denied for of, repository does not exist or may require 'docker login'.
One of your variables almost certainly contains the string " of " in it. You could test this by adding a line:
echo docker run --name postgres -e POSTGRES_DB=$DB_DATABASE -e POSTGRES_USER=$DB_USERNAME -e POSTGRES_PASSWORD=$DB_PASSWORD -d postgres:9.5-alpine
first to see what it's trying to run. With variables, it's a good practice to quote them to avoid any issues with special characters or spaces:
docker run --name postgres -e "POSTGRES_DB=$DB_DATABASE" -e "POSTGRES_USER=$DB_USERNAME" -e "POSTGRES_PASSWORD=$DB_PASSWORD" -d postgres:9.5-alpine
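To see how an unquoted variable produces exactly this error, here is a small reproduction with a hypothetical value containing spaces:
DB_PASSWORD="lots of secrets"
docker run -e POSTGRES_PASSWORD=$DB_PASSWORD -d postgres:9.5-alpine
# after word splitting the shell actually runs:
#   docker run -e POSTGRES_PASSWORD=lots of secrets -d postgres:9.5-alpine
# so docker takes "of" as the image name and tries to pull of:latest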

Docker container created inside the travis-ci does not serve NGINX port for test

I am using a docker image for the functional tests of my project. This image is based on Alpine and runs nginx and php-fpm under supervisord, and my functional tests make REST calls to this docker instance.
Basically, the .travis.yml:
Builds the image
Starts the container
Calls PHPUnit to run the tests
The image is created fine and the container is UP. I added some debug info to verify this:
>> docker run -d --rm --name resttemplate-test-instance -v /home/travis/build/byjg/php-rest-template:/srv/web -p "127.0.0.1:80:80" resttemplate-test
f3986de1c86629123896a0aa7f6ec407f617f261383c3b6a358e9dfcd3d06d77
Exit status : 0
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f3986de1c866 resttemplate-test "docker-php-entryp..." Less than a second ago Up Less than a second 443/tcp, 127.0.0.1:80->80/tcp, 9000/tcp resttemplate-test-instance
$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' resttemplate-test-instance
172.17.0.2
But when PHPUnit runs I get the following message for every single test:
cURL error 56: Recv failure: Connection reset by peer (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)
If I repeat all the steps from the .travis.yml line by line in my local environment, everything works fine. This error occurs only on Travis.
Here is the link for the failing build:
https://travis-ci.org/byjg/php-rest-template/jobs/305476720#L728
Here is my .travis.yml:
sudo: required
language: php
php:
  - "7.1"
  - "7.0"
  - "5.6"
env:
  - APPLICATION_ENV=test
services:
  - docker
install:
  - composer install
  - composer restdocs
  - composer migrate -- reset --yes
  - composer build
  - docker ps -a
  - docker inspect resttemplate-test-instance
  - docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' resttemplate-test-instance
  - sudo chown 82:82 src/sample.db
  - sudo chown 82:82 src/
script:
  - phpunit
Digging around on the internet I found this issue: https://github.com/travis-ci/travis-ci/issues/6461
There is a suggestion to wait for the docker instance to be up and running by adding a "sleep 15". It worked!
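For reference, the simplest form of that fix is a pause between starting the container and running the tests; a minimal sketch against the .travis.yml above (15 seconds is the value from the issue thread, not a tuned number):
install:
  - composer build      # builds the image and starts the container, as above
  - sleep 15            # give nginx/php-fpm inside the container time to come up
  # ...remaining docker inspect / chown steps unchanged
script:
  - phpunit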

running docker commands from a bash script has different results

I use the socketplane/openvswitch docker image.
When I follow their instructions to run the container and execute OVS commands in it, everything works fine. However, when I put the same docker commands into a bash script, the container returns
db.sock: Database connection failed (Connection refused)
To be precise, running the following commands in a terminal:
docker run -itd --cap-add NET_ADMIN [container-name]
docker exec $cid ovs-vsctl show
succeeds, but running the same commands from a bash script does not.
This is my bash script:
#!/bin/bash
cid=$(docker run -itd --cap-add NET_ADMIN [container-name])
docker exec $cid ovs-vsctl show
Thanks
My thought would be that the root of your problem is here:
docker run -itd
Because those are contradictory parameters:
-d says 'run in the background'.
-it says 'run interactively, attach a tty'.
So I would suggest that you try:
#!/bin/bash
cid=$(docker run -d --cap-add NET_ADMIN [container-name])
docker exec $cid ovs-vsctl show
Failing that, my second guess would be that the container's startup process takes a little while. I get this when firing up Kibana containers: it takes a few seconds to start, so I get 'permission denied' errors.
Try sticking a 'sleep' in there as a simple test, but if that is the problem you'll need to check the DB startup and see how far it has got.
Failing that, you can "attach" to your container interactively, with docker exec -it <container> bash and run the command and troubleshoot directly.
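As a sketch of the 'sleep' suggestion, the script can also poll until ovs-vsctl answers instead of sleeping for a fixed time (the image name comes from the question, the retry count is an arbitrary choice):
#!/bin/bash
cid=$(docker run -d --cap-add NET_ADMIN socketplane/openvswitch)

# retry until ovsdb-server inside the container accepts connections
for i in $(seq 1 10); do
  docker exec "$cid" ovs-vsctl show && break
  sleep 1
done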
