For the past two days, I have been trying different methods of deploying my multi-container app to Heroku via Travis CI, and Heroku shows a strange error every time I deploy from Travis CI.
Here's my docker-compose.yml:
version: '3'
services:
  db:
    image: mysql:5.7
    ports:
      - '3306:3306'
    environment:
      MYSQL_DATABASE: 'mysql'
      MYSQL_USER: 'root'
      MYSQL_PASSWORD: 'root'
      MYSQL_ROOT_PASSWORD: 'root'
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/covid_analysis
    ports:
      - "8000:8000"
    depends_on:
      - db
After deploying with this configuration, my Travis CI build shows a weird error:
After some Googling, I found a GitHub issue about this problem which suggests deploying with entrypoint rather than cmd/command.
So I changed command: python manage.py runserver 0.0.0.0:8000 to entrypoint: python manage.py runserver 0.0.0.0:8000.
This time, the Travis build errored like this:
Here are my latest docker-compose.yml and Dockerfile.
I have Googled a lot, and I was not able to find anything that could solve my problem (or even explain why it's not working).
All the builds work fine locally. The code is available on GitHub.
Your error comes from the way you are calling docker-compose run on Travis CI.
In your .travis.yml, the following can be found:
script:
  - docker-compose run web python manage.py test
What your docker-compose tries to do here is run the following services:
web
python
manage.py
test
The only service that exists in your docker-compose file, however, is web, so the command fails.
UPDATE
My original answer was wrong: I thought docker-compose run behaved similarly to docker-compose up.
The reason the error occurs after refactoring the web service from command: to entrypoint: in the docker-compose.yml is the following script in the .travis.yml:
script:
  - docker-compose run web python manage.py test
By default, docker-compose run passes all the arguments after the specified service (in this case, python manage.py test comes after web) as an override for the service's command.
Because the service is now refactored to use entrypoint, this override no longer works. This can be fixed by writing the script like this:
script:
  - docker-compose run --entrypoint="python manage.py test" web
Related
I want Travis CI to build my app for testing when I push it to GitHub.
Travis CI and GitHub are supposed to work together, but it didn't work.
Here is my docker-compose.yml:
version: '3'
volumes:
  db-data:
services:
  web:
    build: .
    ports:
      - '3000:3000'
    volumes:
      - '.:/product-register'
    environment:
      - 'DATABASE_PASSWORD=postgres'
    tty: true
    stdin_open: true
    depends_on:
      - db
    links:
      - db
  db:
    image: postgres
    volumes:
      - 'db-data:/var/lib/postgresql/data'
    environment:
      - 'POSTGRES_HOST_AUTH_METHOD=trust'
      - 'POSTGRES_USER=postgres'
      - 'POSTGRES_PASSWORD=postgres'
Here is my .travis.yml:
sudo: required
services: docker
before_install:
  - docker login -u polymetisoutis -p 5fb47200-dd19-4772-a9ad-c98913ef1cb9
  - docker-compose up --build -d
script:
  - docker-compose exec --env 'RAILS_ENV=test' web rails db:create
  - docker-compose exec --env 'RAILS_ENV=test' web rails db:migrate
  - docker-compose exec --env 'RAILS_ENV=test' web rails test
The repository I pushed to GitHub is here: https://github.com/PolymetisOutis/product-register
After I ran the following command,
git push origin master
I expected Travis CI to build the app for testing on the travis-ci.com page.
But Travis CI didn't run.
Why?
Does anyone have an idea or a clue about this?
Your Travis CI Build Request page [1] always shows why a build wasn't triggered. I can see that, at the time you triggered some requests, your account wasn't confirmed and your build requests were rejected; now, however, it seems you are able to trigger builds.
[1] https://app.travis-ci.com/github/PolymetisOutis/product-register/requests
I have a REST API. I want a docker-compose setup that:
starts the api server
"waits" until it's up and running
runs some api tests against the endpoints
stops everything once the test job finishes.
Now, the first part I can do. As for waiting for the backend to be up and running, as I understand it, depends_on does not quite cut it. The REST API does have a /ping endpoint, though, in case we need it.
I'm struggling to find a minimal example online that:
uses volumes and does not explicitly copy test files over
runs the tests through a command in the compose file (as opposed to in the Dockerfile)
Again, I'm not sure if there is an idiomatic way of stopping everything after the tests are done, but I did come across a somewhat related solution that suggests using docker-compose up --abort-on-container-exit. Is that the best way of achieving this?
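Concretely, that invocation would look something like this (a sketch; --exit-code-from is assumed for propagating the test result to CI, and tests is the test service defined in the compose file below):

```shell
# Build and start everything; stop all containers as soon as one exits,
# and exit with the status of the `tests` container:
docker-compose up --build --abort-on-container-exit --exit-code-from tests
```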
Currently, my docker-compose file looks like this:
docker-compose.yml
version: '3.8'
networks:
  development:
    driver: bridge
services:
  app:
    build:
      context: ../
      dockerfile: ../Dockerfile
    command: sbt run
    image: sbt
    ports:
      - "8080:8080"
    volumes:
      - "../:/root/build"
    networks:
      - development
  tests:
    build:
      dockerfile: ./Dockerfile
    command: npm run test
    volumes:
      - .:/usr/tests/
      - /usr/tests/node_modules
    networks:
      - development
    depends_on:
      - app
and the node Dockerfile looks like this:
FROM node:16
ADD package*.json /usr/tests/
ADD test.js /usr/tests/
WORKDIR /usr/tests/
RUN npm install
Full repo is here: https://github.com/ShahOdin/dockerise-everything/pull/1
You can wait for another service to become available with the docker-compose-wait project.
Add the docker-compose-wait binary to the test container and run it, inside the container's entrypoint, before testing the API server.
You can also configure a time interval to sleep before and after the readiness check.
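A sketch of what that could look like for the test container from the question (the binary's download URL, pinned version, and the app:8080 host/port are assumptions to adapt):

```dockerfile
FROM node:16

# Add the docker-compose-wait binary (hypothetical pinned release)
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.9.0/wait /wait
RUN chmod +x /wait

WORKDIR /usr/tests/
COPY package*.json test.js /usr/tests/
RUN npm install

# Block until the API answers on app:8080, then run the tests
CMD /wait && npm run test
```

The tests service in docker-compose.yml would then set WAIT_HOSTS=app:8080 in its environment, and optionally WAIT_BEFORE/WAIT_AFTER for the intervals mentioned above.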
Tl;dr: I'm using Docker to run my Postman/Newman tests, and my API tests hang when run in Travis-CI but not when run locally. Why am I encountering tests that run infinitely?
Howdy guys! I've recently started to learn Docker, Travis-CI and Newman for a full stack application. I started with developing the API and I'm taking a TDD approach. As such, I'm testing my API first. I set up my .travis.yml file to download a specific version of Docker-Compose and then use Docker-Compose to run my tests in a container I name api-test. The container uses the image dannydainton/htmlextra, which is built from the official postman/newman:alpine image, like so:
language: node_js
node_js:
  - "14.3.0"
env:
  global:
    - DOCKER_COMPOSE_VERSION: 1.26.2
    - PGHOST: db
    - PGDATABASE: battle_academia
    - secure: "xDZHJ9ZVe3WPXr6WetERMjFnTlMowyEoeckzLcRvWyEIe2qbnWrJRo7cIRxA0FsyJ7ao4QLVv4XhOIeqJupwW3nfnljo35WGcuRBLh76CW6JSuTIbpV1dndOpATW+aY3r6GSwpojnN4/yUVS53pvIeIn03PzQWmnbiJ0xfjStrJzYNpSVIHLn0arujDUMyze8+4ptS1qfekOy2KRifG5+viFarUbWUXaUiJfZCn14S4Wy5N/T+ycltNjX/qPAVZYV3fxY1ZyNX7wzJA+oV71MyApp5PgNW2SBlePkeZTnkbI7FW100MUnE4bvy00Jr/aCoWZYTySz86KT+8HSGzy6d+THO8zjOXKJV5Vn93+XWmxtp/yjBsg+dtFlZUWkN99EBkEjjwJc1Oy5zrOQNjsptNGpl1kid5+bAT4XcP4xn7X5pc7QB8ZE3igbfKTM11LABYN1adcIwgGIjUz1eQnFuibtkVM4oqE92JShUF/6gbwGJsWjQGBNBCOBBueYNB86sk0TiAfS08z2VW9L3pcljA2IwdXclw3f1ON6YelBTJmc88EmxI4TS0hRC5KgMCkegW1ndcTZwqIQGFm+NFbe1hKMmqTfgOg5M8OQZBtUkF60Lox09ECg59IrYj+BIa9J303+bo+IMgZ1JVYlL7FA2qc0bE8J/9A1C2wCRjDLLE="
    - secure: "F/Ru7QZvA+zWjQ7K7vhA3M2ZrYKXVIlkIF1H7v2dPv/lsc18eWGpOQep4uAjX4IMyLY/6n7uYRLnSlbvOWulVUW8U52zWiQkYFF9OwosuTdIlVTAQGp3B0CAA+RCxMtDQay6fN9H6e2bL3KwjT//VUHd1E6BPu+O1/RyX+0+0KvTmExmMSuioSpDPcI20Mym2vRCgNPb1gfajr5QfWKPJlrPjfyNhDxWMhM94nwTuLYIVZwZPTZ0Ro5D6hhXFVZOFIlHr5VDbbFa+Xo0TIdP/ZudxZ7p3Mn7ncA8seLx2Q5/zH6tJ4DSUpEm67l5IqUrvd9qp0CNCjlTcl3kOJK4qIB1WtLm6oW2rBqDyvthhuprPpqEcs7C9z2604VLybdOmJ0+Y/7uIo6po388avGN4ZwZbWQ1xiiW+Ja8kkHZYEKo4m0AbKdX9pn8otcNO+1xlDtUU7CZey2QA8WrFlfHWqRapIgNfT5tTSTAul3yWAFCRw09PHYELuO7oQCqFZi7zu3HKWknbkzjf+Cz3TfIFTX/3saiqyquhieOPbnGC5xgTmTrA2ShfNxQ6nkDJPU0/qmaCNJt9CwpNS2ArqcK3xYijiNi+SHaKwEsYh0VqiUqSCWn05eYKNAe3MUQDsyKFEkykJW60yEkN7JsvO1WpI53VKmOnZlRHLzJyc5WkZw="
    - PGPORT: 5432
services:
  - docker
before_install:
  - npm rebuild
  - sudo rm /usr/local/bin/docker-compose
  - curl -L https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-`uname -s`-`uname -m` > docker-compose
  - chmod +x docker-compose
  - sudo mv docker-compose /usr/local/bin
jobs:
  include:
    - stage: api tests
      script:
        - docker --version
        - docker-compose --version
        - >
          docker-compose run api-test
          newman run battle-academia_placement-exam_api-test.postman-collection.json
          -e battle-academia_placement-exam_docker.postman-environment.json
          -r htmlextra,cli
And my docker-compose.yml file has 4 containers:
client is the React front end,
api is the NodeJs/Express back end,
db is the database that the API pulls data from in the test environment,
api-test is the container with Newman/Postman and some reporters which I believe is built from NodeJs.
I hardcode the environment variables when running locally, but the file is as follows:
version: '3.8'
services:
  client:
    build: ./client
    ports:
      - "80:80"
    depends_on:
      - api
  api:
    build: ./server
    environment:
      - PGHOST=${PGHOST}
      - PGDATABASE=${PGDATABASE}
      - PGUSER=${PGUSER}
      - PGPASSWORD=${PGPASSWORD}
      - PGPORT=${PGPORT}
    ports:
      - "3000:3000"
    depends_on:
      - db
  db:
    image: postgres:12.3-alpine
    restart: always
    environment:
      - POSTGRES_DB=${PGDATABASE}
      - POSTGRES_USER=${PGUSER}
      - POSTGRES_PASSWORD=${PGPASSWORD}
    ports:
      - "5432:5432"
    volumes:
      - ./server/db/scripts:/docker-entrypoint-initdb.d
  api-test:
    image: dannydainton/htmlextra
    entrypoint: [""]
    command: newman run -v
    volumes:
      - ./server/api/postman-collections:/etc/newman
    depends_on:
      - api
Now that the setup is out of the way: my issue is that this config works locally when I leave out .travis.yml and run the commands myself; however, putting Travis-CI in the mix stirs up an issue where my first test just... runs, indefinitely.
I appreciate any advice or insight towards this issue that anyone provides. Thanks in advance!
The issue did not come from where I had expected. After debugging, I thought that the issue originally came from permission errors since I discovered that the /docker-entrypoint-initdb.d directory got ignored during container startup. After looking at the Postgres Dockerfile, I learned that the files are given permission for a user called postgres. The actual issue stemmed from me foolishly adding the database initialization scripts to my .gitignore.
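For reference, the fix is just making sure those init scripts are committed; one possible shape (a hypothetical pattern, assuming the scripts live under server/db/scripts and the exact rule in the repo may differ):

```
# .gitignore — stop ignoring the database init scripts so they end up
# in the build context and get mounted into /docker-entrypoint-initdb.d
!server/db/scripts/
```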
Edit
Also the Newman tests were hanging because they were trying to access database tables that did not exist.
I've created a simple Sonatype API client in Elixir that returns the repositories and the components of the repositories.
I now need to create tests in Elixir to verify the repo. I am using docker-compose to start the Sonatype container. The tests need to start with a fresh Sonatype repo to work with, via docker-compose up, and verify that it doesn't have any images in it; then add one or more images and validate that the images I added are present; and, as cleanup, delete those images. It must be an automated set of tests that can run in CI or that a user can run on their local machine.
My question is: how would I do that with either an .exs test file or a bash script?
You can build a docker-compose.yml file with something similar to this:
version: "2.2"
services:
my_app:
build:
context: .
ports:
- 4000:4000
command: >
bash -c 'wait-for-it -t 60 sonatype:1234
&& _build/prod/rel/my_app/bin/my_app start'
tests:
extends:
service: my_app
environment:
MIX_ENV: test
LOG_LEVEL: "warn"
working_dir: /my_app
depends_on:
- sonatype
command:
bash -c 'mix test'
sonatype:
image: sonatype/nexus3:3.19.1
ports:
- "1234:1234"
Then you have a bash script like test.sh:
#!/usr/bin/env bash
docker-compose build tests
docker-compose run tests
EXIT=$?                        # capture the exit code of the test run
docker-compose down --volumes  # tear down containers and volumes
exit $EXIT                     # propagate the test result to the caller
I'm not familiar with Sonatype, so this might not make sense, and you need to adapt.
I'm using docker-compose to deploy into a remote host. This is what my config looks like:
# stacks/web.yml
version: '2'
services:
  postgres:
    image: postgres:9.6
    restart: always
    volumes:
      - db:/var/lib/postgresql/data
  redis:
    image: redis:3.2.3
    restart: always
  web_server:
    depends_on: [postgres]
    build: ../sources/myapp
    links: [postgres]
    restart: always
    volumes:
      - nginx_socks:/tmp/socks
      - static_assets:/source/public
  sidekiq:
    depends_on: [postgres, redis]
    build: ../sources/myapp
    links: [postgres, redis]
    restart: always
    volumes:
      - static_assets:/source/public
  nginx:
    depends_on: [web_server]
    build: ../sources/nginx
    ports:
      - "80:80"
    volumes:
      - nginx_socks:/tmp/socks
      - static_assets:/public
    restart: always
volumes:
  db:
  nginx_socks:
  static_assets:
# stacks/web.production.yml
version: '2'
services:
  web_server:
    command: bundle exec puma -e production -b unix:///tmp/socks/puma.production.sock
    env_file: ../env/production.env
  sidekiq:
    command: bundle exec sidekiq -e production -c 2 -q default -q carrierwave
    env_file: ../env/production.env
  nginx:
    build:
      args:
        ENV_NAME: production
        DOMAIN: production.yavende.com
I deploy using:
eval $(docker-machine env myapp-production)
docker-compose -f stacks/web.yml -f stacks/web.production.yml -p myapp_production build --no-deps web_server sidekiq
docker-compose -f stacks/web.yml -f stacks/web.production.yml -p myapp_production up -d
Although this works perfectly locally, and I did a couple of successful deploys in the past with this method, it now hangs when building the web_server service and finally shows a timeout error, as I describe in this issue.
I think the problem originates from the combination of my slow connection (Argentina -> DigitalOcean servers in the USA) and my trying to build images and push them instead of using hub-hosted images.
I've been able to deploy by cloning my compose config onto the server and running docker-compose directly there.
The question is: is there a better way to automate this process? Is it good practice to use docker-compose to build images on the fly?
I've been thinking about automating this process of cloning the sources onto the server and docker-composeing everything, but there may be better tooling to solve this.
I was building images remotely, which implies pushing the whole source needed to build the image over the network. For some images that was over 400 MB of data sent from Argentina to virtual servers in the USA, and it proved to be terribly slow.
The solution is to totally change the approach to dockerizing my stack:
Instead of building images on the fly using Dockerfile ARGs, I modified my apps and their Docker images to accept options via environment variables at runtime.
I used DockerHub automated builds, integrated with GitHub.
This means I only push changes via git, not the whole source; DockerHub then builds the image.
Then I docker-compose pull and docker-compose up -d my site.
Free alternatives are running your own self-hosted Docker registry and/or possibly GitLab, since it recently released its own Docker image registry: https://about.gitlab.com/2016/05/23/gitlab-container-registry/.