I'm using Gradle 6.5. When I build my app on my laptop everything builds fine, but when I run the same command in Docker, some tests fail or something else goes wrong.
I get an exception like the following:
Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':test'.
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.lambda$executeIfValid$1(ExecuteActionsTaskExecuter.java:207)
at org.gradle.internal.Try$Failure.ifSuccessfulOrElse(Try.java:263)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeIfValid(ExecuteActionsTaskExecuter.java:205)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:186)
...
Caused by: org.gradle.api.GradleException: There were failing tests. See the report at: file:///tmp/reports/tests/test/index.html
at org.gradle.api.tasks.testing.AbstractTestTask.handleTestFailures(AbstractTestTask.java:628)
at org.gradle.api.tasks.testing.AbstractTestTask.executeTests(AbstractTestTask.java:499)
at org.gradle.api.tasks.testing.Test.executeTests(Test.java:646)
I want to know if there is a way to print the text of that index.html file to the console, or to copy the file to my laptop.
To build my app in Docker I use the following command:
docker build -t myapp .
You want to get the build/reports/tests/test/ directory, which contains the test reports (e.g., index.html), onto your local machine. One way is to use a docker-compose.yml that bind-mounts the relevant directory:
version: '3.8'
services:
  chat:
    build:
      dockerfile: Dockerfile
      context: .
    command: gradle run
    working_dir: /home/gradle/project
    volumes:
      - type: bind
        source: ./build/reports/tests/test
        target: /home/gradle/project/build/reports/tests/test
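With this bind mount in place, anything the containerized build writes to /home/gradle/project/build/reports/tests/test shows up immediately in ./build/reports/tests/test on the host. A minimal sketch of the workflow, assuming the file above (the service name chat is illustrative):

mkdir -p build/reports/tests/test            # make sure the host directory exists for the bind mount
docker-compose up --build chat
open ./build/reports/tests/test/index.html   # macOS; use xdg-open on Linux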
There is a simpler way of getting the logs without an extra docker-compose file.
First, you should find the id of the stopped container. Type docker ps -a in your terminal to get the list of all containers including the ones that have the status "exited". Find the one that you are interested in and copy the container id.
Second, copy the files from the container to your host. Type docker cp {copied container id}:/home/gradle/src/build/reports/tests/test/ ./{location where you want to save your logs on your machine}.
Third, open the location you specified previously, open the index.html file, and enjoy the full log output.
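For example, here is a sketch of that whole sequence; the container ID a1b2c3d4e5f6 and the target folder ./test-reports are hypothetical:

docker ps -a                                   # find the ID of the exited container
docker cp a1b2c3d4e5f6:/home/gradle/src/build/reports/tests/test/ ./test-reports/
open ./test-reports/index.html                 # macOS; use xdg-open on Linux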
I am trying to use Docker Desktop to run this tutorial to install Wazuh in a Docker container (single-node deployment). I created a new container in Docker Desktop and then tried to run the docker compose command in VS Code, but I get the error mentioned in the title. I have tried to change the project directory, but it always points to the root directory via /config/certs.yml. My command is:
docker-compose --project-directory /com.docker.devenvironments.code/single-node --file /com.docker.devenvironments.code/single-node/generate-indexer-certs.yml run --rm generator
My directory structure is as follows (screenshot omitted): certs.yml is in the config folder. Upon running this command, the error always points to the root folder, which is not my project folder. The only folder I want to run this from is the com.docker.devenvironments.code folder, or somehow I need to change where the command finds the certs.yml file. I have also tried cd-ing into different folders and running the command, but I get the same error.
Thank you very much in advance for your help!
Looking quickly at the documentation linked in the question, you can try the following:
Move the docker-compose definition from wazuh-docker/single-node/docker-compose.yml up to the outer directory (the main directory wazuh-docker, which in your case would be com.docker.devenvironments.code) as a separate <your_compose.yaml> with the same definition, but change the volume mounts as follows:
# Wazuh App Copyright (C) 2021 Wazuh Inc. (License GPLv2)
version: '3'
services:
  generator:
    image: wazuh/wazuh-certs-generator:0.0.1
    hostname: wazuh-certs-generator
    volumes:
      - ./single-node/config/wazuh_indexer_ssl_certs/:/certificates/
      - ./single-node/config/certs.yml:/config/certs.yml
...
Then, docker-compose -f <your_compose.yaml> up should do. Note: <your_compose.yaml> is the YAML with the volume mount changes and is created at the root of the project. Also, looking at the image provided in the question, you might want to revisit the documentation and run the command to generate the certificates, docker-compose -f generate-indexer-certs.yml run --rm generator, from the single-node folder.
You can change the directory of the certs.yml file in the following section of the generate-indexer-certs.yml file:
volumes:
  - ./config/certs.yml:/config/certs.yml
You can replace ./config/certs.yml with the path where the certs.yml file is located.
In your case:
volumes:
  - /com.docker.devenvironments.code/single-node/config/certs.yml:/config/certs.yml
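With that path fixed in the file, the original command should find certs.yml; a sketch, assuming you run it from the single-node folder:

cd /com.docker.devenvironments.code/single-node
docker-compose -f generate-indexer-certs.yml run --rm generator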
I've been running a dev-setup for a while without issue. I'm using Docker for Windows with Windows Subsystem for Linux 2. It's been working very well. Today when trying to spin up docker-compose, it failed with the following error:
frederik@desktop:~/projects/caselab$ docker-compose -f docker-test.yml up
Recreating f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_caselab_db_1 ...
Recreating f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_caselab_db_1 ... error
ERROR: for f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_caselab_db_1 Cannot create container for service db: mkdir 07ff2055c618dedc240ca3275de3f8c41d091136dc659cf463ee9fc62eed1853: permission denied
ERROR: for db Cannot create container for service db: mkdir 07ff2055c618dedc240ca3275de3f8c41d091136dc659cf463ee9fc62eed1853: permission denied
ERROR: Encountered errors while bringing up the project.
frederik@desktop:~/projects/caselab$
I shaved the contents of docker-test.yml down to simply:
version: '3'
services:
  db:
    image: postgres
    logging:
      driver: none
I tried running docker run postgres, which worked without issue. I then tried copying all the contents of my folder to another folder. Now, running docker-compose -f docker-test.yml up works without issue.
I think it's somehow related to permissions, though I can see no difference in permissions between the original folder and the new one.
As I do most of my editing in Visual Studio Code running on Windows, I'm thinking it may be related to the Windows/Linux boundary, though I'm not completely sure how. And, again, this setup has been running for months without issue, so I'm at a loss for what I could have changed.
Any ideas?
I managed to solve it.
I noticed that running docker-compose up prepended a hash to the container name every single time the command was run, which resulted in the comically long name above.
Running docker-compose images showed the corresponding image as present.
Simply running docker-compose rm removed the stale container, which allowed the right container to be created and run.
I have filed this as a bug in docker-compose.
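For anyone hitting the same thing, a minimal sketch of the cleanup, assuming the service name db from the compose file above:

docker-compose -f docker-test.yml rm -f db    # remove the stale, repeatedly renamed container
docker-compose -f docker-test.yml up          # recreate the service with a clean name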
I am very (read: very) new to Docker, so I'm experimenting. I have created a very basic Dockerfile to pull in Laravel:
FROM composer:latest
RUN composer_version="$(composer --version)" && echo $composer_version
RUN composer global require laravel/installer
WORKDIR /var/www
RUN composer create-project --prefer-dist laravel/laravel site
My docker-compose.yml file looks like:
version: '3.7'
services:
  laravel:
    build:
      context: .
      dockerfile: laravel.dockerfile
    container_name: my_laravel
    network_mode: host
    restart: on-failure
    volumes:
      - ./site:/var/www/site
When I run docker-compose up, the ./site directory is created but its contents are empty. I've put this in docker-compose as I plan on including other things like nginx, MySQL, PHP, etc.
The command:
docker run -v "/where/i/want/data/site:/var/www/site" my_laravel
Results in the same behaviour.
I know the install is successful, as I modified my Dockerfile with the following two lines appended to it:
WORKDIR /var/www/site
RUN ls -la
Which gives me the correct listing.
Clearly misunderstanding something here. Any help appreciated.
EDIT: So, I was able to get this to work, although it's slightly more difficult than just specifying a path.
You can accomplish this by specifying a volume in docker-compose.yml. The path to the directory (on the host) is labeled as device in the compose file. It appears that the root of the path has to be an actual volume (possibly a share would work), but the 'destination' of the path can be a directory on the specified volume.
I created a new volume called docker on my machine, but I suppose you could do this with your existing disk/volume.
I am on a Mac and this docker-compose.yml file worked for me:
version: '3.7'
services:
  nodemon-test:
    container_name: my-nodemon-test
    image: oze4/nodemon-docker-test
    ports:
      - "1337:1337"
    volumes:
      - docker_test_app:/app # see comment below on which name to use here
volumes:
  docker_test_app: # use this name under `volumes:` for the service
    name: docker_test_app
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /Volumes/docker/docker_test_app
The image specified exists on my Docker Hub, and its source code is public, just in case you are worried about anything malicious. I created it a couple of weeks ago to help someone else on Stack Overflow.
(Screenshot omitted: it shows the files from the container appearing on my machine, the host.)
You can read more about Docker Volume configs here if you would like.
ORIGINAL ANSWER:
It looks like you are trying to share the build directory with your host machine. After some testing, it appears Docker will overwrite the specified path in the container with the contents of the path on the host.
If you run docker logs my_laravel you should see an error about missing files at /var/www/site. So, even though the build is successful, once Docker mounts the directory from your machine (./site) onto the container (/var/www/site), it overwrites the path within the container with the contents of the path on your host, which is empty.
To test and make sure the contents of /var/www/site are in fact being overwritten, you can run docker exec -it my_laravel /bin/bash (you may need to replace /bin/bash with /bin/sh if bash isn't available in the image). This will give you command-line access inside the container. From there you can do ls -a /var/www/site.
Furthermore, you can also pre-stage ./site with a random test file in it (test.txt or whatever), then run docker-compose up -d, then run the same docker exec command from the step above and see if the staged test.txt file is now inside the container. This gives you definitive evidence that when you use a bind-mounted volume, the data on your host overwrites the data in the container.
With that being said, doing something like this and sharing a log directory will work: the volume path specified on the container is still overwritten, but the difference is that the container writes to that path; it doesn't rely on it for config files/app files.
Hope this helps.
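To illustrate that last point, here is a rough, self-contained sketch of the log-directory pattern (the image and paths are purely for demonstration): the bind mount still replaces the container path with the host directory, but since the container only writes there, nothing is lost:

mkdir -p ./logs
docker run --rm \
  -v "$(pwd)/logs:/var/log/myapp" \
  alpine sh -c 'echo "hello from the container" > /var/log/myapp/app.log'
cat ./logs/app.log   # the file written inside the container is now on the host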
I'm trying to deploy an app that's built with docker-compose, but it feels like I'm going in completely the wrong direction.
I have everything working locally—docker-compose up brings up my app with the appropriate networks and hosts in place.
I want to be able to run the same configuration of containers and networks on a production machine, just using a different .env file.
My current workflow looks something like this:
docker save [web image] [db image] > containers.tar
zip deploy.zip containers.tar docker-compose.yml
rsync deploy.zip user#server
ssh user#server
unzip deploy.zip -d ./
docker load -i containers.tar
docker-compose up
At this point, I was hoping to be able to run docker-compose up on the server, but it tries to rebuild the containers as per the docker-compose.yml file.
I'm getting the distinct feeling that I'm missing something. Should I be shipping over my full application then building the images at the server instead? How would you start composed containers if you were storing/loading the images from a registry?
The problem was that I was using the same docker-compose.yml file in development and production.
The app service didn't specify a repository name or tag, so when I ran docker-compose up on the server, it just tried to build the Dockerfile in my app's source code directory (which doesn't exist on the server).
I ended up solving the problem by adding an explicit image field to my local docker-compose.yml.
version: '2'
services:
  web:
    image: 'my-private-docker-registry:latest'
    build: ./app
Then created an alternative compose file for production:
version: '2'
services:
  web:
    image: 'my-private-docker-registry:latest'
    # no build field!
After running docker-compose build locally, the web service image is built with the repository name my-private-docker-registry and the tag latest.
Then it's just a case of pushing the image up to the repository.
docker push 'my-private-docker-registry:latest'
Then, on the server, after running docker pull, it's safe to stop and recreate the running containers with the new images.
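Putting it all together, a sketch of the resulting workflow, assuming the production compose file is the build-less one above and the registry is reachable from the server:

# locally
docker-compose build web
docker push 'my-private-docker-registry:latest'

# on the server
docker pull 'my-private-docker-registry:latest'
docker-compose up -d    # stops and recreates the containers from the pulled image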
I have a docker-compose-staging.yml file which I am using to define a PHP application. I have defined a data volume container (app) in which my application code lives, and is shared with other containers using volumes_from.
docker-compose-staging.yml:
version: '2'
services:
  nginx:
    build:
      context: ./
      dockerfile: docker/staging/nginx/Dockerfile
    ports:
      - 80:80
    links:
      - php
    volumes_from:
      - app
  php:
    build:
      context: ./
      dockerfile: docker/staging/php/Dockerfile
    expose:
      - 9000
    volumes_from:
      - app
  app:
    build:
      context: ./
      dockerfile: docker/staging/app/Dockerfile
    volumes:
      - /var/www/html
    entrypoint: /bin/bash
This particular docker-compose-staging.yml is used to deploy the application to a cloud provider (DigitalOcean), and the Dockerfile for the app container has COPY commands which copy over folders from the local directory to the volume defined in the config.
docker/staging/app/Dockerfile:
FROM php:7.1-fpm
COPY ./public /var/www/html/public
COPY ./code /var/www/html/code
This works when I first build and deploy the application. The code in my public and code directories is present and correct on the remote server. I deploy using the following command:
docker-compose -f docker-compose-staging.yml up -d
However, if I then add a file to my local public directory and run the following command to rebuild the updated code:
docker-compose -f docker-compose-staging.yml build app
The output from this rebuild suggests that the COPY commands were successful:
Building app
Step 1 : FROM php:7.1-fpm
---> 6ed35665f88f
Step 2 : COPY ./public /var/www/html/public
---> 4df40d48e6a5
Removing intermediate container 7c0fbbb7f8b6
Step 3 : COPY ./code /var/www/html/code
---> 643d8745a479
Removing intermediate container cfb4f1a4f208
Successfully built 643d8745a479
I then deploy using:
docker-compose -f docker-compose-staging.yml up -d
With the following output:
Recreating docker_app_1
Recreating docker_php_1
Recreating docker_nginx_1
However when I log into the remote containers, the file changes are not present.
I'm relatively new to Docker so I'm not sure if I've misunderstood any part of this process! Any guidance would be appreciated.
This is because of the build cache.
Run:
docker-compose -f docker-compose-staging.yml build --no-cache
This will rebuild the images without using any cache.
And then,
docker-compose -f docker-compose-staging.yml up -d
I was struggling with the fact that migrations were neither detected nor applied. I found this thread and noticed that the root cause was, indeed, files not being updated in the container. The force-recreate solution suggested above solved the problem for me, but I find it cumbersome to have to remember when to do it and when not; e.g., Vue-related files seem to work just fine, but Django-related files don't.
So I figured, why not adjust the Dockerfile to clean up the previous files before the copy:
RUN rm -rf path/to/your/app
COPY . path/to/your/app
Worked like a charm. Now it's part of the build, and all you need to do is run docker-compose up -d --build again. Files are up to date, and you can run makemigrations and migrate against your containers.
I had a similar, if not the same, issue while working on a .NET Core application.
What I was trying to do was rebuild my application and have it update my Docker image, so that I could see my changes reflected in the containerized copy.
I got going by removing the underlying image generated by docker-compose up, using the following command:
docker rmi [imageId]
I believe there should be support for this in docker-compose, but this was enough for my needs at the moment.
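If you need to look up the image ID first, something like this works (the ID shown is hypothetical):

docker images                    # list images with their IDs
docker rmi 1a2b3c4d5e6f          # remove the stale image
docker-compose up -d --build     # rebuild and recreate the containers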
Just leaving this here for when I come back to this page in two weeks.
You may not want to use docker system prune -f in this block.
docker-compose down --rmi all -v \
&& docker-compose build --no-cache \
&& docker-compose -f docker-compose-staging.yml up -d --force-recreate
I had the same issue because of shared volumes. For me, the solution was to remove the shared volume using this command:
docker volume rm [VOLUME_ID]
You can find the volume ID or name in the "Mounts" section of the output of this command:
docker inspect [CONTAINER_ID]
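If you want to jump straight to the mounts, docker inspect accepts a Go template; a sketch with a hypothetical container name:

docker inspect --format '{{json .Mounts}}' my_container   # prints only the Mounts section as JSON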
None of the above solutions worked for me, but the following steps finally did:
Copy/move the file outside of the Docker app folder.
Delete the file you want to update.
Rebuild the Docker image without the updated file.
Move the copied file back into the Docker app folder.
Rebuild the Docker image again.
Now the image will contain the updates to the file.
I'm relatively new to Docker myself and found this thread after experiencing a similar issue with an updated YAML file not seeming to be copied into a rebuilt container, despite having turned off caching.
My build process differs slightly as I use Docker Hub's GitHub integration for automating image builds when new commits to the master branch are made. The build happens on Docker's servers rather than the locally built and pushed container image workflow.
What ended up working for me was to do a docker-compose pull to bring the most up-to-date versions of the containers defined in my .env file down into my local environment. Not sure if the pull command differs from the up command with a --force-recreate flag set, but I figured I'd share anyway in case it might help someone.
I'd also note that this process allowed me to turn auto-caching back on because the edited file was actually being detected by the Docker build process. I just wasn't seeing it because I was still running docker-compose up on outdated image versions locally.
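In other words, roughly this flow (the compose file and image names are whatever your project defines):

docker-compose pull     # fetch the images Docker Hub built from the latest commit
docker-compose up -d    # recreate the containers from the freshly pulled images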
I am not sure it is caching, because (a) whether the cache was used or not is usually noted in the build output, and (b) build should detect the changed content in your directory and invalidate the cache.
I would try to bring up the container on the same machine used to build it, to see whether it is updated or not. If it is, the changed image is not being propagated. I do not see any tag used in your files (build -t XXXX:0.1 or build -t XXXX:latest), so it might be that your staging machine uses a stale image. Or are you pushing the new image somewhere the staging server will pull it from?
You are trying to update an existing volume with the contents from a new image; that does not work.
https://docs.docker.com/engine/tutorials/dockervolumes/#/data-volumes
States:
Changes to a data volume will not be included when you update an image.
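In practice, that means discarding the volume so a fresh one is populated from the new image; a sketch, assuming you can afford to lose the volume contents:

docker-compose -f docker-compose-staging.yml down -v          # remove containers and their volumes
docker-compose -f docker-compose-staging.yml up -d --build    # fresh volumes are populated from the new image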