Application works with docker command but not with docker compose - docker

I created tests for my application that use Docker from inside the dotnet tests. They pull a RabbitMQ image and start it before the tests run, and that works fine.
Now I'm trying to run these tests in Docker as well, but to run Docker inside a container I'm mounting docker.sock as a volume so the commands are passed to the host machine:
-v /var/run/docker.sock:/var/run/docker.sock
To make this easier, I would like to add this setup to docker compose.
The problem is that the tests run correctly using the command
docker run -v /var/run/docker.sock:/var/run/docker.sock -ti rabbitmq-dotnet-app-tests
but they fail with the following error when run through docker-compose:
rabbitmq-dotnet-app-tests-1 | Failed tests.DummyTests.RabbitMqTest1.test1 [1 ms]
rabbitmq-dotnet-app-tests-1 | Error Message:
rabbitmq-dotnet-app-tests-1 | System.AggregateException : One or more errors occurred. (One or more errors occurred. (Initialization has been cancelled.)) (The following constructor parameters did not have matching fixture data: RabbitMqFixture rabbitMqFixture)
rabbitmq-dotnet-app-tests-1 | ---- System.AggregateException : One or more errors occurred. (Initialization has been cancelled.)
rabbitmq-dotnet-app-tests-1 | -------- DotNet.Testcontainers.Containers.ResourceReaperException : Initialization has been cancelled.
rabbitmq-dotnet-app-tests-1 | ---- The following constructor parameters did not have matching fixture data: RabbitMqFixture rabbitMqFixture
My docker-compose file:
version: "3.8"
services:
app-tests:
image: rabbitmq-dotnet-app-tests
build:
context: .
dockerfile: Dockerfile.Tests
volumes:
- /var/run/docker.sock:/var/run/docker.sock
Here is the GitHub link of this application.
I'm using Docker Desktop for Windows, and the test connection string is here.
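One difference worth noting between the two invocations: the docker run command allocates an interactive TTY (-ti), while the compose service does not, and Testcontainers only picks up the mounted socket implicitly. A compose variant that mirrors those flags and makes the Docker endpoint explicit might look like the sketch below; the tty, stdin_open, and environment entries are additions for comparison, not part of the original file, and the Ryuk line is a diagnostic assumption only.

version: "3.8"
services:
  app-tests:
    image: rabbitmq-dotnet-app-tests
    build:
      context: .
      dockerfile: Dockerfile.Tests
    tty: true           # equivalent of -t in the working docker run command
    stdin_open: true    # equivalent of -i
    environment:
      # point Testcontainers explicitly at the mounted socket (assumption)
      - DOCKER_HOST=unix:///var/run/docker.sock
      # uncomment for diagnosis only: disables the resource reaper (Ryuk)
      # - TESTCONTAINERS_RYUK_DISABLED=true
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock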

Related

Creating a Docker Compose for my Blazor WASM / nginx docker container

I've got the following problem.
I created a Blazor WASM application (a simple tap counter).
I added Docker support and followed this tutorial: Containerising a Blazor WebAssembly
So this is my Dockerfile:
When I build and run my Dockerfile using the following commands, everything works just fine:
docker build -t tapcounter .
docker run -p 8070:80 tapcounter
Then I pushed my docker image to docker hub to make it available for everybody.
docker push darkatek7/tapcounter:latest
Now I created a docker-compose.yml
---
version: '3.4'
services:
  tapcounter:
    container_name: tapcounter
    image: darkatek7/tapcounter # docker image
    ports:
      - 8070:80 # port used for localhost ip
    restart: unless-stopped # restart policy
But when I run this file I get the following error:
Attaching to tapcounter
tapcounter | The command could not be loaded, possibly because:
tapcounter | * You intended to execute a .NET application:
tapcounter | The application 'TapCounter.Web.dll' does not exist.
tapcounter | * You intended to execute a .NET SDK command:
tapcounter | No .NET SDKs were found.
tapcounter |
tapcounter | Download a .NET SDK:
tapcounter | https://aka.ms/dotnet-download
tapcounter |
tapcounter | Learn about SDK resolution:
tapcounter | https://aka.ms/dotnet/sdk-not-found
I guess it's because an nginx container is missing, but I couldn't get anything to work.
Here's my code: TapCounter on GitHub
Here's my Docker Hub page: TapCounter Dockerhub
Help would be appreciated!
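One thing worth ruling out, since the working docker run used the local tapcounter tag while the compose file pulls darkatek7/tapcounter from Docker Hub: make sure the pushed image is actually the same one that works locally, and that compose is not reusing a stale pull. A quick check could look like this (tag and service names are taken from the question, the rest is a sketch):

docker tag tapcounter darkatek7/tapcounter:latest
docker push darkatek7/tapcounter:latest
docker-compose pull tapcounter
docker-compose up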

FileNotFoundException in aspnet core web docker compose environment

I'm working on dockerizing my ASP.NET Core identity server web application. I have already tested running the application from a Docker container, and it works.
$ docker run -d ^
-e "KeyFilePath"="/app/certs/authFile.cer" ^
-p 5000:5000 ^
-v e:/certs:/app/certs/ ^
--name identity ^
identity:0.1-docker
Now I'm wiring this up in my docker-compose.yml, where my application is unable to find the file specified at /app/certs/. Here is the identity server section of my docker-compose.yml:
services:
  identity:
    image: identity.api:${PLATFORM:-linux}-${TAG:-latest}
    build:
      context: .
      dockerfile: Identity/Dockerfile
    depends_on:
      - sqlserver
    ports:
      - 5000:5000
    volumes:
      - "e:/certs:/app/certs"
    environment:
      - KeyFilePath="/app/certs/authFile.cer"
I've been troubleshooting this for quite some time now. The volume is mapped correctly when running with docker run -v, but I'm unable to achieve the same with docker-compose up identity.
With docker exec -it <container-id> sh, I can see that the file exists in the required directory. Still, my app is unable to access the file.
In code, I'm simply checking for the file's existence using File.Exists(keyFilePath), which
returns true when the container is started via the docker run command.
returns false when started using docker-compose up identity. The log for the check says File.Exists ["/app/certs/authFile.cer"] => False. even though the file is present.
Any idea about the root cause of this weird issue?
I'm running
ASP.NET Core 2.1 Web Application
Docker Desktop version: Docker version 20.10.2, build 2291f61
Docker Compose version: docker-compose version 1.27.4, build 40524192
Thanks for any help in advance
Have you tried changing
from
environment:
  - KeyFilePath="/app/certs/authFile.cer"
to
environment:
  - KeyFilePath=/app/certs/authFile.cer
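The reasoning behind this suggestion: with the list form of environment, everything after the = is taken as a literal string, so the quotes become part of the value and the app ends up looking for a path that literally starts and ends with a quote character, which never exists. The mapping form expresses the same value without this pitfall (an equivalent sketch of the same setting):

environment:
  KeyFilePath: /app/certs/authFile.cer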

Keycloak Docker container fails to start after restarting the container

I have a Keycloak installation running as a docker container in a docker-compose environment. Every night, my backup stops the relevant containers, performs a DB and volume backup, and restarts the containers again. For most containers this works, but Keycloak seems to have a problem with it and does not come up again afterwards. Looking at the logs, the error message is:
The batch failed with the following error: :
keycloak | WFLYCTL0062: Composite operation failed and was rolled back. Steps that failed:
keycloak | Step: step-9
keycloak | Operation: /subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql, driver-module-name=org.postgresql.jdbc, driver-xa-datasource-class-name=org.postgresql.xa.PGXADataSource)
keycloak | Failure: WFLYCTL0212: Duplicate resource [
keycloak | ("subsystem" => "datasources"),
keycloak | ("jdbc-driver" => "postgresql")
keycloak | ]
...
The batch failed with the following error: :
keycloak | WFLYCTL0062: Composite operation failed and was rolled back. Steps that failed:
keycloak | Step: step-9
keycloak | Operation: /subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql, driver-module-name=org.postgresql.jdbc, driver-xa-datasource-class-name=org.postgresql.xa.PGXADataSource)
keycloak | Failure: WFLYCTL0212: Duplicate resource [
keycloak | ("subsystem" => "datasources"),
keycloak | ("jdbc-driver" => "postgresql")
keycloak | ]
The docker-compose.yml entry for Keycloak looks as follows, with sensitive data removed:
keycloak:
  image: jboss/keycloak:8.0.1
  container_name: keycloak
  environment:
    - PROXY_ADDRESS_FORWARDING=true
    - DB_VENDOR=postgres
    - DB_ADDR=db
    - DB_DATABASE=keycloak
    - DB_USER=keycloak
    - DB_PASSWORD=<password>
    - VIRTUAL_HOST=<url>
    - VIRTUAL_PORT=8080
    - LETSENCRYPT_HOST=<url>
  volumes:
    - /opt/docker/keycloak-startup:/opt/jboss/startup-scripts
The volume I'm mapping is there to make some changes to WildFly to make sure it behaves well with the reverse proxy:
embed-server --std-out=echo
# Enable https listener for the new security realm
/subsystem=undertow/ \
server=default-server/ \
http-listener=default \
:write-attribute(name=proxy-address-forwarding, \
value=true)
# Create new socket binding with proxy https port
/socket-binding-group=standard-sockets/ \
socket-binding=proxy-https \
:add(port=443)
# Enable https listener for the new security realm
/subsystem=undertow/ \
server=default-server/ \
http-listener=default \
:write-attribute(name=redirect-socket, \
value="proxy-https")
After stopping the container, it no longer starts and fails with the messages shown above. Removing the container and re-creating it works fine, however. I tried removing the volume after the initial start, but that doesn't really make a difference either. I already learned that I have to remove the KEYCLOAK_USER=admin and KEYCLOAK_PASSWORD environment variables after the initial boot, as otherwise the container complains that the user already exists and doesn't start anymore. Any idea how to fix this?
Update on 23rd of May 2021:
The issue has been resolved on Red Hat's Jira; it appears to be fixed in version 12. The related GitHub pull request can be found here: https://github.com/keycloak/keycloak-containers/pull/286
According to Red Hat support, this is a known "issue" and is not supposed to be fixed. They want to concentrate on a workflow where a container is removed and recreated, not stopped and started. They agreed with the general problem, but stated that there are currently no resources available. Stopping and starting the container is an operation which is currently not supported.
See for example https://issues.redhat.com/browse/KEYCLOAK-13094?jql=project%20%3D%20KEYCLOAK%20AND%20text%20~%20%22docker%20restart%22 for reference
A legitimate use case for restarting is adding debug logging, for example to debug authentication with an external identity provider.
I ended up creating a shell script (sketched below) that does the following:
docker stop [container]
docker rm [container]
rebuilds the image I want with changes to the logging configuration
docker run [options] [container]
However, a nice feature of Docker is the ability to restart a stopped container automatically, decreasing downtime. This Keycloak bug takes that feature away.
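A rough sketch of such a script, with a placeholder image name and build context since the original script wasn't posted (my-keycloak:debug, the Dockerfile location, and the published port are assumptions):

#!/bin/sh
# stop and remove the existing Keycloak container
docker stop keycloak
docker rm keycloak
# rebuild the image with the changed logging configuration (assumes a local Dockerfile)
docker build -t my-keycloak:debug .
# start a fresh container from the rebuilt image
docker run -d --name keycloak -p 8080:8080 my-keycloak:debug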
I had the same problem here, and my solution was:
1 - Export the docker container to a .tar file:
docker export CONTAINER_NAME > latest.tar
2 - Create a new volume in Docker:
docker volume create VOLUME_NAME
3 - Start a new docker container, mapping the created volume to the container's db path, something like this:
docker run --name keycloak2 -v keycloak_db:/opt/jboss/keycloak/standalone/data/ -p 8080:8080 -e PROXY_ADDRESS_FORWARDING=true -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=root jboss/keycloak
4 - Stop the container
5 - Unpack the tar file and find the database path, something like this:
tar unpack path: /opt/jboss/keycloak/standalone/data
6 - Move the path contents to the docker volume; if you don't know where the physical path is, use docker volume inspect VOLUME_NAME to find it
7 - Start the stopped container
This worked for me; I hope it helps the next person who hits this problem.
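Put together as commands, the steps above look roughly like this; the container and volume names follow the example, the extraction path comes from step 5, and the copy destination is whatever Mountpoint docker volume inspect reports (a sketch, not a verified procedure):

# 1 - export the old container's filesystem
docker export CONTAINER_NAME > latest.tar
# 2 - create a new volume
docker volume create keycloak_db
# 3 - start a new container with the volume mapped to the db path
docker run --name keycloak2 -v keycloak_db:/opt/jboss/keycloak/standalone/data/ -p 8080:8080 -e PROXY_ADDRESS_FORWARDING=true -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=root jboss/keycloak
# 4 - stop the container
docker stop keycloak2
# 5 - unpack only the database directory from the export (paths in the tar have no leading slash)
tar -xf latest.tar opt/jboss/keycloak/standalone/data
# 6 - find the volume's physical path and copy the data into it
docker volume inspect keycloak_db                       # read the "Mountpoint" value
sudo cp -a opt/jboss/keycloak/standalone/data/. <Mountpoint>/   # replace <Mountpoint> with the path reported above
# 7 - start the stopped container again
docker start keycloak2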

How to "docker-compose up" for images from Docker Hub

Is it possible to pull and start all containers defined in docker-compose.yml? I'm trying to execute docker-compose up -d my/image, where my/image is a repo on Docker Hub, but it says "Can't find docker-compose.yml". I also tried pulling the image first using docker pull my/image, with the same result.
UPD: The image https://hub.docker.com/r/gplcart/core/, source - https://github.com/gplcart/docker-core
SOLVED: It seems docker-compose does not work the way I want. I need to create a directory manually, place docker-compose.yml there, and then run docker-compose up.
https://docs.docker.com/compose/wordpress/#define-the-project
I expected that running docker-compose up -d repo/image would be enough to download and run all the defined containers.
To pull an image, use docker-compose pull <service_name>, where <service_name> is one of the services listed in your docker-compose.yml file.
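For instance, if your file defines a service called db (a hypothetical name used only for illustration), pulling just that service's image would be:

docker-compose pull db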
The docker pull my/image command fails, but it should fail with a different error than the one you noted (you posted a compose error).
In your example, my/image is not a valid service name, because you can't use a / in a service name; Compose would give you a different error for that.
It's unclear to me what my/image represents (assuming you replaced it with something local).
If you post your docker-compose.yml, it would help determine what the correct docker and docker-compose commands should be.
Try logging in to Docker Hub so that Docker Compose knows you want to pull images from there.
From the command line ...
docker login
You will be prompted for a username and password. Once authenticated, compose should pull your images from Docker Hub when running docker-compose up.
Also, you need to run docker-compose up from the same directory where your docker-compose.yml file is. Looking at your docker-compose.yml file on GitHub, it looks like you are missing a few lines: you need to specify the version, and gplcart, db and phpmyadmin should be nested under services.
version: '3'
services:
  gplcart:
    build: .
    links:
      - db
    ports:
      - 8080:80
  db:
    image: mariadb:10.2
    environment:
      MYSQL_DATABASE: test
      MYSQL_ROOT_PASSWORD: test
    ports:
      - 3306:3306
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:4.7
    links:
      - db
    ports:
      - 8181:80
Make sure the docker and docker-compose binaries come from the same package manager. E.g.
$ which -a docker docker-compose
/snap/bin/docker
/snap/bin/docker-compose
In other words, if you've installed Docker from a Snap, the docker-compose binary should already be included (ls /snap/docker/current/bin/). When using the Apt repository, docker-compose can be installed separately, so don't mix binaries between Snap and Apt, and likewise don't mix the docker and docker.io packages on Apt.
The error Can't find docker-compose.yml indicates that you are not currently in the directory with your docker-compose.yml file, or that you have named the file something different. If you have named the file something different, including a different case or extension, you can either rename the file, or run docker-compose -f your_filename.yml up to pass a different file for docker-compose to parse. If you are not in the directory, make sure to cd into that directory before running docker-compose commands.
docker-compose acts based on your local docker-compose.yml file. Pulling a third-party image with docker-compose is usually useful when, instead of executing separate docker commands (to pull an image, deploy your app, and so on), you want to define your architecture in a more structured way, like:
My docker-compose.yml file:
version: '3'
services:
  containerA:
    image: gplcart/core
  containerB:
    build: .
  # go on defining the rest ...
and deploying with:
docker-compose build && docker-compose up -d
Here is the simplest working example of a docker-compose.yml file:
version: '3'
services:
  hello-world:
    image: hello-world
Which should produce the following results:
$ docker-compose up
Creating network "docker_default" with the default driver
Creating docker_hello-world_1 ... done
Attaching to docker_hello-world_1
hello-world_1 |
hello-world_1 | Hello from Docker!
hello-world_1 | This message shows that your installation appears to be working correctly.
hello-world_1 |
hello-world_1 | To generate this message, Docker took the following steps:
hello-world_1 | 1. The Docker client contacted the Docker daemon.
hello-world_1 | 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
hello-world_1 | (amd64)
hello-world_1 | 3. The Docker daemon created a new container from that image which runs the
hello-world_1 | executable that produces the output you are currently reading.
hello-world_1 | 4. The Docker daemon streamed that output to the Docker client, which sent it
hello-world_1 | to your terminal.
hello-world_1 |
hello-world_1 | To try something more ambitious, you can run an Ubuntu container with:
hello-world_1 | $ docker run -it ubuntu bash
hello-world_1 |
hello-world_1 | Share images, automate workflows, and more with a free Docker ID:
hello-world_1 | https://hub.docker.com/
hello-world_1 |
hello-world_1 | For more examples and ideas, visit:
hello-world_1 | https://docs.docker.com/get-started/
hello-world_1 |
docker_hello-world_1 exited with code 0
In case of problems, the following commands can help track down the issue:
docker-compose config - Validate configuration file.
docker-compose logs - Check the latest logs.
docker info - Check system-wide information.
sudo strace -fe network docker-compose up - Debug the network issues.
journalctl -u docker.service - Check the logs of Docker service.

How do I output or log to docker-compose output?

When I run docker-compose up it logs some information to the terminal, and I would like to know where this information is coming from and how I might log to it. For example, I would like to output each request in a PHP application within the container. I have tried looking online, including the Docker docs, but have had no luck.
The output in docker-compose is the stdout/stderr from the command the container runs. You see this with docker run if you don't detach, and you can get it from docker logs on a container you're detached from, or docker-compose logs for a compose-submitted group of containers.
Edit: evidence of this behavior:
$ cat docker-compose.hello-world.yml
version: '2'
services:
  hello-world:
    image: busybox
    command: "echo hello world"
$ docker-compose -f docker-compose.hello-world.yml up
Creating test_hello-world_1
Attaching to test_hello-world_1
hello-world_1 | hello world
test_hello-world_1 exited with code 0
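For the PHP part of the question specifically, the usual approach is to make whatever serves the requests write its access log to the container's stdout/stderr, since that is all docker-compose captures. The official nginx image, for example, does this by symlinking its log files; the same idea in a Dockerfile would look roughly like this (assuming nginx sits in front of the PHP app, which the question doesn't state):

# Dockerfile fragment: redirect nginx logs to the container's output streams
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
 && ln -sf /dev/stderr /var/log/nginx/error.log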
