I'm working to dockerize my ASP.NET Core identity server web application. I have already tested running the application from a Docker container, and it works:
$ docker run -d ^
-e "KeyFilePath"="/app/certs/authFile.cer" ^
-p 5000:5000 ^
-v e:/certs:/app/certs/ ^
--name identity ^
identity:0.1-docker
Now I'm wiring this up in my docker-compose.yml, but my application is unable to find the file at /app/certs/. Here is the identity server section of my docker-compose file:
services:
  identity:
    image: identity.api:${PLATFORM:-linux}-${TAG:-latest}
    build:
      context: .
      dockerfile: Identity/Dockerfile
    depends_on:
      - sqlserver
    ports:
      - 5000:5000
    volumes:
      - "e:/certs:/app/certs"
    environment:
      - KeyFilePath="/app/certs/authFile.cer"
I've been troubleshooting this for quite some time now. The volume is mapped smoothly when running with docker run and -v, but I'm unable to achieve the same with docker-compose up identity.
With docker exec -it <container-id> sh, I can see that the file exists in the required directory. Still, my app is unable to access it.
In code, I simply check for the file using File.Exists(keyFilePath), which
returns true when the container is started via the docker run command.
returns false when started using docker-compose up identity. The log for the check says File.Exists ["/app/certs/authFile.cer"] => False, even though the file is present.
Any idea what the root cause of this weird issue could be?
I'm running
ASP.NET Core 2.1 Web Application
Docker Desktop version: Docker version 20.10.2, build 2291f61
Docker Compose version: docker-compose version 1.27.4, build 40524192
Thanks for any help in advance
Have you tried changing
from
environment:
  - KeyFilePath="/app/certs/authFile.cer"
to
environment:
  - KeyFilePath=/app/certs/authFile.cer
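The likely reason this matters (an assumption based on how Compose parses the list form of environment) is that with the quoted variant the double quotes become part of the value, so the application ends up testing a path that literally begins and ends with a quote character. A quick way to check what the running container actually received:

# prints the value exactly as the app sees it inside the container
docker exec <container-id> printenv KeyFilePath

If the printed value includes literal quotes, File.Exists is being handed a path that doesn't exist, which matches the False result in the log.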
Related
I am creating an Airflow Docker container using the image "puckel/docker-airflow".
I have created a docker-compose file that uses this image and mounts two volumes, one for DAGs and the other for a wheel package.
When I start the container and go to the Airflow UI, it throws a "No module named 'custPkg'" error. So I exec into the container using the command
docker exec -ti <container_id> bash
and then installed it using pip. I can now use the package if I open a Python shell and run
from custPkg.abc import Base
but it still doesn't work in Airflow.
The Airflow webserver, which even refreshes itself after some time, is still showing the same error in the terminal where I started the container using
docker-compose up
My docker-compose file looks like this:
version: "3"
services:
webserver:
image: puckel/docker-airflow:latest
container_name: test_container
volumes:
- /home/ubuntu/dags1/:/usr/local/airflow/dags
- /home/ubuntu/dist/:/usr/local/airflow/dist
ports:
- 8080:8080
restart: always
--------------------NEW UPDATE----------------------------
I just restarted the container and it is working now, but I don't want to go into the container and run the exec command manually. Can I somehow do this using the docker-compose file only?
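One possible way to avoid the manual exec step (untested; it assumes the wheel file sits in the /home/ubuntu/dist/ directory mounted above and that the image's entrypoint will still run an overridden command correctly) is to override the service's command so the package is installed right before the webserver starts:

webserver:
  image: puckel/docker-airflow:latest
  container_name: test_container
  volumes:
    - /home/ubuntu/dags1/:/usr/local/airflow/dags
    - /home/ubuntu/dist/:/usr/local/airflow/dist
  ports:
    - 8080:8080
  restart: always
  # Hypothetical override: install the mounted wheel, then start the webserver.
  # Adjust the wheel path if it differs from the dist mount above.
  command: bash -c "pip install --user /usr/local/airflow/dist/*.whl && airflow webserver"

This is only a sketch; depending on how the puckel entrypoint handles a custom command, any database initialisation it normally performs may need to be handled separately.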
I am trying to use a Docker volume/bind mount so that I don't need to rebuild my project after every small change. I don't get any error, but changes to the local files are not visible in the container, so I still have to rebuild the project to get a new filesystem snapshot.
The following solution seemed to work for some people, so I have tried restarting Docker and resetting credentials under Docker Desktop --> Settings --> Shared Drives.
Here is my docker-compose.yml file:
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    volumes:
      - /app/node_modules
      - .:/app
I have tried it through the Docker CLI too, but the problem persists:
docker build -f Dockerfile.dev .
docker run -p 3000:3000 -v /app/node_modules -v ${pwd}:/app image-id
Windows does copy the files in the current directory to the container, but they are not kept in sync.
I am using Windows 10 PowerShell and Docker version 18.09.2.
UPDATE:
I have checked the container contents using the command
docker exec -t -i container-id sh
and printed the file contents using the command
cat filename
From this it is clear that the files the container references have been updated, but I still don't understand why I have to restart the container to see the changes in the browser.
Shouldn't they be apparent after just refreshing the tab?
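If this is a webpack or create-react-app style dev server (an assumption; the post doesn't say which framework is behind port 3000), a commonly suggested explanation is that file-change events often don't propagate through Windows bind mounts, so the watcher inside the container never notices the update; switching the watcher to polling can help. A hedged sketch of the compose change:

web:
  build:
    context: .
    dockerfile: Dockerfile.dev
  ports:
    - "3000:3000"
  environment:
    # Assumed workaround: poll for file changes instead of relying on
    # filesystem events, which frequently don't cross Windows bind mounts.
    - CHOKIDAR_USEPOLLING=true
  volumes:
    - /app/node_modules
    - .:/app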
I am getting the error HttpException: -404 Failed to connect to remote server when running a jar file inside Docker with the command docker exec -it Test_docker java -jar TestDocker.jar.
Note: I originally set this up on Windows, where my Docker Machine IP is 192.168.99.100 and the docker exec command runs successfully. On Windows I access the SPARQL endpoint at http://192.168.99.100:8890/sparql and it works perfectly. But when I use the same setup on a Mac, it gives me the error mentioned above. I have also tried changing the SPARQL endpoint in my code to http://localhost:8890/sparql, but that doesn't work either: the URL works fine in the Chrome browser on the Mac, yet executing the jar through the command still gives me the error.
Here is my docker-compose file:
version: "3"
services:
jardemo_test:
container_name: Test_docker
image: "java:latest"
working_dir: /usr/src/myapp
volumes:
- /docker/test:/usr/src/myapp
tty: true
depends_on:
- virtuoso
virtuoso:
container_name: virtuoso_docker
image: openlink/virtuoso_opensource
ports:
- "8890:8890"
- "1111:1111"
environment:
DB_USER_NAME: dba
DBA_PASSWORD: dba
DEFAULT_GRAPH: http://localhost:8890/test
volumes:
- /docker/virtuoso-test/:/data
Note: I have tried setting the DEFAULT_GRAPH environment variable in the docker-compose file to each of the addresses listed below, but none of them help; I still get the same error.
DEFAULT_GRAPH: http://localhost:8890/test
DEFAULT_GRAPH: http://127.0.0.1:8890/test
DEFAULT_GRAPH: http://0.0.0.0:8890/test
Below is my docker-compose ps output:
$ docker-compose ps
Name Command State Ports
---------------------------------------------------------------------------------------------------------
Test_docker /bin/bash Up
virtuoso_docker /opt/virtuoso-opensource/b ... Up 0.0.0.0:1111->1111/tcp, 0.0.0.0:8890->8890/tcp
Below is the code I am using:
QueryExecution qexec = QueryExecutionFactory.sparqlService("http://localhost:8890/sparql", queryString);
ResultSet results1 = qexec.execSelect();
Info: Once the containers are running, I can open http://localhost:8890/sparql successfully in a browser on the Mac.
Can anybody please help me solve this issue? Suggestions and thoughts are also welcome. Thanks in advance for your help and time.
As my colleague suggested, the problem is that code running inside the container sees the container itself as localhost. The IP address 192.168.99.100 is also not known because the Mac doesn't have it. For container-to-container connections, Docker uses its own network, and the docker-compose service names are used as hostnames. So instead of http://localhost:8890/sparql, you should use http://virtuoso:8890/sparql, as virtuoso is the service name.
I tried this and it solved my problem.
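As a quick sanity check before changing the code (an assumption here is that the java:latest image is Debian-based and therefore ships getent as part of glibc), you can confirm that the service name resolves from inside the application container:

# should print the address Compose assigned to the virtuoso service on the shared network
docker exec -it Test_docker getent hosts virtuoso

If that resolves, pointing the sparqlService call at http://virtuoso:8890/sparql should work as described above.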
I'm developing a server and its client simultaneously and I'm designing them in Docker containers. I'm using Docker Compose to link them up and it works just fine for production but I can't figure out how to make it work with a development workflow in which I've got a shell running for each one.
My docker-compose-devel.yml:
server:
  image: node:0.10
client:
  image: node:0.10
  links:
    - server
I can do docker-compose up client or even docker-compose run client but what I want is a shell running for both server and client so I can make rapid changes to both as I develop iteratively.
I want to be able to do docker-compose run server bash in one window and docker-compose run --no-deps client bash in another window. The problem with this is that no address for the server is added to /etc/hosts on the client because I'm using docker-compose run instead of up.
The only solution I can figure out is to use docker run and give up on Docker Compose for development. Is there a better way?
Here's a solution I came up with that's hackish; please let me know if you can do better.
docker-compose-devel.yml:
server:
  image: node:0.10
  command: sleep infinity
client:
  image: node:0.10
  links:
    - server
In window 1:
docker-compose --file docker-compose-dev.yml up -d server
docker exec --interactive --tty $(docker-compose --file docker-compose-dev.yml ps -q server) bash
In window 2:
docker-compose --file docker-compose-dev.yml run client bash
I guess your main problem is about restarting the application when there are changes in the code.
Personally, I launch my applications in development containers using forever.
forever -w -o log/out.log -e log/err.log app.js
The -w option restarts the server when there is a change in the code.
I use a .foreverignore file to exclude the changes on some files:
**/.tmp/**
**/views/**
**/assets/**
**/log/**
If needed, I can also launch a shell in a running container:
docker exec -it my-container-name bash
This way, your two applications could restart independently without the need to launch the commands yourself. And you have the possibility to open a shell to do whatever you want.
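For reference, a minimal sketch of how this could be wired into the compose file (the working directory, the mounted path, and the on-the-fly forever install are assumptions; none of them come from the question):

server:
  image: node:0.10
  working_dir: /src
  volumes:
    - ./src:/src
  # Assumed setup: install forever at start-up, then watch the mounted code
  # and restart app.js whenever it changes.
  command: bash -c "npm install -g forever && forever -w -o log/out.log -e log/err.log app.js"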
Edit: a new proposal, considering that you need two interactive shells and not simply the ability to relaunch the apps on code changes.
Having two distinct applications, you could have a docker-compose configuration for each one.
The docker-compose.yml for the "server" app could contain this kind of information (I added different kinds of configuration for the example):
server:
  image: node:0.10
  links:
    - db
  ports:
    - "8080:80"
  volumes:
    - ./src:/src
db:
  image: postgres
  environment:
    POSTGRES_USER: dev
    POSTGRES_PASSWORD: dev
The docker-compose.yml from the "client" app could use external_links to be able to connect to the server.
client:
  image: node:0.10
  external_links:
    - project_server_1:server # Use "docker ps" to know the name of the server's container
  ports:
    - "80:80"
  volumes:
    - ./src:/src
Then, use docker-compose run --service-ports service-name bash to launch each configuration with an interactive shell.
Alternatively, the extra_hosts key may also do the trick by calling the server app through a port exposed on the host machine (sketched below).
With this solution, each docker-compose.yml file could be committed to the repository of the related app.
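For completeness, a hedged sketch of that extra_hosts alternative (the host address 192.0.2.10 is a placeholder; substitute the address your host machine actually has on the Docker network, and make sure the server's port is published on the host):

client:
  image: node:0.10
  extra_hosts:
    # Hypothetical entry: make the hostname "server" point at the host machine,
    # where the server app's port is published.
    - "server:192.0.2.10"
  ports:
    - "80:80"
  volumes:
    - ./src:/src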
First thing to mention: for a development environment you want to use volumes in docker-compose to mount your app into the container when it starts (at runtime). Sorry if you're already doing this and I'm stating the obvious, but it's not clear from your docker-compose.yml.
To answer your specific question: start your containers normally, then run docker-compose ps and you'll see the names of your containers, for example 'web_server' and 'web_client' (where web is the directory containing your docker-compose.yml file, or the name of the project).
When you got name of the container you want to connect to, you can run this command to run bash exactly in the container that's running your server:
docker exec -it web_server bash.
If you want to learn more about setting up a development environment for a reasonably complex app, check out this article on development with docker-compose.
What I'm trying to do
I want to run a yesod web application in one docker container, linked to a postgres database in another docker container.
What I've tried
I have the following file hierarchy:
/
  api/
    Dockerfile
  database/
    Dockerfile
  docker-compose.yml
The docker-compose.yml looks like this:
database:
  build: database
api:
  build: api
  command: .cabal/bin/yesod devel # dev setting
  environment:
    - HOST=0.0.0.0
    - PGHOST=database
    - PGPORT=5432
    - PGUSER=postgres
    - PGPASS
    - PGDATABASE=postgres
  links:
    - database
  volumes:
    - api:/home/haskell/
  ports:
    - "3000:3000"
Running sudo docker-compose up either fails to start the api container at all or, just as often, fails with the following error:
api_1 | Yesod devel server. Press ENTER to quit
api_1 | yesod: <stdin>: hGetLine: end of file
personal_api_1 exited with code 1
If, however, I run sudo docker-compose up database and then start up the api container without using Compose, instead using
sudo docker run -p 3000:3000 -itv /home/me/projects/personal/api/:/home/haskell --link personal_database_1:database personal_api /bin/bash
I can export the environment variables being set up in the docker-compose.yml file then manually run yesod devel and visit my site successfully on localhost.
Finally, I obtain a third different behaviour if I run sudo docker-compose run api on its own. This seems to start successfully but I can't access the page in my browser. By running sudo docker-compose run api /bin/bash I've been able to explore this container and I can confirm the environment variables being set in docker-compose.yml are all set correctly.
Desired behaviour
I would like to get the result I achieve from running the database in the background then manually setting the environment in the api container's shell simply by running sudo docker-compose up.
Question
Clearly the three different approaches I'm trying do slightly different things. But from my understanding of docker and docker-compose I would expect them to be essentially equivalent. Please could someone explain how and why they differ and, if possible, how I might achieve my desired result?
The error message suggests the API container is expecting input from the command line, which requires a TTY to be present in your container.
In your "manual" start, you tell docker to create a TTY in the container via the -t flag (-itv is shorthand for -i -t -v), so the API container runs successfully.
To achieve the same in docker-compose, you'll have to add a tty key to the API service in your docker-compose.yml and set it to true:
database:
  build: database
api:
  build: api
  tty: true # <--- enable TTY for this service
  command: .cabal/bin/yesod devel # dev setting
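If the TTY alone doesn't fix it, note that yesod devel also reads from stdin ("Press ENTER to quit"), and the hGetLine: end of file error points at stdin being closed. The -i part of the manual -itv run is what keeps stdin open; its Compose equivalent is stdin_open, so a combined sketch (the stdin_open line is an addition beyond the original answer) would be:

api:
  build: api
  tty: true         # equivalent of -t: allocate a pseudo-TTY
  stdin_open: true  # equivalent of -i: keep STDIN open so hGetLine doesn't hit EOF
  command: .cabal/bin/yesod devel # dev setting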