I can successfully bring up a CosmosDb Emulator instance within docker-compose, but the data I am trying to seed has more than 25 static containers, which is more than the default emulator allows. Per https://learn.microsoft.com/en-us/azure/cosmos-db/emulator-command-line-parameters#set-partitioncount you can set this partition count higher with a parameter, but I am unable to find a proper entrypoint into the compose that accepts that parameter.
I have found nothing in my searches that affords any insight into this as most people have either not used compose or not even used Docker for their Cosmos Emulator instance. Any insight would be appreciated.
Here is my docker-compose.yml for CosmosDb:
services:
  cosmosdb:
    container_name: "azurecosmosemulator"
    hostname: "azurecosmosemulator"
    image: 'mcr.microsoft.com/cosmosdb/windows/azure-cosmos-emulator'
    platform: windows
    tty: true
    mem_limit: 2GB
    ports:
      - '8081:8081'
      - '8900:8900'
      - '8901:8901'
      - '8902:8902'
      - '10250:10250'
      - '10251:10251'
      - '10252:10252'
      - '10253:10253'
      - '10254:10254'
      - '10255:10255'
      - '10256:10256'
      - '10350:10350'
    networks:
      default:
        ipv4_address: 172.16.238.246
    volumes:
      - '${hostDirectory}:C:\CosmosDB.Emulator\bind-mount'
I have attempted to add a command in there for starting the container, but it does not accept any arguments I have tried.
My answer for this was a workaround. Ultimately, running Windows and Linux containers side by side was a sizeable pain. Recently, Microsoft put out a Linux container version of the emulator, which allowed me to provide an environment variable for the partition count and run the process far more efficiently.
Reference here: https://learn.microsoft.com/en-us/azure/cosmos-db/linux-emulator?tabs=ssl-netstd21
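For completeness, a minimal sketch of the Linux-emulator service with the partition count raised via an environment variable. The image tag and the value of 50 are illustrative; the variable names are the ones documented for the Linux emulator:

```yaml
services:
  cosmosdb:
    image: mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator
    environment:
      # Raise the limit above the default of 25 partitions/containers
      - AZURE_COSMOS_EMULATOR_PARTITION_COUNT=50
      - AZURE_COSMOS_EMULATOR_ENABLE_DATA_PERSISTENCE=true
    ports:
      - '8081:8081'
      - '10251-10254:10251-10254'
```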
Related
My server is having some weird bug I cannot find, so I just deleted all Docker images and downloaded them again. Weirdly, the same bug now also appears in the updated version of the server. My hunch is that Docker does not download the exact same images but rather some updated versions, which cause this bug.
Question is, how do I force Docker to use the exact same versions as before?
Looking at my docker-compose.yml I can see that rabbitmq and mongo have different "created" dates, although their version number is specified in the docker-compose file:
services:
  messageq:
    image: rabbitmq:3
    container_name: annotator_message_q
    environment:
      - RABBITMQ_DEFAULT_USER=user
      - RABBITMQ_DEFAULT_PASS=password
    networks:
      - cocoannotator
  database:
    image: mongo:4.0
    container_name: annotator_mongodb
    restart: always
    environment:
      - MONGO_DATA_DIR=/data/db
      - MONGO_LOG_DIR=/dev/null
    volumes:
      - "mongodb_data:/data/db"
    command: "mongod --smallfiles --logpath=/dev/null"
Is the specification rabbitmq:3 and mongo:4.0 not specific enough?
Is the specification rabbitmq:3 and mongo:4.0 not specific enough?
It is not. Tags are mutable in Docker Hub and in other Docker registries by default. This means that you may have an unlimited number of actual images, all registered as rabbitmq:3.
The foolproof way to pin a specific version is to use sha256 digests. This is the only recommended way for live systems. That is, instead of rabbitmq:3, use
rabbitmq:3@sha256:fddabeb47970c60912b70eba079aae96ae242fe3a12da3f086a1571e5e8c921d
Unfortunately for your case, if you have already deleted all your images, you may not be able to recover the exact version. If you still have them somewhere, then do something like docker images | grep rabbitmq and then docker image inspect on the matching images to find their sha256 digests.
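If the images are still present, the digest lookup can be sketched like this (image names taken from the compose file above):

```shell
# Show local images together with the digest they were pulled by
docker images --digests | grep rabbitmq

# Or extract the digest from the image metadata directly
docker image inspect rabbitmq:3 --format '{{index .RepoDigests 0}}'
```

The digest then replaces the tag in the compose file, e.g. image: rabbitmq:3@sha256:&lt;digest&gt;.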
I want to run a web app and a DB using Docker. Is there any way to connect two Docker containers (a web app container on one machine and a DB container on another machine) using a docker-compose file, without swarm mode?
I mean two separate servers.
This is my Mongodb docker-compose file
version: '2'
services:
  mongodb_container:
    image: mongo:latest
    restart: unless-stopped
    ports:
      - 27017:27017
    volumes:
      - mongodb_data_container:/data/db
Here is my demowebapp docker-compose file
version: '2'
services:
  demowebapp:
    image: demoapp:latest
    restart: unless-stopped
    volumes:
      - ./uploads:/app/uploads
    environment:
      - PORT=3000
      - ROOT_URL=http://localhost
      - MONGO_URL=mongodb://35.168.21.133/demodb
    ports:
      - 3000:3000
Can anyone suggest how to do this?
Using a single docker-compose.yml with version: 2, there is no way to deploy two services on two different machines. That's what version: 3, a stack.yml, and swarm mode are for.
You can, however, deploy to two different machines using two version 2 docker-compose.yml files, but you will have to connect them using hostnames/IPs rather than the service names from the compose files.
You shouldn't need to change anything in the sample files you show: you have to connect to the other host's IP address (or DNS name) and the published ports:.
Once you're on a different machine (or in a different VM) none of the details around Docker are visible any more. From the point of view of the system running the Web application, the first system is running MongoDB on port 27017; it might be running on bare metal, or in a container, or port-forwarded from a VM, or using something like HAProxy to pass through from another system; there's literally no way to tell.
The configuration you have to connect to the first server's IP address will work. I'd set up a DNS system if you don't already have one (BIND, AWS Route 53, ...) to avoid needing to hard-code the IP address. You also might look at a service-discovery system (I have had good luck with Hashicorp's Consul in the past) which can send you to "the host system running MongoDB" without needing to know which one that is.
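Concretely, only the connection string on the web-app side needs to point at the other machine. The hostname below is a placeholder for whatever DNS name (or IP) resolves to the MongoDB host:

```yaml
services:
  demowebapp:
    image: demoapp:latest
    environment:
      # 27017 is the port published by the MongoDB compose file on the other host
      - MONGO_URL=mongodb://mongo.example.internal:27017/demodb
```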
I am trying to deploy a Docker Registry with custom storage location. The container runs well but I see no file whatsoever at the specified location. Here is my docker-compose.yaml:
version: "3"
services:
  registry:
    image: registry:2.7.1
    deploy:
      replicas: 1
      restart_policy:
        condition: always
    ports:
      - "85:5000"
    volumes:
      - "D:/Personal/Docker/Registry/data:/var/lib/registry"
For volumes, I have tried:
"data:/var/lib/registry"
./data:/var/lib/registry
"D:/Personal/Docker/Registry/data:/var/lib/registry"
The YAML file lives at D:\Personal\Docker\Registry, and docker-compose up is run from there. I tried to push and pull an image to localhost:85, and everything works well, so it must be storing the data somewhere.
Please tell me where I went wrong.
I solved it, but for my very specific case and with a different image, so I will just post it here in case someone like me needs it. This question still needs an answer for the official Docker image.
I have just realized the image is Linux-only, and it turns out I couldn't run it on Windows Server, so I switched to the stefanscherer/registry-windows image. I changed the volumes declarations to:
volumes:
  - ./data:c:\registry
  - ./certs:c:\certs
Both the storage and the certs work correctly. I am not sure how to fix it on Linux, though, as I have never used Linux before.
Hi, I am finding it very confusing how I can create a Docker file that would run a RabbitMQ container, where I can expose the port so I can navigate to the management console via localhost and a port number.
I see someone has provided this dockerfile example, but I am unsure how to run it:
version: "3"
services:
  rabbitmq:
    image: "rabbitmq:3-management"
    ports:
      - "5672:5672"
      - "15672:15672"
    volumes:
      - "rabbitmq_data:/data"
volumes:
  rabbitmq_data:
I have got Rabbit working locally fine, but everyone tells me Docker is the future; at this rate I don't get it.
Does the above look like a valid way to run a rabbitmq container? where can I find a full understandable example?
Do I need a docker file or am I misunderstanding it?
How can I specify the port? In the example above, what are the first numbers in 5672:5672 and what are the last ones?
How can I be sure that when I run the container again, say after a machine restart, I get the same container?
Many thanks
Andrew
Docker-compose
What you posted is not a Dockerfile. It is a docker-compose file.
To run that, you need to
1) Create a file called docker-compose.yml and paste the following inside:
version: "3"
services:
  rabbitmq:
    image: "rabbitmq:3-management"
    ports:
      - "5672:5672"
      - "15672:15672"
    volumes:
      - "rabbitmq_data:/data"
volumes:
  rabbitmq_data:
2) Download docker-compose (https://docs.docker.com/compose/install/)
3) (Re-)start Docker.
4) On a console run:
cd <location of docker-compose.yml>
docker-compose up
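Once it is up, the broker can be checked with standard compose commands (the service name comes from the file above):

```shell
docker-compose up -d              # or: start in the background instead
docker-compose ps                 # the rabbitmq service should show "Up"
docker-compose logs -f rabbitmq   # follow the broker's startup log
```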
Do I need a docker file or am I misunderstanding it?
You have a docker-compose file. rabbitmq:3-management is the Docker image built using the RabbitMQ Dockerfile (which you don't need). The image will be downloaded the first time you run docker-compose up.
How can I specify the port? In the example above what are the first numbers 5672:5672 and what are the last ones?
"5672:5672" specifies the port of the queue.
"15672:15672" specifies the port of the management plugin.
The numbers on the left-hand side are the ports you can access from outside the container. So, if you want to work with different ports, change the ones on the left. The ones on the right are the ports used inside the container.
This means you can access the management plugin at http://localhost:15672 (or, more generically, http://&lt;host-ip&gt;:&lt;port mapped to 15672&gt;).
You can see more info on the RabbitMQ image on Docker Hub.
How can I be sure that when I rerun the container, say after a machine restart that I get the same container?
I assume you want the same container because you want to persist the data. You can run docker-compose stop, restart your machine, then run docker-compose start; the same container is then used. However, if the container is ever deleted, you lose the data inside it.
That is why you are using volumes. The data collected in your container also gets stored on your host machine. So, if you remove your container and start a new one, the data is still there, because it was stored on the host machine.
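The distinction between stopping and removing can be sketched with the standard compose commands:

```shell
docker-compose stop      # containers stop but are kept; data stays
docker-compose start     # the same containers resume
docker-compose down      # containers are removed; named volumes survive
docker-compose down -v   # named volumes are removed too - data is gone
```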
This is a follow-up to an earlier question I asked on Stack Overflow. I am building a Spring Boot, Spring Cloud based service. I am running it in a container using Docker. I finally got it running on my local machine.
# Use postgres/example user/password credentials
version: '3.2'
services:
  db:
    image: postgres
    ports:
      - 5000:5432
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - type: volume
        source: psql_data
        target: /var/lib/postgresql/data
    networks:
      - app
    restart: always
  config:
    image: kellymarchewa/config_server
    ports:
      - 8888:8888
    networks:
      - app
    volumes:
      - /home/kelly/.ssh:/root/.ssh
    restart: always
  search:
    image: kellymarchewa/search_api
    networks:
      - app
    restart: always
    ports:
      - 8081:8081
    depends_on:
      - db
      - config
      - inventory
  inventory:
    image: kellymarchewa/inventory_api
    depends_on:
      - db
      - config
    # command: ["/home/kelly/workspace/git/wait-for-it/wait-for-it.sh", "config:8888"]
    ports:
      - 8082:8082
    networks:
      - app
    restart: always
volumes:
  psql_data:
networks:
  app:
Earlier, I was having difficulty with the dependency of the clients on the config server (the config server was not fully started by the time the clients tried to access it). However, I resolved this issue using spring-retry. Now, although I can run it using docker-compose up on my local machine, running the same command (using the same compose file) fails on a virtual machine hosted by Google Cloud.
inventory_1 | java.lang.IllegalStateException: Could not locate PropertySource and the fail fast property is set, failing
However, it appears to be querying the appropriate location:
inventory_1 | 2018-02-10 00:23:00.945 INFO 1 --- [ main] c.c.c.ConfigServicePropertySourceLocator : Fetching config from server at: http://config:8888
I am not sure what the issue is, since both are running using the same docker-compose file and the config server itself is starting.
The config server's application.properties:
server.port=8888
management.security.enabled=false
spring.cloud.config.server.git.uri=git@gitlab.com:leisurely-diversion/configuration.git
# spring.cloud.config.server.git.uri=${HOME}/workspace/eclipse/leisurely_diversion_config
Client bootstrap.properties:
spring.application.name=inventory-client
#spring.cloud.config.uri=http://localhost:8888
spring.cloud.config.uri=http://config:8888
management.security.enabled=false
spring.cloud.config.fail-fast=true
spring.cloud.config.retry.max-attempts=10
spring.cloud.config.retry.initial-interval=2000
EDIT:
Upon further examination, it appears the config server is failing to pull the Git repository that stores the application properties. However, I am not sure why, given the following:
I have added SSH keys for GitLab to my VM.
I can pull the repository from my VM.
I am using volumes to reference /home/kelly/.ssh in my docker-compose file. The known_hosts file is included in this directory.
The above (using volumes for my SSH keys) worked fine on my development machine.
Any help would be appreciated.
Eventually, I was able to resolve the issue. While this was actually resolved a couple of days ago, I am posting the general solution in hopes that it may prove useful in the future.
First, I was able to confirm (by using curl to call one of my server's endpoints) that the underlying issue was the config server's inability to pull the Git repo.
Initially, I was a bit perplexed: my SSH keys were set up and I was able to git clone the repo from the VM. However, while looking over the Spring Cloud documentation, I discovered that the known_hosts file must be in ssh-rsa format, whereas the VM's ssh utility was saving entries in a different format (even though both my development machine and VM run Debian 9). To resolve the issue, add the corresponding GitLab (or other host) entry in ssh-rsa format. Checking your /etc/ssh/sshd_config may also be of value.
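A sketch of adding the entry in ssh-rsa format (run on the VM; gitlab.com and the path are examples matching the setup above):

```shell
# Capture GitLab's RSA host key in ssh-rsa format and append it to the
# known_hosts file the container sees via the /home/kelly/.ssh volume mount
ssh-keyscan -t rsa gitlab.com >> /home/kelly/.ssh/known_hosts
```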