I am running Couchbase using Docker [on Windows]. When I start it, it asks me to do the cluster setup. I want to customize the docker-compose file to do the setup below:
1. set a cluster name
2. set up an admin account
3. create an empty bucket.
My docker-compose file:
version: '3'
services:
  couchbase:
    image: couchbase/server
    ports:
      - 11210:11210
      - "8091-8094:8091-8094"
    volumes:
      - /opt/couchbase/data:/opt/couchbase/var
    env_file: .env
You can read about a solution here: Using Couchbase with Docker Compose. Behind the scenes it uses REST API calls to initialize the database and create buckets.
His Dockerfile includes a configuration script to do this:
FROM couchbase/server:enterprise-4.5.0-DP1
COPY configure-node.sh /opt/couchbase
CMD ["/opt/couchbase/configure-node.sh"]
You can find the contents of the script here on GitHub as a good starting point.
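For the three setup steps you listed, a minimal configure script can call the REST API directly. The sketch below is only a starting point: the cluster name, admin credentials, bucket name, and memory quotas are placeholder values, and the exact endpoints can vary between Couchbase versions.
#!/bin/bash
set -e

# start the server in the background, then wait for the REST API to come up
/entrypoint.sh couchbase-server &
until curl -s http://127.0.0.1:8091/pools > /dev/null; do
  sleep 1
done

# 1. set the cluster name and memory quota
curl -s -X POST http://127.0.0.1:8091/pools/default \
  -d clusterName=my-cluster -d memoryQuota=512

# 2. set up the admin account (placeholder credentials)
curl -s -X POST http://127.0.0.1:8091/settings/web \
  -d port=8091 -d username=admin -d password=password

# 3. create an empty bucket
curl -s -u admin:password -X POST http://127.0.0.1:8091/pools/default/buckets \
  -d name=mybucket -d bucketType=couchbase -d ramQuotaMB=256

# keep the container in the foreground
wait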
Related
I'm trying to write a docker-compose file that will build and push a versioned (1.0, 1.1, ...) and latest build of my image to my local v2 docker registry. However, when I run docker-compose build I get the following error:
ERROR: Couldn't connect to Docker daemon - you might need to run `docker-machine start default`.
I found a lot of people complaining about this error for many different reasons. In my case it has nothing to do with permissions or whether or not the docker service is running; I narrowed it down to my image name containing a URL (the URL of my local registry). I know that because if I name my image normally (like '/app:latest'), then the command runs fine. So how can I have a URL in the image name?
Here is what I'm trying to do (docker-compose.yaml):
version: "3.8"
x-marvin-backend: &default-marvin-backend
container_name: marvin_backend
build: ./marvin-api
image: "http://my_registry_url:5000/marvin/backend:latest"
ports:
- "3000:3000"
networks:
- backend
x-marvin-frontend: &default-marvin-frontend
container_name: marvin_frontend
image: http://my_registry_url:5000/marvin/frontend:latest
build:
context: ./marvin-front
args:
- REACT_APP_SERVICES_HOST=http://marvin_backend:3000/
ports:
- "80:80"
networks:
- backend
depends_on:
- backend
services:
backend: *default-marvin-backend
backend_versioned:
<< : *default-marvin-backend
image: http://my_registry_url:5000/marvin/backend:1.0
frontend: *default-marvin-frontend
frontend_versioned:
<< : *default-marvin-frontend
image: http://my_registry_url:5000/marvin/frontend:1.0
networks:
backend:
I'm new to docker in general. My main goal here is to have a simple, preferably one-command (e.g. docker-compose build) way to build and tag both my front-end and back-end images so that I can just execute docker-compose push to push those newly created images to my registry running on AWS. With that, I also want to be able to override the latest version of those images in the registry while also adding a versioned image for backup purposes, in case I want to revisit any of those versions in the future.
Then, on the AWS EC2 machine, I have another docker-compose.yaml file that just fetches the latest versions of both images and runs their containers.
So to summarize I would develop the application on my local machine, then add the new version manually to the versioned services in the local docker-compose.yaml file, then run docker-compose build followed by docker-compose push; then ssh into my AWS machine and run docker-compose up to fetch the latest and newly updated images and run them.
This could later evolve into a CI/CD pipeline, but right now I'm taking baby steps and trying to get my image name to have a URL in it.
Thank you.
Edit
I tried using a .env file with REGISTRY=http://my_registry_url:5000/marvin and then using image: "${REGISTRY}/frontend:latest" or image: "$${REGISTRY}/frontend:latest", but that also didn't work.
Just remove the http:// part from your images.
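Image names are not URLs; the registry host goes first, without a scheme. So, for example, your image references would become:
image: my_registry_url:5000/marvin/backend:latest
image: my_registry_url:5000/marvin/backend:1.0
The host:port prefix is enough; docker-compose build and docker-compose push infer the target registry from it.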
So I need rolling updates with docker on my single-node server. Until now I was using docker-compose, but unfortunately I can't achieve what I need with it. Reading the web, docker swarm seems to be the way to go.
I have found how to run an app with multiple replicas on a single node using swarm:
docker service create --replicas 3 --name myapp-staging myapp_app:latest
myapp:latest being built from my docker-compose.yml:
version: "3.6"
services:
postgres:
env_file:
- ".env"
image: "postgres:11.0-alpine"
volumes:
- "/var/run/postgresql:/var/run/postgresql"
app:
build: "."
working_dir: /app
depends_on:
- "postgres"
env_file:
- ".env"
command: iex -S mix phx.server
volumes:
- ".:/app"
volumes:
postgres: {}
static:
driver_opts:
device: "tmpfs"
type: "tmpfs"
Unfortunately, this doesn't work, since it doesn't pick up the config from the docker-compose.yml file: the .env file, the command entry, etc.
Searching deeper, I found that using
docker stack deploy -c docker-compose.yml <name>
will create a service using my docker-compose.yml config.
But then I get the following error message:
failed to update service myapp-staging_postgres: Error response from daemon: rpc error: code = InvalidArgument desc = ContainerSpec: image reference must be provided
So it seems I have to use a registry and push my image there so that it works. I understand this need in the case of a multi-node architecture, but in my case I don't want to do that. (Carrying images around is heavy, I don't want my image to be public, and after all, the image is already here, so why should I move it to the internet?)
How can I set up my docker service using a local image and the config written in docker-compose.yml?
I could probably manage my way using docker service create options, but that wouldn't use my docker-compose.yml file, so it would be neither DRY nor maintainable, which is important to me.
docker-compose is a great tool for developers, it is sad that we have to dive into DevOps tools to achieve such common features as rolling updates. This whole swarm architecture seems too complicated for my needs at this stage.
You don't have to use registries in your single-node setup. You can build your "app" image on your node from a local Dockerfile using this command (cd to the directory of your Dockerfile first):
docker build . -t my-app:latest
This will create a local docker image on your node. This image is only visible to your single node, which is beneficial in your use case, but I wouldn't recommend this in a production setup.
You can now edit the compose file to be:
version: "3.6"
services:
postgres:
env_file:
- ".env"
image: "postgres:11.0-alpine"
volumes:
- "/var/run/postgresql:/var/run/postgresql"
app:
image: "my-app:latest"
depends_on:
- "postgres"
env_file:
- ".env"
volumes:
- ".:/app"
volumes:
postgres: {}
static:
driver_opts:
device: "tmpfs"
type: "tmpfs"
And now you can run your stack from this node; it will use your local app image, and you benefit from the image-based features (updates, rollbacks, etc.).
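For example (the stack name here is a placeholder):
docker stack deploy -c docker-compose.yml myapp-staging
docker service ls # check that the services and replicas came up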
I do have a side note on your stack file, though: you are using the same env file for both services. Please mind that swarm will look for the ".env" file relative to (next to) the ".yml" file, so if this is not intentional, please revise the location of your env files.
Also, as a side note, this solution is only feasible on a single-node cluster; if you scale your cluster, you will have to use a registry. Registries don't have to be public: you can deploy a private registry on your cluster so that only your nodes can access it, or you can make it public. The accessibility of your registry is your choice.
Hope this will help with your issue.
Instead of a docker image, you can use a Dockerfile directly there. Please check the example below.
version: "3.7"
services:
webapp:
build: ./dir
The error occurs because Compose is unable to find the image on the Docker public registry.
The above method should solve your issue.
Basically, you need to use docker images in order to make rolling updates work in docker swarm. I would also like to clarify that you can host a private registry and use it instead of the public one.
Detailed Explanation:
When you try a rolling update, docker swarm checks whether there is a change in the image used by the service; if so, it schedules a service update based on the update criteria that have been set up and carries it out.
Let's say there is no change to the image; then what happens? Docker will simply not apply the rolling update. Technically, you can specify the --force flag to force-update the service, but it will just redeploy the service.
Hence, create a local registry, store the images in it, and use those image names in the docker-compose file used for the swarm. You can secure the registry using SSL, user credentials, or firewall restrictions; that is up to you. Refer to this for more details on deploying a docker registry server.
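As a sketch (the registry port, image name, tag, and service name below are placeholders, and this registry has no TLS or auth configured):
# run a local registry on the node
docker run -d -p 5000:5000 --restart=always --name registry registry:2

# tag and push the locally built image
docker tag my-app:latest 127.0.0.1:5000/my-app:1.1
docker push 127.0.0.1:5000/my-app:1.1

# trigger a rolling update to the new tag
docker service update --image 127.0.0.1:5000/my-app:1.1 mystack_app
The last command triggers the rolling update precisely because the image reference changed.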
Corrections in your compose file:
Since docker stack uses the image to create the service, you need to specify image: "<image name>" in the app service, as is done in the postgres service. As you have specified a build instruction, an image name is mandatory so that docker-compose knows what to name the built image. Reference.
A registry server is needed if you are going to deploy the application on multiple servers. Since you have mentioned it's a single-node deployment, just having the image pulled/built on the server is enough. But the private registry approach is the recommended one.
My recommendation is not to club all the services into a single docker-compose file, because when you deploy/destroy using a docker-compose file, all the services are taken down together. This is a kind of tight coupling. Of course, I understand that all the other services depend on the DB; in such cases, make sure the DB service is brought up before the other services.
Instead of specifying the env file, make it part of the Dockerfile instructions: either copy the env file and source it in the entrypoint, or use ENV instructions to define the variables.
Also, just a note: a stack is simply a way to group services in swarm.
So your compose file should be:
version: "3.6"
services:
postgres:
env_file:
- ".env"
image: "postgres:11.0-alpine"
volumes:
- "/var/run/postgresql:/var/run/postgresql"
app:
build: "."
image: "image-name:tag" #the image built will be tagged as image-name:tag
working_dir: /app # note here I've removed .env file
depends_on:
- "postgres"
command: iex -S mix phx.server
volumes:
- ".:/app"
volumes:
postgres: {}
static:
driver_opts:
device: "tmpfs"
type: "tmpfs"
Dockerfile:
FROM baseimage:tag
COPY .env /somelocation
# your further instructions go here
RUN ... && \
    ... && \
    ... && chmod a+x /somelocation/.env
# '.' is the POSIX equivalent of 'source'; the shell-form ENTRYPOINT runs under /bin/sh
ENTRYPOINT . /somelocation/.env && ./file-to-run
Alternative Dockerfile:
FROM baseimage:tag
# ARG makes the values available at build time; pass them with --build-arg
# (a plain ENV cannot read variables from the shell where the build runs)
ARG a
ARG b
ARG c
ENV a=$a b=$b c=$c
ENTRYPOINT ./file-to-run
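A hypothetical build invocation for the alternative Dockerfile, with the values taken from your shell:
docker build --build-arg a="$a" --build-arg b="$b" --build-arg c="$c" -t image-name:tag .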
And you may need to run
docker-compose build
docker-compose push (optional; needed to push the image to the registry, in case a registry is used)
docker stack deploy -c docker-compose.yml <stackname>
NOTE:
Even though you can create the services as mentioned above by @M.Hassan, I've explained the ideal, recommended way.
I am new to Docker and have a docker-compose.yml containing many services, and I need to start one particular service. My docker-compose.yml file contains:
version: '2'
services:
  postgres:
    image: ${ARTIFACTORY_URL}/datahub/postgres:${BUILD_NUMBER}
    restart: "no"
    volumes:
      - /etc/passwd:/etc/passwd
    volumes_from:
      - libs
    depends_on:
      - libs
  setup:
    image: ${ARTIFACTORY_URL}/setup:${B_N}
    restart: "no"
    volumes:
      - ${HOME}:/usr/local/
I am able to bring up everything in the docker-compose.yml file using the command:
docker-compose -f docker-compose.yml up -d --no-build
But I need to start only the "setup" service from the docker-compose file.
How can I do this?
It's very easy:
docker compose up <service-name>
In your case:
docker compose -f docker-compose.yml up -d setup
To stop the services, you don't need to specify a service name:
docker compose down
will do.
A little side note: if you are in the directory where the docker-compose.yml file is located, docker-compose will use it implicitly; there's no need to add it as a parameter.
You need to provide it in the following situations:
the file is not in your current directory
the file name is different from the default one, eg. myconfig.yml
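For example (the paths and alternate file name here are illustrative):
cd /path/to/project # the directory containing docker-compose.yml
docker compose up -d setup # the file is picked up implicitly
docker compose -f myconfig.yml up -d setup # needed when the file has a non-default name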
As far as I understand your question, you have multiple services in docker-compose but want to deploy only one.
docker-compose should be used for multi-container Docker applications. From the official docs:
"Compose is a tool for defining and running multi-container Docker applications."
IMHO, you should run your service image separately with the docker run command.
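For instance, reusing the image reference and volume from your compose file (this assumes ARTIFACTORY_URL and B_N are set in your shell):
docker run -d --name setup -v "${HOME}:/usr/local/" "${ARTIFACTORY_URL}/setup:${B_N}"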
PS: If you are asking about recreating only the container whose image has changed among the multiple services in your docker-compose file, then docker-compose handles that for you.
I have Docker commands to create a container and then use that container's name with --volumes-from to run another container, and it works fine:
docker create -v /home/dev/docker/my/config:/home/myuser/4.0/config --name shared-config my/configurator:4.0.0
The above creates a new container named shared-config from the image my/configurator:4.0.0, and when running any other container (say my/oms:4.0.0) I can simply use the volumes from the container named shared-config via --volumes-from:
docker run --volumes-from shared-config -p 8083:8080 -d my/oms:4.0.0
Using --volumes-from, we can reuse the volume multiple times, in whichever container it is required.
Up to here, everything seems fine.
Now I am trying to do the above in docker-compose using file format version "3" and am not able to understand how I will be able to re-use a data volume once it is created, since docker-compose file version 3 has discontinued --volumes-from.
They say -
To share a volume between services, define it using the top-level volumes option and reference it from each service that shares it using the service-level volumes option.
In the above statement they are referring to named volumes; please refer here.
But I just want to mount a host directory as a data volume and re-use that data volume. My question is: how do I reuse this data volume with docker-compose file version "3"?
At its simplest, for each service I want to run through docker-compose I can use the volumes key at the service level:
version: "3"
services:
my-oms:
image: my/oms:4.0.0
ports:
- "8083:8080"
volumes:
- /home/dev/docker/my/config:/home/myuser/4.0/config
But what if I want to use my host's directory (/home/dev/docker/my/config) as a data volume in several services? Should I repeat the volumes key for each service, or is there a better way in docker-compose version "3" to re-use the data volume across services (the way we did with --volumes-from)?
Any pointers or suggestions or something that I missed?
The best option to avoid repeating syntax is to extend your docker-compose.yml using the extends option:
So you can have a common-services.yml that looks like:
version: "3"
services:
generic-vol:
volumes:
- /home/dev/docker/my/config:/home/myuser/4.0/config
And then your docker-compose.yml gets updated to look like:
version: "3"
services:
my-oms:
extends:
file: common-services.yml
service: generic-vol
image: my/oms:4.0.0
ports:
- "8083:8080"
Note that docker stack deploy -c docker-compose.yml may not support all these options; I've encountered issues using variables and multiple docker-compose files in my project. The solution is to use docker-compose to parse the file into something stack deploy can use, with docker-compose config > docker-compose.stack.yml, and then pass that yml file to your stack deploy.
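In other words (the stack name here is a placeholder):
docker-compose config > docker-compose.stack.yml
docker stack deploy -c docker-compose.stack.yml mystack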
A second option is to utilize a feature of the yml syntax itself: it allows anchors and references to those anchors. That syntax looks like:
version: "3"
services:
my-oms:
image: my/oms:4.0.0
ports:
- "8083:8080"
volumes: &common-vol
- /home/dev/docker/my/config:/home/myuser/4.0/config
my-xyz:
image: my/xyz:4.0.0
ports:
- "8888:8080"
volumes: *common-vol
The first &common-vol creates an anchor, and the later *common-vol is a reference to that same piece of yml data.
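On compose file versions 3.4 and later you can also park the anchor under a top-level extension field (the x- prefix), as the marvin compose file in an earlier question does. A sketch, with the same host path:
version: "3.4"
x-common-volumes: &common-vol
  - /home/dev/docker/my/config:/home/myuser/4.0/config
services:
  my-oms:
    image: my/oms:4.0.0
    volumes: *common-vol
  my-xyz:
    image: my/xyz:4.0.0
    volumes: *common-vol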
I use docker-compose to describe the deployment of one of my applications. The application is composed of:
a mongodb database,
a nodejs application,
an nginx front end serving the static files of the nodejs app.
If I scale the nodejs application, I would like nginx to automatically balance across the three instances.
Recently I used the following code snippet:
https://gist.github.com/cmoore4/4659db35ec9432a70bca
This is based on the fact that some environment variables are created on link, and change when new servers are present.
But now, with version 2 of the docker-compose file and docker's new link system, those environment variables don't exist anymore.
How can my nginx now detect the scaling of my application?
version: '2'
services:
  nodejs:
    build:
      context: ./
      dockerfile: Dockerfile.nodejs
    image: docker.shadoware.org/passprotect-server:1.0.0
    expose:
      - 3000
    links:
      - mongodb
    environment:
      - MONGODB_HOST=mongodb://mongodb:27017/passprotect
      - NODE_ENV=production
      - DEBUG=App:*
  nginx:
    image: docker.shadoware.org/nginx:1.2
    links:
      - nodejs
    environment:
      - APPLICATION_HOST=nodejs
      - APPLICATION_PORT=3000
  mongodb:
    image: docker.shadoware.org/database/mongodb:3.2.7
The documentation states here that:
Containers for the linked service will be reachable at a hostname identical to the alias, or the service name if no alias was specified.
So I believe you could just set your service names in that nginx conf file, like:
upstream myservice {
    # each upstream entry needs the server directive; port 3000 matches the app's exposed port
    server yourservice1:3000;
    server yourservice2:3000;
}
as they would be exported as host entries in /etc/hosts for each container.
But if you really want that host:port information as environment variables, you could write a script that parses the docker-compose.yml and defines an .env file, or do it manually.
UPDATE:
You can get the port information from outside the container; this will return the ports:
docker inspect --format='{{range $p, $conf := .NetworkSettings.Ports}} {{$p}} -> {{(index $conf 0).HostPort}} {{end}}' your_container_id
But if you want to do it from inside a container, then what you want is a service discovery system like zookeeper.
There's a long feature request thread in docker's repo about that.
One workaround solution caught my attention. You could try building your own nginx image based on that.
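As a sketch of that workaround: Docker's embedded DNS (at 127.0.0.11 inside user-defined networks) resolves a service name to its replicas, and forcing nginx to re-resolve the name via a variable picks up newly scaled containers. The service name nodejs and port 3000 come from the compose file above; the exact re-resolution behaviour depends on your nginx version:
server {
    listen 80;
    location / {
        # Docker's embedded DNS server inside user-defined networks
        resolver 127.0.0.11 valid=10s;
        # using a variable forces nginx to re-resolve the name on each request,
        # so new replicas of the scaled service are picked up
        set $backend http://nodejs:3000;
        proxy_pass $backend;
    }
}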