Couchbase configuration on Docker

Hello, I am very new to Docker and I want to perform some initial configuration in Couchbase, namely:
Create a bucket
Set the admin password
I want to automate these two processes, because:
When I first run couchbase-db on Docker (docker-compose up -d couchbase-db), I go to localhost:8091 and set the admin password. If I don't do this on first run, Couchbase does not run properly.
What are the ways to do this? Are there any images for doing this? Can I change the Dockerfile for initial configuration?
Thanks.

Running Couchbase inside a Docker container is quite trivial. You just need to run the command below and you are done. And yes, as you mentioned, once the command below has run, just launch the URL and configure Couchbase through the web console.
$ sudo docker run -d --name cb1 couchbase
The above command runs Couchbase in detached mode (-d) with the name cb1.
I have provided more details about it on my blog, here.

I have been searching myself and have struck gold; I'd like to share it here for the record.
There is indeed an image for what the OP seeks to do: pre-configure the server when the container is created.
This method uses the Couchbase REST API in a custom image that runs this shell script to set up the server and any worker nodes, if needed.
Here's an example to set credentials:
curl -v http://127.0.0.1:8091/settings/web -d port=8091 -d username=Administrator -d password=password
https://github.com/arun-gupta/docker-images/blob/master/couchbase/configure-node.sh
Here's how to use the image with docker-compose:
https://github.com/arun-gupta/docker-images/tree/master/couchbase
After the image is pulled, modify configure-node.sh to fit your needs. Then run it in single or swarm mode.
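A minimal sketch of such an init script, modeled on configure-node.sh, might look like this. The endpoints come from the Couchbase REST API; the quota sizes, bucket name, and credentials are placeholders you would adjust, and it assumes a Couchbase node already listening on 127.0.0.1:8091:

```shell
# Set the cluster memory quota (MB)
curl -sS -X POST http://127.0.0.1:8091/pools/default \
  -d memoryQuota=512

# Set admin credentials -- this replaces the manual web-console setup
curl -sS -X POST http://127.0.0.1:8091/settings/web \
  -d port=8091 -d username=Administrator -d password=password

# Create a bucket, authenticating with the credentials just set
curl -sS -u Administrator:password -X POST \
  http://127.0.0.1:8091/pools/default/buckets \
  -d name=mybucket -d ramQuotaMB=256 -d bucketType=couchbase
```

Older server versions may require extra bucket parameters (e.g. authType), so check the bucket-create docs linked below for your release.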
Sources:
https://blog.couchbase.com/couchbase-using-docker-compose/
https://docs.couchbase.com/server/4.0/rest-api/rest-bucket-create.html

Related

Docker add --insecure-registry for current session only

Is it possible to add an --insecure-registry=docker.my-registry to the current session only with environment variables or similar?
I'd like to do some testing without changing my current Docker setup (for example I might not be able to restart the service).
Or any similar idea?
Sounds like a bad idea from a security point of view. If that were possible, you (or any user) would be able to download images from an insecure registry that is not allowed by Docker's sysadmin.
There is no concept of per-session images in Docker; any downloaded image will be available to all users.
edit:
And to answer your question: No, it is not possible.
I was able to solve this issue by using the docker:18.02.0-dind Docker image (Docker in Docker).
I start the dind container:
$ docker run -d --name did --privileged docker:18.02.0-dind --insecure-registry=my.insecure.reg
Then I go into the running container:
$ docker exec -it did /bin/sh
And inside the running container I login to my insecure registry:
/ # docker login -u me -p mypass my.insecure.reg
Login Succeeded
In the running container I can now do some tests against my insecure registry.
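The same steps can be scripted without an interactive shell by driving the inner daemon through docker exec. This is only a sketch: the registry name, credentials, image name, and the sleep delay are placeholders:

```shell
# Throwaway Docker-in-Docker daemon that trusts the insecure registry
docker run -d --name did --privileged docker:18.02.0-dind \
  --insecure-registry=my.insecure.reg

# Give the inner daemon a moment to come up, then test non-interactively
sleep 5
docker exec did docker login -u me -p mypass my.insecure.reg
docker exec did docker pull my.insecure.reg/myimage:latest

# Clean up when done -- the host daemon's config was never touched
docker rm -f did
```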

Any way to retrieve the command originally used to create a Docker container?

This question seems to have been asked often, but I cannot find any answer that correctly and clearly specifies how to accomplish this.
I often create test docker containers that I run for a while. Eventually I stop the container and restart it simply using docker start <name>. However, sometimes I am looking to upgrade to a newer image, which means deleting the existing container and creating a new one from the updated image.
I've been looking for a reliable way to retrieve the original 'docker run' command that was used to create the container in the first place. Most responses indicate to simply use docker inspect and look at the Config.Cmd element, but that is not correct.
For instance, creating a container as:
docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Qwerty123<(*' -e TZ=America/Toronto -p 1433:1433 -v c:/dev/docker/mssql:/var/opt/mssql --name mssql -d microsoft/mssql-server-linux
using docker inspect will show:
$ docker inspect mssql | jq -r '.[0]["Config"]["Cmd"]'
[
"/bin/sh",
"-c",
"/opt/mssql/bin/sqlservr"
]
There are many issues created on github for this same request, but all have been closed since the info is already in the inspect output - one just has to know how to read it.
Has anyone created a utility to easily rebuild the command from the output of the inspect command? All the responses I've seen refer to the wrong info, notably inspecting the Config.Cmd element while ignoring the Mounts, Config.Env, Config.ExposedPorts, Config.Volumes, etc. elements.
There are a few utilities out there which can help you.
Give them a try:
https://github.com/bcicen/docker-replay
https://github.com/lavie/runlike
If you want to know more such cool tools around docker check this https://github.com/veggiemonk/awesome-docker
Of course docker inspect is the way to go, but if you just want to "reconstruct" the docker run command, you have
https://github.com/nexdrew/rekcod
It says:
Reverse engineer a docker run command from an existing container (via docker inspect).
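If you only need the individual flags back, a rough jq sketch over the inspect output can recover the env vars, port bindings, and bind mounts mentioned in the question. A canned inspect snippet is used here so the jq paths are visible; against a real container you would replace `echo "$INSPECT"` with `docker inspect mssql`, and the paths assume the current inspect JSON layout:

```shell
INSPECT='[{"Config":{"Env":["TZ=America/Toronto"]},
  "HostConfig":{"PortBindings":{"1433/tcp":[{"HostIp":"","HostPort":"1433"}]},
  "Binds":["c:/dev/docker/mssql:/var/opt/mssql"]}}]'

# Environment variables (-e flags)
echo "$INSPECT" | jq -r '.[0].Config.Env[] | "-e \(.)"'

# Published ports (-p flags)
echo "$INSPECT" | jq -r '.[0].HostConfig.PortBindings
  | to_entries[] | "-p \(.value[0].HostPort):\(.key | split("/")[0])"'

# Bind mounts (-v flags)
echo "$INSPECT" | jq -r '.[0].HostConfig.Binds[]? | "-v \(.)"'
```

For the snippet above this prints `-e TZ=America/Toronto`, `-p 1433:1433`, and `-v c:/dev/docker/mssql:/var/opt/mssql`; the tools linked above do the same job more completely.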
Another way is Christian G's answer at
How to show the run command of a docker container
using bash-preexec.
I've had the same issue and ended up looking at .bash_history file to find the command I used.
This would give you all the docker create commands you'd run:
grep 'docker create' .bash_history
Note: if you ran docker create in that same session, you'll need to log out/log in for the .bash_history to flush to disk.

Docker containers-ID got change on each startup of container

I have started working with Docker and I faced a problem: whenever I start a container it gets an ID, but when the container goes down, after a new startup it comes back with a new ID. In this case the data/logs belonging to the last start-up get lost. Is it possible to fix the container ID?
Do you mean the container name? Use the --name option.
Here is a sample so that you can keep the same name when starting the container. But you need to make sure that no container with the same name is running.
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag
Second, if you need to manage logs:
create a separate log volume and mount it into the container
export the logs to ELK/Splunk
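The first option can be sketched like this, reusing the mysql example from above. The volume name and the log path inside the container are illustrative (MySQL images vary in where they write logs):

```shell
# Create a named volume that outlives any single container
docker volume create mysql-logs

# Mount it into the container; the logs survive even after the
# container is removed and recreated with a new ID
docker run --name some-mysql \
  -e MYSQL_ROOT_PASSWORD=my-secret-pw \
  -v mysql-logs:/var/log/mysql \
  -d mysql:tag
```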
If you need a solution right now, try this repository:
https://github.com/gliderlabs/logspout
Log routing for Docker container logs

How to refresh a container links

I have two containers: one is an nginx frontend and the other is an expressjs application. Nginx is the entry point and it proxies to expressjs.
I do:
docker run -d --name 'expressjs' geographica/expressjs
docker run -d --name 'nginx' --link expressjs nginx
After that, when I update the image geographica/expressjs, I need to recreate the expressjs container:
docker stop expressjs && docker rm expressjs && docker run -d --name 'expressjs' geographica/expressjs
At this point, I also need to recreate the nginx container. How can I do it without recreating the nginx container?
It's a simplification of our problem: our real server has an nginx frontend and N applications, so each time we update one of the applications we need to restart nginx and stop the service for the other applications.
Please avoid docker-compose solutions. I wouldn't like to have a unique/huge docker-compose file for all the applications.
UPDATED:
I also think that something like https://github.com/docker/docker/issues/7468 would be useful: having a docker link command to change container links at runtime. Unfortunately, it's still not available in 1.8.2.
This was discussed in issue 6350:
If I explicitly do a docker restart the IP is correctly updated, however I was using "systemctl restart" which does a stop, kill and rm before a run
In that case ("stop - rm - run"), links are not refreshed:
docker does not assume that a container with the same name should be linked to
It doesn't always make sense to keep that "link", after all the new container could be completely unrelated.
My solution, and my advice, is that:
you look into something a bit more robust like the Ambassador pattern, which is just a fancy way of saying you link to a proxy that you never restart, in order to keep the docker links active.
(also introduced here)
Another solution is to use docker create, docker start and docker stop instead of docker rm.
Lastly, my actual solution was to use something like SkyDNS or docker-gen to keep a central DNS with all the container names. This last solution is the best for me because it allows me to move containers between hosts, and docker linking can't work like that.
With the next versions of docker, libnetwork will actually be the way to go.
(see "The Container Network Model (CNM)", and "Docker Online Meetup #22: Docker Networking - video")
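With libnetwork-based user-defined networks (Docker 1.9+), containers on the same network resolve each other by name, so nginx finds a recreated expressjs without any link refresh. A sketch using the names from the question (the network name app-net is made up, and nginx is assumed to proxy to the hostname expressjs):

```shell
# Create a user-defined bridge network with built-in name resolution
docker network create app-net

docker run -d --name expressjs --net app-net geographica/expressjs
docker run -d --name nginx --net app-net -p 80:80 nginx

# Recreate only the app container; nginx keeps resolving "expressjs"
# by name on the network, no restart of nginx needed
docker stop expressjs && docker rm expressjs
docker run -d --name expressjs --net app-net geographica/expressjs
```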

Installing Gitlab CI using Docker for the Ci and the Runners, and make it persistent after reboot

I have a server running Gitlab. Let's say that the address is https://gitlab.mydomain.com.
Now what I want to achieve is to install a Continuous Integration system. Since I am using Gitlab, I opt for Gitlab CI, as it feels like the more natural way to go. So I go to the Docker repo and I find this image.
So I run the image to create a container with the following:
docker run --restart=always -d -p 9000:9000 -e GITLAB_URLS="https://gitlab.mydomain.com" anapsix/gitlab-ci
I give it a minute to boot up and I can now access the CI through the URL http://gitlab.mydomain.com:9000. So far so good.
I log in the CI and I am greeted by this message:
Now you need Runners to process your builds.
So I come back to Docker Hub and I find this other image. Apparently, to boot up this image I have to do it interactively. I follow the instructions and it creates the configuration files:
mkdir -p /opt/gitlab-ci-runner
docker run --name gitlab-ci-runner -it --rm -v /opt/gitlab-ci-runner:/home/gitlab_ci_runner/data sameersbn/gitlab-ci-runner:5.0.0-1 app:setup
The interactive setup will ask me for the proper data that it needs:
Please enter the gitlab-ci coordinator URL (e.g. http://gitlab-ci.org:3000/ )
http://gitlab.mydomain.com:9000/
Please enter the gitlab-ci token for this runner:
12345678901234567890
Registering runner with registration token: 12345678901234567890, url: http://gitlab.mydomain.com:9000/.
Runner token: aaaaaabbbbbbcccccccdddddd
Runner registered successfully. Feel free to start it!
I go to http://gitlab.mydomain:9000/admin/runners, and hooray, the runner appears on stage.
Everything seems to work great, but here comes the problem:
If I restart the machine, due to an update or whatever reason, the runner is not there anymore. I could maybe add --restart=always to the command when I run the runner image, but this would be problematic because:
The setup is interactive, so the token to register runners has to be input manually.
Every time the container with Gitlab CI is re-run, the token to register new runners is different.
How could I solve this problem?
I have a way of pointing you in the right direction, but I'm still trying to make it work myself; hope we both manage to get it up. Here's my situation.
I'm using CoreOS + Docker, trying to do exactly what you're trying to do, and in CoreOS you can set up a service that starts the CI every time you restart the machine (as well as Gitlab and the others). My problem is trying to make that same installation automatic.
After some digging I found this: https://registry.hub.docker.com/u/ubergarm/gitlab-ci-runner/
In this documentation they state that it can be done in 2 ways:
1- Mount in a .dockercfg file containing credentials into the /root directory
2- Start your container with this info:
-e CI_SERVER_URL=https://my.ciserver.com \
-e REGISTRATION_TOKEN=12345678901234567890 \
Meaning you can set the CI up to auto-start with your configs. I've been trying this for 2 days; if you manage to do it, tell me how =(
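Putting the second option together, a non-interactive runner start might look like the sketch below. The env var names come from the ubergarm image docs above; the URL, token, and data path are placeholders, and the volume mount assumes the same data directory layout as the sameersbn image:

```shell
# Registration happens via environment variables instead of a prompt,
# so --restart=always can bring the runner back after a reboot
docker run -d --restart=always --name gitlab-ci-runner \
  -e CI_SERVER_URL=https://my.ciserver.com \
  -e REGISTRATION_TOKEN=12345678901234567890 \
  -v /opt/gitlab-ci-runner:/home/gitlab_ci_runner/data \
  ubergarm/gitlab-ci-runner
```

Note the second caveat from the question still applies: if the Gitlab CI container is recreated, its registration token changes and the runner has to be registered again with the new token.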
