How to get a unique number from docker-compose up --scale

I am trying to create a scalable Docker container running an Elixir node. Currently I have this:
client:
  image: elixir:alpine
  command: >
    elixir --name client@somewhere.localdomain --cookie pass
    -S mix run --no-halt -e Connect.main ${CONTROLLER}
  depends_on:
    - system_control
The Elixir node just uses Node.connect to inform the controller of its existence. However, if I try creating more client nodes with docker-compose up --scale client=5, only the first one is able to connect and the rest are refused, presumably because of name clashes caused by the hardcoded --name. Any idea how to circumvent this? Is there a way to get some unique id to use instead of somewhere?
Edit: my Connect.main Elixir script is (abridged) this:
defmodule Connect do
  def main do
    [server] = System.argv
    IO.puts "#{Node.self} - Connecting to #{server} - #{Node.connect(:'#{server}')}"
  end
end

Assigning a name with a randomized component should solve your problem. Make sure it is random enough that duplicates are highly improbable. You might also want to avoid sequential IDs in some cases, as suggested in this Docker Compose issue.
As @fl9 suggested, a command-line UUID generator like uuidgen can do the trick.
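For example, a minimal sketch of the compose service with the name computed when each replica starts (wrapping the command in a shell; $$ escapes $ for docker-compose, and uuidgen may need util-linux installed on elixir:alpine — both details are assumptions, not from the original answer):

client:
  image: elixir:alpine
  command: >
    sh -c 'elixir --name "client-$$(uuidgen)@$$(hostname -i)" --cookie pass
    -S mix run --no-halt -e Connect.main ${CONTROLLER}'
  depends_on:
    - system_control

Since Compose also gives every scaled replica a unique hostname, $$(hostname) is another workable source of uniqueness for the name.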

Related

My docker container keeps instantly closing when trying to run an image for bigcode-tools

I'm new to Docker, and I'm not quite sure how to deal with this situation.
So I'm trying to run a docker container in order to replicate some results from a research paper, specifically from here: https://github.com/danhper/bigcode-tools/blob/master/doc/tutorial.md
(image link: https://hub.docker.com/r/tuvistavie/bigcode-tools/).
I'm using a Windows machine, and every time I try to run the docker image (via docker run -p 80:80 tuvistavie/bigcode-tools), it instantly closes. I've tried running other images, such as getting-started, but that image doesn't close instantly.
I've looked at some other potential workarounds, like using -dit, but since the instructions require setting an alias/doskey for a docker run command, using the alias and chaining it with other commands multiple times results in a queue of docker containers, since the port is tied to the alias.
As in the instructions from the GitHub link, I'm trying to set an alias/doskey to make API calls to pull data, but I am not getting any data, nor am I getting any errors when performing the calls at the command prompt.
Sorry for the long question, and thank you for your time!
Going in order of the instructions:
0. I can run this; it added the image to my Docker Desktop.
1. Since I'm using a Windows machine, I had to use set instead of export.
I'm not exactly sure what the $ is for in UNIX and whether or not it has significant meaning, but from my understanding the whole purpose is to create a directory named bigcode-workspace.
Instead of alias, I needed to use doskey.
Since -dit prevented my image from instantly closing, I added that in as well, but I'm not 100% sure what it means. Running docker run (...) without it resulted in the docker image instantly closing.
When it came to combining the doskey alias with another command, I tried:
(doskey macro) (another command)
(doskey macro) ^& (another command)
(doskey macro) $T (another command)
This also seemed to involve a GitHub API call, so I also added a --token=(github_token), but that didn't change anything either.
Because the later steps require data pulled from here, I am unable to progress any further.
It looks like this image is designed to be used as a command-line utility, so it should not run continuously; instead you invoke it via the docker-bigcode alias for individual tasks.
$BIGCODE_WORKSPACE is an environment variable expansion here, so on a Windows machine it's %BIGCODE_WORKSPACE%. You might want to set this variable in Settings -> System -> About -> Advanced System Settings, because variables set with the SET command apply only to the current command prompt session. Or you can specify the path directly, without the environment variable.
As for the alias, I would just create a batch file with the following content:
docker run -p 6006:6006 -v %BIGCODE_WORKSPACE%:/bigcode-tools/workspace tuvistavie/bigcode-tools %*
This will run the specified command, appending the batch file's parameters at the end. You might need to add double quotes if the BIGCODE_WORKSPACE path contains spaces.
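A slightly fuller sketch of that batch file (the name docker-bigcode.bat, the quoting, and the --rm cleanup flag are illustrative additions, not part of the original answer):

@echo off
rem docker-bigcode.bat - forwards all arguments (%*) to the bigcode-tools container
docker run --rm -p 6006:6006 -v "%BIGCODE_WORKSPACE%:/bigcode-tools/workspace" tuvistavie/bigcode-tools %*

Saved somewhere on PATH, docker-bigcode <arguments> should then behave like the alias from the tutorial.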

How to "redirect" old docker-compose commands to new docker compose version?

Since the new version of Docker Compose has arrived, the naming of the command has changed from
$ docker-compose [...]
to
$ docker compose [...]
Is there any way to "redirect" commands that already exist, written the old way, to the new one? It would be very useful, for instance, to keep install scripts or makefiles from failing.
If you have any info or ideas, that would be great. Thanks in advance.
(I have already tried an alias that is just "docker-compose" and takes the rest of the invocation as arguments, transforming it into the new "docker compose" form, but it is not very conclusive and seems to give me a lot of problems.)
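One common approach, as a minimal sketch: put a small shim script named docker-compose on the PATH that forwards everything to the v2 plugin (the /usr/local/bin location is an assumption; adjust to taste):

#!/bin/sh
# /usr/local/bin/docker-compose - forward legacy invocations to the compose v2 plugin
exec docker compose "$@"

Made executable with chmod +x, this lets existing scripts and makefiles keep calling docker-compose unchanged, since the shim passes all arguments straight through.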

Looking for a convenient way to start and stop applications with docker-compose

For each of my projects, I have configured a docker development environment consisting of several containers. I often switch between projects. That requires stopping one set of containers and starting another. I currently do it like this:
$ cd project1
$ docker-compose stop
$ cd ../project2
$ docker-compose up -d
So I need to remember which application is currently running, cd into the directory where its docker-compose.yml is, stop it, then remember what other project I want to run, cd there and start it.
Is there a better way? Like a utility that remembers which multi-container applications I have, can stop the currently running one, and can start another without manually cd-ing and docker-compose-ing?
(By the way, what's the correct term for a set of containers hosting parts of a single application?)
Hopefully docker-compose-ui will help you manage your applications.
I think the real problem here is this:
That requires stopping one set of containers and starting another.
You shouldn't need to stop one project to start another.
Instead of mapping to the same host ports, I would not map any ports at all. Then use a script to look up the IP of the container and connect to it directly:
#!/bin/bash
# Look up the container's IP address on whatever network it is attached to.
cip=$(docker inspect -f '{{range $key, $value := .NetworkSettings.Networks}}{{$value.IPAddress}}{{end}}' "$1")
Combine that with a command to open the URL:
url="http://$cip:8080/"
xdg-open "$url" || open "$url"
All together this will let you run the application without having to map any host ports. When host ports don't exist, you don't have to stop other projects.
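Usage might look like this, assuming the two fragments above are saved together as open-app.sh (a hypothetical name) and project2 has a container that docker ps lists as project2_web_1:

$ ./open-app.sh project2_web_1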
If you are somewhat proficient in Ruby, you can use scaffolding for this.
A bare-bones example using threads (to start several docker-compose sessions from one process and then stop them all together):
require 'docker-compose'

threads = []
project_paths = %w(/project/path1 /project/path2 /project/path3 /project/path)

# One thread per project, each driving its own docker-compose session.
# Session#up blocks while `docker-compose up` runs in that directory.
project_paths.each do |path|
  threads.push(Thread.new do
    Docker::Compose::Session.new(dir: path).up
  end)
end

begin
  threads.each(&:join)
rescue SystemExit, Interrupt
  threads.each(&:kill)
rescue Exception => e
  handle_exception e
end
source
It uses the docker-compose gem and Ruby threads.
Just set project_paths to the folders of your projects. If you want to end them all, use CTRL+C.
You can of course go beyond that, using a daemon and trying to start/stop some of them by name and such, but I guess as a starting point for scaffolding this should be enough.

On Bluemix - handling volume for container group instances

When I create a container group with 2 desired instances, with a command containing the volume specification as follows:
... -v log_vol:/opt/ibm/logs --env LOG_LOCATIONS=/opt/ibm/logs/messages.log,/opt/ibm/logs/debug.log,/opt/ibm/logs/trace.log -e TRACE_LEVEL=*~info -e MAX_LOG_FILES=5 -e MAX_LOG_FILE_SIZE=20 ...
In this case each individual running container instance of the group has an identical directory, /opt/ibm/logs/, to store logs in.
When the applications within the individual container instances generate logs, log data is lost because every instance mounts the same shared volume log_vol: the logs get replaced on every new entry.
Can someone suggest me on how to handle it?
Are there any ways that we can attach a volume specification post container instance creation?
In this case, it's best to think of the volume as something similar to a shared network drive, with the separate containers running on different hosts. If the processes assume they're the only ones writing to a file, caching and overwriting it on each write, this will be the result.
Perhaps instead have the containers/programs write to something like /opt/ibm/logs/messages.$HOSTNAME.log so that the assumption they own their own logfile is correct? Or similarly, have the container create for itself /opt/ibm/logs/$HOSTNAME/ on boot, and then write to messages/debug/trace.log under there?
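A minimal entrypoint sketch of that second suggestion (the wrapper approach and filename are illustrative assumptions, not Bluemix specifics):

#!/bin/sh
# entrypoint.sh - give each instance its own directory under the shared volume,
# point the log locations there, then start the real command.
mkdir -p "/opt/ibm/logs/$HOSTNAME"
export LOG_LOCATIONS="/opt/ibm/logs/$HOSTNAME/messages.log,/opt/ibm/logs/$HOSTNAME/debug.log,/opt/ibm/logs/$HOSTNAME/trace.log"
exec "$@"

Since $HOSTNAME differs per instance, each container then owns its own log files while still writing to the shared volume.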

running multiple instances of mongod as services

I am trying to start multiple instances of MongoDB as a service. On the command line I can start more than one Mongo instance; for the first instance I append "--install" to the command and it now runs as a service. But when I append "--install" to the second instance, I get an error.
The first command runs well:
c:\data\bin\mongod --nohttpinterface --port 27201 --dbpath c:\data\cluster\db1 --master --logpath c:\var\log\mongodb_db1.log --serviceName MongoDB_1 --install
but the second one gives an error:
c:\data\bin\mongod --nohttpinterface --port 28000 --dbpath c:\data\cluster\db2 --master --logpath c:\var\log\mongodb_db2.log --serviceName MongoDB_2 --install
error:
Creating service MongoDB_2. Error creating service. Der Name wird bereits als Dienstname oder als Dienstinstanzname verwendet. (1078)
(German for: "The name is already in use as a service name or a service display name.")
I think MongoDB uses an internal service name that is always the same and differs from the shown service name, but I don't know how to fix it.
Any suggestions?
Regards,
Rene
You can do a clean installation of a second instance using the proper command-line switches. Just read my answer here: https://stackoverflow.com/a/9273816/249992
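That boils down to giving the second instance its own display name as well: Windows error 1078 is raised for a duplicate service name or display name, and mongod's display name defaults to the same value for every install. A sketch of the second command with an explicit display name added (flag availability may vary by mongod version):

c:\data\bin\mongod --nohttpinterface --port 28000 --dbpath c:\data\cluster\db2 --master --logpath c:\var\log\mongodb_db2.log --serviceName MongoDB_2 --serviceDisplayName "MongoDB 2" --install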
I ran into this same issue. My workaround is kind of hacky, but it seems to work:
Create the first mongod service using mongod --install.
Open regedit and navigate to HKLM\SYSTEM\CurrentControlSet\services\NameOfMongoService.
Export this key.
Edit the exported .reg file in a text editor, updating the service name and the mongod parameters.
Import it back into the registry (and possibly reboot).
To get mongos running as a service I took a different approach and used instsrv and srvany from the Windows NT Resource Kit:
http://support.microsoft.com/kb/137890
This KB doesn't mention that after installing srvany using instsrv you have to add a Parameters sub-key under the newly created service in the registry. This key should contain a REG_SZ value named "Application" with the path to the app to start as a service.
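A sketch of the resulting registry layout, in .reg format (the service name MongoS and the mongos arguments are placeholders; AppParameters is an optional srvany value for passing command-line arguments):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MongoS\Parameters]
"Application"="C:\\data\\bin\\mongos.exe"
"AppParameters"="--configdb cfghost:27019 --port 27017"

srvany then launches Application with AppParameters when the service starts.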
