I have an application with the following requirement: while my Erlang app is running, I need to start one or more remote nodes on the fly, either on the local host or on a remote host.
I have looked at the following options:
1) For starting a remote node on the local host, use either the slave module or the net_kernel:start() API.
However, with the latter there seems to be no way to specify options such as the boot script file name.
2) In any case I don't need the slave configuration, as I need nodes spawned on local and remote hosts to behave in the same way. In my current setup I don't have permission to rsh to the remote host. The workaround I can think of is to have a default node running on the remote host, so that further nodes can be created remotely, either through spawn or through an rpc:async_call and os:cmd combination.
Is there any other API for starting erl?
I am not sure this is the best or the cleanest way to solve this problem, and I would like to know what the idiomatic Erlang approach is.
Thanks in advance
There is the pool module, which might help you; however, it relies on the slave module (and therefore on rsh).
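If rsh is unavailable, the workaround described in the question can be sketched from the shell: keep one long-running "default" node on each remote host, and have it launch further nodes via os:cmd/1. A minimal sketch, where the node names, cookie, and boot file path are all assumptions:

```shell
# Start a long-running "default" node on the remote host (e.g. at machine boot):
erl -detached -name default@remotehost -setcookie mycookie

# From your main node, ask the default node to spawn further nodes, e.g. in the shell:
#   rpc:call('default@remotehost', os, cmd,
#            ["erl -detached -name worker1@remotehost -setcookie mycookie -boot /path/to/start"]).

# The same flags work for extra nodes on the local host:
erl -detached -name worker1@localhost -setcookie mycookie -boot /path/to/start
```

Note also that slave:start(Host, Name, Args) accepts an Args string that is appended to the erl command line, which covers the boot-script case on the local host (but still needs rsh for remote hosts).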
In Docker I wish to run multiple instances of the same application. The instances all need their own configuration (db name/port), preferably fed via a file, but any solution will do really.
I think I should run this in swarm mode, but I can't figure out how to pass a different configuration to each of the tasks spawned by the service. Does Docker swarm support this use case?
Thank you
I think you are getting it wrong. Replication mode in Docker brings up more instances of the same task behind a built-in load balancer, so giving each instance a different configuration would partly defeat that use case.
Your option is to use a different service per configuration.
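The one-service-per-configuration approach can be sketched like this (service names, image, and environment variables are assumptions; feeding a file instead could use a bind mount or `docker config`):

```shell
# Two services from the same image, each with its own DB settings
docker service create --name app-db1 --env DB_NAME=db1 --env DB_PORT=5432 myorg/myapp:latest
docker service create --name app-db2 --env DB_NAME=db2 --env DB_PORT=5433 myorg/myapp:latest

# Each service can still be replicated and load-balanced independently:
docker service scale app-db1=3
```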
I have a somewhat unusual environment: in order to connect to machine B via ssh, I need to connect to machine A and, from that box, connect to B and execute a number of commands there.
Local --ssh--> Machine A --ssh--> Machine B (some commands to execute here)
Generally speaking, Machine A is my entry point to all servers.
I am trying to automate the deployment process with Jenkins and am wondering if it supports such an unusual scenario.
So far I have installed the SSH plugin and am able to connect to Machine A, yet I am struggling with the connection to Machine B. The Jenkins process freezes on the ssh command to Machine B and nothing happens.
Does anyone have any ideas how I can make such scenario work?
The term for Machine A is a "bastion host", which might help your googling.
This link calls it a "jump host" and describes a number of ways to use SSH's ProxyCommand setting to set up all manner of inter-host SSH communication:
https://www.cyberciti.biz/faq/linux-unix-ssh-proxycommand-passing-through-one-host-gateway-server/
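For reference, the ProxyCommand approach from that article can be sketched as a one-off command or as an ~/.ssh/config entry (host names and user names are assumptions):

```shell
# One-off: hop through Machine A to reach Machine B and run a command there
ssh -o ProxyCommand="ssh -W %h:%p user@machineA" user@machineB "uname -a"

# Or persistently, in ~/.ssh/config:
#   Host machineB
#       ProxyCommand ssh -W %h:%p user@machineA
```

With the config entry in place, a plain `ssh machineB` in the Jenkins job tunnels through Machine A transparently, which avoids nesting interactive ssh sessions (the likely cause of the freeze).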
I'm testing GUI apps on a slave machine, so I want the Jenkins slave to be connected to the master whenever I log in. Right now I use some batch scripts to invoke the slave connection. If there is another way to do this, please let me know.
Guessing from your "batch scripts" I assume you're using Windows. See Install Slave as a Windows service.
BUT: See also Windows Service needs desktop and network access:
A Windows service can run under either a network-authenticated user or the local system. Network-user services do not interact with the desktop, and local-system services do not have access to network resources. To my knowledge, there is no way around this without spawning sub-processes as different users.

However, there is a work-around: split your service into two services. One runs under the local system and can interact with the desktop; the other runs under the network user account with access to the desired network resources. Set up these services to communicate with each other, and each can supply the functionality that it has access to.

NOTE: when setting up the services in your install package, you may want to make one of the services dependent on the other to make sure that both run together.
I couldn't have said it better.
UPDATE
In other words: when you need both desktop and network access, batch scripts launched via:
the Autostart folder
setting the appropriate group policy
logging in programmatically: Windows Vista/7 Programmatically login, C# - Programmatically Log-off and Log-on a user
are the ways to go.
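For example, an Autostart batch script that connects the slave to the master via JNLP might look like this (the Jenkins URL, node name, and secret are assumptions; slave.jar can be downloaded from the master):

```bat
REM autostart-slave.bat - connect this machine to the Jenkins master at logon
java -jar C:\jenkins\slave.jar ^
     -jnlpUrl http://jenkins.example.com/computer/gui-node/slave-agent.jnlp ^
     -secret <secret-from-the-node-page>
```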
I have experimented with packaging my site-deployment script in a Docker container. The idea is that all my services will be inside containers, and a special management container will manage the other containers.
The host machine should be as dumb as absolutely possible (currently I use CoreOS, with the only state being a systemd unit that starts my management container).
The management container is used as a push target for creating new containers based on the source code I send to it (over SSH, I think; at least that is what I use now). The script also manages persistent data (database files, logs and so on) in a separate container, and manages backups for it, so that I can tear down and rebuild everything without ever touching any data. To accomplish this, I forward the Docker Unix socket using the -v option when starting the management container.
Is this a good or a bad idea? Can I run into problems by doing this? I did not read anywhere that it is discouraged, but I also did not find a lot of examples of others doing this.
This is totally OK, and you're not the only one to do it :-)
Another example is to use the management container to handle authentication for the Docker REST API: it would accept connections on an EXPOSEd TCP port, itself published with -p, and proxy requests to the UNIX socket.
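Concretely, the socket-forwarding setup looks like this (the image name is an assumption, and the image is assumed to ship a docker client):

```shell
# Give the management container control over the host's Docker daemon
docker run -d --name manager \
  -v /var/run/docker.sock:/var/run/docker.sock \
  myorg/manager

# Inside the container, the docker client (or raw REST calls against the
# socket) now drives the host daemon:
docker exec manager docker ps
```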
As this question is still of relevance today, I want to answer with a bit more detail:
It is possible to work with this setup, where you pass the Docker socket into a running container. Many solutions do this, and it works well. BUT you have to think about the problems that come with it:
If you want to use the socket, you have to be root inside the container, and the socket lets you run arbitrary commands via other containers. So, for example, if an intruder controls this container, he controls all the other Docker containers.
If you expose the socket on a TCP port, as suggested by jpetzzo, you will have the same problem, only worse: now an attacker doesn't even have to compromise the container, just the network. If you filter the connections (as suggested in his comment), the first problem remains.
TL;DR:
You can do this and it will work, but you have to give security some thought first.
I'm new to RabbitMQ and, by association, new to Erlang. I'm running into a problem where I cannot start RabbitMQ because the 'home' location for .erlang.cookie has been changed. I've run the command
init:get_argument(home).
which returns
{ok,[["H:\\"]]}
This is an issue, as H: is a network drive I do not always have access to. I need to be able to change the 'home' directory to something local.
When I run
rabbitmqctl status
it gives me the following error:
{error_logger,{{2013,7,5},{14,47,10}},"Failed to create cookie file 'h:/.erlang.cookie': enoent",[]}
which again leads me to believe that there is an issue with the home argument. I need to be able to change this location to something local.
Versions:
Erlang R16B01 32 bit
RabbitMQ 3.1.3
Running on Win7
I have uninstalled and reinstalled multiple times, hoping to resolve this. I am looking for a way to change the 'home' location in Erlang so RabbitMQ can start properly.
The solution I came up with was to not bother with the installed service. I used rabbitmq-server.bat to start the server, with SET HOMEDRIVE=C: at the start of the file. I'm planning to run this from a parent service so that I can install it on servers.
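The wrapper can be as small as this (the local path and the RabbitMQ install location are assumptions for the versions above):

```bat
REM start-rabbitmq.bat - point Erlang's notion of "home" at a local drive
SET HOMEDRIVE=C:
SET HOMEPATH=\Users\rabbitmq
CALL "C:\Program Files\RabbitMQ Server\rabbitmq_server-3.1.3\sbin\rabbitmq-server.bat"
```

With these overrides, the cookie is written to C:\Users\rabbitmq\.erlang.cookie instead of the network drive.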
Final note to the Erlang and RabbitMQ developers: using pre-existing environment variables for your own purposes is just wrong. You should create your own, or better yet, put this in a configuration file. Telling people to talk to their system administrators to change the HOMEDRIVE and APPDATA variables is arrogant, to say the least.
You need to set correct values for the HOMEDRIVE and HOMEPATH variables. These links should help:
Permanently Change Environment Variables in Windows
Overriding HOMEDRIVE and HOMEPATH as a Windows 7 user
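If you prefer the command line to the GUI steps from those links, `setx` makes the override permanent for the current user (the local path is an assumption):

```bat
setx HOMEDRIVE C:
setx HOMEPATH \Users\myuser
REM Open a new console afterwards - setx does not affect the current session.
```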