Is there a way to bypass the start-up scripts in the rc directories on HP-UX B.11.31? I am looking for a command or boot mode which would do this.
Thank you,
Alex
The correct answer is: it depends on what state the system should end up in and what you want it to be able to do. The rc directories start useful services that are needed for a fully working OS.
One possibility is to boot into single-user mode (runlevel S or 1), described here: http://www.flagword.net/2010/03/runlevels-in-hp-ux-and-solaris/
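For example, on PA-RISC systems a single-user boot is typically requested at the ISL prompt after interrupting the boot sequence (the exact prompts and loader vary by hardware generation, so treat this as an illustration):

ISL> hpux -is        # boot the kernel directly into single-user mode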
I recommend using two runlevels: a default one to boot into (e.g. runlevel 3) that starts the mandatory scripts from /sbin/rc3.d (syslog, sshd, etc.), and a second one to switch to (e.g. runlevel 5) that starts the applications from /sbin/rc5.d, which are optional for the OS but required in the final state of the machine.
So the machine should boot by default into runlevel 3 (set as the default in /etc/inittab); then you log in and use the command telinit 5 to switch the system to runlevel 5, starting the applications.
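A rough sketch of that setup (the inittab id field and the S900myapp link name are illustrative; your entries and start scripts will differ):

# /etc/inittab entry -- boot into runlevel 3 by default
init:3:initdefault:

# keep the optional application's start script under /sbin/rc5.d instead of /sbin/rc3.d
mv /sbin/rc3.d/S900myapp /sbin/rc5.d/S900myapp

# once the machine is up in runlevel 3, log in and switch to runlevel 5
telinit 5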
Related
I have a Docker image that needs to be run in an environment where I have no admin privileges, using Slurm 17.11.8 on RHEL. I am using udocker to run the container.
In this container, there are two applications that need to run:
[1] ROS simulation (there is a rosnode that is a TCP client talking to [2])
[2] An executable (TCP server)
So [1] and [2] need to run together, and they share some common files as well. Usually, I run them in separate terminals, but I have no idea how to do this with Slurm.
Possible Solution:
(A) Use two containers of the same image, but then their files will be stored locally. I could use volumes instead, but this requires me to change my code significantly and may break compatibility when I am not running it as containers (e.g. in Eclipse).
(B) Use a bash script to launch two terminals and run [1] and [2]. Then srun this script.
I am looking at (B) but have no idea how to approach it. I looked into other approaches, but they address sequential execution of multiple processes; I need these to run concurrently.
If it helps, I am using xfce-terminal, though I can switch to other terminals such as Gnome or Konsole.
This is a shot in the dark since I don't work with udocker.
In your Slurm submit script (to be submitted with sbatch), you could allocate enough resources for both jobs to run on the same node (so you just need to reference localhost for your client/server). Start your first process in the background with something like:
udocker run container_name container_args &
The & should start the first container in the background.
You would then start the second container:
udocker run 2nd_container_name more_args
This would run without & to keep the process in the foreground. Ideally, when the second container completes, the script will complete and Slurm's cleanup will kill the first container. If both containers come to an end cleanly, you can put a wait at the end of the script.
Caveats:
Depending on how Slurm is configured, processes may not be properly cleaned up at the end. You may need to capture the PID of the first udocker as a variable and kill it before you exit.
The first container may still be processing when the second completes. You may need to add a sleep command at the end of your submission script to give it time to finish.
Any number of other gotchas may exist that you will need to find and hopefully work around.
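Putting that together, a submission script might look something like this rough sketch (the container names, resource numbers, sleep, and the udocker run invocation are assumptions to adapt to your setup):

#!/bin/bash
#SBATCH --job-name=ros-pair
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4    # enough cores for both processes on one node

# Start the TCP server container [2] in the background and remember its PID.
udocker run server_container &
SERVER_PID=$!

# Give the server a moment to open its listening socket.
sleep 10

# Run the ROS simulation [1] (the TCP client) in the foreground;
# the job ends when this process exits.
udocker run client_container

# Clean up the background server before Slurm tears the job down.
kill "$SERVER_PID" 2>/dev/null
wait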
I have a Rails application which is using Puma. I'm using nginx for load balancing. I would like to dockerize and deploy to a DigitalOcean (Docker) droplet.
After reading lots of blogs and examples (most of which are a year old and that's a long time in the Docker world), I'm still confused about 2 things. Let's say that I select a DigitalOcean box with 4 CPUs. How am I supposed to set up the Rails containers? Should I set up 4 different containers, where Puma is configured with 1 worker process? Or should I set up 1 container where Puma is configured with 4 worker processes?
And the second thing I'm confused about: should I run nginx inside the Rails container, or should I run them in separate containers?
These 2 questions allow 4 permutations that I diagramed below.
[diagrams: option 1, option 2, option 3, option 4]
Docker likes to push the single-process-per-container style of design. When running multiple processes in a single container, there is an extra layer of a service manager between Docker and the underlying processes, which causes Docker to lose visibility of the real service status. This more often than not makes services harder to manage with Docker and its associated tools. Puma managing its own workers is not as bad as a generic service manager running multiple unrelated processes, though.
You may also need to consider the next step for the application, hosting across multiple droplets/hosts, and how easy it will be to move to that next step.
Options 1 and 3 follow Docker's preferred design. If you are using MRI, Puma can run in clustered mode, so it just depends on whether you want to manage the Ruby processes yourself (1) or have Puma do the worker management (3). There will be differences in how nginx and Puma distribute requests between workers. Puma can also schedule zero-downtime updates, which would require a bit of effort to get working via Docker. If you are using Rubinius or JRuby, you would probably lean towards option 3 and let threads do the work.
Option 1 may allow you to more easily scale across different sized hosts with Docker tools.
Option 2 looks like it adds an unnecessary application hop, and Docker no longer maintains the service state in your app tier, since you need something else in the container to launch both nginx and Puma.
Option 4 might perform a bit better than others due to the local sockets, but again Docker is no longer aware of the service state.
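For illustration, option 1 could be sketched with plain docker commands like this (the image, network, and container names, and the Puma flags, are placeholders, not from the question):

# one user-defined network so nginx can reach the app containers by name
docker network create railsnet

# four single-worker Puma containers, one per CPU
for i in 1 2 3 4; do
  docker run -d --name puma_$i --network railsnet my-rails-app \
    bundle exec puma -w 1 -t 1:16 -p 3000
done

# one nginx container load-balancing across puma_1..puma_4:3000 via its own config
docker run -d --name web --network railsnet -p 80:80 \
  -v "$PWD/nginx.conf:/etc/nginx/nginx.conf:ro" nginx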
In any case, try a couple of solutions and benchmark them with something like JMeter. You will quickly get an idea of what works and what doesn't, both in performance and in manageability.
Suppose I have installation instructions as follows:
Do something.
Reboot your machine.
Do something else.
How do I express that in a Dockerfile?
This entirely depends on why the instructions require a reboot. For Linux, rebooting a machine would typically indicate a kernel modification, though it could be for something simpler, like a change in user permissions (which would normally be handled by logging out and back in again). If the install is trying to make an OS-level change to the kernel, it should fail when done inside a container: by default, containers isolate the application and restrict what it can do to the running host OS, since such changes would impact the host or other running containers.
If the reboot is to force the application service to restart, you should realize that this design doesn't map well to a container, since each RUN command runs just that command in an isolated environment. Running only that command also means that any OS services that would normally be started on boot (cron, sendmail, or your application) will not be started in the container. Therefore, you'll need to find a way to run the installation command and also restart any dependent services.
The last scenario I can think of is that they want different user permissions to take effect for the logged-in user. In that case, the next RUN command will run with any access changes made by prior RUN commands, so there's no need to take any specific action to emulate a reboot; simply perform the install steps as if there were a complete restart between each step.
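A minimal sketch of that last case, with hypothetical names: the user created in one RUN is immediately visible to the next instruction, with no reboot or re-login step in between.

FROM ubuntu:22.04

# "Do something": create a user (on a real machine, a reboot or re-login might follow here)
RUN useradd --create-home appuser

# "Do something else": the next instructions already see the change
USER appuser
RUN whoami && touch /home/appuser/ok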
I have a Dockerfile that starts 2 processes in a single Docker container, each from a jar file with a config file as an argument:
java -jar process1.jar process1.cfg &
java -jar process2.jar process2.cfg
process1.cfg and process2.cfg reside in mounted directories. Whenever there is a change in either cfg file, I need to restart the corresponding process for the change to take effect. All of this is to be done programmatically, using Java, in a REST microservice that updates the config file and restarts the process. Any idea on how to go about it?
The problem can be solved generically by having your Java app start a config-change monitoring service/thread, which manages the actual business service/thread(s): it starts them at the beginning and restarts them on any change (if the change actually needs a restart). File change monitoring is standard Java functionality (java.nio.file.WatchService). The solution does not need any REST, it is not bound to a microservice architecture (although it makes more sense within one), and it is not limited by or to Docker containers.
If you do not want any file-based configs, do the same, but the monitoring part can be, for example, a vert.x-based web server listening for external REST requests that supply configs, on start or for any update. The rest remains the same.
In my current workplace we actually have a module that works exactly this way: it is deployed in a Docker container and uses both file-system monitoring and a vert.x web server for config changes.
You can even go further and make the monitoring bit start multiple instances internally if multiple configs need to be supported.
I am building a web service where users submit PDF files, and the text content is extracted from them using Tika. I am running Tika in server mode on the same machine that hosts my Django website.
My question is: is there a way to automate restarting the Tika server when it shuts down for any reason? How can I build and run a script so that whenever the Tika server goes down this is detected and the server restarts? My ultimate goal is not to have to check from the console every day whether Tika is down, nor to find out that the service is down when a user complains that her PDF doesn't get extracted.
Since you're using a recent copy of Ubuntu, your easiest option is probably to create a custom Upstart job for it. On other Unixes, you'd want something similar for their init system, and on Windows I think wrapping it as a Windows service with Apache Commons Daemon is likely the best bet.
As covered in this post over on Ask Ubuntu, the key thing you'll want is the respawn option, to tell upstart to re-launch the Tika server if it happens to fail, and a limit in case it gets really broken for some reason.
You'll want to create a file /etc/init/tika-server.conf, with contents along the lines of:
description "Apache Tika Server"
start on filesystem or runlevel [2345]
stop on shutdown
respawn
respawn limit 3 12
exec java -jar /path/to/tika/tika-server-1.10-SNAPSHOT.jar
Tweak the path to your Tika Server jar, and add any options / parameters you want to the end.
With that done, run init-checkconf /etc/init/tika-server.conf to check that it's valid, then service tika-server start to start it.
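That is (validation can run as a normal user; starting the service will usually need root, so sudo is assumed here):

init-checkconf /etc/init/tika-server.conf
sudo service tika-server start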
At that point, you can head to http://localhost:9998/ and see it running! If it dies, upstart will restart it for you.