How to interact with an already running MongooseIM instance via the terminal? - erlang

I am using MongooseIM 3.2.0, built from source, on an Ubuntu server. Below are my concerns:
What is the best way to run MongooseIM as a service so that it automatically restarts if MongooseIM crashes or the system restarts?
How do I interact via the terminal with an already running MongooseIM instance on the Ubuntu server, like "mongooseimctl live"? My guess is that running "mongooseimctl live" will try to create another instance. I just want to see the live logs and interact with the node, and don't want to keep scrolling through long log files for this purpose.
I apologize if the answer to the above is obvious, but I just want to follow best practice.

mongooseimctl live or mongooseimctl foreground is mostly useful for development or smoke testing a deployment (unless you're running inside a container). For real-world use cases you should start the server in the background with mongooseimctl start.
Back to the container - the best approach for containerised applications is to run them in the foreground, therefore in a container startup script use mongooseimctl foreground.
Once the server is running (no matter how it was started) attaching a shell to troubleshoot issues can be done with mongooseimctl debug. This is the command to use when you get the Protocol 'inet_tcp': the name mongooseim#localhost seems to be in use by another Erlang node error. Be careful if it's a production environment - you can easily take the server down with access to this shell.
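As a minimal sketch of such a session (the Erlang call is a generic BIF, not a MongooseIM-specific API, and the prompts are illustrative):

$ mongooseimctl start                    # start the node in the background
$ mongooseimctl status                   # confirm it is up
$ mongooseimctl debug                    # attach a remote shell to the running node
(mongooseim@localhost)1> erlang:memory(total).
% detach with Ctrl+G followed by q - calling q(). in this shell would
% stop the server node itself, which is exactly the footgun to avoid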
If you're just interested in watching logs, with no interactive access to the server internals that the shell offers, a simple tail -f /your-configured-mongooseim-log-dir/* should be enough.
Ubuntu nowadays uses systemd for managing its services' lifetimes. A systemd .service file can be found at https://github.com/esl/MongooseIM/blob/master/tools/pkg/platforms/debian_stretch/files/build/mongooseim.service - we use it for packaging into Debian/Ubuntu .deb packages.
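For a from-source install you can adapt it; a minimal unit in the same spirit might look like this (a sketch - the paths and user name are assumptions, and the linked file is the authoritative version):

[Unit]
Description=MongooseIM XMPP server
After=network.target

[Service]
Type=forking
User=mongooseim
ExecStart=/usr/local/lib/mongooseim/bin/mongooseimctl start
ExecStop=/usr/local/lib/mongooseim/bin/mongooseimctl stop
Restart=on-failure

[Install]
WantedBy=multi-user.target

With that in place, systemctl enable mongooseim handles the start-on-boot part of your question, and Restart=on-failure handles crashes.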

How can I open VS code "in container" without it restarting itself and losing shell settings when "Reopen in container" is invoked?

I have a development use-case where I use a script to configure a shell with docker-machine or another environment and then open a directory containing source and settings (.vscode/, .devcontainer/) that I can edit/build/debug in the VS Code Remote Containers extension.
In short, I'm looking to implement the following sequence when a "start-development.sh" script/hook runs:
Set up host-side env or remote resources (reverse sshfs to mount source to a remote docker-machine, modprobe, docker buildx, xhost for x-passthrough, etc.)
Run VS Code from that shell (so settings aren't thrown away), opening a specified directory (which may be mounted via sshfs or other means) in the container, not just on the host
Run cleanup scripts to clean-up and/or destroy real resources (unmount, modprobe -r, etc.) when the development container is stopped (by either closing VS Code or rebuilding the container).
See this script for a simple example of auto-configuring a shell with an AWS instance via docker-machine. I'll be adding a few more examples to this repository over the coming day or so.
It's easy enough to open VS Code in that directory (code -w -n --folder-uri /path/here) and wait for it to quit (so I can perform cleanup steps like taking down the remote docker-machine, un-mounting reverse-sshfs mounted code or disabling kernel mods I use for development, etc.).
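To make the shape of that concrete, here's roughly what my wrapper does (the docker-machine name is hypothetical):

#!/usr/bin/env bash
# start-development.sh - the intended flow, sketched
eval "$(docker-machine env my-aws-instance)"   # 1. set up host-side env
code -w -n --folder-uri "$PWD"                 # 2. open VS Code, block until the window closes
docker-machine stop my-aws-instance            # 3. cleanup once VS Code quits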
However, VS Code currently opens in "host mode", and when I choose "Reopen in container" or "Rebuild container" via the UI or command palette, it kills that process and opens another top-level(?) process, quitting the shell and throwing away my configuration and/or prematurely running the cleanup portion of my script, so it has the wrong environment when it finally launches in-container. Sadness.
So finally, my question is:
Is there a way to tell VS Code to open a folder "in-container"? This would solve a ton of problems for me, instead of a janky dev cycle where I have to ensure that the code instance isn't restarting itself and messing things up whenever I rebuild the container, for example.
Alternatively, it'd be great not to quit the top-level code process I started in the first place, enabling me to wait on it, or perhaps monitor it in other ways I'm not aware of, to prevent erasure of my settings and a premature run of my cleanup script.
Thanks in advance!
PS: Please read the entire question before flagging it as "not related to development". If the idea of a zero-install development environment for a complex native project, live on-device development/debugging, or deep learning using cloud instances with giant GPUs for Docker, where you don't have to manually manage everything and write pages of readmes, appeals to you - this is very much about programming.
After a whole weekend of trying different things, I finally figured it out! The key was this section in the awesome articles about advanced container configuration.
I put that into a bash script and used jq to merge docker.host and other docker env settings into .vscode/settings.json. See this example here.
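In essence, the script does something like this (a sketch; the machine name is a placeholder, and docker.host is the setting the extension reads):

#!/usr/bin/env bash
set -euo pipefail
# point the docker CLI (and VS Code) at the remote daemon
eval "$(docker-machine env my-aws-instance)"
mkdir -p .vscode
[ -s .vscode/settings.json ] || echo '{}' > .vscode/settings.json
# merge the active DOCKER_HOST into the workspace settings
tmp="$(mktemp)"
jq --arg host "$DOCKER_HOST" '. + {"docker.host": $host}' \
  .vscode/settings.json > "$tmp" && mv "$tmp" .vscode/settings.json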
After running a script that generates this file, you only need to reload/relaunch VS Code in that workspace folder (where the settings were created) and, yay, everything works as expected.
I plan to add some actual samples now that I have the basics working. Unfortunately, I had to split my create and teardown steps into separate activate and deactivate hooks. Still not a bad workflow, IMO.

What's the purpose of the node-modules container in wolkenkit?

That container is built when deploying the application.
Looks like its purpose is to share dependencies across modules.
It looks like it is started as a container but nothing is apparently running, a bit like an init container.
The console says it starts/stops that component when using the respective wolkenkit start and wolkenkit stop commands.
On startup:
On shutdown:
When you run docker ps, that container cannot be found.
Can someone explain these components?
When starting a wolkenkit application, the application is boxed in a number of Docker containers, and these containers are then started along with a few other containers that provide the infrastructure, such as databases, a message queue, ...
The reason why the application is split into several Docker containers is because wolkenkit builds upon the CQRS pattern, which suggests separating the read side of an application from the application's write side, and hence there is one container for the read side, and one for the write side (actually there are a few more, but you get the picture).
Now, since you may develop on an operating system other than Linux, the wolkenkit application may run under a different operating system than the one you develop on, as within Docker it's always Linux. This means that the start command cannot simply copy the node_modules folder into the containers, as it may contain binary modules that would then be incompatible (imagine installing on Windows on the host, but running on Linux within Docker).
To avoid issues here, wolkenkit runs an npm install when starting the application inside of the containers. The problem now is that if wolkenkit did this in every single container, the start would be super slow (it's not the fastest thing on earth anyway, due to all the Docker building and starting that's happening under the hood). So wolkenkit tries to optimize this as much as possible.
One concept here is to run npm install only once, inside of a container of its own. This is the node-modules container you encountered. This container is then linked as a volume to all the containers that contain the application's code. This way you only have to run npm install once, but multiple containers can use the outcome of this command.
Since this container contains data but no code, it only has to exist; it doesn't actually do anything. This is why it gets created, but is not run.
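The pattern is essentially the classic data-only container / volumes-from approach in plain Docker. A sketch (not wolkenkit's actual commands; image and app names are placeholders):

# data-only container exposing an anonymous volume for the dependencies
docker create -v /deps/node_modules --name node-modules alpine:3.9 true
# run npm install once, writing into that shared volume
docker run --rm --volumes-from node-modules \
  -v "$PWD/package.json":/deps/package.json -w /deps node:10 npm install
# every application container reuses the same installed modules
docker run -d --volumes-from node-modules -e NODE_PATH=/deps/node_modules app-read-side
docker run -d --volumes-from node-modules -e NODE_PATH=/deps/node_modules app-write-side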
I hope this makes it a little bit clearer, and I was able to answer your question :-)
PS: Please note that I am one of the core developers of wolkenkit, so take my answer with a grain of salt.

How do you install something that needs restart in a Dockerfile?

Suppose I have installation instructions as follows:
Do something.
Reboot your machine.
Do something else.
How do I express that in a Dockerfile?
This entirely depends on why they require a reboot. For Linux, rebooting a machine would typically indicate a kernel modification, though it's possible it's for something simpler, like a change in user permissions (which would be handled by logging out and back in again). If the install is trying to make an OS-level change to the kernel, it should fail when done inside a container. By default, containers isolate and restrict what the application can do to the running host OS, which could otherwise impact the host or other running containers.
If the reboot is to force the application service to restart, you should realize that this design doesn't map well to a container, since each RUN command runs just that command in an isolated environment. And because only that command runs, any OS services that would normally be started on bootup (cron, sendmail, or your application) will not be started in the container. Therefore, you'll need to find a way to run the installation command in addition to restarting any dependent services.
The last scenario I can think of is that they want different user permissions to take effect for the logged-in user. In that case, the next RUN command will run the requested command with any access changes from prior RUN commands already applied. So there's no need to take any specific action to simulate a reboot; simply perform the install steps as if there were a complete restart between each step.
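To make that concrete, here is a sketch of how "do something, reboot, do something else" translates into a Dockerfile (the package and command names are placeholders):

FROM ubuntu:18.04
# step 1: "do something" - any daemon the installer starts dies as soon as
# this RUN's shell exits; only filesystem changes survive between layers
RUN apt-get update && apt-get install -y some-package && rm -rf /var/lib/apt/lists/*
# step 2: the "reboot" is implicit - each RUN starts from the committed
# filesystem of the previous layer, with no processes carried over
RUN some-package-setup --finish-install
# at runtime, start the service yourself as the container's main process
CMD ["some-packaged", "--foreground"]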

Moving from Docker Containers to Cloud Foundry containers

Recently I started to practice Docker. Basically, I am running a C application in a Docker container. Now I want to try Cloud Foundry and am therefore trying to understand the difference between the two.
I'll describe the application as a novice because I am.
I start the application as a service (from /etc/init.d), and during startup it reads a config file which specifies which modules to load and the IPs of the other services and its own (0.0.0.0 does not work, so I have to give the actual IP).
I had to manually update the IP and some details in the config file when the container started, so I wrote a startup script that makes all the changes when the container starts and then runs the service start command.
Now, moving on to Cloud Foundry, the first thing I was not able to find is how to deploy a C application; then I found a C buildpack and a binary buildpack option. I still have to try those, but what I am not able to understand is how I can provide a startup script to a Cloud Foundry container - in brief, how to achieve what I was doing with Docker.
The last option I have is to use Docker containers in Cloud Foundry, but I want to understand whether I can achieve what I described above.
I hope I was clear enough to explain my doubt.
Help appreciated.
An old question, but a lot has changed since this was posted:
Recently I started to practice Docker. Basically, I am running a C application in a Docker container. Now I want to try Cloud Foundry and am therefore trying to understand the difference between the two.
...
The last option I have is to use Docker containers in Cloud Foundry, but I want to understand whether I can achieve what I described above.
There's nothing wrong with using Docker containers on CF. If you've already got everything set up to run inside a Docker container, being able to run that on CF gives you yet another place you can easily deploy your workload.
While these are pretty minor, there are a couple of requirements for your Docker container, so it's worth checking them to make sure it can run on CF.
https://docs.cloudfoundry.org/devguide/deploy-apps/push-docker.html#requirements
Anyways, I am not working on this now, as CF is not suitable for the project. It's a SIP application and CF only accepts HTTP/S requests.
OK, the elephant in the room. This is no longer true. CF has support for TCP routes. These allow you to receive TCP traffic directly to your application. This means it's no longer just HTTP/S apps that are suitable for running on CF.
Instructions to set up your CF environment with TCP routing: https://docs.cloudfoundry.org/adminguide/enabling-tcp-routing.html
Instructions to use TCP routes as a developer: https://docs.cloudfoundry.org/devguide/deploy-apps/routes-domains.html#create-route-with-port
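As a sketch with the cf CLI (the app name and domain are illustrative; the TCP domain comes from whatever your platform operator configured):

# push without the default HTTP route, then map a TCP route to the app
cf push my-sip-app --no-route
cf map-route my-sip-app tcp.example.com --random-port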
Now, moving on to Cloud Foundry, the first thing I was not able to find is how to deploy a C application; then I found a C buildpack and a binary buildpack option.
Picking a buildpack is an important step. The buildpack takes your app and prepares it to run on CF. A C buildpack might sound nice, as it would take your source code and build and run it, but it's going to get tricky, because your C app likely depends on libraries - libraries that may or may not be installed in the container.
If you're going to go this route, you'll probably need to use CF's multi-buildpack support. This lets you run multiple buildpacks. If you pair this with the Apt buildpack, you can install the packages that you need so that any required libraries are available for your app as it's compiled.
https://docs.cloudfoundry.org/buildpacks/use-multiple-buildpacks.html
https://github.com/cloudfoundry/apt-buildpack
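A sketch of what that pairing might look like in a manifest (the C buildpack URL is a placeholder for whichever one you choose, and libuv1-dev is just a hypothetical library dependency):

# manifest.yml
applications:
- name: my-c-app
  buildpacks:
  - https://github.com/cloudfoundry/apt-buildpack
  - https://example.com/your-c-buildpack

# apt.yml (read by the Apt buildpack)
---
packages:
- libuv1-dev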
Using the binary buildpack is another option. In this case, you'd build your app locally, perhaps in a Docker container or on an Ubuntu VM (it needs to match the stack used by your CF provider, i.e. cf stacks, currently Ubuntu Trusty or Ubuntu Bionic). Once you have a binary, or a binary plus a set of libraries, you can simply cf push the compiled artifacts. The binary buildpack will "run" (it actually does nothing), and then your app will be started with the command you specify.
My $0.02 only, but the binary buildpack is probably the easier of the two options.
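A sketch of that flow (names are placeholders; ubuntu:18.04 stands in for a Bionic-based stack):

# compile on a matching OS, e.g. in a throwaway container
docker run --rm -v "$PWD":/src -w /src ubuntu:18.04 \
  sh -c 'apt-get update && apt-get install -y gcc && gcc -o my-app main.c'
# push only the compiled artifact; the binary buildpack does (almost) nothing
cf push my-app -b binary_buildpack -c './my-app'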
what I am not able to understand is how I can provide a startup script to a Cloud Foundry container - in brief, how to achieve what I was doing with Docker.
There are a few ways you can do this. The first is to specify a custom start command. You do this with cf push -c 'command'. This would normally be used just to start your app, like './my-app', but you could also use it to do other things.
Ex: cf push -c './prep-my-app.sh && ./my-app'
Or even just call your start script:
Ex: cf push -c './start-my-app.sh'.
CF also has support for a .profile script. This can be pushed with your app (at the root of the files you push), and it will be executed by the platform prior to your application starting up.
https://docs.cloudfoundry.org/devguide/deploy-apps/deploy-app.html#profile
Normally, you'd want to use a .profile script, as you'd want to let the buildpack decide how to start your app (setting -c will override the buildpack), but in your case, with the C or binary buildpacks, it's unlikely the buildpack will be able to do that, so you'll end up having to set a custom start command anyway.
For this specific case, I'd suggest using cf push -c as it's slightly easier, but for all other cases and apps deployed with other buildpacks, I'd suggest a .profile script.
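For completeness, given that your Docker startup script rewrites IPs in a config file, a .profile sketch might look like this (the file name and placeholder token are mine; CF_INSTANCE_IP is an environment variable the platform sets in the container):

# .profile - executed by the platform before the start command runs
# substitute the container's actual IP into the app's config file
sed -i "s/LISTEN_IP_PLACEHOLDER/${CF_INSTANCE_IP}/" my-app.conf
# make any bundled shared libraries visible to the binary
export LD_LIBRARY_PATH="$HOME/libs:${LD_LIBRARY_PATH:-}"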
Hope that helps!

Sandbox command execution with docker via Ajax

I'm looking for help in this matter: what options do I have if I want to sandbox the execution of commands that are typed on a website? I would like to create an online interpreter for a programming language.
I've been looking at Docker; how would I use it? Is this the best option?
codecube.io does this. It's open source: https://github.com/hmarr/codecube
The author wrote up his rationale and process. Here's how the system works:
A user types some code in to a box on the website, and specifies the language the code is written in
They click “Run”, the code is POSTed to the server
The server writes the code to a temporary directory, and boots a docker container with the temporary directory mounted
The container runs the code in the mounted directory (how it does this varies according to the code’s language)
The server tails the logs of the running container, and pushes them down to the browser via server-sent events
The code finishes running (or is killed if it runs for too long), and the server destroys the container
The Docker container's entrypoint is entrypoint.sh, which inside a container runs:
prog=$1
<...create user and set permissions...>
sudo -u codecube /bin/bash /run-code.sh $prog
Then run-code.sh checks the extension and runs the relevant compiler or interpreter:
extension="${prog##*.}"
case "$extension" in
  "c")
    gcc $prog && ./a.out
    ;;
  "go")
    go run $prog
    ;;
  <...cut...>
The server that accepts the code examples from the web and orchestrates the Docker containers was written in Go. Go turned out to be a pretty good choice for this, as much of the server relied on concurrency (tailing logs to the browser, waiting for containers to die so cleanup could happen), which Go makes joyfully simple.
The author also details how he implemented resource limiting and isolation, along with his thoughts on security.
