Suppose I have installation instructions as follows:
Do something.
Reboot your machine.
Do something else.
How do I express that in a Dockerfile?
This depends entirely on why they require a reboot. On Linux, rebooting a machine typically indicates a kernel modification, though it's possible it's for something simpler like a change in user permissions (which would otherwise be handled by logging out and back in again). If the install is trying to make an OS-level change to the kernel, it should fail inside a container: by default, containers isolate and restrict what the application can do to the running host OS, so changes that would impact the host or other running containers are blocked.
If the reboot is to force the application service to restart, realize that this design doesn't map well to a container, since each RUN command runs just that command in an isolated environment. That also means any OS services that would normally be started on boot (cron, sendmail, or your application) will not be started in the container. Therefore, you'll need to find a way to run the installation command and also restart any dependent services.
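A minimal sketch of that approach, with a hypothetical package and service name: install at build time (when no services are running), and start the service in the foreground when the container starts.
FROM ubuntu:22.04
# Install the application at build time; nothing is running during the build.
RUN apt-get update && apt-get install -y myapp
# Start the service in the foreground at container start, instead of relying
# on a reboot to bring it back up.
CMD ["myapp", "--foreground"]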
The last scenario I can think of is that they want changed user permissions to take effect for the logged-in user. In that case, the next RUN command will run with any access changes made by prior RUN commands, so there's no need to take any specific action to reboot: simply perform the install steps as if there were a complete restart between each step.
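For illustration, a minimal sketch with hypothetical user and group names, where a group membership added in one RUN step is already in effect in the next, with no reboot or re-login:
FROM ubuntu:22.04
# Create a group and a user that belongs to it.
RUN groupadd mygroup && useradd -m -G mygroup appuser
# Switch to that user; the membership is already active for later steps.
USER appuser
# Prints appuser's groups, including mygroup.
RUN id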
I recently started using Docker on a Raspberry Pi. I am using a set of Docker containers that run constantly, e.g. pihole, node red, mosquitto, etc. I know that if you have to restart the Raspberry Pi, you should stop the containers first and then restart them once the Pi has rebooted. But in some other tutorials I see them check for updates to the Raspberry Pi OS by running sudo apt update and then install the updates if any are available. I want to know the following:
Should I stop the containers before checking and installing updates for the raspberry pi OS?
Do you need to stop all the containers that depend on each other before updating any one of the containers, or just stop the container that needs to be updated?
Should I stop the containers before checking and installing updates for the raspberry pi OS?
Generally, no, updates on the host won't have any impact on your containers. The exception is if you install an update to Docker, which may restart the Docker daemon. This can cause all your containers to exit, although they will come back up automatically if you have configured them with a restart policy. And of course a kernel update will require a host reboot before it becomes active.
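For example, a restart policy can be set when a container is created, or added to an existing container (the container names are taken from your question, the image name is illustrative):
docker run -d --restart unless-stopped --name pihole pihole/pihole
docker update --restart unless-stopped nodered mosquitto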
Do you need to stop all the containers that depend on each other before updating any one of the containers, or just stop the container that needs to be updated?
This really depends on how you have designed your applications.
Consider a web application that talks to a database. If you were to remove the database container and create a new one with updated software or configuration, depending on how the web application is written:
It might crash, requiring you to restart it manually.
If you have configured it with a restart policy, Docker might take care of restarting it for you.
If the application has reconnect/retry logic, it might simply wait until the database is back up and available.
The application itself might require an update before it is able to run correctly.
It's up to you to understand the dependencies in your software stack and determine how best to handle component upgrades.
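As a rough sketch with hypothetical container and image names, replacing the database container while the web container keeps running might look like this:
# Replace the database container with one running an updated image.
docker stop mydb && docker rm mydb
docker run -d --name mydb --network mynet postgres:16
# If the web app has no reconnect logic, restart it yourself:
docker restart myweb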
I am trying to test software that exports a file at certain intervals.
I thought of using a Docker container so I could set the desired time rather than use the system time.
The problem is that I lack the permissions to change the container's time, and I get the following error message:
PS C:\usr\src\app> Set-Date -Date (Get-Date).AddDays(3)
Set-Date : A required privilege is not held by the client
At line:1 char:1
Is it possible to do this in a Windows Docker container?
My base image is mcr.microsoft.com/dotnet/framework/sdk:4.8-windowsservercore-ltsc2019
Thank you in advance!
What I suspect you are running into here is that recent Windows container versions now run as a normal user, not an admin, so when the container is running, your user context does not have permission to modify the system date.
You should be able to get around this by modifying your Dockerfile to use the admin user instead, like so:
USER ContainerAdministrator
Keep in mind, of course, that this does increase the security risk, as all processes now run as admin, but if you are only using the container locally (not running a web server or anything) you should be fine.
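Using the base image from your question, a minimal sketch of the Dockerfile change would be:
FROM mcr.microsoft.com/dotnet/framework/sdk:4.8-windowsservercore-ltsc2019
# Run subsequent build steps and the container's processes as the built-in admin account.
USER ContainerAdministrator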
That container is built when deploying the application.
Looks like its purpose is to share dependencies across modules.
It looks like it is started as a container but nothing is apparently running, a bit like an init container.
The console says it starts/stops that component when using the respective wolkenkit start and wolkenkit stop commands.
On startup:
On shutdown:
When you run docker ps, that container cannot be found:
Can someone explain these components?
When starting a wolkenkit application, the application is boxed in a number of Docker containers, and these containers are then started along with a few other containers that provide the infrastructure, such as databases, a message queue, ...
The reason the application is split into several Docker containers is that wolkenkit builds on the CQRS pattern, which suggests separating an application's read side from its write side; hence there is one container for the read side and one for the write side (actually there are a few more, but you get the picture).
Now, since you may develop on an operating system other than Linux, the wolkenkit application may run under a different operating system than the one you develop on, as within Docker it's always Linux. This means that the start command cannot simply copy the node_modules folder into the containers, as it may contain binary modules that would then not be compatible (imagine installing on Windows on the host, but running on Linux within Docker).
To avoid issues here, wolkenkit runs an npm install when starting the application inside of the containers. The problem now is that if wolkenkit did this in every single container, the start would be super slow (it's not the fastest thing on earth anyway, due to all the Docker building and starting that's happening under the hood). So wolkenkit tries to optimize this as much as possible.
One concept here is to run npm install only once, inside of a container of its own. This is the node-modules container you encountered. This container is then linked as a volume to all the containers that contain the application's code. This way you only have to run npm install once, but multiple containers can use the outcome of this command.
Since this container contains data but no code, it only has to exist; it doesn't actually do anything. This is why it gets created, but is not run.
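The general idea resembles the classic data-container pattern. A rough sketch with hypothetical image names (not wolkenkit's actual commands):
# A container that only holds the installed node_modules as a volume; it is never started.
docker create -v /app/node_modules --name node-modules deps-image
# Application containers mount that volume instead of each running npm install.
docker run -d --volumes-from node-modules read-side-image
docker run -d --volumes-from node-modules write-side-image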
I hope this makes it a little bit clearer, and I was able to answer your question :-)
PS: Please note that I am one of the core developers of wolkenkit, so take my answer with a grain of salt.
I am using MongooseIM 3.2.0, built from source, on an Ubuntu server. Below are my concerns:
What is the best way to run MongooseIM as a service so that it automatically restarts if MongooseIM crashes or the system restarts?
How do I interact via the terminal with an already running MongooseIM instance on the Ubuntu server, like "mongooseimctl live"? My guess is that running "mongooseimctl live" will try to create another instance. I just want to watch the live logs and interact with the server, and don't want to keep scrolling through long log files for this purpose.
I apologize if the answer to above is obvious but just want to follow the best guidance.
mongooseimctl live or mongooseimctl foreground is mostly useful for development or smoke testing a deployment (unless you're running inside a container). For real world use cases you should start the server in the background with mongooseimctl start.
Back to the container - the best approach for containerised applications is to run them in the foreground, therefore in a container startup script use mongooseimctl foreground.
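In a Dockerfile, that typically boils down to something like this sketch:
# Keep the server in the foreground as the container's main process.
CMD ["mongooseimctl", "foreground"]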
Once the server is running (no matter how it was started) attaching a shell to troubleshoot issues can be done with mongooseimctl debug. This is the command to use when you get the Protocol 'inet_tcp': the name mongooseim#localhost seems to be in use by another Erlang node error. Be careful if it's a production environment - you can easily take the server down with access to this shell.
If you're just interested in watching logs, with no interactive access to the server internals that the shell offers, a simple tail -f /your-configured-mongooseim-log-dir/* should be enough.
Ubuntu nowadays uses systemd for managing its services' lifetimes. A systemd .service file can be found at https://github.com/esl/MongooseIM/blob/master/tools/pkg/platforms/debian_stretch/files/build/mongooseim.service - we use it for packaging into Debian/Ubuntu .deb packages.
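Once such a unit file is installed, a Restart=on-failure line in its [Service] section is what makes systemd bring the server back after a crash, and enabling the unit makes it start on boot. Roughly (paths are illustrative):
sudo cp mongooseim.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now mongooseim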
I'm new to Docker and was wondering if it was possible (and a good idea) to develop within a docker container.
I mean create a container, execute bash, install and configure everything I need, and start developing inside the container.
The container becomes then my main machine (for CLI related works).
When I'm on the go (or when I buy a new machine), I can just push the container, and pull it on my laptop.
This solves the problem of having to keep and synchronize your dotfiles.
I haven't started using Docker yet, so is this realistic or something to avoid (disk space problems and/or push/pull timing issues)?
Yes. It is a good idea, with the correct set-up. You'll be running code as if it was a virtual machine.
The Dockerfile configuration for creating a build system is not polished and will not expand shell variables, so pre-installing applications may be a bit tedious. On the other hand, after building your own image to create new users and a working environment, it won't be necessary to build it again, plus you can mount your own file system with the -v parameter of the run command, so you can have the files you need available both on the host and in the container. It's versatile.
sudo docker run -t -i -v /home/user_name/Workspace/project:/home/user_name/Workspace/myproject <image-name>
I'll play the contrarian and say it's a bad idea. I've done work where I've tried to keep a container "long running" and have modified it, but then accidentally lost it or deleted it.
In my opinion containers aren't meant to be long running VMs. They are just meant to be instances of an image. Start it, stop it, kill it, start it again.
As Alex mentioned, it's certainly possible, but in my opinion goes against the "Docker" way.
I'd rather use VirtualBox and Vagrant to create VMs to develop in.
A Docker container for development can be very handy. Depending on your stack and preferred IDE, you might want to keep the editing part outside, on the host, and instead mount the directory with the sources from the host into the container, as per Alex's suggestion. If you do so, beware of potential performance issues on Mac OS X with boot2docker.
I would not expect much from a workflow that pushes images around to sync between dev environments. IMHO, keeping Dockerfiles together with the code and syncing via SCM is a more straightforward direction to start with. I also keep supporting Makefiles to build image(s) / run container(s) in the same place.
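A minimal sketch of that direction, with illustrative image name and paths: a Dockerfile checked in next to the sources, plus the two commands to build and enter the dev container.
# Build the dev image from the Dockerfile kept with the code.
docker build -t mydev .
# Start a throwaway container with the sources mounted from the host.
docker run -it --rm -v "$PWD":/workspace -w /workspace mydev bash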