JVM list empty after selecting remote docker container (no kubernetes) - docker

I have the same problem as described in JProfiler remote process list empty after selecting container. But because I use plain Docker rather than Kubernetes, I posted a new question.
JProfiler 12 does not list the available JVM in my docker container. There are no error messages at all; the list is simply empty.
I have multiple docker containers hosting a Java process, and interestingly the ones that were built with the gradle jib plugin are not shown in the list, while the ones built differently are shown. Is this just a coincidence?
[UPDATE 27. Oct. 21]
No, it is not related to jib. I built the same Spring Boot application with a good old Dockerfile and docker build, but JProfiler is still unable to find the JVM inside the docker container.

Ok, what worked for me is to use a different JDK inside the container. It seems to be related to the distroless JDK that I used (although another image I tried did not work either). Using eclipse-temurin:11 as the base image of the docker container fixes the problem for me.
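For reference, a minimal sketch of the kind of Dockerfile that worked (the jar path is hypothetical), presumably because a full JDK image ships tooling that the distroless image lacks:

# Full JDK base image instead of distroless, so JProfiler can discover the JVM
FROM eclipse-temurin:11
COPY build/libs/app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]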

Related

Self updating docker stack

I have a docker stack deployed with 20+ services which comprise my application. I would like to know whether there is a way to update this stack with the latest changes to the software from within one of the containers running as part of the stack.
Approach I have tried:
In one of the containers for a service, mounted the docker socket and the /usr/bin/docker binary, and downloaded the latest compose file from the server.
Ran a script which downloads the latest images.
Initiated a docker stack deploy with the new compose file.
Everything works fine this way, but if the service running this update process itself has an update, and the docker stack deploy tries to recreate this service before any other service in the stack, then the stack update fails.
Any suggestions or alternative approaches for this?
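A minimal sketch of the update flow described above (the URL, image names, and stack name are hypothetical):

#!/bin/sh
# Runs inside the updater container, which has /var/run/docker.sock
# and /usr/bin/docker mounted from the host (as described above).

# 1. Fetch the latest compose file from the server
curl -fsSL https://example.com/deploy/docker-compose.yml -o /tmp/docker-compose.yml

# 2. Pull the latest images (one docker pull per image in the compose file)
docker pull myrepo/service-a:latest
docker pull myrepo/service-b:latest

# 3. Redeploy the stack; swarm only restarts services whose definition changed
docker stack deploy -c /tmp/docker-compose.yml mystack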
There is no out-of-the-box solution for docker swarm mode (something like Watchtower for standalone docker). I think you already found the best solution for doing this automatically. I would suggest you put the update container (the one that is updating the services) on an ignore list. Then, on one of your manager nodes, create a cron job that updates that one container. I know this is not a perfect solution, but it should work.
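For the cron part, a minimal sketch for a manager node's crontab (the service and image names are hypothetical):

# Nightly: force-update only the updater service itself, which the
# updater's own ignore list skips
0 3 * * * docker service update --force --image myrepo/updater:latest mystack_updater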
The standard way to do this is to build a new Docker image that contains your new application code. Tag it (as in the docker build -t argument) with some unique version, like a source control tag or date stamp. Start a new container with the new application code, then stop and delete the old container.
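For example, a sketch of that replace-don't-upgrade flow (names and versions are hypothetical):

# Build and tag a new image with a unique version
docker build -t myapp:2018-06-01 .

# Start a new container from the new image
docker run -d --name myapp-2018-06-01 myapp:2018-06-01

# Stop and delete the old container
docker stop myapp-2018-05-15
docker rm myapp-2018-05-15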
As a general rule you do not upgrade the software inside a running container. Delete the old container and start a new container with the software and version you want. Also, this is generally managed by an operator, a continuous deployment system, or an orchestration system, not by the container itself. (Mounting the Docker socket into a container is a significant security exposure.)
(Imagine setting up a second copy of your cluster that works exactly the same way as your production cluster, except that it has the software you want to deploy tomorrow. You don't want your production cluster picking that up on its own until you've tested it. This scheme should give you a reproducible deployment setup so that it's easy to start that pre-production cluster, but also give you control over which specific versions are running where.)

Can't find a good docker image for windows version 14393

I am trying to set up a docker image for an MVC5 website to deploy to my Service Fabric cluster, which is based on Windows Server 2016 with Containers.
It seems that every image with IIS configured is based on a Windows build other than 14393, and when I deploy those to Service Fabric they fail to start on my Windows servers.
Am I missing something here? Does it matter what server the dockerfile runs on? So far it seems impossible to get a simple site up and running in a docker container on my Service Fabric cluster. I spent over a day with microsoft/windowsservercore and it just won't work, and there seems to be no way to enable failed request tracing on it, because attempting to install Web-Server with all submodules fails.
If you go to the Docker registry, find the image, and navigate to the TAGS tab, you can find all image versions and their respective builds.
For ASPNET MVC, the image microsoft/aspnet with tag 4.7.1-windowsservercore-10.0.14393.1884 is probably the one you need.
For IIS image, the image microsoft/iis with tag windowsservercore-10.0.14393.1944 might be suitable for you, you might have to add the missing packages for your application.
The problem is likely that you are trying to use the latest image, which won't be compatible. When you create the Dockerfile for your docker image,
Instead of using FROM microsoft/aspnet
you should use FROM microsoft/aspnet:4.7.1-windowsservercore-10.0.14393.1884
with the image tag after the name; otherwise you will use the latest version, which is not always compatible and should be avoided.
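A minimal Dockerfile sketch with the tag pinned (the site path is hypothetical):

# Pin the base image to the 10.0.14393 build that matches the host OS
FROM microsoft/aspnet:4.7.1-windowsservercore-10.0.14393.1884
COPY ./MyMvcSite /inetpub/wwwroot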

Show configuration of Docker container

So, I ran a docker image with certain settings a while ago. In the meantime I updated my container settings via "docker update".
Now I want to see what options/configurations (e.g. cpuset, stack, swap) are currently configured for my container.
Is there a docker command to check this?
If not, (why the hell isn't there one, and) where exactly can I find this information?
I am running docker 18.03.1-ce on debian 9.4.
Greetings,
Johannes
I found it out by myself.
To get detailed information about a container's settings, one can use:
docker inspect [OPTIONS] <container-id>
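For example, to check specific settings such as the cpuset or memory limits, a sketch using --format (the container name is hypothetical):

# Full configuration dump as JSON
docker inspect mycontainer

# Only selected HostConfig fields: cpuset, memory limit, swap limit
docker inspect --format 'cpuset={{.HostConfig.CpusetCpus}} mem={{.HostConfig.Memory}} swap={{.HostConfig.MemorySwap}}' mycontainer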

How to automate application deployment when using LXD containers?

How should applications be scripted/automatically deployed when in LXD containers?
For example, is the best way to deploy applications in LXD containers to use a bash script (which deploys an application)? How can this bash script be executed inside the container by running a command on the host?
Are there any tools/methods for doing this, similar to Docker recipes?
In my case, I use Ansible to:
build the LXD containers (web, database, redis for example).
connect to the containers and deploy the services and code needed.
You can also build your own images, for example with the services and/or code already deployed, and build specific containers from these images.
I was doing this before LXD had Ansible support (Ansible 2.2). I prefer to use ssh instead of the lxd connection when I connect to the containers to deploy services/code. The containers come with a profile where I had set up my ssh public key (to have direct ssh connection by keys, no passwords).
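For the run-a-script-from-the-host part of the question, a minimal sketch with plain lxc commands (the container and script names are hypothetical):

# Copy the deploy script into the container, then run it there from the host
lxc file push deploy.sh web/root/deploy.sh
lxc exec web -- bash /root/deploy.sh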
Take a look at my open source project on Bitbucket, devops_lxd_containers. It includes:
Scripts to build lxd image templates including Apache, tomcat, haproxy.
Scripts to demonstrate custom application image builds, such as Apache hosting key/value content, and haproxy configured as a router.
Code to launch the containers and map ports so they are accessible to the larger network.
Code to configure haproxy as a layer 7 proxy to route http requests between boxes and containers based on URI prefix routing, using the ports it previously deployed and mapped.
At the higher level, it accepts a data-driven spec and will deploy an entire environment composed of many containers spread across many hosts, and hook them all up to act as a cohesive whole via a layer 7 proxy.
Extensive documentation showing how I accomplished each major step using code snippets before automating.
Code to support zero-outage upgrades, using the layer 7 ability to gracefully bleed off old connections while accepting new connections on the new containers.
The entire system is built on the premise that image building is best done in layers. We build an updated Ubuntu image. From it we build a hardened Ubuntu image. From it we build a basic Apache image. From it we build an application-specific image like our apacheKV sample. The goal is to never rebuild anything more than once and to re-use common functionality, such as the basicJDK, as the source for all JDK-dependent images, so we can avoid having duplicate code in any location. I have strived to keep image or template creation completely separate from deployment and port mapping. The exception is that I could not complete creation of the layer 7 routing image until we knew everything about how other images would be mapped.
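A hedged sketch of what one layering step can look like with plain LXD commands (the image aliases are hypothetical):

# Start from the previous layer, add this layer's changes, publish as a new image
lxc launch ubuntu-hardened build-apache
lxc exec build-apache -- sh -c "apt-get update && apt-get install -y apache2"
lxc stop build-apache
lxc publish build-apache --alias apache-base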
I've been using HashiCorp Packer with the Ansible provisioner, using ansible_connection = lxd.
Some notes on constructing a template:
When iterating through local files on your host system, you may need to use ansible_connection = local (e.g. for stat & friends).
Using local_action in Ansible with the lxd connection still performs the action inside the container when using stat (but not with include_vars & the lookup function for files).
Using lots of debug messages in Ansible is helpful for knowing which environment Ansible is actually operating in.
I'm surprised no one here mentioned Canonical's own tool for managing LXD: Juju.
https://juju.is
It is super simple and well supported; the only caveat is that it requires you to turn off IPv6 on the LXD/LXC side of things (in the network bridge).
snap install juju --classic
juju bootstrap localhost
From there you can learn about juju models and deploy machines or prebaked images like the Ubuntu OS image:
juju deploy ubuntu

Docker, I have one folder that contains the application server. What can be used as a container?

I want to ask: if I have one folder that contains the application server (Axis2, Tomcat, WSO2, mongodb, and a jms-consumer), what can be used as a container?
Is Docker an application installer? One which packages the entire application so that a single installer file is then used, for example server.exe for Windows or server.deb for Ubuntu?
Could you help explain it?
Docker as an application installer?
No, docker is a platform which manages containers (isolated user/process/disk spaces running on the host kernel), built around building, shipping and running them (Containers as a Service).
The best practice is to isolate each part of your global service in its own container, both because of the PID 1 zombie reaping issue (detailed in "Use of Supervisor in docker") and in terms of ease of management and update.
If each component only represents a Tomcat, a MongoDB, a..., each one is easier to manage/debug, instead of having one giant container.
Also, you can stop/update one without necessarily impacting all the other ones.
The installation-like part is rather the description of your environment (both in terms of OS and of the applications you want to add to a container) with the Dockerfile: a description of what your environment will need to run.
That helps build an image (a sort of archive of all the files you need), from which you docker run a container.
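A minimal illustration of that flow (the image and file names are hypothetical):

# Dockerfile: describes the environment (base OS + your application)
FROM tomcat:9
COPY myapp.war /usr/local/tomcat/webapps/

# Build an image from the Dockerfile, then run a container from that image
docker build -t myapp .
docker run -d -p 8080:8080 myapp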
Right now, those containers only run as Linux machines on Linux kernel hosts (or on Windows, through a Linux VM).
You don't yet have pure Windows images/containers that run on Windows (it is in progress, with Windows Server 2016).
So can you just take what you have in one giant folder and put it in a docker container?
Not directly. The goal of a Dockerfile is to describe how you would install what you need.
Then you docker build, and from the image you get, you docker run.
But in order for docker to correctly manage the lifecycle of that container, it is best if the container is limited to one process (instead of trying to run everything, like a webapp server, a mongodb, and so on, in the same container space).
That means:
describing each of the components of your system in a separate Dockerfile (building separate images);
running those containers in a way that they can see and communicate with each other (see the sketch below).
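A hedged sketch of that second point, using a user-defined network so containers resolve each other by name (all names are hypothetical):

# Containers on the same user-defined network can reach each other by container name
docker network create appnet
docker run -d --network appnet --name mongo mongo
docker run -d --network appnet --name webapp -e MONGO_HOST=mongo mywebapp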
You have an example of a complex multi-component system in my project: b2d.
