Remote debugging of Docker containers using IntelliJ

I have a number of Docker containers (10), each running a Java service that makes up my system. To create these containers I use a couple of docker-compose files. Using the Docker Integration plugin for IntelliJ, I can now spin up these services on my remote server using the Docker-compose option (the images used are built outside of IntelliJ, using Gradle). Here are the steps I have taken to achieve this:
I have added a Docker server using the Docker Machine option to connect to the remote Docker daemon (message says Connection Successful).
I have added a new Docker Compose configuration, using the server, specifying my compose files, and the services I want to start.
Now that I have the system controlled through IntelliJ, I have been trying to figure out how to attach the remote debugger to each of these services so that IntelliJ will hit my breakpoints.
Will I need to add the JVM args (-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005) to each service (container) and add the usual remote debug configuration for each service? Do I need to use a different address for each service? If so, how do I add these args? Surely with the Docker Integration plugin, there is an easier way to do this.
IntelliJ Idea v2018.1.5 (Community Edition)
Docker Integration v181.5087.20
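For reference, here is a minimal sketch of the per-service wiring the question describes, with service names, image names, and host ports that are purely illustrative: each container gets the JDWP agent arguments (for example via JAVA_TOOL_OPTIONS) in the compose file, and each service publishes its debug port on a distinct host port so one IntelliJ Remote debug configuration can be created per service.

    # docker-compose.debug.yml (illustrative names and ports)
    services:
      service-a:
        image: mycompany/service-a:latest
        environment:
          # JDWP agent; on Java 9+ use address=*:5005 so the debugger can attach from outside the container
          JAVA_TOOL_OPTIONS: "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005"
        ports:
          - "5005:5005"   # host port must be unique per service
      service-b:
        image: mycompany/service-b:latest
        environment:
          JAVA_TOOL_OPTIONS: "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005"
        ports:
          - "5006:5005"   # different host port, same container-side port

Each Remote debug configuration in IntelliJ then points at the Docker host's address and the corresponding host port (5005, 5006, ...).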

Related

How do you manage the variation between local and cloud dependencies within Docker?

I have a Docker image with an application server running in it.
When I'm running in a development environment, I want to run a database server within the same Docker image.
However, in production, I want to use my cloud provider's database service to host my database server.
What is the best (preferably officially supported) way to enable this distinction?
You Don't
You don't run the DB in the same container. You run it in a separate container next to your application container (probably with docker-compose, but not required).
You run the same version as the cloud provider (or as close as you can get, because they will no doubt configure it specifically for their environment).
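As a rough illustration of that split (service names, images, and variable names below are made up, not taken from the question): run the database as a sibling container in a compose override used only in development, and point the application at the cloud provider's database in production via configuration.

    # docker-compose.yml (shared)
    services:
      app:
        image: myorg/app-server:latest
        environment:
          DB_HOST: ${DB_HOST:-db}   # defaults to the local db container

    # docker-compose.override.yml (development only)
    services:
      db:
        image: postgres:15          # pick the same engine/version the cloud provider runs, as closely as possible
        environment:
          POSTGRES_PASSWORD: devonly

In production you would set DB_HOST to the cloud database endpoint and simply not start the db service.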

Manage VSCode Remote Container as Docker-Compose Service

For the development of my Python project I have set up a Remote Development Container. The project uses, for example, MariaDB and RabbitMQ. Until recently I built and started containers for those services outside of VSCode. A few days ago, I reworked the project using Docker Compose so that I can manage the other containers using the remarkable Docker extension (Id: ms-azuretools.vscode-docker, Version: 1.22.0). That is working fine, apart from one issue I cannot figure out:
I can start all containers using compose up; however, the Python project's Remote Development Container does not stay up. Currently, I open the project folder in a second VSCode window and use the "Reopen in Container" command.
However, it would be nice if the Python project container stayed up and I could just use the "Attach Visual Studio Code" command from the Docker extension's Containers menu.
I am wondering if there is something I can add to the .devcontainer.json or some other configuration file to realize this scenario?
Any help is much appreciated!
If it helps I can post the docker-compose.yml, Dockerfile's or the .devcontainer.json, please let me know what is required.
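For context, a minimal sketch of the kind of .devcontainer.json the question refers to, assuming the Python project is a compose service named app (all names and paths here are illustrative). The usual trick for keeping such a service up after compose up is to give it a long-running command in the compose file.

    // .devcontainer/devcontainer.json (illustrative)
    {
      "name": "python-app",
      "dockerComposeFile": ["../docker-compose.yml"],
      "service": "app",
      "workspaceFolder": "/workspace",
      "shutdownAction": "none"
    }

    # matching docker-compose.yml snippet for the same service
    services:
      app:
        build: .
        volumes:
          - .:/workspace
        command: sleep infinity   # keeps the container running so VSCode can attach to it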

Gitlab CI with docker+machine - Using multiple containers to test app

I'm using Gitlab CI, configured with a docker+machine executor, to build and test my app on spot instances.
My main app requires a few microservices to be available in production as well as in the test step. All of these microservices are built and tested on the same GitLab CI server (each in its own pipeline). The output of each microservice is a docker image that is pushed to the GitLab Docker Registry.
The test step I'm trying to build:
1. Provision a spot instance (if there's no idle one), installed with the microservice docker
2. Test step
   2.1. Provision a spot instance (if there's no idle one), installed with the app docker
   2.2. Run the testing script
   2.3. Stop the app container, release the spot instance
3. Stop the microservice container, release the spot instance
I've got 2.1, 2.2, 2.3 to work by following the instructions here, but I'm not sure how to achieve the rest. I can run docker-machine explicitly in the yaml, but I'd like to use gitlab's docker+machine executor as it's configured with the credentials, limitations, offpeak settings, etc.
Is this possible with GitLab's executor? How?
What's the "correct" way to go about doing something like this? I'm sure I'm not the first one testing with microservices, but I couldn't find any info on how to do so.
You are probably looking for the CI Services functionality. They have a couple of examples of how to use a service (MySQL, PostgreSQL, Redis). If you are using another docker image, the service will get a hostname derived from the image name (e.g., tutum/wordpress will have the DNS hostnames tutum-wordpress and tutum__wordpress; for more info, refer to the details about hostnames).
There are also details about running PostgreSQL with the shell executor if you are so inclined, and there is a presentation on testing things with GitLab CI and Docker.
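As a concrete illustration of the services approach (the registry paths and the alias below are placeholders, not taken from the question): the test job pulls each microservice image from the GitLab registry as a service, and the runner makes it reachable under a predictable hostname.

    # .gitlab-ci.yml (illustrative)
    test:
      stage: test
      image: registry.gitlab.com/mygroup/myapp:latest
      services:
        - name: registry.gitlab.com/mygroup/microservice-a:latest
          alias: microservice-a   # reachable as http://microservice-a from within the job
      script:
        - ./run-tests.sh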

How to restart Oracle WebLogic on Docker containers

I'm using the official WebLogic images to run the domain in Docker containers. In the development environment I have to experiment with settings, which requires WebLogic to be restarted. However, stopping the admin server will also stop the container and all the data will be lost. How can I work around this?
It seems like my only (painful) choice is to experiment with the settings in the build-phase WLST scripts.
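One hedged sketch of a way around this, assuming the image keeps its domain under /u01/oracle/user_projects (the domain home path and image tag vary between WebLogic image versions, so treat both as assumptions): put the domain home on a named volume so configuration changes survive a restart, and restart the container instead of recreating it.

    # illustrative only; adjust the image name and domain home path to your setup
    docker volume create wls-domain
    docker run -d --name wls-admin \
      -p 7001:7001 \
      -v wls-domain:/u01/oracle/user_projects \
      oracle/weblogic:12.2.1.4

    # after changing settings, restart the same container rather than removing it
    docker restart wls-admin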

How to automate application deployment when using LXD containers?

How should applications be scripted/automatically deployed when in LXD containers?
For example, is the best way to deploy applications in LXD containers to use a bash script (which deploys an application)? How can this bash script be executed inside the container by running a command on the host?
Are there any tools/methods for doing this in a similar way to Docker recipes?
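For the narrow "run a script inside the container by executing a command on the host" part, the stock LXD CLI already covers it; a minimal sketch (container and script names are examples):

    # copy the deployment script into the container and run it from the host
    lxc file push ./deploy.sh web01/root/deploy.sh
    lxc exec web01 -- bash /root/deploy.sh

    # or stream a local script straight into a shell inside the container
    lxc exec web01 -- bash < ./deploy.sh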
In my case, I use Ansible to:
build the LXD containers (web, database, redis for example).
connect to the containers and deploy the services and code needed.
You can build your own images, for example with the services and/or code already deployed, and build specific containers from these images.
I was doing this before LXD had Ansible support (Ansible 2.2). I prefer to use SSH instead of the lxd connection when I connect to the containers to deploy services/code. The containers come with a profile where I have set up my SSH public key (to have direct SSH connections by keys, no passwords).
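A minimal sketch of that two-step flow with current Ansible (the module and connection plugin live in the community.general collection; the container name, image remote, and package below are examples):

    # playbook.yml - create the container from the host...
    - hosts: localhost
      tasks:
        - name: Launch web container
          community.general.lxd_container:
            name: web
            state: started
            source:
              type: image
              mode: pull
              server: https://images.linuxcontainers.org
              protocol: simplestreams
              alias: ubuntu/22.04    # adjust to whatever remote/alias your LXD host can pull

    # ...then configure it, here via the lxd connection plugin (SSH works just as well, as described above)
    - hosts: web                     # assumes "web" is listed in the inventory
      connection: community.general.lxd
      tasks:
        - name: Install nginx
          ansible.builtin.apt:
            name: nginx
            state: present
            update_cache: yes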
Take a look at my open source project on Bitbucket, devops_lxd_containers. It includes:
Scripts to build LXD image templates including Apache, Tomcat, and HAProxy.
Scripts to demonstrate custom application image builds, such as Apache hosting key/value content and HAProxy configured as a router.
Code to launch the containers and map ports so they are accessible to the larger network.
Code to configure HAProxy as a layer 7 proxy to route HTTP requests between boxes and containers based on URI prefix routing, using where it previously deployed and mapped ports.
At the higher level it accepts a data-driven spec and will deploy an entire environment composed of many containers spread across many hosts and hook them all up to act as a cohesive whole via a layer 7 proxy.
Extensive documentation showing how I accomplished each major step using code snippets before automating.
Code to support zero-outage upgrades using the layer 7 ability to gracefully bleed off old connections while accepting new connections at the new layer.
The entire system is built on the premise that image building is best done in layers. We build an updated Ubuntu image. From it we build a hardened Ubuntu image. From it we build a basic Apache image. From that we build an application-specific image like our apacheKV sample. The goal is to never rebuild anything more than once and to re-use common functionality, such as the basicJDK, as the source for all JDK-dependent images, so we can avoid having duplicate code in any location. I have strived to keep image or template creation completely separate from deployment and port mapping. The exception is that I could not complete creation of the layer 7 routing image until we knew everything about how other images would be mapped.
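The layering itself needs nothing exotic; a rough sketch of the idea with plain LXD commands (container names and image aliases are illustrative):

    # build a hardened base from a stock Ubuntu image
    lxc launch ubuntu:22.04 build-hardened
    lxc exec build-hardened -- bash -c "apt-get update && apt-get -y upgrade"
    lxc stop build-hardened
    lxc publish build-hardened --alias hardened-ubuntu

    # build the Apache layer from the hardened image rather than from scratch
    lxc launch hardened-ubuntu build-apache
    lxc exec build-apache -- apt-get -y install apache2
    lxc stop build-apache
    lxc publish build-apache --alias apache-base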
I've been using HashiCorp Packer with the Ansible provisioner, using ansible_connection = lxd.
Some notes here for constructing a template:
When iterating through local files on your host system you may need to be using ansible_connection = local (e.g. for stat & friends).
Using local_action in Ansible with the lxd connection still results in an action inside the container when using stat (but not with include_vars & the lookup function for files).
Using lots of debug messages in Ansible is helpful to know which local environment ansible is actually operating in.
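Roughly, the combination looks like the HCL sketch below. The option names come from Packer's lxd builder and ansible provisioner, but the exact way the lxd connection plugin is wired into the provisioner is an assumption and may need tweaking (for example passing the container name explicitly):

    # template.pkr.hcl (sketch)
    source "lxd" "base" {
      image        = "ubuntu:22.04"
      output_image = "my-template"
    }

    build {
      sources = ["source.lxd.base"]

      provisioner "ansible" {
        playbook_file   = "./playbook.yml"
        # force the lxd connection plugin instead of the default SSH connection
        extra_arguments = ["--extra-vars", "ansible_connection=lxd"]
      }
    }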
I'm surprised no one here mentioned Canonical's own tool for managing LXD.
https://juju.is
It is super simple and well supported; the only caveat is that it requires you to turn off IPv6 on the LXD/LXC side of things (in the network bridge).
snap install juju --classic
juju bootstrap localhost
From there you can learn about Juju models and deploy machines or prebaked images like Ubuntu:
juju deploy ubuntu
