Docker volume plugin: Where are the logs?

I am trying to learn how to use the RClone Docker volume plugin to declutter my mounting strategy. Since most of my storage is remote rather than on-device, I had previously just used bind mounts pointing at the Linux mount points that RClone provided via fstab.
To make that a little cleaner and to keep the configuration in one place, I am largely using Docker Compose, and I am now starting to add the RClone plugin to those configurations.
The problem is: how do I get the logs? So far I haven't been able to interact with the RClone plugin at all beyond enabling or disabling it and setting a few default arguments; passing --log-file ... caused an error. Since docker logs already exists as a command, I assume there must be a way to query plugin logs too.
But how? I installed the plugin with the alias rclone.
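
For context, roughly the setup in question (the exact plugin reference on Docker Hub is an assumption here), plus, as a hedged guess, where managed-plugin output typically ends up on a systemd host, since plugins don't show up under docker logs:

# Hypothetical sketch: install the plugin under the alias "rclone"
docker plugin install rclone/docker-volume-rclone --alias rclone --grant-all-permissions
# managed plugins log through the Docker daemon, so on a systemd host their
# output should appear in the daemon's journal rather than in "docker logs"
journalctl -u docker.service | grep rclone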

Related

How can I keep a connector's configuration between restarts when using the confluentinc/cp-server-connect Docker image extended with a new connector

I'm using confluentinc/cp-server-connect (https://hub.docker.com/r/confluentinc/cp-server-connect) with the Elasticsearch sink connector (https://www.confluent.io/hub/confluentinc/kafka-connect-elasticsearch), which I added via a Dockerfile by rebuilding the image, and it works just fine. I'm configuring the connector using HTTP requests, as done in this tutorial: https://www.confluent.io/blog/kafka-elasticsearch-connector-tutorial/.
My problem is that I couldn't find a way to keep the connector configuration I set when stopping and removing the Docker container running this image.
I couldn't find any mention of persisting configuration in the image's documentation on Docker Hub or by googling. I also tried manually searching the image for where this configuration might be stored, but had no luck. Where should I point a Docker volume to save this configuration, or is the configuration kept somewhere else, like in a specific Kafka topic?
Yes, the configurations are kept in a Kafka topic; the Connect container doesn't store them itself.
Therefore, as long as you don't remove the Kafka (or ZooKeeper) containers or wipe their data, your configs will be maintained.
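As a quick sanity check, a hedged sketch of where to look; the cp images name this topic via the CONNECT_CONFIG_STORAGE_TOPIC environment variable (commonly connect-configs), and the broker container name below is a placeholder:

# Hypothetical check: dump the Connect config topic to confirm the connector
# configuration really lives in Kafka (container and topic names are placeholders)
docker exec -it kafka kafka-console-consumer \
  --bootstrap-server localhost:9092 \
  --topic connect-configs --from-beginning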

Use VSCode remote development on docker image without local files

Motivation
As of now, we are using five Docker containers (MySQL, PHP, static...) managed by docker-compose. We only need to access one of them. We currently keep a local copy of all the data and sync it from Windows to the container, but that is very slow, and VSCode on Windows sometimes randomly locks files, causing git rebase origin/master to end in very unpleasant ways.
Desired behaviour
Use VSCode Remote Development extension to:
Edit files inside the container without any mirrored files on Windows
Run git commands (checkout, rebase, merge...)
Run build commands (make, ng, npm)
Still keep Windows, as for many developers it is the preferred platform.
Question
Is it possible to develop inside a docker container using VSCode?
I have tried to follow the official guide, but it seems to require mirrored files. We also use WSL.
As FSCKur points out, this is the exact scenario VSCode dev containers are supposed to address, but on Windows I've found the performance to be unusable.
I've settled on running VSCode and Docker inside a Linux VM on Windows, and I've seen roughly a 96% time saving on things like starting a server and watching code for changes, which makes this setup my preferred way of working now.
The standardisation of devcontainer.json and being able to use github codespaces if you're away from your normal dev machine make this whole setup a pleasure to use.
See https://stackoverflow.com/a/72787362/183005 for a detailed timing comparison and setup details.
This sounds like exactly what I do. My team uses Windows on the desktop, and we develop a containerised Linux app.
We use VSCode dev containers. They are an excellent solution for the scenario.
You may also be able to SSH to your Docker host and code on it, but in my view this is the weaker option, because you want to keep all customisation "contained": I have installed a few quality-of-life packages in my dev container which I'd prefer to keep out of my colleagues' environments and off the Docker host.
We have access to the docker host, so we clone our source on the docker host and mount it through. We also bind-mount folders from the docker host for SQL and Redis data - but that could be achieved with docker volumes instead. IIUC, the workspace folder itself does have to be a bind-mount - in fact, no alternative is allowed in the devcontainer.json file. But since you need permission anyway on the docker daemon, this is probably achievable.
All source code operations happen in the dev container, i.e. in Linux. We commit and push from there, we edit our code there. If we need to work on the repo on our laptops, we pull it locally. No rcopy, no SCP - github is our "sync" mechanism. We previously used vagrant and mounted the source from Windows - the symlinks were an absolute pain for us, but probably anyone who's tried mounting source code from Windows into Linux will have experienced pain over some element or other.
VSCode in a dev container is very similar to the local experience. You will get bash in the terminal; realistically, you probably can't work like this without touching bash. However, you can install PowerShell 7 in the container, and/or a 'better' shell (opinion mine) such as zsh.
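A rough sketch of the host-side workflow described above; the host name, repository URL, and paths are placeholders rather than anything from the original setup:

# Hypothetical sketch: clone the source on the Docker host, then bind-mount
# that checkout into the dev container's workspace (names are placeholders)
ssh dev-docker-host
git clone git@github.com:example/app.git /srv/src/app
# the devcontainer/compose config then bind-mounts /srv/src/app into the
# container, and all git and build commands run in the container's terminal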

does docker remote api support creating a container using a docker-compose file?

I have an app that creates Docker containers using the Docker remote API, which is done using this library.
So far it is working fine with simple configuration options for container creation. Now I need to create the container with many more config options, so I'm wondering if I can use a docker-compose file. This library is based on v1.23 of the Docker remote API spec; does the Docker remote API support creating a container using a compose file?
I cannot find such an option in the documentation, but I'm wondering if I'm looking in the wrong place.
No; Docker Compose itself is an application that uses the API. You'd need to run docker-compose up or something similar as a shell command if you wanted to use it directly.
(You might be able to hack into its internals if you have a Python program, but not from Java.)
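For illustration, a minimal sketch of what running Compose as a shell command could look like; the compose file path and project name are placeholders:

# Hypothetical sketch: drive Compose from the shell instead of the remote API
docker-compose -f /path/to/docker-compose.yml -p myproject up -d
# tear the stack down again when done
docker-compose -f /path/to/docker-compose.yml -p myproject down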

Using custom docker containers in Dataflow

From this link I found that Google Cloud Dataflow uses Docker containers for its workers: Image for Google Cloud Dataflow instances
I see it's possible to find out the image name of the docker container.
But, is there a way I can get this docker container (ie from which repository do I go to get it?), modify it, and then indicate my Dataflow job to use this new docker container?
The reason I ask is that we need to install various C++, Fortran, and other library code in our Docker images so that the Dataflow jobs can call them, but these installations are very time consuming, so we don't want to use the "resource" property option in df.
Update for May 2020
Custom containers are only supported within the Beam portability framework.
Pipelines launched within the portability framework currently must pass --experiments=beam_fn_api, either explicitly (a user-provided flag) or implicitly (for example, all Python streaming pipelines pass it).
See the documentation here: https://cloud.google.com/dataflow/docs/guides/using-custom-containers?hl=en#docker
There will be more Dataflow-specific documentation once custom containers are fully supported by the Dataflow runner. For support of custom containers in other Beam runners, see: http://beam.apache.org/documentation/runtime/environments.
The Docker containers used for the Dataflow workers are currently private and can't be modified or customized.
In fact, they are served from a private Docker repository, so I don't think you're able to pull them onto your machine.
Update Jan 2021: Custom containers are now supported in Dataflow.
https://cloud.google.com/dataflow/docs/guides/using-custom-containers?hl=en#docker
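As a rough sketch of what that looks like in practice; the image name, project, bucket, and the exact flag spellings are assumptions, so check the linked docs for the current option names:

# Hypothetical sketch: build and push a custom SDK container, then point the
# pipeline at it (names and flags are assumptions; see the docs above)
docker build -t gcr.io/my-project/beam-worker:latest .
docker push gcr.io/my-project/beam-worker:latest
python my_pipeline.py \
  --runner=DataflowRunner \
  --project=my-project \
  --region=us-central1 \
  --temp_location=gs://my-bucket/tmp \
  --experiments=use_runner_v2 \
  --sdk_container_image=gcr.io/my-project/beam-worker:latest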
You can generate a template from your job (see https://cloud.google.com/dataflow/docs/templates/creating-templates for details), then inspect the template file to find the workerHarnessContainerImage used.
I just created one for a job using the Python SDK and the image used there is dataflow.gcr.io/v1beta3/python:2.0.0.
Alternatively, you can run a job, then ssh into one of the instances and use docker ps to see all running docker containers. Use docker inspect [container_id] to see more details about volumes bound to the container etc.
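A short sketch of that inspection path; the instance name and zone are placeholders:

# Hypothetical sketch: SSH into a Dataflow worker VM and look at its containers
gcloud compute ssh dataflow-worker-instance --zone=us-central1-a
docker ps                        # list the running containers on the worker
docker inspect [container_id]    # show image, mounts, and other details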

How to automate application deployment when using LXD containers?

How should applications be scripted/automatically deployed when in LXD containers?
For example, is the best way to deploy applications in LXD containers to use a bash script (which deploys the application)? How do you execute such a bash script inside the container by running a command on the host?
Are there any tools/methods for doing this in a way similar to Docker recipes?
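For the "run a script from the host" part specifically, a minimal sketch with the lxc CLI; the container and script names are placeholders:

# Hypothetical sketch: copy a deploy script into a container and run it from
# the host (names are placeholders)
lxc file push ./deploy.sh mycontainer/root/deploy.sh
lxc exec mycontainer -- bash /root/deploy.sh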
In my case, I use Ansible to:
build the LXD containers (web, database, redis for example).
connect to the containers and deploy the services and code needed.
You can also build your own images, for example with the services and/or code already deployed, and create specific containers from these images.
I was doing this before Ansible had LXD support (added in Ansible 2.2). I prefer to use SSH instead of the lxd connection when I connect to the containers to deploy services/code; the containers come with a profile where I had set up my SSH public key (direct SSH connection by keys, no passwords).
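A minimal sketch of the lxd-connection variant, for comparison; the container name and playbook are placeholders, and on newer Ansible the plugin may be namespaced as community.general.lxd:

# Hypothetical sketch: launch a container and run a playbook against it over
# the lxd connection plugin (names are placeholders)
lxc launch ubuntu:22.04 web
ansible-playbook -c lxd -i 'web,' deploy.yml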
Take a look at my open source project on Bitbucket, devops_lxd_containers. It includes:
Scripts to build LXD image templates including Apache, Tomcat, and HAProxy.
Scripts to demonstrate custom application image builds, such as Apache hosting and key/value content, and HAProxy configured as a router.
Code to launch the containers and map ports so they are accessible to the larger network.
Code to configure HAProxy as a layer 7 proxy to route HTTP requests between boxes and containers based on URI prefix routing, using the ports it previously deployed and mapped.
At a higher level, it accepts a data-driven spec and will deploy an entire environment composed of many containers spread across many hosts, hooking them all up to act as a cohesive whole via a layer 7 proxy.
Extensive documentation showing how I accomplished each major step, using code snippets, before automating.
Code to support zero-outage upgrades, using the layer 7 proxy's ability to gracefully bleed off old connections while accepting new connections at the new layer.
The entire system is built on the premise that image building is best done in layers: we build an updated Ubuntu image, from it a hardened Ubuntu image, from that a basic Apache image, and from that an application-specific image like our apacheKV sample. The goal is to never rebuild anything more than once and to re-use common functionality, such as the basicJDK, as the source for all JDK-dependent images, so we avoid having duplicate code anywhere. I have strived to keep image or template creation completely separate from deployment and port mapping; the exception is that I could not complete creation of the layer 7 routing image until we knew everything about how the other images would be mapped.
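A rough sketch of that layering idea using the plain lxc CLI; the aliases and packages are placeholders, not the project's actual scripts:

# Hypothetical sketch: build images in layers by publishing each stage as the
# base for the next (aliases and packages are placeholders)
lxc launch ubuntu:22.04 build-base
lxc exec build-base -- apt-get update
lxc exec build-base -- apt-get install -y unattended-upgrades
lxc stop build-base
lxc publish build-base --alias hardened-ubuntu
# the next layer starts from the image we just published
lxc launch hardened-ubuntu build-apache
lxc exec build-apache -- apt-get install -y apache2
lxc stop build-apache
lxc publish build-apache --alias basic-apache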
I've been using HashiCorp Packer with the Ansible provisioner, using ansible_connection = lxd.
Some notes here on constructing a template:
When iterating through local files on your host system you may need to use ansible_connection = local (e.g. for stat & friends).
Using local_action in Ansible with the lxd connection still runs the action inside the container when using stat (but not with include_vars & the lookup function for files).
Using lots of debug messages in Ansible is helpful for knowing which environment Ansible is actually operating in.
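For completeness, a minimal sketch of the build step; the template file name is a placeholder, and the Ansible provisioner with ansible_connection = lxd is configured inside that template:

# Hypothetical sketch: build the LXD image template with Packer
packer init .
packer build lxd-template.pkr.hcl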
I'm surprised no one here has mentioned Canonical's own tool for managing LXD:
https://juju.is
It is super simple and well supported; the only caveat is that it requires you to turn off IPv6 on the LXD/LXC side of things (in the network bridge).
snap install juju --classic
juju bootstrap localhost
From there you can learn about Juju models and deploy machines or prebaked images such as Ubuntu:
juju deploy ubuntu
