How to reconnect to docker-compose output log? - docker

Please help; I'm not even sure if I am asking the right question here, as there are many new components to my environment: I am new to developing on Windows, new to the Visual Studio Code IDE, and new to Docker within VS Code. This question could pertain to any of those factors.
Scenario:
I boot up my Windows 10 machine, open VS Code, and go to the command line from within VS Code (I am using a Git Bash shell within VS Code). From this terminal I start my project with the following command: docker-compose up --build
As a result of running this command, I see output in my terminal indicating that all three of my containers have started up (note: this is a Flask application using Postgres with an Angular front end, and each one has its own container).
My application has a test API endpoint which, when called, responds with 'status ok'. Upon hitting that endpoint in Postman I see a couple of lines of output in my terminal indicating that the application has processed the request for that URL. Everything is great.
Now I close all my applications and reboot the machine.
Upon rebooting I see a message from the system informing me that my Docker containers are starting. This is good. But now I would like to get back to the state where I can see the same output I saw when I ran docker-compose up; however, that output is no longer in the terminal in VS Code.
My question is: how can I get that output again without shutting down the Docker containers and rebuilding them? Sure, I could do that, but it seems like an unnecessary step since the containers auto-restarted on system reboot.
Is there a log I can tail?
Additional info:
In the Dockerfile for the API server, the server is started with the following command:
CMD ["./entrypoint.local.sh"]
In the entrypoint.local.sh file, the actual application is started with this command:
uwsgi --ini /etc/uwsgi.local.ini --chdir /var/www/my-application
Final note: this is not an application I created, so I would like to avoid changing it since doing so would affect others on my team.

In your terminal run: docker-compose logs --follow <name-of-your-service>
Or see every log stream for every service with docker-compose logs --follow
You can find the name of your docker-compose service by looking at each key under services: in your docker-compose.yml
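For example, in a hypothetical docker-compose.yml like the one below (the service names api, db, and frontend are placeholders, not necessarily the asker's actual ones), the keys directly under services: are what you pass to the logs command:
version: "3"
services:
  api:        # Flask app started by uwsgi via entrypoint.local.sh
    build: ./api
  db:         # Postgres
    image: postgres
  frontend:   # Angular
    build: ./frontend
docker-compose logs --follow api would then stream only the API container's output. This works even after a reboot because Docker keeps each container's stdout/stderr in its log, assuming the uwsgi process writes to stdout/stderr (the default for a foreground process in a container).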

Related

Meteor build refresh after file change saving drops an EACCES error in a docker container

SUMMARY
First of all I run my Meteor web app using docker-compose with docker-compose -f docker-file.dev.yml up -d --build. Everything works fine for the first build. The app is reachable in my browser via localhost.
Then I get an EACCES error when I use my code editor (VS Code) to edit some .jsx files; this triggers the build refresh, and then the application, running in a Docker container, crashes. It means I have a permission error.
But when I save an updated file, this is done with my current (Windows) user in VS Code, whereas the app is run by the "meteoruser" user defined in the docker-compose.dev.yml file below.
So what is the problem with this configuration? It worked well on a previous computer running Ubuntu 16.04. Is Windows the problem?
If I check docker logs I get this output:
If I rerun the docker-compose build command, everything is fine: the files are updated and the app runs. But I can't work like that, rerunning this command every time I make a file change.
What I expect is a build refresh that works without throwing an EACCES error.
Additional information about my project
This is a React project within the Meteor framework in a container, using a mongodb container, behind an nginx proxy in a container as well.
Project scaffold:
docker-compose file
docker file (.dev)
nb: as you can see, I create a user in the container, then I run the app in the container with this user. This is needed to ensure the app does not run locally as the root user.
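For reference, a minimal sketch of what this kind of non-root setup typically looks like in a Dockerfile (the base image, paths, and npm start command here are illustrative assumptions, not the asker's actual file, which isn't reproduced above):
FROM node:8
# Create an unprivileged user; Meteor refuses to run in dev mode as root
RUN useradd --create-home --shell /bin/bash meteoruser
WORKDIR /home/meteoruser/app
# Copy the project in and give the new user ownership of the build folder
COPY --chown=meteoruser:meteoruser . .
USER meteoruser
CMD ["npm", "start"]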
WHAT I HAVE TRIED
Delete the .meteor folder and rebuild locally with meteor run to ensure my current Windows user has the rights to the Meteor app build folder. It does not work.
Remove the usage of meteoruser in the Dockerfile; this does not work and instead throws an error, because Meteor does not allow running an app in a dev environment as the root user, since that could lead to permission errors later in production.
Thanks in advance for your help!

VScode remote containers - how to view the dockerised service console output?

This is a follow-on from this question (none of the current answers seem to hit the nail on the head).
VS Code's default behaviour for starting a remote VS Code session (using VS Code Remote-Containers) seems to be:
Run the project's docker-compose file
If the user selects "show log" (a UI popup) during the build, open a VS Code terminal session called Dev Containers reflecting Docker's build logging, supplemented with VS Code Remote-Containers logging. This output ends once the build completes.
If the user didn't select "show log" and later opens a VS Code terminal after the build completes, just start a new bash session within the container. No other VS Code terminal sessions exist.
Launch a VS Code session from inside the now-running container
From the user's perspective, the container is running, but the output happening inside the container seems inaccessible (even if the docker-compose command did not use daemon mode).
So, how can the user now view the console output that is happening inside the container?
If I am reading correctly, the VS Code Remote-Containers documentation seems to suggest overriding the default behaviour, i.e.:
suppress the docker-compose command that would otherwise have started the service, and instead apply some dummy command to keep the container alive upon creation, then
manually start the service (using debug mode, or via a VS Code terminal) from inside the remote session. This reveals the output, but within an accessible VS Code terminal session (sketched below).
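A rough sketch of that override, assuming a docker-compose based dev container (the service name app and the start script are placeholders): keep the container alive with a no-op command, then run the real service by hand from a terminal inside the remote session.
# docker-compose override used only for the dev container (hypothetical file)
version: "3"
services:
  app:
    # Replace the service's real command with a no-op so the container stays up
    command: /bin/sh -c "while sleep 1000; do :; done"
# Later, from a VS Code terminal inside the container (or via docker-compose exec on the host):
# ./start-my-service.sh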
Is there no way to:
A) Start the services via a system terminal (e.g. docker-compose up), and then start a VS Code remote session in this already-running container*, or
B) Access the service's output without having to override as above (the override seems hacky)?
*This would be ideal. The Remote-Containers "Attach to Running Container..." command sounds close to this, but it seems to instantiate itself in a directory I don't recognise, and doesn't seem to be the container.
Option A seems to be achievable by:
Starting the service in terminal (docker-compose up)
In VS Code, using the Remote-Containers "Remote Explorer" UI (not the Cmd+P "Attach to container" commands) to select the running container's working directory. Right click > "Open in container". This doesn't actually open a new container; it opens the directory from within the running container.
OR (thanks @cybercoder)
Letting VS Code start the services
In a separate terminal: docker logs -f container_name OR docker-compose logs -f
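For example (container and service names are placeholders):
# Find the container name or ID
docker ps
# Follow one container's output
docker logs -f myproject_web_1
# Or follow every service defined in docker-compose.yml
docker-compose logs -f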

Accessing Files on a Windows Docker Container Easily

Summary
So I'm trying to figure out a way to use Docker to spin up testing environments for customers rather easily. Basically, I've got a customized piece of software that I want to install to a Windows Docker container (microsoft/windowsservercore), and I need to be able to access the program folder for that software (C:\Program Files\SOFTWARE_NAME) as it has some logs, imports/exports, and other miscellaneous configuration files. The installation part was easy; I figured that out after a few hours of messing around with Docker and learning how it works, but transferring files in a simple manner is proving far more difficult than I would expect. I'm well aware of the docker cp command, but I'd like something that allows the files to be viewed in a file browser so testers can quickly/easily view log/configuration files from the container.
Background (what I've tried):
I've spent 20+ hours monkeying around with running an SSH server on the docker container, so I could just ssh in and move files back and forth, but I've had no luck. I've spent most of my time trying to configure OpenSSH, and I can get it installed, but there appears to be something wrong with the default configuration file provided with my installation, as I can't get it up and running unless I start it manually via command line by running sshd -d. Strangely, this runs just fine, but it isn't really a viable solution as it is running in debug mode and shuts down as soon as the connection is closed. I can provide more detail on what I've tested with this, but it seems like it might be a dead end (even though I feel like this should be extremely simple). I've followed every guide I can find (though half are specific to linux containers), and haven't gotten any of them to work, and half the posts I've found just say "why would you want to use ssh when you can just use the built in docker commands". I want to use ssh because it's simpler from an end user's perspective, and I'd rather tell a tester to ssh to a particular IP than make them interact with docker via the command line.
EDIT: Using OpenSSH
Starting the server using net start sshd reports it starting successfully; however, the service stops immediately if I haven't generated at least an RSA or DSA key using:
ssh-keygen.exe -f "C:\\Program Files\\OpenSSH-Win64/./ssh_host_rsa_key" -t rsa
And modifying the permissions using:
icacls "C:\Program Files\OpenSSH-Win64/" /grant sshd:(OI)(CI)F /T
and
icacls "C:\Program Files\OpenSSH-Win64/" /grant ContainerAdministrator:(OI)(CI)F /T
Again, I'm using the default supplied sshd_config file, but I've tried just about every adjustment of those settings I can find and none of them help.
I also attempted to set up volumes to do this, but because the installation of our software is done at image build time in Docker, the folder that I want to map as a volume is already populated with files, which seems to make Docker fail when I try to start the container with the volume attached. This section of documentation seems to say this should be possible, but I can't get it to work. I keep getting errors when I try to start the container saying "the directory is not empty".
EDIT: Command used:
docker run -it -d -p 9999:9092 --mount source=my_volume,destination=C:/temp my_container
Running this on a ProxMox VM.
At this point, I'm running out of ideas, and something that I feel like should be incredibly simple is taking me far too many hours to figure out. It particularly frustrates me that I see so many blog posts saying "Just use the built in docker cp command!" when that is honestly a pretty bad solution when you're going to be browsing lots of files and viewing/editing them. I really need a method that allows the files to be viewed in a file browser/notepad++.
Is there something obvious here that I'm missing? How is this so difficult? Any help is appreciated.
So after a fair bit more troubleshooting, I was unable to get the Docker volume to initialize on an already-populated folder, even though the documentation suggests it should be possible.
So I instead decided to try starting the container with the volume linked to an empty folder, and then running the installation script for the program after the container is running, so the folder populates after the volume is already linked. This worked perfectly! There's a bit of weirdness if you leave the files in the volume and then try to restart the container, as it will overwrite most of the files, but things like logs and files not created by the installer will remain, so we'll have to figure out some process for managing that. Still, it works just like I need it to, and then I can use Windows sharing to access that volume folder from anywhere on the network.
Here's how I got it working, it's actually very simple.
So in my dockerfile, I added a batch script that unzips the installation DVD that is copied to the container, and runs the installer after extracting. I then used the CMD option to run this on container start:
Dockerfile
FROM microsoft/windowsservercore
ADD DVD.zip C:\\resources\\DVD.zip
ADD config.bat C:\\resources\\config.bat
CMD "C:\resources\config.bat" && cmd
Then I build the container without anything special:
docker build -t my_container:latest .
And run it with the attachment to the volume:
docker run -it -d -p 9999:9092 --mount source=my_volume,destination="C:/Program Files (x86)/{PROGRAM NAME}" my_container
And that's it. Unfortunately, the container takes a little longer to start (though it does build faster, for what that's worth, since it isn't running the installer during the build), and the program isn't installed/running for another 5 minutes or so after the container starts, but it works!
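As a side note for anyone wanting to browse those files from the host: docker volume inspect reports where a named volume lives on disk (the volume name below matches the run command above):
docker volume inspect my_volume
The "Mountpoint" field in the JSON output is the host folder that holds the container's files.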
I can provide more details if anyone needs them, but most of the rest is implementation specific and fairly straightforward.
Try this with Docker Compose. Unfortunately, I cannot test it, as I'm using a Mac and Windows containers are not a "supported platform" there (way to go, Windows). See if that works; if not, try a volume line like this instead: ./my_volume:C:/tmp/
Dockerfile
FROM microsoft/windowsservercore
# need to escape \
WORKDIR C:\\tmp\\
# Add the program from the host machine to the container
ADD ["<source>", "C:\\tmp"]
# Normally used with web servers
# EXPOSE 80
# Running the program
CMD ["C:\\tmp\\program.exe", "any-parameter"]
Docker Compose
Should ideally be in the parent folder.
version: "3"
services:
windows:
build: ./folder-of-Dockerfile
volume:
- type: bind
source: ./my_volume
target: C:/tmp/
ports:
- 9999:9092
Folder structure
|---docker-compose.yml
|
|---folder-of-Dockerfile
|   |
|   |---Dockerfile
Just run docker-compose up to build and start the container. Use -d for detached mode; you should only use it once you know it's working properly.
Useful link Manage Windows Dockerfile

Setting up a container from a user's GitHub source

Can be closed, not sure how to do it.
I am, to be quite frank, lost right now. The user who published his source on GitHub somehow failed to update the installation instructions when he released a new branch. Now, I am not dense, just uneducated when it comes to Docker. I would really appreciate a push in the right direction. If I am missing any information from this post, please allow me to provide it in the comments.
Current Setup
O/S - Debian 8 Minimal (Latest kernel)
Hardware - 1GB VPS (KVM)
Docker - Installed with Compose (# docker info)
I am attempting to set up this (https://github.com/pboehm/ddns/tree/docker_and_rework). First I should clone this repository to my working directory? Let's say /home for example. I will run the following command:
git clone -b docker_and_rework https://github.com/pboehm/ddns.git
Which has successfully cloned the source files into /home/ddns/... (working dir)
Now I believe I am supposed to go ahead and build something, so I go into the following directory:
/home/ddns/docker
Inside is a docker-compose.yml file. I am not sure what this does, but looking at it, it appears to contain a bunch of instructions which I can only presume have to do with actually deploying or building the whole container/image or magical thing, right? From here I go ahead and do the following:
docker-compose build
As we can see, I believe it's building the container or image or whatever it's called, you get my point (here). After a short while, that completes and we can see the following (docker images running). Which is correct; I see all of the dependencies in there, but something like:
go version
It does not show as a command, so I presume I need to run it inside the container, maybe? If so, I don't have a clue how. I need to run 'ddns.go', which is inside /home/ddns; the execution command is:
ddns --soa_fqdn=dns.stealthy.pro --domain=d.stealthy.pro backend
I am also curious why the front-end web page is not showing. There should be a page like this:
http://ddns.pboehm.org/
But again, I believe there is some more to do, I just do not know what.
docker-compose build will only build the images.
You need to run this. It will build and run them.
docker-compose up -d
The -d option runs containers in the background
To check if it's running after docker-compose up
docker-compose ps
It will show what is running and what ports are exposed from the container.
Usually you can access services from your localhost
If you want to have a look inside the container
docker-compose exec SERVICE /bin/bash
Where SERVICE is the name of the service in docker-compose.yml
The instructions it runs that you probably care about are in the Dockerfile, which for that repo is in the docker/ddns/ directory. What you're missing is that the Dockerfile creates an image, which is a template for creating instances (containers). Every time you docker run, you create a new instance from the image. docker run docker_ddns go version will create a new instance of the image, run go version, output the result, then die. Running a long-lived process, as the docker_ddns-web image probably does, will keep the container alive until something kills that process. The reason you can't see the web page is probably that you haven't run docker-compose up yet, which will create linked instances of all of the Docker images specified in the docker-compose.yml file. Hope this helps.
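Putting that together, a short sketch of the two usage patterns (the image name is taken from the answer above and may not match the repo exactly):
# One-off command: create a container from the image, run the command, print output, exit
docker run --rm docker_ddns go version
# Long-running services: build (if needed) and start everything defined in docker-compose.yml
docker-compose up -d
# Then check what is running and which ports are exposed
docker-compose ps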

What is image "containersol/minimesos" in minimesos?

I was able to set up the minimesos cluster on my laptop and could also deploy a small command-line utility. Now the questions:
What is the image "containersol/minimesos" used for? It is pulled, but I don't see it running when I do "docker ps". "docker images" lists it.
How come when I run "top" inside the mesos-agent container, I see all the processes running on my host (laptop)? This is a bit strange.
I was trying to figure out what's inside the minimesos script. I see that there's just one "docker run ..." command. I would really appreciate it if I could get to know what the aforementioned command does that results in 4 containers (1 master, 1 slave, 1 zk, 1 marathon) running on my laptop.
containersol/minimesos runs the Java code that is the core of minimesos. It only runs until it has executed the command from the CLI. When you do minimesos up, the command name and the minimesosFile are passed to this container. The container in turn executes the Java code that creates the other containers that form the Mesos cluster specified in the minimesosFile. That should answer #3 as well. Take a look at the MesosCluster class; that's the root of where the magic happens.
I don't know the answer to #2; I will get back to you when I find out.
Every minimesos command runs as a short lived container, whose image is containersol/minimesos.
When you run 'minimesos up' it launches containersol/minimesos with 'up' as the argument. It then launches a cluster by starting other containers like containersol/mesos-agent and containersol/mesos-master. After the cluster is up, the containersol/minimesos container exits and is removed.
We have separated the CLI and the minimesos core as a refactoring to prepare for the upcoming API module. We are creating an API to support clients in different programming languages. The first client will be a Golang client.
In this new setup minimesos will launch a long-running API server, and any minimesos CLI commands will call the API. The clients will also launch the API server and call the API.
