I have Docker Desktop installed on my dev machine, with WSL 2 disabled, and I have shared my entire C:/ drive.
Then I have a container running a .NET 6 (Core) application that uses a FileSystemWatcher to observe one directory and read any file that is pasted into it.
I read in several articles on the internet that WSL 2 does not support propagating notifications from the Windows file system to the underlying Linux distribution that Docker runs on, hence there would be no way to bind the directory I have to "watch" to the app in the container. So I switched to Docker's old Hyper-V backend.
I run the container with the following command:
docker run `
--name mlc-importer `
-v C:/temp/DZBank:/opt/docker/mlc_importer/dfs/DZBank `
-v C:\temp\appsettings.json:/app/appsettings.json `
-v C:\temp\log4net.config:/app/log4net.config `
mlc-importer
The container starts and begins "watching" for new files. The strange thing is that when I cut a file and paste it into the directory, the app in the container registers the new file and reads it, but when I copy the file and paste it into the directory, the app in the container does not register or read it.
Can someone help me? I can't figure out where the problem comes from.
Thanks in advance,
Julian
I managed to solve my problem, and I'll post it here in case somebody encounters the same thing.
The problem was in the file itself. I found this out when I started a new container with plain Debian, installed inotify-tools, and bind-mounted the same path. When I copied the file and pasted it into the mounted dir, the output was three MODIFY events. When I cut the file and pasted it into the dir, the output was one CREATE event followed by two MODIFY events.
So with copy: three times MODIFY; with cut: one CREATE and two MODIFY.
Then I inspected the copied file's properties and saw that Windows had blocked it, with the security note that the file came from another computer and an "Unblock" checkbox. When I checked that checkbox and hit OK, everything was fine. And since the container app (from the post) hooks only the "file created" callback, it does not trigger when the file is merely modified.
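For anyone who wants their watcher to catch both cases, here is a minimal .NET 6 sketch that subscribes to both Created and Changed (the path matches the bind mount from the question; HandleFile is a placeholder, not the code from my app):

using System.IO;

var watcher = new FileSystemWatcher("/opt/docker/mlc_importer/dfs/DZBank")
{
    NotifyFilter = NotifyFilters.FileName | NotifyFilters.LastWrite
};

// Fires on cut & paste (the bind mount delivers a CREATE event)
watcher.Created += (_, e) => HandleFile(e.FullPath);

// Fires on copy & paste (only MODIFY events come through)
watcher.Changed += (_, e) => HandleFile(e.FullPath);

watcher.EnableRaisingEvents = true;
Console.ReadLine(); // keep the process alive while watching

void HandleFile(string path)
{
    // Placeholder for the actual import logic
    Console.WriteLine($"Processing {path}");
}

Note that Changed typically fires more than once per copy, so real code should debounce or check that the file is no longer locked before reading it.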
Hope this helps someone with a similar problem.
I have a weird problem with Docker and hope someone here can help me :)
I want to create a Keycloak image derived from the jboss/keycloak image. The idea is that the Dockerfile also copies a preconfigured standalone.xml into the image, so Keycloak can start directly without manual work.
But as soon as I write, for example,
CMD touch /opt/test.txt
into the file, the container crashes with the message:
12:02:14,290 INFO [org.jboss.modules] (main) JBoss Modules version 1.9.1.Final
WFLYSRV0073: Invalid option '/bin/sh'
The touch just creates a new file with no purpose; the changes to the .xml are not in there yet.
As soon as I put back only the FROM line and rebuild, everything works again.
I thought you could modify an image through the container's layers, but here it doesn't seem to work. Can someone tell me why?
So far this has always worked with the Alpine image, but I don't want to build the whole Keycloak setup myself when there is already an official image for it.
This is basically what I had in mind:
FROM jboss/keycloak:X.XX
CMD rm /opt/jboss/keycloak/standalone/configuration/standalone.xml
COPY ./keycloak/standalone.xml /opt/jboss/keycloak/standalone/configuration/
Thanks for help :)
Change
CMD rm
to
RUN rm
RUN is part of building: every RUN command is executed while your image is built.
With CMD you define (or override) the default command that runs when a container is started from your image (and you don't want to change Keycloak's default CMD).
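Applied to the Dockerfile from the question, the fixed version might look like this (keeping the X.XX placeholder tag):

FROM jboss/keycloak:X.XX

# RUN executes while the image is built, in its own layer
RUN rm /opt/jboss/keycloak/standalone/configuration/standalone.xml

# COPY also happens at build time, baking the preconfigured file into the image
COPY ./keycloak/standalone.xml /opt/jboss/keycloak/standalone/configuration/

# no CMD line: the default CMD from jboss/keycloak is inherited unchanged

Strictly speaking, the RUN rm is redundant, because COPY overwrites the destination file anyway; it is kept here only to mirror the original Dockerfile.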
At the moment I am working with ARM64-based Debian images and Docker.
I want to start the Docker daemon automatically on boot so we do not have to start it manually. But the images do not use systemd, just good old SysVinit.
So I thought: quite easy - simply an init script that runs dockerd (or start-stop-daemon with dockerd as the argument). But no - it does not work. The command dockerd -v works fine during boot (checked by piping the output to a log file). But when dockerd is executed without an argument - i.e. simply starting the daemon - nothing happens: no error, no warning, nothing is piped to the log file.
So my question is: are there any other processes that need to be started, or configurations that need to be done, before dockerd can be started?
When boot is finished and I SSH into the device and manually run dockerd, everything works fine.
Just to close this question myself :D
I noticed that on a SysVinit system the PATH variable does not exist when the init scripts are started (maybe because root starts the processes).
So in my script I just added the PATH variable, pointing it at the folder containing dockerd, and everything worked! :D
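For illustration, a minimal SysVinit script along those lines (the PATH value and log path are assumptions; adjust them to wherever dockerd actually lives on your image):

#!/bin/sh
### BEGIN INIT INFO
# Provides:          docker
# Required-Start:    $network $local_fs
# Required-Stop:
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
### END INIT INFO

# init scripts may start with an empty PATH, so set it explicitly
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

case "$1" in
  start)
    dockerd >> /var/log/dockerd.log 2>&1 &
    ;;
  stop)
    kill "$(cat /var/run/docker.pid)"   # dockerd's default pid file
    ;;
esac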
This is not particularly about my current problem, but more of a general question. Sometimes I have a problem that only happens in the production configuration, and I'd like to debug it there. What is the best way to approach that in Elixir? Production runs without a graphical environment (Docker).
In dev I can use IEx.pry, but since Mix is unavailable in production, that does not seem to be an option.
For Erlang, https://stackoverflow.com/a/21413344/1561489 mentions dbg and redbug, but even if they can be used, I would need help applying them to Elixir code.
First, start a local node running iex on your dev machine using iex -S mix. If you don't want the application that's running locally to cause breakpoints to be activated, you need to disable the app from starting locally. To do this, you can simply comment out the application function in mix.exs or run iex -S mix run --no-start.
Next, you need to connect to the remote node running on docker from iex on your dev node using Node.connect(:"remote#hostname"). In order to do this, you have to make sure both the epmd and the node ports on the remote machine are reachable from your local node.
Finally, once your nodes are connected, from the local iex, run :debugger.start() which opens the debugger with the GUI. Now in the local iex, run :int.ni(<Module you want to debug>) and it will make the module visible to the debugger and you can go ahead and add breakpoints and start debugging.
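Putting those steps together, a session on the dev machine might look roughly like this (node names, cookie, and MyApp.Worker are placeholders):

# started with: iex --name local@127.0.0.1 --cookie my_cookie -S mix run --no-start
Node.connect(:"remote@hostname")   # epmd (port 4369) and the node port must be reachable
:debugger.start()                  # opens the graphical debugger locally
:int.ni(MyApp.Worker)              # make the module visible to the debugger
:int.break(MyApp.Worker, 42)       # optionally add a breakpoint at line 42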
You can find a tutorial with steps and screenshots here.
In the case that you are running your production on AWS, then you should first and foremost leverage CloudWatch to your advantage.
In your Elixir code, configure your logger like this:
config :logger,
  handle_otp_reports: true,
  handle_sasl_reports: true,
  metadata: [:application, :module, :function, :file, :line]

config :logger,
  backends: [
    {LoggerFileBackend, :shared_error}
  ]

config :logger, :shared_error,
  path: "#{logging_dir}/verbose-error.log",
  level: :error
Inside your Dockerfile, configure an environment variable for where exactly erl_crash.dump gets written to, such as:
ENV ERL_CRASH_DUMP=/opt/log/erl_crash.dump
Then configure awslogs inside a .config file under .ebextensions as follows:
files:
  "/etc/awslogs/config/stdout.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      [erl_crash.dump]
      log_group_name=/aws/elasticbeanstalk/your_app/erl_crash.dump
      log_stream_name={instance_id}
      file=/var/log/erl_crash.dump

      [verbose-error.log]
      log_group_name=/aws/elasticbeanstalk/your_app/verbose-error.log
      log_stream_name={instance_id}
      file=/var/log/verbose-error.log
And ensure that you set a volume for your Docker container in Dockerrun.aws.json:
"Logging": "/var/log",
"Volumes": [
{
"HostDirectory": "/var/log",
"ContainerDirectory": "/opt/log"
}
],
After that, you can inspect your error messages under CloudWatch.
Now, if you are using Elastic Beanstalk (which my example above implicitly implies) with a Docker deployment, as opposed to AWS ECS, the container's stdout/stderr is redirected by default to /var/log/eb-docker/containers/eb-current-app/stdouterr.log, which can likewise be streamed to CloudWatch.
The main purpose of erl_crash.dump is to at least know when your application crashed, taking the container down with it. AWS EB will normally restart the container, which would otherwise keep you ignorant of the restart. This information can also be obtained from other Docker-related logs, and you can configure alarms that listen for them and notify you when your container had to restart. But another advantage of logging erl_crash.dump to CloudWatch is that, if need be, you can always export it later to S3, download the file, and load it into the Crash Dump Viewer (:crashdump_viewer) to analyze what went wrong.
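For that analysis step, a one-liner is enough (run in a local iex with a graphical environment; the tool prompts for the downloaded dump file):

:crashdump_viewer.start()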
If after consulting the logs, you still require a more intimate interaction with your production application, then you need to leverage remsh to your node. If you use distillery, you would configure the cookie and the node name of your production application with your release like this:
Inside rel/config.exs, set the cookie:
environment :prod do
  set include_erts: false
  set include_src: false
  set cookie: :"my_cookie"
end
and under rel/templates/vm.args.eex you set variables:
-name <%= node_name %>
-setcookie <%= release.profile.cookie %>
and inside rel/config.exs, you set release like this:
release :my_app do
  set version: "0.1.0"
  set overlays: [
    {:template, "rel/templates/vm.args.eex", "releases/<%= release_version %>/vm.args"}
  ]
  set overlay_vars: [
    node_name: "p@127.0.0.1"
  ]
end
Then you can directly connect to your production node running inside Docker by first SSH-ing into the EC2 instance that houses the container and running the following:
CONTAINER_ID=$(sudo docker ps --format '{{.ID}}')
sudo docker exec -it $CONTAINER_ID bash -c "iex --name q@127.0.0.1 --cookie my_cookie"
Once inside, you can then try to poke around or, if need be, at your own peril, dynamically inject modified code for the module you would like to inspect. An easy way to do that would be to create a file inside the container and invoke Node.spawn_link(target_node, fn -> Code.eval_file(file_name, path) end).
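Spelled out, a sketch of that idea (the node name, path, and file name are placeholders, and evaluating code on a production node is entirely at your own risk):

target_node = :"p@127.0.0.1"   # name of the production node
path = "/tmp"                  # directory containing the patch file
file_name = "patch.exs"        # file that redefines the module under inspection

# spawn a process on the production node that evaluates the file there,
# redefining whatever modules the file contains
Node.spawn_link(target_node, fn ->
  Code.eval_file(file_name, path)
end)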
In the case that your production node is already running and you do not know the cookie, you can go inside the running container, do a ps aux > t.log, and cat t.log to figure out which random cookie has been applied, and use it accordingly.
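For example, something along these lines inside the container (this assumes the cookie was passed on the command line via -setcookie; if not, check the node's ~/.erlang.cookie file instead):

ps aux > t.log
grep -o 'setcookie [^ ]*' t.log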
Docker serves as an impediment to the way epmd communicates with other nodes. The best option therefore would be to create your own AWS AMI using Packer and do bare-metal deployments instead.
Amazon has recently released a new feature for AWS ECS, the awsvpc networking mode, which may facilitate inter-container epmd communication and thus allow connecting to your node directly. I have not tried it out yet, so I may be wrong.
In the case that you are running on a provider other than AWS, figuring out how to get easy access to your remote logs, with some SSM-style agent or some other service, is a must.
I would also recommend using some sort of exception-handling tool; so far I have had great experiences with Sentry.
I'm having an issue with docker-compose where I pass a file into the container when it's run. The issue is that the container doesn't seem to recognize when the file has been changed and serves the saved result back indefinitely, until I change the name of the file.
An example (modified names for brevity):
jono@macbook:~/myProj% docker-compose run vpn conf.opvn
Options error: Unrecognized option or missing parameter(s) in conf.opvn:71: AXswRE+
5aN64mYiPSatOACC6+bISv8RcDPX/lMYdLwe8zQY6qWtbrjFXrp2 (2.3.8)
Then I change the file, save it, and run the command again - exact same output.
Then without changing anything I do this:
jono@macbook:~/myProj% cp conf.opvn newconf.opvn
And when I run $ docker-compose run vpn newconf.opvn it works. Seems really silly.
I'm working with tmux on a Mac, in case that somehow affects this. Is this the expected behaviour? I couldn't find anything documenting this on the docker-compose homepage.
EDIT:
Specifically I'm using this repo from the amazing Jess.
The image you are using relies on a volume to mount your current directory; that is how the file conf.opvn becomes available inside the Docker container.
When you change the file, the container doesn't see that change, but it does pick up the rename (which the container sees as a new file). This is most probably due to the user rights of the file and of the folder inside the container where it is mounted. Try changing the file's permissions to 777 before beginning the process and check again.
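A quick way to test that suggestion with the file from the question:

chmod 777 conf.opvn
docker-compose run vpn conf.opvn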
You can find a discussion about this in the official Docker forum.