What information is available about the files and folders inside a Docker container?

Using the Docker remote API call HEAD /containers/(id)/archive?path=/root, we can get the following information:
{
  "name": "root",
  "size": 4096,
  "mode": 2147484096,
  "mtime": "2014-02-27T20:51:23Z",
  "linkTarget": ""
}
But the Docker documentation does not provide any information about the various fields in the response.
In particular, I would like to know what the "mode" and "linkTarget" fields specify.
Any pointer is much appreciated.
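For reference, this stat does not arrive in the response body: it is base64-encoded in the X-Docker-Container-Path-Stat response header. A minimal Go sketch that fetches and decodes it over the local daemon socket (the container name "mycontainer" is a placeholder):

package main

import (
	"context"
	"encoding/base64"
	"fmt"
	"net"
	"net/http"
)

func main() {
	// Route HTTP requests through the local Docker daemon socket.
	tr := &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return net.Dial("unix", "/var/run/docker.sock")
		},
	}
	client := &http.Client{Transport: tr}
	resp, err := client.Head("http://localhost/containers/mycontainer/archive?path=/root")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	// The stat JSON is base64-encoded in this header.
	stat, err := base64.StdEncoding.DecodeString(resp.Header.Get("X-Docker-Container-Path-Stat"))
	if err != nil {
		panic(err)
	}
	fmt.Println(string(stat))
}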

This comes from container/archive.go#L62-L68
return &types.ContainerPathStat{
	Name:       filepath.Base(absPath),
	Size:       lstat.Size(),
	Mode:       lstat.Mode(),
	Mtime:      lstat.ModTime(),
	LinkTarget: linkTarget,
}, nil
Which means:
mode comes from Go's FileInfo.Mode(), an os.FileMode:
A FileMode represents a file's mode and permission bits.
The bits have the same definition on all systems, so that information about files can be moved from one system to another portably. Not all bits apply to all systems.
linkTarget is filepath.Rel(container.BaseFS, hostPath), i.e. the target of a symlink expressed relative to the container's base filesystem; for anything that is not a symlink it is empty, as in the sample response above.
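For example, the mode value 2147484096 in the sample response decodes as os.ModeDir (the 1<<31 bit, 2147483648) plus the Unix permission bits 0700 (448). A quick sketch using nothing beyond Go's standard library:

package main

import (
	"fmt"
	"os"
)

func main() {
	mode := os.FileMode(2147484096)  // the "mode" field from the API response above
	fmt.Println(mode.IsDir())        // true: the ModeDir bit is set
	fmt.Printf("%#o\n", mode.Perm()) // 0700: owner rwx, nothing for group/other
	fmt.Println(mode)                // drwx------
}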

Related

How to change the Docker log length limit to more than 16k

I am running Docker Swarm on engine version 20.10.17. I found out that some applications produce long log lines (more than 16k), and in Loki, my logging stack, I cannot parse the JSON logs, because Docker splits each log line into 16k chunks and inserts a date in the middle of the log message.
I implemented a Python script that generates strings of a given length, and these are my findings (I have replaced the repeating characters to make them readable).
If the message length is 16383:
2022-12-09T11:11:45.750162015Z xxxxxxxxx........xxxxx
If the message length is 16384 (this is the limit):
2022-12-09T11:13:15.903855127Z xxxxxxxxx........xxxxx2022-12-09T11:13:15.903855127Z
The date 2022-12-09T11:13:15.903855127Z is the same at the beginning and at the end of the message.
If the message length is 16385:
2022-12-09T11:14:46.061048967Z xxxxxxxxx........xxxxx2022-12-09T11:14:46.061048967Z x
This is my configuration of docker daemon:
{
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "5m",
    "max-file": "5"
  }
}
Is there a way to change the configured maximum length of a log line? I did not find this option in the docs, and from the source code it looks like it is hardcoded. What is the best way to parse these logs properly?
In Loki I can see the whole log as one piece, but with dates (e.g. 2022-12-09T11:19:56.575690222Z) in the middle of the log line (multiple times, depending on the length of the line). Solving it this way is quite complicated, because it means I have to check every log entry processed by Promtail.
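As far as I can tell the 16k buffer is indeed a hardcoded constant in Docker's logger, so there is no daemon option to raise it. One workaround when post-processing the raw json-file output directly: when a message is split, only the final chunk keeps the trailing newline in its "log" field, so consecutive entries can be stitched back together. A minimal sketch in Go (a hypothetical helper of my own, not part of Docker or Promtail):

package main

// Reassembles json-file log entries that Docker split into 16k chunks.
// Assumption: only the last chunk of a message has a trailing "\n" in "log".

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

type entry struct {
	Log    string `json:"log"`
	Stream string `json:"stream"`
	Time   string `json:"time"`
}

func main() {
	scanner := bufio.NewScanner(os.Stdin)
	scanner.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // tolerate long lines
	var pending strings.Builder
	for scanner.Scan() {
		var e entry
		if err := json.Unmarshal(scanner.Bytes(), &e); err != nil {
			continue // skip malformed lines
		}
		pending.WriteString(e.Log)
		if strings.HasSuffix(e.Log, "\n") { // final chunk: emit the whole message
			fmt.Print(pending.String())
			pending.Reset()
		}
	}
}

Run it against a container's log file, e.g. go run reassemble.go < /var/lib/docker/containers/<id>/<id>-json.log.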

What IP address ranges are available to Docker when creating gateways, for example when using Compose files

I have a naive question, but I noticed while playing with some Compose files that Docker creates gateway addresses of the form 172.x.0.1 for all the networks of my projects, with x normally always (?) incremented (unless the Docker service is restarted), starting from 18 (because 17 is used by the default bridge network) up to some number that I cannot find in the documentation. After that, Docker jumps to gateways of the form 192.168.y.1, and here again I am not able to figure out what range of y values is available to Docker, nor what its strategy is for choosing gateway addresses in all those ranges.
I have the strong impression that it only chooses private IP addresses, but I have not (yet) seen addresses such as 10.a.b.c.
Can anybody explain to me, ideally with some official resources, how Docker actually chooses gateway addresses when creating bridge networks (especially in the case of Compose files), what all the pools of addresses available to Docker are, and whether it is possible to manually define or constrain these ranges?
Some of the pages I consulted without much success:
https://docs.docker.com/network/
https://docs.docker.com/network/bridge/
https://docs.docker.com/network/network-tutorial-standalone/
https://docs.docker.com/compose/networking/
https://github.com/compose-spec/compose-spec/blob/master/spec.md
It seems that part of the explanation hides in this tiny piece of code:
var (
	// PredefinedLocalScopeDefaultNetworks contains a list of 31 IPv4 private networks with host size 16 and 12
	// (172.17-31.x.x/16, 192.168.x.x/20) which do not overlap with the networks in `PredefinedGlobalScopeDefaultNetworks`
	PredefinedLocalScopeDefaultNetworks []*net.IPNet
	// PredefinedGlobalScopeDefaultNetworks contains a list of 64K IPv4 private networks with host size 8
	// (10.x.x.x/24) which do not overlap with the networks in `PredefinedLocalScopeDefaultNetworks`
	PredefinedGlobalScopeDefaultNetworks []*net.IPNet
	mutex                                sync.Mutex
	localScopeDefaultNetworks            = []*NetworkToSplit{{"172.17.0.0/16", 16}, {"172.18.0.0/16", 16}, {"172.19.0.0/16", 16},
		{"172.20.0.0/14", 16}, {"172.24.0.0/14", 16}, {"172.28.0.0/14", 16},
		{"192.168.0.0/16", 20}}
	globalScopeDefaultNetworks = []*NetworkToSplit{{"10.0.0.0/8", 24}}
)
source: https://github.com/moby/libnetwork/blob/a79d3687931697244b8e03485bf7b2042f8ec6b6/ipamutils/utils.go#L10-L22
This is the best I could come up with, as I still haven't found any official documentation about this...
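To make that concrete, each NetworkToSplit entry is a base pool plus the prefix length of the subnets carved out of it: {"172.20.0.0/14", 16} yields 172.20.0.0/16 through 172.23.0.0/16, and {"192.168.0.0/16", 20} yields the sixteen /20s from 192.168.0.0/20 to 192.168.240.0/20. Docker takes the first of these subnets that is still free, and the gateway is its .1 address. Here is a small Go sketch of that splitting (my own reimplementation for illustration, not libnetwork's code):

package main

import (
	"fmt"
	"net"
)

// split carves a base CIDR pool into consecutive subnets of the given prefix length.
func split(base string, size int) []string {
	_, ipnet, err := net.ParseCIDR(base)
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	count := 1 << (size - ones)        // how many subnets fit in the pool
	step := uint32(1) << (bits - size) // addresses per subnet
	ip := ipnet.IP.To4()
	start := uint32(ip[0])<<24 | uint32(ip[1])<<16 | uint32(ip[2])<<8 | uint32(ip[3])
	subnets := make([]string, 0, count)
	for i := 0; i < count; i++ {
		a := start + uint32(i)*step
		subnets = append(subnets, fmt.Sprintf("%d.%d.%d.%d/%d", a>>24&255, a>>16&255, a>>8&255, a&255, size))
	}
	return subnets
}

func main() {
	fmt.Println(split("172.20.0.0/14", 16))       // [172.20.0.0/16 172.21.0.0/16 172.22.0.0/16 172.23.0.0/16]
	fmt.Println(len(split("192.168.0.0/16", 20))) // 16
}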
It also seems possible to force Docker to use a given range of allowed subnets, by creating an /etc/docker/daemon.json file with content such as:
{
  "default-address-pools": [
    {"base": "172.16.0.0/16", "size": 24}
  ]
}
One can also specify multiple address pools:
{
  "default-address-pools": [
    {"base": "172.16.0.0/16", "size": 24},
    {"base": "xxx.xxx.xxx.xxx/yy", "size": zz} // <- additional pools can be stacked, if needed
  ]
}
Don't forget to restart the docker service once you're done:
$ sudo service docker restart
More on this can be found here:
https://capstonec.com/2019/10/18/configure-custom-cidr-ranges-in-docker-ee/

Defining a ProblemMatcher in VSCode tasks -- schema disagrees with docs?

In VSCode I'm trying to create a ProblemMatcher to parse errors from a custom script of mine which I run (markdown file -> pandoc -> PDFs, if you're interested).
The pretty good VSCode ProblemMatcher documentation has an example task which appears (to me) to run a command ("command": "gcc") and define a problem matcher ("problemMatcher": {...}).
When I try this in my tasks.json file with both, I get a 'the description can't be converted into a problem matcher' error, which isn't terribly helpful. I checked the tasks.json schema and it clearly says:
The problem matcher to be used if a global command is executed (e.g. no tasks are defined). A tasks.json file can either contain a global problemMatcher property or a tasks property but not both.
Is the schema wrong? In which case I'll raise an issue.
Or is my code wrong? In which case, please point me in the right direction. Code in full (minus comments):
{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "md2pdf",
            "type": "shell",
            "command": "md2pdf",
            "group": {
                "kind": "build",
                "isDefault": true
            },
            "presentation": {
                "reveal": "always",
                "panel": "shared",
                "showReuseMessage": false
            },
            "problemMatcher": {
                "owner": "Markdown",
                "fileLocation": ["absolute", "/tmp/md2pdf.log"],
                "pattern": [
                    {
                        // Regular expression to match filename (on earlier line than actual warnings)
                        "regexp": "^Converting:\\s+(.*)$",
                        "kind": "location",
                        "file": 1
                    },
                    {
                        // Regular expression to match: "l.45 \msg_fatal:nn {fontspec} {cannot-use-pdftex}" with a preceding line giving "Converting <filename>:"
                        "regexp": "l.(\\d+)\\s(.*):(.*)$",
                        "line": 1,
                        "severity": 2,
                        "message": 3
                    }
                ]
            }
        }
    ]
}
I've since spent more time figuring this out, and have corresponded with the VSCode team, which has led to improvements in the documentation.
The two changes needed to get something simple working were:
You need "command": "/full/path/to/executable", not just "executable name".
The "fileLocation" setting isn't about the location of the file to be matched, but about how to interpret file paths mentioned in the task output. The file to be matched can't be specified, as it is implicitly the file or folder open in the editor at the time of the task. The setting wasn't important in my case.
If you, like me, have come here due to 'the description can't be converted into a problem matcher', here is what I learned:
If your problem matcher says something like "base": "$gcc", then I assume you are using the Microsoft C/C++ plugin. If you are using some other base that is not listed on the official docs page (search for "Tasks in Visual Studio Code"), then assume it is probably supplied by a plugin.
So, this error could mean that you are missing a plugin. In my case I was trying to run this task remotely in WSL/Ubuntu using VS Code's awesome WSL integration. I installed the C/C++ plugin inside WSL and the error was fixed (go to the extension panel, click Install in WSL: <Distro name>).
Just a hunch, but I bet your fileLocation is wrong. Try something like
"fileLocation": "absolute",

How can I create an OpenEBS cStor pool?

Setup -- OpenEBS 0.7
K8s -- 1.10, GKE
I have a 3-node cluster with 2 disks per node. Can I use these disks to create a cStor pool? How can I do that? Do I have to manually select the disks?
Yes, you can use the disks attached to the nodes to create a cStor storage pool with OpenEBS. The main prerequisite is to unmount the disks if they are in use.
With the latest OpenEBS 0.7 release, the following disk types/paths are excluded by Node Disk Manager (NDM), the OpenEBS data plane component that identifies the disks used to create cStor pools on the nodes:
loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-
You can also customize this by excluding more disk types associated with your nodes (for example, used disks, unwanted disks, and so on). This must be done in the 'openebs-operator-0.7.0.yaml' file that you downloaded before installation. Add the device path to openebs-ndm-config under ConfigMap in the openebs-operator.yaml file as follows.
"exclude":"loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-"
Example:
{
  "key": "path-filter",
  "name": "path filter",
  "state": "true",
  "include": "",
  "exclude": "loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-"
}
So just install the openebs-operator.yaml as described on docs.openebs.io, and after the installation it will detect the disks. Follow the instructions given in the docs. You can create the pool either by manually selecting the disks or automatically.

Get the operating system in Maxima

Is it possible to get the operating system in Maxima? I have some code that needs the Unix / or Windows \ for path names. How can I find out which operating system the code is running on?
To give some context, I have the following code:
windows: false$
divider: "/"$
if (windows) then divider: "\\"$
initfile: concat(maxima_userdir, divider, "maxima-init.mac");
load(operatingsystem)$
dir: getcurrentdirectory();
if (substring(dir, slength(dir)) # divider) then dir: concat(dir, divider)$
repo: concat(dir, "$$$.mac")$
live: concat(dir, "live_packages", divider, "$$$.mac")$
with_stdout(initfile, printf(true, ""))$
with_stdout(initfile, printf(true, concat("file_search_maxima: append (file_search_maxima, [
~s,
~s
]);"), repo, live))$
Take a look at the output of build_info, specifically the field host (i.e. foo@host where foo : build_info()). See ? build_info for more information.
On my (Linux) system I get x86_64-unknown-linux-gnu. I think on MS Windows you'll get a string containing windows, or at least win or maybe win32.
There may be other ways to figure out the system type, so let me know if that doesn't work for you. Also, it is possible that there is a global variable floating around which holds the path separator; I would have to look for that.
If you're not averse to writing a little bit of Lisp code, another approach is to use the file and directory functions in Common Lisp, which are more extensive than in Maxima. See the section on filenames in the Common Lisp Hyperspec. I think maybe MERGE-PATHNAMES and/or MAKE-PATHNAME might be relevant.
