What kernel options is Google's Container Optimized OS built with? - docker

I'm having trouble finding the kernel options that Google's Container-Optimized OS is built with. I tried looking at the usual locations like /boot/config-* and /proc/config.gz, but didn't find anything. I searched the source code and didn't find anything either, but I'm probably just searching wrong.
The specific option I'm curious about is CONFIG_CFS_BANDWIDTH and whether it is enabled or not. Thanks!

You can get it by running zcat /proc/config.gz in a Container-Optimized OS VM.
The kernel config is generated from the source here. Note, however, that the source kernel config is modified during the OS image build process, so the two are not 100% identical.
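With a shell on the VM, checking the specific option is a one-liner. A minimal sketch, assuming the kernel was built with CONFIG_IKCONFIG_PROC so that /proc/config.gz exists:

```shell
# Decompress the in-kernel config and look for the CFS bandwidth option.
zcat /proc/config.gz | grep CFS_BANDWIDTH
```

If the option is enabled you should see CONFIG_CFS_BANDWIDTH=y; a disabled option shows up as a comment like `# CONFIG_CFS_BANDWIDTH is not set`.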

Related

Why are Docker multi-architecture images needed (instead of the Docker Engine abstracting the differences)?

Short version
I would like to know the technical reasons why Docker images need to be created for multiple architectures. It is also not clear whether the point is creating an image per CPU architecture or per OS. Shouldn't the OS abstract the architecture?
Long version
I can understand why the Docker Engine must be ported to multiple architectures. It is a piece of software that will interact with the OS, make system calls, and ultimately it is just code that is represented as a sequence of instructions within a particular instruction set, for a particular architecture. So the Docker Engine must be ported to multiple OS/architectures much like, let's say, Microsoft Word would have to be ported.
The same applies to, say, the JVM or VirtualBox.
But unlike with Docker, software written for the JVM on Windows will run on Linux. The JVM abstracts the differences of the underlying OS/architecture and runs the same code on both platforms.
Why isn't that the case with Docker images? Why can't the Docker Engine just abstract the differences, and provide a common interface, so the image itself wouldn't need to be compatible with a specific OS/architecture?
Is this a decision (like "let's make different images per architecture because it is better for reason X"), or a consequence of how Docker works (like "we need to do it this way because Docker requires Y")?
Note
I'm not crying "omg, why??". This is not a rant or criticism, I'm just looking for a technical explanation for the need of different images for different architectures.
I'm not asking how to create a multi-architecture image.
I'm not looking for an answer like "multi-architecture images are needed so you can run your images on various platforms", which answers "what for?", but not "why is that needed?" (which is my question).
Besides that, when you look at an image, it usually has an os/arch listed in the digest, such as linux/amd64.
What exactly the image is targeting? The OS, the architecture, or both? Shouldn't the OS abstract the underlying architecture?
Edit: I'm starting to assume that the need for different images per architecture is along these lines: the image will contain applications inside it. Say it contains the Go compiler. The Go compiler itself is a binary that must have been compiled for different architectures. The image for x86-64 will contain the Go compiler compiled for x86-64, and so on. Is this correct? If so, is this the only reason?
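That assumption is correct, and it is the crux of it: every native binary inside an image is tied to one instruction set. As a quick illustration (output varies by machine; /bin/sh is just a convenient example binary):

```shell
# Show which architecture the current machine runs.
uname -m

# Inspect which architecture an arbitrary binary was compiled for.
file /bin/sh
```

An image built for linux/arm64 carries binaries that `file` would report as aarch64; an x86-64 host can only run them through emulation.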
Why can't the Docker Engine just abstract the differences, and provide a common interface
Performance would be a major factor. Consider how slow Cygwin is for some things when providing a POSIX API on top of Windows, emulating POSIX features that don't map directly to the Windows API (e.g. fork() and exec as separate steps, instead of CreateProcess).
And that's just source compatibility; the resulting binaries are specific to Cygwin on Windows. It's even worse if you want to do that at runtime (binary compat instead of source compat).
There's also the amount of complexity Docker would need to provide an efficient portable JIT-compiling VM on top of various OSes, especially across various CPU ISAs like x86-64 vs. AArch64 that don't even share common machine code.
If Docker had gone this route, it would really just be re-inventing a JVM or .NET CLR bytecode-based VM.
Or more likely, instead of reinventing that wheel, it would just use an existing VM and add image management on top of that. But then it couldn't work with native programs written in C, unless it transpiled them to Java or CLR bytecode.
Although the promise of Docker is the elimination of differences when moving software between machines, you'll still face the problem that Docker runs on the host machine's CPU architecture, which cannot be crossed in Docker.
Neither Docker nor a virtual machine abstracts the CPU to enable full cross-compatibility.
Emulators do. If Docker and VMs ran on emulators, they would be less performant than they are today.
The docker buildx command with the --platform flag takes advantage of the qemu emulator, emulating a full system of the target architecture during a build. The downside of emulation is that it runs much slower than native builds.
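As a sketch of what that looks like in practice (assuming Docker with Buildx and the QEMU binfmt handlers are installed; the image name is a made-up example):

```shell
# One-time setup: register QEMU handlers so foreign-architecture
# binaries can execute transparently during the build.
docker run --privileged --rm tonistiigi/binfmt --install all

# Build one tag whose manifest list covers two architectures, and push it.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t example.com/myapp:latest \
  --push .
```

The registry stores one image per platform under the single tag, and each host pulls the variant matching its own architecture.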

How does Bazel track files so quickly?

I can't find any information about how Bazel tracks files. The documentation doesn't mention whether it uses something like Facebook's Watchman.
It obviously takes some kind of hash and compares it, but how exactly does it do that? It knows immediately whether things have changed, yet it couldn't possibly read all those files in such a short time.
Also, wouldn't watching that many files take up a lot of space in a monorepo like Google's? I know that is one of the problems with scaling Git, because "git status" becomes too slow unless some intelligent caching is used.
Bazel uses OS filesystem monitoring APIs such as inotify on Linux and FSEvents on macOS.
Check out these classes:
https://github.com/bazelbuild/bazel/blob/c5d0b208f39353ae3696016c2df807e2b50848f4/src/main/java/com/google/devtools/build/lib/skyframe/DiffAwareness.java
https://github.com/bazelbuild/bazel/blob/1d2932ae332ca0c517570f559c6dc0bac430b263/src/main/java/com/google/devtools/build/lib/skyframe/LocalDiffAwareness.java
https://github.com/bazelbuild/bazel/blob/c5d0b208f39353ae3696016c2df807e2b50848f4/src/main/java/com/google/devtools/build/lib/skyframe/MacOSXFsEventsDiffAwareness.java

Which VM is suitable for which Pharo/Squeak release on which system?

Is there a place (website) where i can find information on which VM is needed (minimum/maximum) for a specific Pharo or Squeak release on a specific OS?
I don't know if that exact information is documented, but I can try to give you a brief explanation, even though the Pharo and Squeak paths have diverged a lot lately.
Pharo's official VM is the CogVM, which is a StackVM with a JIT. There are also plain StackVMs for platforms where runtime code generation is not allowed.
The official virtual machines for Pharo are listed at http://www.pharo-project.org/pharo-download, and they are known to work from Pharo 1.2 up to Pharo 2.0. You can also have a look at the complete set of built VMs on the CI server https://ci.lille.inria.fr/pharo/view/Cog/.
For older releases (1.0 and 1.1), Pharo keeps a history of one-click distributions where the VM is frozen along with the image. You can find them here: https://gforge.inria.fr/frs/?group_id=1299
On the Squeak side, the same CogVMs should work with its latest versions; otherwise, you should get an interpreter VM from http://squeakvm.org/index.html.
Hope it helps a bit
As @guillepolito says, the best thing today is to take the ones from the Pharo continuous integration Jenkins server (or pick a one-click).
Squeak VMs have been fading out of my practice. I keep a number of them around, but since I use Pharo, I try to build my own version from the Jenkins source, as there is a lot to be learned from doing so.
It is not difficult to get them built on the main platforms, and at least you know what's underneath.
The main problem is that Eliot Miranda keeps doing his things in his corner instead of working on a shared source tree. That's the problem of having a low truck number on that project.

Basics of Jmapping?

I've done some searching but couldn't find much really helpful info, so could someone explain the basics of Java memory maps? For example, where/how to use it, its purpose, and maybe some syntax examples (input/output types)? I'm taking a Java test soon and this could be one of the topics, but Jmap hasn't come up in any of my tutorials. Thanks in advance.
Edit: I'm referring to the tool: jmap
I would read the man page you have referenced.
jmap prints shared object memory maps or heap memory details of a given process or core file or a remote debug server.
NOTE: This utility is unsupported and may or may not be available in future versions of the JDK. In Windows Systems where dbgeng.dll is not present, 'Debugging Tools For Windows' needs to be installed to have these tools working. Also, PATH environment variable should contain the location of jvm.dll used by the target process or the location from which the Crash Dump file was produced.
http://docs.oracle.com/javase/7/docs/technotes/tools/share/jmap.html
It's not a tool to be played with lightly. You need a good profiler that can read its output, as jhat is only useful for trivial programs. (YourKit works just fine for 1+ GB heaps.)
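For orientation, these are the common invocations (a sketch; the PID 12345 is a placeholder for a real JVM process, which you can find with jps):

```shell
# List running JVM processes with their main class / jar (ships with the JDK).
jps -l

# Print a summary of heap configuration and usage for the JVM with PID 12345.
jmap -heap 12345

# Dump the entire heap to a binary file for offline analysis in a profiler.
jmap -dump:format=b,file=heap.hprof 12345
```

The resulting heap.hprof can then be opened in jhat, VisualVM, or a commercial profiler such as YourKit.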

VMware Player install hangs? Vista 32-bit

Hi
I am a noob trying to set up my computer so I can make a social networking website.
Sorry if it's not kosher to ask here, but hopefully one of you smart guys can help me.
I want to test some CMSes (content management systems), first Elgg and then some others.
As far as I've read, I can do this by using a virtual machine like VMware Player.
Now, originally I wanted to try out Insoshi, so I tried to use Cygwin and Git Bash (also PuTTY tools) to download it (with no success). This involved installing those programs and trying to get an SSH environment variable working. So I gave up on that (seeing that Elgg has more support anyhow, I thought I'd try that instead). I uninstalled these programs, deleted the leftover directories, and deleted the added environment variable.
I also uninstalled Daemon Tools (because I thought it might be conflicting).
I'm running Windows Vista 32-bit and have always downloaded the relevant installers for that system.
My problem is that the VMware Player installer isn't doing anything. I launch it and it seems to hang straight away (see pic).
Am I missing something here?
The VMware page also suggests a virtual appliance (for cloud stuff), which I don't know much about yet. I think that appliance is installed via the Player or an image loader like Daemon Tools. Do I need this appliance first?
Why is the Player not installing?
I've tried both the 3.14 and 3.13 builds with the same result.
I have about 4 GB of space left on my hard drive and 3 GB of RAM.
I have looked at the programs installed on my computer and can't seem to find anything else that might conflict (but I am a n00b), and I also tried pausing my Kaspersky Pure protection. Any help is greatly appreciated, thanks.
As I recall, there were a couple of conflicts with VMware.
From a quick look through the VMware forums, I see:
visualsvn
Virtual PC (no surprise)
nvidia 270.18 beta driver
avg
I also remember there being talk on the forums about a specific executable name which caused issues, but I'm struggling to remember what it was.
Start regedit.exe.
Browse to the following sub-key: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\
You should see keys named 0, 1, 2, 3 and 4. In my case, I had a key named "L" before the 0.
Remove the key named "L" (actually it is "└", Unicode U+2514).
P.S. This happens because Microsoft screws up the Internet Zone settings in the registry. With the "└" key present, JavaScript fails to be called from an application.
Got it from here.
