How to obtain a hardware signature for licensing inside a container - Docker

Our software architecture is container based. We would like to control usage with a licensing server and, among other things, to prevent the software from being moved to other machines by tying it to hardware identifiers.
On a bare metal installation the MAC address or hard disk serial number can be used.
What can I use when checking from within a container?
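One commonly suggested approach, sketched here under stated assumptions rather than as a definitive solution: have the deployment bind-mount a host identifier, such as the DMI product UUID, into the container (e.g. docker run -v /sys/class/dmi/id/product_uuid:/host/product_uuid:ro ...) and read it from the licensing code. The /host/product_uuid path is an illustrative choice, not a convention; and bear in mind that whoever controls the host can mount a fake file, so this deters casual copying rather than a determined attacker.

    /* Minimal sketch: read a host-provided hardware identifier from inside
     * a container. Assumes the host's DMI product UUID was bind-mounted at
     * /host/product_uuid (an illustrative path, not a Docker convention). */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char uuid[64] = {0};
        FILE *f = fopen("/host/product_uuid", "r");
        if (!f) {
            fprintf(stderr, "host identifier not mounted; refusing to run\n");
            return 1;
        }
        if (!fgets(uuid, sizeof uuid, f)) {
            fclose(f);
            return 1;
        }
        fclose(f);
        uuid[strcspn(uuid, "\n")] = '\0';  /* trim trailing newline */
        /* here: send uuid to the licensing server for validation */
        printf("host hardware id: %s\n", uuid);
        return 0;
    }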

Related

Guidance on when to choose virtual machines or physical machines over containers

There are many articles and videos comparing containers, virtual machines and physical machines. However, almost all of the information is theoretical: containers are fast, VMs are secure, etc. I could not find descriptions of specific use cases, or guidance on when to choose virtual machines or physical machines but not containers. So currently I cannot imagine a situation in which somebody would recommend not using containers.
Question:
Could you please list specific applications or solutions for which you would recommend using VMs, but not containers?
Could you please list specific applications or solutions for which you would recommend running the OS on bare metal, but not containers or VMs?
Here is an example of the kind of answer I would appreciate (note that I am not sure whether this information is correct):
Use case 1: Edge Router
An edge router is a router which connects an organization's network to the Internet. In this case it is also assumed that the vendor provides the router not as a device but as a software package (a virtualized router).
An edge router will most probably be a target of attacks, so security requirements come first.
Containers are not recommended in this case. By default containers provide a mediocre level of security. Strong security can be achieved with complex configuration (what configuration?), but this is more difficult than in the case of a VM or bare metal. In addition, a high security level may require a specially hardened Linux kernel; however, container technology does not allow adjusting the kernel configuration.
Virtual machines would be a good choice if the router vendor provides the software as a VM image, or when the organization has many edge routers (for example, many offices with Internet access points) and has, or is ready to create, a well-established process for preparing VM images. In this case using VMs will simplify rollout, update and healing of the virtualized edge router. A VM also provides a high security level; nevertheless, it is still recommended to place such a VM on a dedicated server and not to share that server with other applications/VMs, to avoid cross-VM attacks.
A physical machine would be a good choice if the router vendor provides the router's software as an application package (not as a VM) such as an .rpm, and the rollout, update and healing processes are not expected to take much effort; this might be the case when the company has few routers (so updates can be performed manually or automated with tools like Ansible) and a couple of hours of planned and unplanned downtime is acceptable.
Use case 2: ...
Thank you in advance.
The question is a bit vague so I'll try my best:
you'd usually allocate work to containers when you have a few separate applications, limited physical resources, and you'd like to run each application with its own environment (different runtime versions, architectures and dependencies), which would be cumbersome to manage directly on a machine (physical or virtual).
you'd use a VM when you specifically want a feature that containers can't satisfy, or that would be a headache to set up with containers but that a quick and simple VM could solve (and, again, you have limited resources you'd like to share between use cases).
and finally, a physical machine when performance is of the essence, e.g. I/O throughput and the latency around it.
you can also mix and match to meet each tier's needs:
say we need to run many applications for which VMs would be too much overhead, and containers would make their handling more automated and streamlined, so we use containers with k8s; but on the other hand we want the local storage offered to those containers to be very fast, so we run the k8s cluster on physical machines.
if recoverability were of the essence, we would use VMs, thanks to the option of snapshotting VM state over time.
It's all a big LEGO set you can mix and match depending on your use case and needs.

Some Details of The Boot Process of OSes on x86 32-bit machines

I'm trying to write an OS for my own use. I want to show a blank (black) screen with VGA output, but I have some problems (questions):
Under FAT32, I have an MBR bootloader that reads the first sector of the virtual disk image generated by bximage from Bochs. Where (in which sector) should I put the second compiled binary, the one that shows the black screen? How do I do that with the dd utility? My second compiled file is only 9 bytes.
Is VBR necessary?
How do I know where the data region (FAT32) starts and ends?
I rewrote the bootloader provided at this link.
My disk image's specifications are:
20M,
CHS 40/16/63
In chronological order...
Originally there were no hard disks and (if you weren't using "BASIC in ROM") computers booted from a floppy disk. In this case the first sector of the volume (the floppy disk) contains the operating system's boot loader.
Not long after, hard disks were added and worked using a similar scheme (where the first sector of the volume/hard disk contains the operating system's boot loader).
However, people soon realised that using a whole "large" hard disk for a single volume is silly/inflexible; so a partitioning scheme was invented to split the hard disk into multiple volumes. In this case the first sector of the disk (the MBR) contains a partition table in which one entry is marked as the "active" partition, plus some code to "chain load" the first sector of the active partition (the boot loader). This became "extremely standard"; then people extended it to support multiple different operating systems, and most boot managers support multiple operating systems using this method.
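For reference, the on-disk format being described is tiny; a sketch of one partition table entry (standard MBR layout):

    #include <stdint.h>

    /* Classic MBR partition table entry (16 bytes). Four of these sit at
     * byte offset 446 of the first sector, followed by the 0x55 0xAA boot
     * signature at offset 510. */
    #pragma pack(push, 1)
    struct mbr_partition_entry {
        uint8_t  status;        /* 0x80 = active/bootable, 0x00 = inactive */
        uint8_t  chs_first[3];  /* CHS address of the first sector (legacy) */
        uint8_t  type;          /* partition type ID, e.g. 0x0C = FAT32 LBA */
        uint8_t  chs_last[3];   /* CHS address of the last sector (legacy) */
        uint32_t lba_first;     /* LBA of the partition's first sector */
        uint32_t sector_count;  /* number of sectors in the partition */
    };
    #pragma pack(pop)

The MBR's chain loading code simply finds the entry with status 0x80 and loads that partition's first sector.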
Note 1: I define "boot manager" as something you use to choose which OS to boot, and "boot loader" as something designed to boot the specific OS that was chosen. Ideally these have nothing to do with each other: the boot manager should have nothing to do with any OS, and the end user should be able to replace the boot manager with anything they like without upsetting or affecting any OS or any boot loader. Sadly, (for Windows) Microsoft are hostile towards allowing multiple different operating systems to boot using simple, sane and well supported methods (including allowing multiple instances of the same version of Windows to be installed at the same time, which could be useful - e.g. one OS for your work stuff and a separate OS for your kids both installed on the same computer) and try to smother sanity with their own "boot.ini" idiocy that mostly just makes everything horrid for no benefit (other than giving Microsoft more control over what you do with your computer). Of course when the user is only installing one OS on the computer it's nice for the OS installer to (optionally, if and only if the user wants it - e.g. because they don't already have their own boot manager) provide and install a minimal MBR that does nothing more than chain load the operating system's boot loader.
As time passed more devices got added. The first addition was the network card and the ability to boot from the network. This is nothing like "boot from disk". Instead, the network card's ROM (after some negotiation with a DHCP server) downloads an entire "boot file" (which is not limited to 1 sector and can be 500 KiB if you like) from a server, then provides an API (which became known as the "PXE API") that the boot loader can use to access networking (e.g. send/receive packets, download more files using the TFTP protocol, etc).
The other type of device that got added was CD-ROM. For these, a new specification ("El Torito bootable CD-ROM specification") was created, partly so that you could have a boot catalogue with multiple entries for multiple architectures (e.g. one for "80x86 PC", one for "PowerPC", etc) and let the firmware choose the most appropriate boot loader for the computer being booted. For this there are 3 methods for PCs - emulate a floppy disk, emulate a hard disk, or "no emulation". The emulation options work the same as original "boot from disk" method (and use 512-byte sectors, etc), but are limited and slow and probably shouldn't be used for anything other than compatibility with legacy operating systems. For "no emulation" it's completely different to the original "boot from disk" method, firmware is supposed to load an entire "boot file" (which is not limited to 1 sector and can be 500 KiB if you like), and sectors will be 2048 bytes (and not 512 bytes).
Even later, UEFI got invented. For 80x86 PCs this comes in 2 flavours - 32-bit 80x86 and 64-bit 80x86. In theory you can have a 64-bit UEFI boot loader that switches to protected mode/32-bit and starts a 32-bit OS; and you can have a 32-bit UEFI boot loader that switches to long mode/64-bit and starts a 64-bit OS. However, 32-bit UEFI is very rare (a few old Apple Macs and almost nothing else) and those computers are likely to also support "BIOS compatible boot"; so it isn't worth supporting 32-bit UEFI for that reason. For UEFI in general, it loads and executes an entire file (from whatever the boot device was) and provides an API that the boot loader can use (e.g. to set up a video mode, get a memory map, load other file/s, etc).
Note 2: UEFI tries to make it so that boot works the same regardless of which type of device you're booting from. In practice this doesn't work very well and you'll probably want a different boot loader for CD (that accesses file/s on the CD itself and isn't restricted to a weeny FAT file system image) and a different boot loader for network (even if it's only to allow you to pass IP addresses to the OS and avoid repeating the slow DHCP stuff after the OS boots).
With UEFI a new partitioning scheme was also introduced (GPT or "GUID Partition Table"). This has multiple advantages and (for new operating systems being installed as the only OS on a computer) should probably be considered the default (and the old "MBR partitions" should probably be considered obsolete for compatibility with old operating systems only).
Mostly, for 80x86 you'll probably need 8 or more different boot loaders:
one for BIOS and un-partitioned disk devices (floppy)
one for BIOS and disk devices that were partitioned with "MBR partitions"
one for BIOS and disk devices that were partitioned with "GPT partitions"
one for BIOS and network boot/PXE
one for BIOS and "no emulation" CD boot
one for 64-bit UEFI disk
one for 64-bit UEFI CD-ROM
one for 64-bit UEFI network
Of course all of these cases are "different enough" that it's silly to try to have a generic boot loader that covers multiple different cases (and in cases where there are similarities things like "512-bytes only" restrictions are so limiting that you'll be doomed if you try).
I'd also "strongly recommend" having some kind of abstraction between boot loader and the rest of the OS (e.g. a "boot protocol" defined for the OS that describes how a boot loader sets things up, passes information to the OS and transfers control to the OS); such that none of the code in the entire OS needs to know or care what the firmware was (if it was BIOS or UEFI or something else, like maybe kexec()). This means that anyone can create more boot loaders (to support other cases and other devices); and (as long as everything complies with your abstraction's specification) the entire OS will work with the new boot loader/s without any changes.
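To make that concrete, here's a purely illustrative boot information structure; the fields are assumptions for the sketch, not a standard:

    #include <stdint.h>

    /* Illustrative "boot protocol": the contract every boot loader fills in
     * before handing control to the kernel. Field choices are hypothetical. */
    struct boot_info {
        uint32_t version;            /* protocol version, for future changes */
        uint64_t memory_map_addr;    /* physical address of a memory map */
        uint32_t memory_map_entries;
        uint64_t framebuffer_addr;   /* linear framebuffer, if one was set up */
        uint32_t fb_width, fb_height, fb_pitch;
        uint64_t module_addr;        /* boot module/initial RAM disk, if any */
        uint64_t module_size;
        char     cmdline[256];       /* boot command line */
    };

Every boot loader (BIOS MBR, UEFI, PXE, ...) fills one of these in and jumps to the kernel entry point; the kernel never needs to know which firmware booted it.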
Under FAT32, I have an MBR bootloader that reads the first sector of the virtual disk image generated by bximage from Bochs. Where (in which sector) should I put the second compiled binary, the one that shows the black screen? How do I do that with the dd utility? My second compiled file is only 9 bytes.
This is mostly wrong. For "BIOS hard disk" you should have an MBR (that has nothing to do with the OS at all) and partitions, and your operating system's boot loader should begin in the first sector of the partition (and should be designed to use DS:SI to find the partition table entry that describes its partition, and DL to determine which device the partition is on).
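As for the mechanical part of the question: placing a second-stage binary at sector N of a raw image just means writing it at byte offset N × 512, e.g. dd if=stage2.bin of=disk.img bs=512 seek=N conv=notrunc. Which N is appropriate depends on your layout (somewhere the file system won't overwrite, such as FAT32's reserved region); the 2048 below is only an illustrative choice. The same operation in C:

    /* Sketch: write stage2.bin at a chosen 512-byte sector of disk.img --
     * the C equivalent of
     *   dd if=stage2.bin of=disk.img bs=512 seek=2048 conv=notrunc
     * The sector number is an illustrative choice. */
    #include <stdio.h>
    #include <stdlib.h>

    #define SECTOR      2048L
    #define SECTOR_SIZE 512L

    int main(void)
    {
        FILE *img = fopen("disk.img", "r+b");
        FILE *bin = fopen("stage2.bin", "rb");
        char buf[SECTOR_SIZE];
        size_t n;

        if (!img || !bin) { perror("open"); return EXIT_FAILURE; }
        if (fseek(img, SECTOR * SECTOR_SIZE, SEEK_SET) != 0) {
            perror("seek"); return EXIT_FAILURE;
        }
        while ((n = fread(buf, 1, sizeof buf, bin)) > 0)
            fwrite(buf, 1, n, img);   /* copy the binary into place */

        fclose(bin);
        fclose(img);
        return 0;
    }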
Is VBR necessary?
For some cases (booting from UEFI, network, CD-ROM) a VBR doesn't make sense. For some cases (booting from BIOS hard disk or BIOS USB flash) it's "theoretically optional" but strongly recommended; because some BIOSes may not recognise the device as bootable without one (especially in the USB flash case), and other operating systems will assume that the disk isn't formatted (and will tell their users that the disk needs to be initialised/partitioned, convincing the user that your OS is garbage and leading to the user accidentally or intentionally wiping your OS off the disk).
How do I know where the data region (FAT32) starts and ends?
For FAT, there are fields in the BPB ("BIOS Parameter Block", which is misnamed, as it's mostly not used by the BIOS at all) in the first sector of the volume/partition that tell you things like how many reserved sectors there are, how many sectors are in each cluster, etc. Really, if you're going to use one of the world's worst file systems for inappropriate things (e.g. for an operating system's main partition, where things like effective permissions/security and fault tolerance are sorely needed) then you'll need to learn everything about FAT32 so that you can write code to allow the OS to support it after boot.
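For FAT32 specifically, the arithmetic is short (field offsets per the FAT specification):

    #include <stdint.h>

    /* FAT32 BPB fields needed to locate the data region, with their byte
     * offsets within the first sector of the volume (per the FAT spec). */
    struct fat32_geometry {
        uint16_t bytes_per_sector;    /* offset 11, BPB_BytsPerSec */
        uint8_t  sectors_per_cluster; /* offset 13, BPB_SecPerClus */
        uint16_t reserved_sectors;    /* offset 14, BPB_RsvdSecCnt */
        uint8_t  num_fats;            /* offset 16, BPB_NumFATs    */
        uint32_t total_sectors;       /* offset 32, BPB_TotSec32   */
        uint32_t sectors_per_fat;     /* offset 36, BPB_FATSz32    */
    };

    /* The data region starts right after the reserved sectors and the FATs,
     * and runs to the end of the volume. */
    uint32_t data_region_start(const struct fat32_geometry *g)
    {
        return g->reserved_sectors + (uint32_t)g->num_fats * g->sectors_per_fat;
    }

    uint32_t data_region_end(const struct fat32_geometry *g)
    {
        return g->total_sectors;  /* exclusive upper bound */
    }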

Limiting memory access

How do we actually limit a machine's memory access if software contains an instruction that works with raw address bits and orders the CPU to access a restricted area?
If we use a container or a virtual machine or similar, do we have to run code that checks every instruction of the original code to see whether it accesses a restricted area?
Privilege management usually requires hardware support in the CPU. In the case of software emulation, the emulator will be required to ensure the proper privilege levels are enforced.
The MMU is a component that (among other things) controls memory accesses. Certain regions of memory can be marked as readable, writable and executable. The MMU will check all memory accesses and cause some sort of fault on an illegal access. This prevents the CPU from reading/writing/executing at arbitrary memory locations.
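You can watch the MMU doing this from ordinary user space; a minimal POSIX sketch:

    /* Ask the kernel (and through it, the MMU) to make a page read-only,
     * then watch a write to that page fault with SIGSEGV. */
    #include <signal.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static void on_segv(int sig)
    {
        static const char msg[] = "MMU blocked the write (SIGSEGV)\n";
        (void)sig;
        write(STDOUT_FILENO, msg, sizeof msg - 1);
        _exit(0);
    }

    int main(void)
    {
        long pagesize = sysconf(_SC_PAGESIZE);
        char *page = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED)
            return 1;

        signal(SIGSEGV, on_segv);

        page[0] = 'x';                        /* allowed: page is writable */
        mprotect(page, pagesize, PROT_READ);  /* now mark it read-only */
        page[0] = 'y';                        /* illegal: the MMU faults */

        return 1;  /* never reached */
    }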
Many CPUs have privilege separation built into the CPU itself. It will have a concept of privilege levels (e.g. rings in x86, mode bits in ARM) and checks that the instruction being run is allowed within the current privilege level. This prevents code running in an unprivileged mode from executing privileged instructions.
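And the privilege-level check, in the same style; HLT is a ring-0-only instruction on x86, so executing it from user mode (ring 3) raises a general-protection fault, which Linux delivers as SIGSEGV:

    /* Execute a privileged instruction from user mode and watch the CPU
     * refuse it. x86 Linux only. */
    #include <signal.h>
    #include <unistd.h>

    static void on_fault(int sig)
    {
        static const char msg[] = "CPU refused the privileged instruction\n";
        (void)sig;
        write(STDOUT_FILENO, msg, sizeof msg - 1);
        _exit(0);
    }

    int main(void)
    {
        signal(SIGSEGV, on_fault);
        __asm__ volatile ("hlt");  /* privileged: faults in ring 3 */
        return 1;                  /* never reached */
    }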
The operating system hosting the containers or virtual machine host software will need to ensure the proper privilege separation is implemented correctly (making use of hardware features as appropriate).

What is the most suitable virtual machine software for sharing hardware ports (COM, LPT etc) at register level?

I'm using Delphi to develop real-time control software, and over the last couple of years I have done some work running older Windows installations under Microsoft's VirtualPC; it works fine for 'pure software' development (i.e. no or limited access to the outside world). Such tools seem able to work with network connections, but I have to maintain software which performs I/O via the parallel port (via a device driver). We also use USB I/O. In the past I've liked Microsoft's virtual tools because it takes time to install a new operating system and then (in my case) install Delphi and a load of libraries and components to provide development support. In these circumstances I've not been too bothered by my lack of access to the low-level I/O ports.
I want to up my game and I'm happy to pay for a good virtualisation tool IF I can have access from it to the outside world, i.e. I want to be able to configure it to allow access to my machine's parallel port and COM ports in the same way as if it were running natively. This access has to expose the port in register terms, i.e. to 'see' the port at address $03f8 for example, and to support I/O operations on those registers (via the appropriate kernel access) just as my Windows 7 64-bit installation is able to do.
I see that there are a number of virtualisation solutions out there now, but it's quite hard to ascertain the capability of each at such a low level. Does anyone have any experience or knowledge in this area?
The VMware products would be best suited for this. You can add virtual serial and parallel ports and forward them to a physical port on the host, or even to a file or a named pipe.
You can also connect any USB device that is connected to the host machine.
This works with VMware Workstation, and might work with the free VMware Player too.
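For context, this is the kind of register-level access the software inside the VM ultimately performs. A sketch on a Linux host, using ioperm()/inb()/outb() from <sys/io.h> (x86 only, needs root; on Windows the same access goes through a kernel driver, as the question notes). 0x378 is the conventional LPT1 base:

    #include <stdio.h>
    #include <sys/io.h>   /* ioperm(), inb(), outb() -- glibc, x86 only */

    #define LPT1_BASE 0x378

    int main(void)
    {
        /* ask the kernel for access to the three LPT1 registers */
        if (ioperm(LPT1_BASE, 3, 1) != 0) {
            perror("ioperm (are you root?)");
            return 1;
        }

        outb(0xFF, LPT1_BASE);                      /* drive all data lines high */
        unsigned char status = inb(LPT1_BASE + 1);  /* read the status register */
        printf("LPT1 status register: 0x%02X\n", status);

        ioperm(LPT1_BASE, 3, 0);                    /* drop the access again */
        return 0;
    }

Whatever virtualisation product you choose has to trap or forward exactly these port reads and writes for the guest.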

How to write BIOS program that connects to the internet?

I am aware that there are programs out there, like LoJack for Laptops, that get installed in the BIOS, but I'm still a little confused. When reading about LoJack, it seems to me that they can't fully determine the laptop's location until the user logs in and tries to access the internet. So I'm thinking it's a BIOS application so that it wouldn't matter if the thief reformats the HD.
So my question is, does anyone have any ideas of how an internet-enabled BIOS application would be written? I'm not looking for full answers -- just ideas or resources to get started. For example, is such a thing written in assembly? Once such an app is written, how does it get transferred to the BIOS?
Does the BIOS program itself recognize that there is an internet connection (when the thief logs on to the OS)? Or upon logon, do additional processes get spawned? Are there any resources/websites that anyone can direct me to?
You didn't mention whether you were interested in legacy BIOS or EFI BIOS, but I would note that EFI provides the capability of writing EFI applications. See Intel Press:
Harnessing the UEFI Shell
The EFI Application toolkit comes with a complete TCP/IP network stack:
http://www.intel.com/technology/efi/toolkit_overview.htm
More at tianocore.org
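To make "EFI application" concrete, here is a minimal skeleton in the gnu-efi style (a sketch; the toolkit's TCP/IP stack would be reached through protocols located via the system table, which is beyond this snippet):

    /* Minimal UEFI application, gnu-efi style. Build with gnu-efi;
     * real networking is layered on top of UEFI protocols. */
    #include <efi.h>
    #include <efilib.h>

    EFI_STATUS EFIAPI efi_main(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
    {
        InitializeLib(ImageHandle, SystemTable);
        Print(L"Hello from pre-OS land.\n");
        return EFI_SUCCESS;
    }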
Regarding "LoJack"-style solutions, one of the providers of this technology is Absolute Software's Computrace product.
Basically there are 3 components: 1) a software component that runs in the OS; 2) a BIOS component which is baked into the system BIOS (accomplished via Absolute working with the PC vendor); 3) servers at Absolute Software that talk to the PC.
For more information on how it works visit:
http://www.absolute.com/en/company/Computrace-Persistence.aspx
(see especially the demo video on this site)
To learn something about BIOS, one good source is coreboot.org. It is an open-source BIOS (or firmware) and supports some physical machines.
Legacy BIOS is written in assembly language, but newer generations, such as UEFI or coreboot, are written mostly in C. The BIOS program is stored in ROM and executed by the CPU automatically at power-on.
The BIOS program itself does not access the internet or perform any of the advertised functions. The LoJack addition to the BIOS firmware is a file copying/patching utility - at boot it can check the hard drive for a copy of Windows and proceed to silently install/repair the LoJack service if it has been removed. The service itself includes several measures to lower its profile and prevent itself from being disabled (similar to how much malware runs several processes that each restore the other if one is disabled or killed).
The LoJack BIOS program can't do anything if an unsupported operating system (like Linux) is installed after the hard drive is wiped.
