Accessing a Bitnami install on VM from outside devices - bitnami

I have an Apache/PHP/MySQL Bitnami install running in a VM on a Windows box. I can enter the IP of the VM into my host machine's browser and the site-in-progress comes up just fine.
However, I need to view the site in progress from another device/computer (mobile testing). How can this be done?

A bit late with this answer but I think it adds value...
Most VM software has what is commonly called a 'bridged' mode (it's actually called 'Bridged Networking' in VirtualBox - I don't recall what it's called in VMware). This in essence allows the VM to get its own independent IP on the LAN (just like a 'real' PC).
The downside is that it's not as secure (because the VM is totally exposed to the network), but the upside is that it's much quicker and easier to set up because you don't need to muck around with port forwarding and there is no risk of ports conflicting with the host or other VMs.
For full details of networking options in VirtualBox see this page: http://www.turnkeylinux.org/docs/virtual-networking-explained
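As a minimal sketch of doing this from the command line (the VM name is a placeholder; pick the host adapter name from the list the first command prints, and note the VM must be powered off for modifyvm):
# List the host adapters available for bridging, then attach the VM's first NIC to one of them.
VBoxManage list bridgedifs
VBoxManage modifyvm "<vm-name>" --nic1 bridged --bridgeadapter1 "<host-adapter-name>"
After starting the VM again it should pick up its own address from the LAN's DHCP server, and a phone or any other device on the same network can browse straight to that IP.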

I don't know bitnami but virtualization software usually allows you to do port forwarding.
I do use VirtualBox and here is a good post describing how to set it up properly.
Virtualbox "port forward" from Guest to Host

Related

what's the purpose of the zabbix officially provided docker image of the zabbix agent?

I used the official Zabbix docker-compose YAML to set up a Zabbix system and found that the server, as a monitoring target, was not available. I searched the Internet and found that other people have run into the same problem. Someone said the agent container's IP or DNS name should be used as the server's. I tried it and it works. But I'm confused about the agent. Does it monitor the server container, the agent container or the host machine? If it only monitors the agent container itself, what's the purpose of it?
Does it monitor the server container,the agent container or the host machine?
Agent container.
If it only monitors the agent container itself, what's the purpose of it?
For testing, and for monitoring external things with custom commands. You can also connect things from the host and monitor them; in short, it covers all the cases where you don't want to, or can't, install the agent directly on the host.
Everybody who configures a Dockerized Zabbix installation like yours bumps into this issue, and of course finds themselves on StackExchange looking for the answers that should have been in the documentation.
The reason the Zabbix agent in the docker-compose install you're referring to can't initially connect is that both it and the server it monitors run in isolated containers. Separate containers cannot talk to each other on 127.0.0.1 (localhost) addresses. And that is actually a good thing!
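What does work is Docker's embedded DNS on a user-defined network: containers on the same network resolve each other by name, which is why pointing the server at the agent container's DNS name succeeds where 127.0.0.1 fails. A quick generic demonstration (the network and container names below are made up):
# Two containers on the same user-defined network reach each other by name.
docker network create zbx_net
docker run -d --name zabbix-agent --network zbx_net alpine sleep 3600
docker run --rm --network zbx_net alpine ping -c 1 zabbix-agent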
I've reviewed the documentation in the repo you're talking about and it's sparse, to say the least; it certainly could be better. But to be fair to Zabbix, their docker-compose install DOES work great once you get it running, and you can achieve pretty fair results quickly with little effort (and a bit of Googling ;-> ).
I actually found FURTHER pain connecting to containerized Zabbix agents raised on different hosts outside of the docker-compose install you're referring to. Connectivity was broken because the host the docker-compose install was raised on was NAT'ing out the traffic and presenting the wrong IP address. I've documented this issue HERE.
Dockerized Zabbix is a good thing; there is a purpose to it. I agree with you that the documentation could be better, though. Stick with it!

Force docker container to use host machine MAC address

I am providing a Docker container for my software that runs directly on the user's machine. The software is supposed to use a node-locked license bound to the MAC address of the host machine. FlexLM is used to validate the license.
The problem is that the Docker container does not, by default, have access to the host machine's MAC address. One has to either bind the container to the host machine's network using the --net argument or provide the MAC address explicitly using the --mac-address argument.
The trouble is that one can pass any value to the --mac-address argument and the container will use that MAC address. This defeats the whole purpose of a node-locked license. How do I make sure that Docker always gets the host machine's MAC address?
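For reference, the two approaches mentioned in the question look roughly like this (the image name is a placeholder):
# Option 1: share the host's network stack, so the license check sees the host's real MAC.
docker run --net host <image>
# Option 2: hand the container an arbitrary MAC address - any value is accepted,
# which is exactly the loophole described above.
docker run --mac-address 02:42:ac:11:00:02 <image>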
Short answer: "there is currently no good solution for node-locking within a container. Everything is virtualized, so there is nothing safe to bind to."
Suggestion: Have you heard about Flexera's REST-based licensing API? Also known as the Cloud Monetization API, or CMAPI.
This API was designed for cloud-to-cloud license checking. It does not require the SDK libraries; you can call it from any language that can make a REST call. It makes for a super lightweight client, but requires back-end functionality (FlexNet Operations and Cloud Licensing Service) to support it.
It's a great solution for applications deployed in a docker container.
Take a look at the FlexNet Licensing datasheet here:
https://www.flexerasoftware.com/resources.html?type=datasheet
Then contact your account manager for more information.
Source - Flexera Customer Community - https://community.flexera.com/t5/FlexNet-Publisher-Forum/Support-for-Docker-and-Kubernetes/m-p/111022

How do I get into a Docker container, start a VPN and use my web browser, just like a VM?

I am wondering if I can replace my virtual machine.
I usually use a Windows VM to get in, connect to my enterprise VPN and do some work.
If containers are light enough, I'd like to move to them from my VM.
Basically I see containers being used like processes, not as something you log on to interactively.
Is this possible?
Regards,
No, stick with your VM. Complicated network setups ("launch a VPN within my container's network space") and desktop applications ("use my Web browser") are both bad matches for Docker, plus running any Docker command requires administrative access on the host, which you probably don't want just to access intranet content.
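For a sense of why even the VPN half alone is awkward: a containerized VPN client typically needs extra kernel capabilities and access to the TUN device before it will start at all (the image name below is purely illustrative):
# A VPN client container usually needs NET_ADMIN and the TUN device passed in.
docker run --rm -it --cap-add NET_ADMIN --device /dev/net/tun <vpn-client-image>
Add a graphical browser on top of that and you are forwarding X11 or running VNC inside the container, at which point a VM is simply less work.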

docker container does not need an OS, but each container has one. Why?

"docker" is a buzz word these days and I'm trying to figure out, what it is and how does it work. And more specifically, how is it different from the normal VM (e.g. VirtualBox, HyperV or WMWare solutions).
The introduction section of the documentation (https://docs.docker.com/get-started/#a-brief-explanation-of-containers) reads:
Containers run apps natively on the host machine’s kernel. They have better performance characteristics than virtual machines that only get virtual access to host resources through a hypervisor. Containers can get native access, each one running in a discrete process, taking no more memory than any other executable.
Bingo! Here is the difference. Containers run directly on the kernel of hosting OS, this is why they are so lightweight and fast (plus they provide isolation of processes and nice distribution mechanism in the shape of docker hub, which plays well with the ability to connect containers with each other).
But wait a second. I can run Linux applications on Windows using Docker - how can that be? Surely there is some VM involved. Otherwise we would just not get the job done...
OK, but what does it look like when we work on a Linux host? And here comes the real confusion... there one still defines an OS as the base image for every image we want to create. Even if we say "FROM scratch" - scratch is still some minimalistic kernel... So here comes
QUESTION 1: If I run e.g. a CentOS host, can I create a container which directly uses the kernel of this host operating system (and not a VM which includes its own OS)? If yes, how can I do it? If no, why does the Docker documentation lie to us (since then Docker images always run within some VM and it is not too different from other VMs, or is it)?
After some thinking about it and looking around I was wondering, if some optimization is done for running the images. Here comes
QUESTION 2: If I run two containers whose images are based on the same parent image, will this parent image be loaded into memory only once? Will there be one VM for each container or just one which runs both containers? And what if we use different OSs?
The third question is a fairly well-worn one:
QUESTION 3: Are there resources somewhere which describe this kind of thing? Most of the articles which discuss Docker just say "it is so cool, you must definitely use it. Just run one command and be happy"... which does not explain very much.
Thanks.
Docker "containers" are not virtual machines; they are just regular processes running on the host system (and thus always on the host's Linux kernel) with some special configuration to partition them off from the rest of the system.
You can see this for yourself by starting a process in a container and doing a ps outside the container; you'll see that process in the host's list of all processes. Running ps in the containerized process, however, will show only processes in that container; limiting the view of processes on the system is one of the facilities that containerization provides.
The container is also usually given a limited or separate view of many other system resources, such as files, network interfaces and users. In particular, containerized processes are often given a completely different root filesystem and set of users, making it look almost as if it's running on a separate machine. (But it's not; it still shares the host's CPU, memory, I/O bandwidth and, most importantly, Linux kernel of the host.)
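A quick way to see this for yourself (the container name and command are arbitrary):
# Start a long-running process in a container, then look for it on the host.
docker run -d --name demo alpine sleep 1000
ps -ef | grep 'sleep 1000'   # visible in the host's full process list
docker exec demo ps          # inside, only the container's own processes appear
docker rm -f demo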
To answer your specific questions:
On CentOS (or any other system), all containers you create are using the host's kernel. There is no way to create a container that uses a different kernel; you need to start a virtual machine for that.
The image is just files on disk; these files are "loaded into memory" in the same way any files are. So no, for any particular disk block of a file in a shared parent image there will never be more than one copy of that disk block in memory at once. However, each container has its own private "transparent" filesystem layer above the base image layer that is used to handle writes, so if you change a file the changed blocks will be stored there, and will now be separate from the underlying image that that other processes (who have not changed any blocks in that file) see.
In Linux you can try man cgroups and man cgroup_namespaces to get some fairly technical details about the cgroup mechanism, which is what Docker (and any other containerization scheme on Linux) uses to limit and change what a containerized process sees. I don't have any other particular suggestions on readings directly related to this, but I think it might help to learn the technical details of how processes and various other systems work on Unix and POSIX systems in general, because understanding that gives you the background to understand what kinds of things containerization does. Perhaps start with learning about the chroot(2) system call and programming with it a bit (or even playing around with the chroot(8) program); that would give you a practical, hands-on example of how one particular area of containerization works.
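As a starting point for that experiment, here is a minimal chroot sketch (the paths are illustrative, it must be run as root, and it assumes a statically linked busybox so no shared libraries need to be copied in):
# Build a tiny root filesystem and switch into it.
mkdir -p /tmp/jail/bin
cp /bin/busybox /tmp/jail/bin/sh   # the static binary doubles as a shell
chroot /tmp/jail /bin/sh           # this shell now sees /tmp/jail as "/"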
Follow-up questions:
There is no kernel version matching; only the one host kernel is ever used. If the program in the container doesn't work on that version of that kernel, you're simply out of luck. For example, try running the official Docker centos:6 or centos:5 container on a Linux system with a 4.19 or later kernel, and you'll see that /bin/bash segfaults when you try to start it. The kernel and userland program are not compatible. If the program tries to use newer facilities that are not in the kernel, it will similarly fail. This is no different from running the same binaries (program and shared libraries!) outside of a container.
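You can reproduce that mismatch in one line on such a host (this just demonstrates the failure described above):
# On a 4.19+ kernel the centos:6 userland fails as soon as bash starts.
docker run --rm centos:6 /bin/bash -c 'echo hello'   # segfaults instead of printing hello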
Windows and Macintosh systems can't run Linux containers directly, since they don't have a Linux kernel with the appropriate facilities to run even Linux programs, much less the extra cgroup facilities. So when you install Docker on these systems, it generally installs a Linux VM on which to run the containers. Almost invariably it will install only a single VM and run all containers in that one VM; to do otherwise would be a waste of resources for no benefit. (Actually, there could be a benefit in being able to have several different kernel versions, as mentioned above.)
Docker does not have an OS in its containers. In simple terms, a Docker container image is just a kind of filesystem snapshot of the Linux image that the container image depends on.
The container image includes some basic programs like a bash shell, the vim editor etc. to make it easy for developers to work with the image. Docker images can also include pre-installed dependencies like Node.js, redis-server etc., as we can find on Docker Hub.
Docker behind the scenes uses the host OS, which is Linux itself, to run its containers. The programs included in the Linux-like filesystem snapshot that we see in the form of Docker containers actually run on the host OS in isolation.
The container images may sound like different Linux distros, but they are just filesystem snapshots of those distros. All Linux distributions are based on the same kernel; they differ in the programs, tools and dependencies that they ship with.
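A simple way to see that these "different distros" all share the host's kernel (the image tags are just common public ones):
# All three commands report the same kernel version, because it is the host's kernel.
uname -r
docker run --rm ubuntu uname -r
docker run --rm alpine uname -r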
Also take note of this comment; it is very much relevant to this question.
Hope this helps.
It's now a long time since I posted this question, but it seems like it still gets hits... So I decided to answer it - in fact mainly the question which is in the title (the questions in the text are carefully answered by Curt J. Sampson).
So, the discussion of the "main" question: if containers are not VMs, then why do we need VMs for them?
As you may guess, I am working on Windows (on Linux this question would not arise, because on Linux one does not need a VM for Docker).
The reason why we need a VM for containers on Windows is pretty obvious (probably this is the reason why nobody mentions it explicitly). As was already mentioned here and in many other FAQs, containers reuse the kernel and some other resources of the hosting OS. Taking into account that most of the containers available out there are based on Linux, one may conclude that those containers need the host OS to provide a Linux kernel for them to run. Which is not natively easy on Windows (I am not sure, maybe it is now possible with the Windows Subsystem for Linux). This is why on Windows we need one VM which runs Linux and the Docker service inside that VM. And then, when we start the containers, they are also started inside this VM (and reuse the resources of its Linux OS). All the containers run inside the same VM. Getting a bit more technical: by default Docker uses Hyper-V to run this Linux VM, but one can also use Docker Toolbox, which uses Oracle VirtualBox. By the way, the VM can be freely seen in the VirtualBox interface. The nice part is that Docker (or Docker Toolbox) takes care of managing this VM and we don't need to care about it.
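If you are on Docker Toolbox, that helper VM is managed through docker-machine and you can inspect it directly (the VM is usually called "default", but check the listing on your machine):
# List the Linux VM that actually runs the containers, then drop into it over ssh.
docker-machine ls
docker-machine ssh default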
Now a bonus question, which at the time confused me even more. One may think: "OK, it is clear now. If we run a Linux container on a Windows OS, then we need a Linux kernel and thus a VM with Linux. But if we run a Windows container on Windows (by the way, such a thing exists), then a VM should not be needed, right?..." Answer: "wrong" (or almost wrong). :) The problem is that the Windows-based containers (at least those which I saw) use the Windows Server kernel, which is not available e.g. in Windows 10. Thus one still needs a VM with a special version of Windows Server running on it. In fact, MS even created a special version of Windows Server which can be run on a VM for development purposes free of charge, specifically to enable development of Windows-Server-based containers. If my understanding is correct, those containers should run without a VM on Windows Server. I should admit that I never checked it, though.
I hope, that this messy explanation may help someone to better understand the topic.
We need a VM to run Docker on the host machine (this is achieved through the Docker Toolbox) if it is Windows; on Linux we don't even need this. Once we have Docker Toolbox, a container in itself doesn't need a VM: each container has a baseline image which is very minimal and reuses a lot of the host kernel, hence making it lightweight compared to a VM. You can run many such containers using a single host kernel.

Windows 10 Docker Network DNS doesn't work after reboot

I'm not sure if this is an issue with the current version of Windows Docker network or poor configuration and misunderstanding on my part, but I have the following setup:
2 Docker containers (built using the Microsoft/ASP.NET image as a base) running a .NET MVC application in each.
1 Docker container running SQL server (built using the Microsoft/mssql-server-windows image)
When I create all 3 containers everything works great; I can attach and ping all of the other containers using their names without any issue. The applications run and can communicate with each other as I hoped.
However, when I reboot my machine and start all the containers again they can no longer ping/communicate with each other using their names (using IP addresses is fine).
I've tried this on the default NAT network and also tried replacing the NAT network with my own custom NAT network.
To resolve the issue I have to run the force network disconnect command for each container as such:
docker network disconnect nat <containername> --force
And then I have to reconnect each container to the network before starting them up. All containers can then ping/communicate with each other using their names as well as their IP addresses.
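Put together, the post-reboot workaround looks like this for each container (the container name is a placeholder, as above):
docker network disconnect nat <containername> --force
docker network connect nat <containername>
docker start <containername>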
FYI, this is a development environment but I was hoping to do something similar in Azure using a Windows Server 2016 VM, although I don't quite know what the best network configuration is for live production yet as I need to have multiple applications (in separate containers) on the same node accessed via their own subdomains.
Any help or guidance would be great.
I'm not sure, in part because this question was asked several months before any other example I've run into, but this sounds very similar to the problem described at https://github.com/docker/for-win/issues/1038.
Basically, there appears to be a problem introduced with the 1709 update to Windows 10 which results in a scenario where Hyper-V networking doesn't work the way it ought to.
There appear to be two common ways of working around this problem: Turning off "Fast Start" in the Control Panel => Power Options => System Settings, or restarting Docker for Windows and any containers after booting. I also thought I saw something on a Microsoft blog post indicating that the underlying problem has now been resolved and will be included in an update to Windows 10, but alas I can no longer find that information or the specific version number in which the problem was (theoretically) resolved. It may well be the delayed 1803 "Spring Creators Update" release.
