I'm wondering whether there is a way to leverage the Docker concept for my Windows-based desktop application. I need to run GUI tests, performance tests, workflow tests, etc. for each build. What I currently do is use Hyper-V with a set of pre-configured OS images.
Is there an easy way to achieve the same thing using Docker? As far as I know this can be achieved for non-GUI applications, but what about GUI-based desktop apps?
I have containerized an application which is comprised of a Node.js application, Nginx, and MongoDB. I'm currently using Docker Compose to start and stop the application on my development machine.
I'd like to distribute the application, along with the volume that contains the MongoDB database files, so that an end user can easily start the application on their computer and point their web browser to it.
Some factors I'm considering:
The end user is almost certainly not familiar with containerization and is probably not comfortable playing around in a terminal.
The end user is likely to be using macOS or Windows, but Linux should be supported.
Asking the user to install Docker is possible, but I don't like that it requires Hyper-V on Windows, which conflicts with other software such as VirtualBox.
I could write a cross-platform GUI application that manages Docker with simple "start" and "stop" buttons. However, I am not married to using Docker if there is an easier path forward. Should I look into something like Facebook's executable archives?
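For what it's worth, the "start"/"stop" wrapper would not need much logic: each button really only has to shell out to Docker Compose. A rough sketch of the commands such a wrapper (or a pair of scripts) would run, assuming the compose file ships alongside it and MongoDB's data lives in a named volume:

```bash
# "Start": fetch the images and bring the stack up in the background.
# Assumes docker-compose.yml sits in the current directory and defines
# the Node.js app, Nginx, and MongoDB with a named volume for the data.
docker compose pull
docker compose up -d

# "Stop": shut the stack down; named volumes (the database) are kept.
docker compose down
```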
I am using Cygwin under Windows Server 2008 to get Linux capability (to some degree) and SSH, and to be able to run apps without using a GUI.
On another server, running Ubuntu 18.04, I use containers to somewhat isolate my apps, so that when I run an app and it spawns child processes and probably modifies file descriptors etc. (so that I can no longer keep track of which processes are running), I can stop the app and all the mess it has made just by stopping the container.
Containers make starting and stopping an app clean and simple.
Is there any way to have such a thing on Windows (without using Docker on Windows)? By that I mean file and process isolation, not networking or other things.
Is it possible to isolate only the processes, so that I can get rid of them with a single command?
Is there any tool for that, particularly for Cygwin under Windows?
I don't know about other languages, but if you're using Python, it has a feature called a virtual environment, which lets developers create and run applications in isolated environments. You can learn more about it here.
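For example, a minimal sketch (the environment name and requirements file are just placeholders), assuming a Python 3 interpreter is available, e.g. under Cygwin:

```bash
# Create an isolated environment and install the app's dependencies into it.
python3 -m venv appenv
source appenv/bin/activate
pip install -r requirements.txt   # hypothetical requirements file

# Everything lives under ./appenv, so tearing it down is a single command.
deactivate
rm -rf appenv
```

Note that this isolates Python packages, not arbitrary processes or files, so it only partially covers what containers give you.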
I myself have come to the conclusion that creating a Windows service would be the only way to manage an app like this without using a container.
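For reference, a bare-bones sketch of registering an app as a service from the command line (MyApp and the path are placeholders; note that sc.exe only works for binaries that implement the service control protocol, so an ordinary executable usually needs a service wrapper such as NSSM):

```bash
# Register, start, stop, and remove a Windows service (run from an elevated shell).
sc create MyApp binPath= "C:\apps\myapp.exe" start= demand
sc start MyApp
sc stop MyApp
sc delete MyApp
```

Stopping the service stops the process the service control manager started, but it does not by itself clean up every child process the way stopping a container does.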
Currently, SeleniumHQ/docker-selenium is available for Linux.
There they implement a Selenium Grid using Docker on Linux.
My main aim is to achieve the same on Windows. I am not yet sure what challenges I will face.
So I am creating this thread to discuss the challenges of such an implementation.
For Windows to support IE in Docker, we need to understand two things:
1) Windows does not provide GUI capability inside Docker containers, unlike Xvfb on Linux.
2) There is no headless IE.
Feel free to explore this project to understand how they enable a GUI in Linux containers.
Link to the project: https://github.com/SeleniumHQ/docker-selenium
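For context, the Linux setup that this thread is trying to reproduce boils down to running one of that project's images, roughly like this (the image name and the recommended --shm-size come from the docker-selenium README; tags may have changed since):

```bash
# Start a standalone Chrome node; the WebDriver/Grid endpoint is exposed on port 4444.
# Browsers need a larger /dev/shm than the Docker default.
docker run -d -p 4444:4444 --shm-size=2g selenium/standalone-chrome
```

It is exactly this kind of one-liner that has no IE/Edge equivalent, because of the two Windows limitations above.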
UPDATES:
There is still no official approach to running IE/Edge inside Docker, as
"No Windows docker images have GUI, so we cannot test IE11, EDGE."
But we can install VirtualBox and make this happen.
This approach adds an extra layer of virtualization (nested virtualization) on top of Docker to make IE/Edge execution possible, and I think it may lead to performance issues for heavy testing.
If Selenium testing is what you are looking for and you don't have a heavy load, you can try the approach mentioned in the links below.
Youtube - Selenium Windows containers in Docker under Linux
Github - Windows Images
Blogpost - selenium-on-windows-docker-revolution
I am currently trying to understand and learn Docker. I have an app, a .exe file, and I would like to run it on either Linux or OSX by creating a Docker image. I've searched online but I can't find anything that allows me to do that, and I don't know Docker well enough to improvise something. Is this possible? Would I have to use Boot2Docker? Could you please point me in the right direction? Thank you in advance; any help is appreciated.
Docker allows you to isolate applications running on a host; it does not provide a different OS to run those applications on (with the exception of the client products that include a Linux VM, since Docker was originally a Linux-only tool). If the application runs on Linux, it can typically run inside a container. If the application cannot run on Linux, then it will not run inside a Linux container.
An exe is a Windows binary format, and that format is incompatible with Linux (unless you run it inside an emulator or VM). I'm not aware of any easy way to accomplish your goal. If you want to run this binary, then skip Docker on Linux and install a Windows VM on your host.
As other answers have said, Docker doesn't emulate the entire Windows OS that you would need in order to run an executable 'exe' file. However, there's another tool that may do something similar to what you want: the "Wine" app from WineHQ. An abbreviated summary from their site:
Wine is a compatibility layer capable of running Windows applications
on several operating systems, such as Linux and macOS.
Instead of simulating internal Windows logic like a virtual
machine or emulator, Wine translates Windows API calls
on-the-fly, eliminating the performance and memory penalties of
other methods and allowing you to cleanly integrate Windows
applications into your desktop.
(I don't work with nor for WineHQ, nor have I actually used it yet. I've only heard of it, and it seems like it might be a solution for running a Windows program inside of a light-weight container.)
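To make that concrete, here is a rough, untested sketch of what running an .exe under Wine in a container could look like, assuming a 64-bit console executable called myapp.exe in the current directory (GUI programs would additionally need access to an X server, and 32-bit executables need the i386 Wine packages):

```bash
# Run the Windows executable under Wine inside a throwaway Ubuntu container.
docker run --rm -it -v "$PWD:/winapp" ubuntu:22.04 bash -c \
  "apt-get update && apt-get install -y --no-install-recommends wine && wine /winapp/myapp.exe"
```

For repeated use you would bake the Wine installation into an image of your own instead of installing it on every run.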
There was a project thrown my way recently that involves the orchestration of several (Linux capable) embedded devices, deploying software to them, and allowing for the applications to be updated when the code base updates in a git repo.
The initial thought was to make a standard image for each device, and I set out attempting to install Docker on a UDOO Quad and an Intel Edison to start, but without any success so far.
My thinking is that installing Docker on embedded devices seems like a good idea, but if that were the case, surely it would have been ported by now. The only group out there that seems to be making these efforts is Resin.io.
Is there something I'm missing, or is there a clear reason why Docker doesn't make sense on embedded devices? If there isn't a reason, and it does make sense to run Docker on embedded systems, is there something I've overlooked out there: are there any sources of discussion on porting, or how-to's that cover this?
I considered running Docker on an embedded device (a MIPS system), but didn't go that way. There are some problems with it, in my humble view:
Docker is implemented in Go. There is currently no available toolchain for MIPS to compile Go; you would need to create the toolchain yourself using gccgo.
Docker is larger than LXC. On a desktop computer this is not a problem, but an embedded device has limited flash storage.
Docker uses some quite recent Linux kernel features. The kernel version on embedded devices is often not so new, and back-porting is needed to make it work.
A Docker image has to be built for the same architecture as the runtime environment. This means that if you want to run a Docker container on a Raspberry Pi, the image has to be built on an ARM system. QEMU can be used to build Docker images in the cloud, but it doesn't support all CPU architectures used in embedded systems (for example, at the time of writing it doesn't support MIPS).
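These days, the architecture mismatch in the last point can often be worked around with QEMU user-mode emulation and Docker's buildx, building ARM images on an x86 machine. A rough sketch, assuming a Dockerfile in the current directory and a hypothetical myrepo/myapp image name:

```bash
# Register QEMU binfmt handlers so the x86 host can execute ARM binaries during the build.
docker run --privileged --rm tonistiigi/binfmt --install arm64,arm

# Create a builder that supports multi-platform builds, then cross-build and push.
docker buildx create --use
docker buildx build --platform linux/arm64,linux/arm/v7 -t myrepo/myapp:latest --push .
```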
In the end, LXC was chosen for the specific task of running a container on an embedded device. It has limited features compared to Docker, but it currently suits the requirements of the project.
As of 2019, I would like to update this answer, since I did port Docker to an embedded system with an ARM CPU. At the price of extra flash and memory usage, Docker gives you container management, image management, and many ready-to-run images from Docker Hub. So the decision is a balance between cost and features.
Here is an update for 2018:
You can work with Docker on embedded devices such as the Raspberry Pi and Orange Pi quite easily now because of advancements in the development of the Raspbian and Armbian operating system images. Specifically, both types of devices and their respective OS images now ship kernels recent enough to install Docker without any problems (at least version 3.10, though both now offer 4.x+ versions).
Your desire for faster rates of change can definitely be realized by using embedded Docker. I can say from experience that I have tested and regularly run the approach you describe. Basically, you start with a base operating system image such as Raspbian or Armbian, tweak that operating system enough that it's secure and has Docker installed, and then you use Docker for handling development iteration and application updates.
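Getting Docker onto such an image is now essentially a one-liner thanks to Docker's convenience script (shown below; adding your user to the docker group is optional and has security implications):

```bash
# Install Docker on Raspbian/Armbian using the official convenience script.
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Optionally allow the current user to run docker without sudo (log out and back in afterwards).
sudo usermod -aG docker "$USER"
```

From there, development iteration is a matter of building or pulling a new image and restarting the container on the device.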
As an aside, if you are interested in running Docker on embedded Linux devices, then I recommend you check out a free, open-source, MIT-licensed command line tool I wrote to help developers work with embedded Docker on multiple devices at once: https://github.com/ForwardLoopLLC/floopcli .
Even if you are not interested in the tool itself, the documentation for the tool describes several patterns for working with Dockerized applications across multiple devices in multiple languages: https://docs.forward-loop.com/floopcli/master/index.html . The materials there should serve as a starting point for porting applications to Docker and then deploying them on embedded devices. The documentation also addresses some embedded device subtleties, such as differences between ARMv6 and ARMv7. Hopefully this helps you get started!
There is a great article on LinkedIn describing one engineer's experience with that:
https://www.linkedin.com/pulse/whale-jar-when-running-docker-embedded-linux-good-thing-fletcher#pulse-comments-urn:li:article:7736487387895237975
Embedded systems often have a very slow rate of change. Docker works well with a minimal base build and layers on top of it. If you are willing to accept the overhead of running Docker on a minimal embedded system in exchange for Docker's build system and a steady rate of change, then you could explore it.