My company has an on-premises solution with a database server and an application server running on a dedicated Windows Server machine (a Delphi application server and a FirebirdSQL database). A client now wants to move these servers to the cloud. Is it possible to move both the database and the application server into IBM Bluemix as-is, without changing code? Everything runs on a 64-bit Windows OS. What are the options? Is it not recommended to run Windows applications in Bluemix?
Can it be done with IBM Containers, i.e. is it possible to run Windows in containers?
You could take a look at Virtual Servers (virtual machines) on Bluemix and use a custom image running Windows. As reported in the Bluemix Virtual Servers docs:
A virtual server image is a file that includes a virtual disk with an operating system installed on it. You use a virtual server image to create a virtual server. You can use an image that is provided by IBM, a customized virtual server image, or a snapshot that you took of another virtual server.
Important: In Bluemix, you can upload virtual server images that are supported by OpenStack and have qcow2 format only.
For more information, see Virtual server images.
You probably want to select the VM option with the Windows OS your application runs on, so no code change is required.
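If you need to bring the existing Windows disk with you, note the qcow2 restriction in the quoted docs: a disk exported from VMware or Hyper-V can usually be converted with qemu-img before uploading. A minimal sketch, where the input format and file names are only placeholders:
qemu-img convert -f vmdk -O qcow2 windows-server.vmdk windows-server.qcow2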
Related
My work machine in the office runs Ubuntu 18.04, and I have a Docker container set up on it. Everything works fine, and I can use GUI programs such as Firefox and PyCharm inside the Docker container. When I work at home, I use my Windows 10 notebook, and to connect to my office machine I use the X2Go program. With it I can remotely connect to the office machine with a GUI and also run GUI programs remotely. However, when I installed a Docker container once again remotely, I could not use GUI programs in the container. The reason is that, in order to let the Docker container access the host machine's GUI, I use the xhost + command. However, when running this command remotely, I received the following error:
# xhost: must be on local machine to enable or disable access control.
If I ignore this error message, I cannot launch any GUI programs on the Docker container. Any ideas? Thanks.
This article may help:
https://www.ibm.com/support/pages/remote-install-websphere-application-server-unix-host
In the article:
If the remote host is not authorized to connect, you can add it to the list of authorized clients using the following command:
xhost +
xhost: must be on local machine to enable or disable access control.
This indicates that this command is only authorized from a local console (for example, not within a telnet session).
Next, you must export the display so that GUI screens generated on the remote host will be displayed on the local host. To do this, run the following command on the remote host while logged in through the telnet session from the local host:
export DISPLAY=
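For illustration only (the IP address, image name, and program below are placeholders, not values from the article): after xhost + has been run on the local console, the remote side typically points DISPLAY at the local X server, and the container is started with that display passed through:
export DISPLAY=192.168.1.10:0.0
docker run --rm -e DISPLAY=$DISPLAY my-gui-image firefox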
Also your Remote Desktop protocol could be an issue.
X2Go uses NX protocol with SSH for security.
The NX protocol uses caching technology, which may be part of the problem. Remote desktop technologies vary in what they deliver and may not work with Docker GUI programs remotely.
I have had similar issues with remote desktop technologies (RDP, VNC, etc.) where some or all of the desktop experience is not visible.
I suggest trying VNC (RFB protocol) software and seeing if that works. RDP is another option.
Be aware that VNC and RDP are not very secure by default unless you use a tunneling solution (a VPN, etc.) and encryption. There are VNC implementations with built-in encryption (via SSH), and RDP has security options as well, but if you are accessing your work machine from home you should make sure your security manager is aware of whichever technology you end up using.
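As an illustration of the tunnelling point above (the host name and port are assumptions; a first VNC display usually listens on 5901):
ssh -L 5901:localhost:5901 user@office-machine
A VNC viewer on the Windows notebook can then be pointed at localhost:5901, so the VNC traffic travels inside the SSH tunnel.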
We have a large application with several parts running on a Windows VM and I am trying to evaluate Docker containers for our application deployment. Is it possible to create a base docker image from an existing Windows VM already running my application? (I know this can be done using Dockerfile but I am looking for a quick way to create the image)
https://docs.docker.com/engine/userguide/eng-image/baseimages/
The above link describes creating an image from a working machine for Linux, but I am looking for something similar for Windows.
The only base images for Windows that I know of are the ones provided by Microsoft, for Windows Server 2016 or 1709.
See "PoC: How to build images for 1709 without 1709"
That means you cannot simply convert any Windows VM into an image.
You would need:
a Dockerfile
the right Microsoft base image, which would represent a Windows server one.
Typically:
microsoft/nanoserver,
microsoft/windowsservercore
If your application only runs on a Windows VM, you need to make sure it can be installed and run on one of those base Windows images.
Even though you are using a Windows Server 2016 VM, you would not be able to quickly "capture its state": you need a Dockerfile that describes what you want your Windows container to run.
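A minimal sketch of such a Dockerfile, assuming the application ships with a silent installer (the file names, paths, and entry point below are placeholders, not something from the question):
# choose one of the Microsoft base images mentioned above
FROM microsoft/windowsservercore
# copy the install media into the image
COPY app/ C:/app/
# run a hypothetical silent installer
RUN ["cmd", "/C", "C:\\app\\install.bat"]
# start the hypothetical application when the container starts
CMD ["C:\\app\\MyApp.exe"]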
No, it's not possible. There are tools like Vm2Docker, etc., but they only do the same thing you would do manually, that is, enumerate the installed features and create some artifacts for you.
But it's not possible for a third-party application like the one you mention. You'd have to pick it apart and figure out how to script its installation.
I am looking for a way to create a development version of our production web server, for our developers and testers, using Docker on Windows.
I have Windows Server 2016 installed on a physical server (not a VM) and want to dockerize it so that the dev team can make changes there first; once they confirm everything works, the same changes will be applied to the production web server.
Thanks,
RK.
I have a new desktop Windows 10 development machine and am trying to minimize what I install on it.
On my old development machine I wound up with multiple versions of SQL Server and Management Studio.
This time I have installed SQL Server in a docker container.
Because of the answer to this question, I understand I should not put Management Studio in a container. So where should I put it? In Hyper-V?
You can put Management Studio in Hyper-V.
From Docker, expose the SQL Server port.
After this you should be able to connect to the SQL Server running inside Docker.
If a host name is needed (if Management Studio requires one), edit the hosts file and map that host name to the Docker IP address so that Management Studio reaches the container.
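A minimal sketch of the port-exposure step (the image tag, container name, and SA password are placeholders; use whatever SQL Server image you actually run):
docker run -d --name sql1 -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=YourStrong!Passw0rd" -p 1433:1433 mcr.microsoft.com/mssql/server:2019-latest
Management Studio, whether it sits in Hyper-V or on the host, can then connect to <host IP>,1433 (SSMS accepts the host,port form in the server name field).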
Why "minimize" your installation and put barriers between yourself and your work? The purpose of a development machine is to have all the necessary tools to do your job at your disposal.
There's nothing wrong with having multiple versions of SQL Server or Management Studio installed on a single development machine, unless you're short on disk space. And there is no need to containerize them or put them in separate VMs.
I would, however, recommend installing them in the order they were released (oldest to newest). In the past, I've had as many as four releases of SQL Server installed on a single development machine, along with their corresponding SSMS (because until 2016, SSMS always came along for the ride). No troubles.
I have an application that runs on a JBoss server with a license based on host name and MAC address.
I have ported that application to run in a JBoss/WildFly Docker container on Windows 7 through Oracle VirtualBox. Now I need the same license, which is tied to the Windows machine's host name and MAC address, to work in Docker.
How best to achieve this?
Also, please let me know if this is the best licensing strategy for this approach.
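For what it's worth, docker run can pin both values, which may be enough to satisfy such a license check (the host name and MAC address below are placeholders, and whether the license vendor accepts a containerised host is an assumption worth verifying):
docker run --hostname licensed-host --mac-address 02:42:ac:11:00:02 my-jboss-image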
I have built a Rails app which is used as a standalone enterprise application. The application needs to run on Windows desktops (the entire user base runs Windows machines). I am able to run it quite successfully on an Ubuntu machine, but that is not something customers will prefer to run.
Since deploying on a Windows machine is quite messy AFAIK, I would like to deploy it on Windows using a virtual machine (VirtualBox).
Requirements would be -
Application installation on Windows 7 / Windows 8.
User should be able to access rails server by browser running on his/her system via localhost or any other IP address.
Application should auto-start when user reboots the machine.
Ideally user should be able to download and install the software on his/her machine by himself/herself.
I am working on making this happen, but would like to know the feasibility of this solution. I would also like to know whether I am getting the concepts wrong, or whether there is something here that is simply not possible or does not make sense.
Take a look at Vagrant, which is a highly scriptable VM host. You can then generate batch files to automatically start the VM on boot.
To deploy new code, you'll just want to provide them with a new VM image they can copy into your app directory.
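A minimal sketch of that approach (the box name, paths, and port are assumptions; a Rails server typically listens on 3000, which the generated Vagrantfile can be edited to forward to the Windows host):
vagrant init ubuntu/trusty64
vagrant up
A small batch file placed in the user's Startup folder can then bring the VM back after a reboot:
@echo off
cd /d C:\myapp
vagrant up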
That said, I agree with the other comments that this might not be the right platform for your use case. The main reason for building web apps is that many clients can use your app over the web against just one set of servers; deploying a web server to each client seems to defeat that advantage.