I have an image with a GUI application, built on the microsoft/windowsservercore base image. The application is installed correctly in the image, but I'm unable to display it on the host machine. I have read several articles on this, and they suggest installing an X server for Windows so the application can be displayed on the host machine. I have been trying to run the following command (as suggested in most of the articles), but it does nothing and I don't get the display. Please assist.
docker run --rm -it -e DISPLAY=127.0.0.1:0.0 eft
The DISPLAY environment variable would only be useful for a Linux container.
As mentioned here:
The WindowsServerCore image does not come with the binaries for UI applications, so I doubt this will ever work in the servercore image, but Microsoft insiders can use the new, bigger WindowsServer image, which I believe has those libraries intact.
This thread adds:
I understand that you can run GUI apps but the rendered elements are not shown on any desktop. Lars Iwer [MSFT] writes in the discussion below the article:
In the container image as it is right now, GUI elements will be rendered in session 0. UI automation should work with that (e.g. programmatically searching for a window etc.).
Session 0 is the session in which all system services run, and it is by definition non-interactive. Sessions, Stations and Desktops are means of isolation in Windows (NT), and whether an application can show a UI and receive user interaction depends on whether it has access to a Station with a Desktop.
Processes in Session 0 do not have that by default.
However, it used to be possible to “Allow services to interact with Desktop”, and it is also possible to run interactive services in sessions other than Session 0 (pay attention to “as it is right now”). Therefore, it would be interesting to hear some expert insights from the Microsoft/Docker team on that…
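For contrast, this is roughly what the X-server approach looks like with a Linux container, assuming an X server such as VcXsrv is listening on the Windows host (with access control relaxed) and that the hypothetical image x11-app contains a GUI program:

docker run --rm -e DISPLAY=host.docker.internal:0.0 x11-app

On Docker Desktop, host.docker.internal resolves to the host from inside Linux containers, so the containerized app renders on the host's X server. Nothing equivalent exists for the Windows Server Core base image.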
Related
I'm working with the following:
Docker for Windows v20.10.11
Docker running in Windows container mode
mcr.microsoft.com/windows:1903 base image
Proprietary application installed on top of this base image
Each year we create a Docker image with the latest version of our company's software. However, this year's version behaves differently. A host-machine installation runs fine, but the containerized installation fails to run in certain situations. I can start the application as a simple EXE, for example using the docker run command; the app starts and shows up in "tasklist". However, I can't start the app via the COM API, which is a critical requirement. The problem appears to be COM-related. Normally we can create COM objects for our software just like for any other application. For example, IE returns a COM object just fine:
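The snippet itself is omitted above, but a minimal PowerShell equivalent would be something like:

# Create the Internet Explorer COM object by its well-known ProgID
$ie = New-Object -ComObject InternetExplorer.Application
$ie.GetType().FullName   # a System.__ComObject wrapper comes back
$ie.Quit()               # release the instance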
Creating these objects for our application works outside containers. However inside the container, our latest installation gives this error:
Access permissions appear to be ok. I tried a couple tests to prove this. First I can install other software like MS Word into a container and create COM objects for that:
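A sketch of that test, assuming Word is installed in the image:

# Succeeds inside the container once Word is installed
$word = New-Object -ComObject Word.Application
$word.Quit()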
Second, I tried retrieving and modifying the application's DACL in PowerShell.
Changing access masks or trustees can cause an Access Denied error:
This also appears to confirm that the access permissions were OK by default.
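For reference, the retrieval step can be done with Get-Acl; this is a sketch, and the AppID GUID below is a placeholder, not our real one:

# Inspect the DACL on the application's AppID registry key
$key = 'HKLM:\SOFTWARE\Classes\AppID\{00000000-0000-0000-0000-000000000000}'
(Get-Acl -Path $key).Access | Format-Table IdentityReference, RegistryRights, AccessControlType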
Next I made sure COM is aware of the application. This appears to be fine: I get the same result on the host machine and in the container when running this PS script:
gci HKLM:\Software\Classes -ea 0 | ? { $_.PSChildName -match '^\w+\.\w+$' -and
    (gp "$($_.PSPath)\CLSID" -ea 0) } | ft PSChildName
The application shows up just like any other. The details show up fine when querying by AppID. LocalServer32 points to the correct EXE:
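Such a query looks roughly like this (the CLSID is a placeholder):

# LocalServer32 holds the path of the EXE that COM launches
Get-ItemProperty 'HKLM:\SOFTWARE\Classes\CLSID\{00000000-0000-0000-0000-000000000000}\LocalServer32'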
Some other things I tried:
Querying registry keys. There are 7 keys created when installing our software, and these appear identical between the host-machine install and the container install.
Even though permissions appear fine, I still tried logging into the container as alternate users. For example, "nt authority\system" is another virtual admin user. I also changed the password of the "builtin\administrator" user to enable logging in with that one. Lastly, I tried creating new users entirely and adding them to the Administrators group. All these attempts produced the same errors as "builtin\containeradministrator" (the default user).
A minor check was ensuring CMD.exe / PowerShell is running as x64 (see the snippet after this list):
Re-registering the DLLs associated with the installation using regsvr32.
Starting from different base images (https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/container-base-images). The full Windows Server base image behaves exactly the same way regarding errors. The smaller Windows Server Core base image is even more problematic, as I can't even start the app's EXE manually with that base. Lastly, I tried other tags of the full Windows base image, such as 20H2 and 2004, with the same result. Multiarch or x64 makes no difference.
Including the "Ogawa hack", which was historically needed to make MS Office apps function correctly with COM: https://stackoverflow.com/a/1680214/7991646. It could be necessary for other COM apps too, but it didn't help with my specific installation.
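The bitness check mentioned in the list above is a one-liner:

# True in a 64-bit CMD/PowerShell session, False in a 32-bit (WOW64) one
[Environment]::Is64BitProcess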
Is there anything else I can do to diagnose or solve this COM issue?
There are several things to consider:
The Considerations for server-side Automation of Office article states the following:
Microsoft does not currently recommend, and does not support, Automation of Microsoft Office applications from any unattended, non-interactive client application or component (including ASP, ASP.NET, DCOM, and NT Services), because Office may exhibit unstable behavior and/or deadlock when Office is run in this environment.
If you are building a solution that runs in a server-side context, you should try to use components that have been made safe for unattended execution. Or, you should try to find alternatives that allow at least part of the code to run client-side. If you use an Office application from a server-side solution, the application will lack many of the necessary capabilities to run successfully. Additionally, you will be taking risks with the stability of your overall solution.
The When CoCreateInstance returns 0x80080005 (CO_E_SERVER_EXEC_FAILURE) page describes possible reasons.
If many COM+ applications run under different user accounts that are specified in the This User property, the computer cannot allocate memory to create a new desktop heap for the new user. Therefore, the process cannot start. See Error when you start many COM+ applications: Error code 80080005 -- server execution failed for more information.
Finally, you may find a similar thread here helpful, see Server execution failed (Exception from HRESULT: 0x80080005 (CO_E_SERVER_EXEC_FAILURE)).
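If you want to rule out the desktop-heap case mentioned above, the SharedSection sizes it depends on can be read from the registry (a read-only check):

# SharedSection=xxxx,yyyy,zzzz in this value controls the desktop heap sizes
(Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems').Windows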
I'd like to scrape a JavaScript website using Scrapy + Splash on Google App Engine. The Splash plugin is a Docker image. Is there any way to use this within Google App Engine? App Engine itself uses a Docker image, but I'm not sure how to load and access a secondary image (which is how Splash is used). Here are the Splash install instructions:
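For reference, outside App Engine, Splash is normally started as its own container:

docker pull scrapinghub/splash
docker run -p 8050:8050 scrapinghub/splash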
You can use Custom Runtimes in the App Engine Flexible Environment.
Custom runtimes let you build apps that run in an environment defined by a Dockerfile. By using a Dockerfile, you can use languages and packages that are not part of the Google Cloud Platform and use the same resources and tooling that are used in the App Engine flexible environment.
Explore more at About Custom Runtimes. Please note that when you use a custom runtime, you have to write your application code to deal with some flexible environment lifecycle and health-check requests. Check how to build a custom runtime for more information.
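A custom runtime is declared in an app.yaml that sits next to your Dockerfile; a minimal sketch:

# app.yaml for a custom runtime in the flexible environment
runtime: custom
env: flex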
Deploying the Splash service separately is the proper way to accomplish this.
I went ahead and tested a few different setups, and the only approach that allowed me to have Splash on App Engine was to deploy it as a custom runtime, setting forwarded_ports to be able to connect directly to one of the service's instances through its IP address.
This is clearly not an adequate solution, as it comes with many limitations and, in the end, it basically amounts to using Google Compute Engine without all the control it provides.
My suggestion is that you only deploy the Scrapy service of your application to App Engine, and leave the Splash service somewhere else, like in a GCE instance.
Once you have that, all you will need to do is set a static IP address for the instance and connect to it from your App Engine app through that.
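With Splash running on a GCE instance, the Scrapy side then just needs to point at that static IP; a sketch using the scrapy-splash package's documented settings (the address below is a placeholder):

# settings.py -- wire Scrapy to the remote Splash instance via scrapy-splash
SPLASH_URL = 'http://203.0.113.10:8050'  # static IP of the GCE instance
DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'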
I would like to know if it's possible to create a kind of communication between two Docker containers.
I have two Docker containers, one for Firefox and another one for VS Code. I'm looking for a solution to open URL links from VS Code in my running Firefox container (creating a new tab, as happens when selecting a link).
I don't know if it's possible to do that. Maybe by sharing some specific resource.
Thanks
I am not sure about the possibility, but there are two scenarios:
Either you find a way/extension to make VS Code call a browser over the network (see the sketch below),
or you mount the needed files/binaries as a volume from the Firefox container into the VS Code container so that it can call the Firefox binary locally as usual, in order to open the browser (not tested, but it might be done somehow). It might not be able to open in the same session, though, so give it a try and let me know so I can update the answer.
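A sketch of the network approach (the image names are placeholders for whatever Firefox and VS Code images you use):

# Both containers join the same user-defined network
docker network create devnet
docker run -d --name firefox --network devnet my-firefox-image
docker run -d --name vscode --network devnet my-vscode-image

From inside the vscode container the other container is then reachable by name (firefox), so any extension or script that can send a URL over the network, e.g. to a remote-control port exposed by the Firefox image, could open the tab.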
I'm new to Docker, so I want to know the better approach to using it. I have a project that needs three components to work:
JBoss server application
PostgreSQL
A Spring Boot application
So, based on it my questions are:
1) Should I have one Docker image for each component mentioned above? If yes, why not just put them all together? My idea of Docker is to simplify the deployment of an application, so putting everything together would make it easy to install this app in another environment, right?
2) If yes (one Docker image per component): Spring Boot is just a "java -jar" command, so is it really necessary to have a Docker image for it?
3) In the case of PostgreSQL, should I have the image with all my database structure and data, or just vanilla PostgreSQL without anything?
To answer your questions:
1) Should I have one Docker image for each component mentioned above? If yes, why not just put them all together? My idea of Docker is to simplify the deployment of an application, so putting everything together would make it easy to install this app in another environment, right?
It is best to put them in separate components (see the sketch after this list) so that:
You can isolate cases (this will help you in debugging)
You can selectively scale (horizontally) specific stateless components when you run on Kubernetes or Docker Swarm
You can set hardware limits (RAM, CPU, etc.) per component
You can use different base images (might be useful for optimizations)
You can build & test your components independently
The list goes on
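A minimal docker-compose sketch of that split (the application image names are placeholders for your own builds):

# docker-compose.yml -- one service per component
services:
  db:
    image: postgres:13                  # vanilla PostgreSQL, see question 3
    environment:
      POSTGRES_PASSWORD: example
  jboss:
    image: mycompany/jboss-app
    depends_on: [db]
  springboot:
    image: mycompany/spring-boot-app    # the "java -jar" wrapped in an image
    depends_on: [db]

Each service can now be scaled, limited and rebuilt on its own.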
2) If yes (one Docker image per component): Spring Boot is just a "java -jar" command, so is it really necessary to have a Docker image for it?
Please check the list mentioned above (why it's best to separate) to see if it fits your use case. Note that adding it to an existing component will affect your scaling strategy.
Example: if you run 3 instances of the JBoss component with the Spring Boot app baked in, you will spawn 3 instances of both of them, which you might not want.
3) In the case of PostgreSQL, should I have the image with all my database structure and data, or just vanilla PostgreSQL without anything?
I would recommend that you mount your structure & data on a host volume, so that they don't get lost when the container is restarted (see the example below). So I'd recommend using vanilla Postgres.
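For example, a vanilla image with the data directory kept on a named volume (a sketch; the password is a placeholder):

docker run -d --name db -e POSTGRES_PASSWORD=example -v pgdata:/var/lib/postgresql/data postgres:13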
I hope this helps you in some way
Anybody having problems with IBM Containers on US South in Bluemix?
Containers report "Data currently unavailable" on the dashboard, and if I try to list or start a container I get this error:
Catalog Error
BXNUI0513E: The attempt to retrieve containers failed because a problem occurred contacting IBM Containers. Try again later. If the problem continues, go to Support. For other help options, see the Bluemix Docs.
If I switch to the UK site, I can create and use containers.
I've just recently tried out a Docker container with an sshd, and it was running fine for 5-6 hours. However, then it seemed like part of the Container service in Bluemix broke, and I've not been able to access it for the past 24 hours.
Regards.
Mikael
For trial accounts you can create containers in only one space, and this error sometimes occurs when the user tries to create a container in another region. Unfortunately, since you're using 'Pay-As-You-Go', in this case you have to open a support request using one of the following methods in order to engage the IBM Containers team to investigate your issue:
Use the Support Widget. It is available from the user avatar in the upper right corner of the main Bluemix UI. After opening the support widget panel, select Get Help > Get In Touch, select the type of assistance you need, and then fill out the support form.
Use the Support Site 'Get Help' form. This form is available on a separate site that is made available for ticket submission when you cannot log into Bluemix and access the Support Widget. Go to http://ibm.biz/bluemixsupport and fill in the support request form.
EDIT: I saw that you opened a Support ticket and the issue was fixed. It was an issue related to your specific organization.
Just a small note: hopefully Containers in Dallas are now working well again. In addition, I wanted to note that we strongly discourage the use of sshd in containers for security reasons. The good news is that shell access is at your fingertips via the cf ic exec <container id> /bin/bash command. (Your container may need just bash or /bin/sh, YMMV.)