We usually use an IDE to develop programs.
However, the IDE is not smart enough to traverse the library folders inside the Docker container.
So navigating to source (for example, in PyCharm, jumping to the code of an installed third-party package) is usually not available. The IDE simply says "No module named xxxx".
How could this be fixed?
I currently have a folder containing some .dll files, .bin files and some .exe files. The main .exe that I will be executing only works on windows, and I am not entirely sure what are all its dependencies. My goal is to package all the files in the folder into a docker container so I can integrate it into the rest of my pipeline. The main .exe is a command line tool which is only called once with some arguments and left to run.
I have tried using Windows Server Core as the container image, and it works. However, this image is too big for my needs. I have tried Nano Server, but when I try to run the executable, nothing is printed to the command line and the program does not run. In that scenario, if I type:
C:\Bin\x64>echo %ERRORLEVEL%
I get the following output:
-1073741515
Meaning I am missing some dependencies.
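For reference, the negative exit code makes sense once you view it as an unsigned 32-bit NTSTATUS value; this quick Python sketch shows it is 0xC0000135, which is STATUS_DLL_NOT_FOUND, i.e. a required DLL could not be loaded:

```python
# Interpret the negative cmd.exe exit code as an unsigned 32-bit NTSTATUS value.
code = -1073741515
ntstatus = code & 0xFFFFFFFF  # two's-complement view of the 32-bit value
print(hex(ntstatus))  # 0xc0000135 -> STATUS_DLL_NOT_FOUND
```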
So, I'm wondering if there is an alternative solution for packaging this folder, since Windows Server Core is too big.
Most likely you'll have to stick with the Server Core image. The main thing is that these images serve different purposes: Nano Server is intended for new applications developed against the Nano Server API, while Server Core is focused on existing apps, and its larger API surface makes the image bigger than what one would expect from a container.
Keep in mind that it's still better than a full VM. :)
I blogged about this here: https://techcommunity.microsoft.com/t5/containers/nano-server-x-server-core-x-server-which-base-image-is-the-right/ba-p/2835785
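For what it's worth, a minimal Dockerfile for the Server Core route might look like the sketch below. The folder name `Bin` and the executable name `tool.exe` are placeholders for your own files, and the `ltsc2022` tag assumes a matching host version:

```dockerfile
# escape=`
FROM mcr.microsoft.com/windows/servercore:ltsc2022
# Copy the folder containing the .exe, .dll and .bin files into the image.
COPY Bin/ C:/Bin/
WORKDIR C:/Bin/x64
# tool.exe is a placeholder for your command-line executable and its arguments.
ENTRYPOINT ["C:\\Bin\\x64\\tool.exe"]
```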
GitHub recently released a container registry alongside their package registry. What is the difference? When would it be better to use one or the other? Do we need both?
Packages are generally simple: they are essentially an archive (i.e. a zip file) that contains contents (code libraries, application executables, etc.) and a manifest file (a JSON document, an XML file, etc.) that describes those contents with, at a minimum, a package name and version number.
e.g. npm, pip, and Composer packages.
Container images are also simple, but they are closer to a plain archive than to a package: an image bundles an entire filesystem, so it carries the application together with everything it needs to run.
e.g. nginx, redis, etc.
Verdict: if some libraries are used repeatedly across projects, we can create a package and reuse it in each project, while for a project's full set of runtime dependencies we need a container to run it. Yes, we need both.
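To make the manifest analogy concrete, here is a minimal npm package manifest; the name and version are the minimum most registries require, and all values here are made up for illustration:

```json
{
  "name": "my-example-lib",
  "version": "1.0.0",
  "description": "Hypothetical library published to a package registry",
  "main": "index.js"
}
```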
After debating this with a Docker-using friend for a while I think I've found a satisfactory explanation:
Packages are for modules of code which are compiled together into an Application.

Containers are for Applications which are compiled together into a Server.
This is muddied a bit by the fact that a Package can contain a standalone Application, and Containers will often use package managers like Apt to install these applications. I feel this is an abuse of package management, a legacy from before we had Containers. Eventually I would expect most Applications to be delivered in Container form.
I'm trying to use Eclipse Che as an IDE to develop a C++ application on a remote linux machine.
Che can access the source code on the host system because of the
-v <LOCAL_PATH>:/data
part when running the docker container.
But how am I supposed to access include directories (and later libraries to link with)?
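One workaround (a sketch, not Che-specific guidance; the image name and paths are placeholders) is to mount the host's include and library directories into the workspace container alongside the source, read-only:

```shell
# Hypothetical invocation: mount the project plus the system include/lib dirs read-only
docker run -it \
  -v /path/to/project:/data \
  -v /usr/include:/usr/include:ro \
  -v /usr/lib:/usr/lib:ro \
  my-che-workspace-image
```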
Have you tried doing this on Kubernetes? In that case, you are not just exporting the source code volume, the remote Linux machine is your workspace.
I'm afraid I'm not sure how you would do this with Che in Docker without jumping through some hoops.
I just wanted to know if there is, or will be, a standalone installer for fsi.exe to run on Windows machines that are not connected to the internet.
I have managed to get fsi working by downloading the nuget package, unzipping and then manually adding the "tools" folder to the PATH environment variable.
This seems to work because I can get into the repl, but I was wondering if there is a better way to do this.
This is a client's server that we access over a tightly controlled VPN, and a full Visual Studio installation is not an option.
I have Python 2.7 installed in "C:\Python27". Now I run the 1st demo of Python4Delphi with D7, which somehow uses my Python 2.7 install folder. If I rename the Python folder, the demo can't run (with no error message). I didn't change any properties of the demo form.
What part/file does py4delphi use from my Python folder?
python4delphi is a loose wrapper around the Python API and as such relies on a functioning Python installation. Typically on Windows this comprises at least the following:
The main Python directory. On your system this is C:\Python27.
The Python DLL which is python27.dll and lives in your system directory.
Registry settings that indicate where your Python directory is installed.
When you rename the Python directory, the registry settings refer to a location that no longer exists. And so the failure you observe is entirely to be expected.
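Concretely, the per-version registry entry that a CPython 2.7 install writes on Windows (and that embedding wrappers typically consult) looks like this; the path value is whatever your installer used:

```
HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\2.7\InstallPath
    (Default) = C:\Python27\
```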
Perhaps you are trying to work out how to deploy your application in a self-contained way, without an external dependency on a Python installation. If so, then I suggest you look into one of the portable Python distributions. You may need to adapt python4delphi a little so that it finds the Python DLL under your application's directory, but that should be all that's needed. Take care of the licensing issues too if you do distribute Python with your application.