Define preprocessor variables in docker compose build - docker

I want to make some P/Invoke calls from my ASP.NET Core application, and for this I obviously need to determine which platform I'm running on.
The project uses Docker and Linux containers, so there are only two possible platforms in my example:
Linux (when running the project in Docker)
Windows (when running the project without Docker)
With the help of Environment.OSVersion.Platform I'm able to detect the platform at runtime, so I can DllImport the necessary dynamic libraries and switch over the Platform values to determine which imported function to execute.
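For context, here is a minimal sketch of the runtime approach described above (the PlatformInterop class, library names, and compute function are hypothetical):

    // Hypothetical interop wrapper: the same native function is imported
    // from a differently named library per OS and dispatched at runtime.
    using System;
    using System.Runtime.InteropServices;

    static class PlatformInterop
    {
        [DllImport("nativelib.dll", EntryPoint = "compute")]
        private static extern int ComputeWindows(int x);

        [DllImport("libnativelib.so", EntryPoint = "compute")]
        private static extern int ComputeLinux(int x);

        public static int Compute(int x)
        {
            // Branch on the detected platform for every call.
            switch (Environment.OSVersion.Platform)
            {
                case PlatformID.Unix: return ComputeLinux(x);
                default: return ComputeWindows(x);
            }
        }
    }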
Although this works just fine, I would really like to remove this runtime overhead and instead wrap parts of my code in #if directives (e.g. #if Docker or #if Windows) so that only the code I need for a given type of deployment gets compiled.
Is there any way to accomplish this in an ASP.NET Core project with Docker support?
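One way to get compile-time switching, sketched below under assumptions (the DOCKER constant, the docker-compose service name, and the image tag are illustrative, not from the original post): pass a build argument through docker-compose into the Dockerfile, forward it to MSBuild as DefineConstants, and guard the platform-specific code with #if.

    # docker-compose.yml: hand a constant name to the image build
    services:
      web:
        build:
          context: .
          args:
            BUILD_CONSTANTS: DOCKER

    # Dockerfile (build stage): forward the argument to MSBuild
    FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
    ARG BUILD_CONSTANTS=""
    WORKDIR /src
    COPY . .
    RUN dotnet publish -c Release -o /app -p:DefineConstants="$BUILD_CONSTANTS"

    // C# fragment: only the matching DllImport is compiled
    #if DOCKER
        [DllImport("libnativelib.so")] private static extern int compute(int x);
    #else
        [DllImport("nativelib.dll")] private static extern int compute(int x);
    #endif

One caveat: passing DefineConstants on the command line overrides any constants defined in the .csproj (such as TRACE), so in a real project a conditional PropertyGroup in the project file may be the safer place for this.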

Related

Virtual environment for each tool to avoid dependency conflicts?

I was wondering if anyone could help me understand the difference between a Python virtual environment and a Docker container.
I would like to have an isolated environment for each tool so they don't conflict with one another; for example, two tools that need different versions of the same dependency, where the older version required by one breaks the other.
I've tested out python venv, but I'm not sure whether it's the right fit for the issue I just explained, or whether Docker is what I should be using in my situation.
Particularly for day-to-day development, prefer a virtual environment if it's practical.
Virtual environment                                       | Docker
----------------------------------------------------------+-------------------------------------------------------------
Works with native tools; can just run python myscript.py  | Requires Docker-specific setup
Every IDE and editor works fine with it                   | Requires Docker-specific IDE support
Can just open() data files with no special setup          | Can't access data files without Docker-specific mount setup
Immediately re-run code after editing it                  | Re-run docker build, or use a Docker-specific mount setup
Uses Python installation from host                        | Can use any single specific version of Python
Isolated Python library tree                              | Isolated Python library tree
Uses host version of C library dependencies               | Isolated C library dependencies
A virtual environment acts like a normal Python installation in an alternate path. You don't need to do special things to make your local code or data files available; you can just run your script directly or via your IDE. The one downside is that you're limited to what your host OS's package manager makes available for Python versions and C library dependencies.
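For reference, the whole virtual-environment workflow is a few shell commands (the file names follow the usual conventions and are assumptions here):

    python3 -m venv .venv                # create the environment in ./.venv
    . .venv/bin/activate                 # activate it in the current shell
    pip install -r requirements.txt     # install this tool's own dependency versions
    python myscript.py                   # run with native tools; no Docker required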
A Docker container contains the filesystem of a complete OS, including a completely isolated Python installation. It can be a good match if you need a very specific version of Python, or if you need host-OS dependencies that are tricky to install. It can also be a good match if you're looking for a production-oriented deployment setup that doesn't depend on installing things onto the target system. But Docker by design makes it hard to access your host files; it is not a great match for a live development environment, and especially not for one-off scripts that read and write host files.
The other consideration is that, if you use the standard Python packaging tools, it's straightforward to run your program in a virtual environment, and converting that to a Docker image is almost boilerplate. Starting from Docker can make it tricky to go back the other way; I see some setups around Stack Overflow that can only be run via Docker, which, if restructured around a standard setup.cfg/requirements.txt installation, would no longer require Docker but could still be used with it.
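That "almost boilerplate" conversion might look like the following sketch (the image tag and file names are assumptions):

    FROM python:3.12-slim
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install -r requirements.txt   # the same file the virtual environment used
    COPY . .
    CMD ["python", "myscript.py"]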

Is it possible to reference an SDK ( or any folder ) in a Docker Container from the Host computer?

Short description:
Is it possible to reference an SDK ( or any folder ) in a Docker Container from the Host computer?
Long description:
My team and I work in different environments (Windows & Mac) and on different stacks (ASP.NET MVC / Elixir & Phoenix).
I'm trying to help everyone by creating separate Docker stacks for each solution (or group of projects).
What I have been able to do is set up the Docker stacks so that each solution can be run in one or more Docker containers, and the developers can work on the code locally (using direct host path mounts/volumes) with an IDE of their choosing.
The issue is that different solutions use different SDKs, or even different versions of the same SDKs.
So what I would like to do is set it up so that anyone on the team could reference the SDK installed in the Docker container, instead of installing all the SDKs, and every version of them, that they need across all the projects.
As far as I can tell, if I create a host mount binding, it will overwrite what's in the container with what's on the host, but I'd like it the other way round: a binding between the Docker container and the host that makes the contents of the Docker container show up on the host.
Is this possible? Is there a better way to achieve this?
SDK images from vendors (e.g. the ASP.NET Core SDK images from Microsoft) are best suited to compile/build-time use, while their lightweight runtime counterparts are recommended for the hosting/deployment environment.
The sole purpose of compile/build SDK images is to create Docker runtime images at the build stage, especially when the target runtime OS (Linux) differs from the development machine OS, e.g. Windows. Used efficiently with the multi-stage builder pattern inside a Dockerfile, they can produce much lighter runtime images for hosted environments.
For example, ASP.NET Core SDK images are used to build Docker images, which are then run locally with host:guest port mapping. If the dev machine OS is a Linux distro, using SDK images is even more attractive, because you can test and validate against multiple SDK images. These images just need to be referenced by their exact name, and the Docker daemon will download and use them automatically whenever required. Doing this comfortably does call for good IDE orchestration support, such as what Visual Studio provides for Docker-based development on Windows 10; otherwise, simply use the Docker CLI to build and run.
Hope this helps clarify your need, if not provide a full solution.
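A minimal sketch of the multi-stage pattern described above (the image tags and the MyApp project name are illustrative):

    # build stage: full SDK image, used only to compile and publish
    FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
    WORKDIR /src
    COPY . .
    RUN dotnet publish -c Release -o /app

    # runtime stage: lightweight image, no SDK inside
    FROM mcr.microsoft.com/dotnet/aspnet:8.0
    WORKDIR /app
    COPY --from=build /app .
    ENTRYPOINT ["dotnet", "MyApp.dll"]

Built with docker build -t myapp . and run with docker run -p 8080:8080 myapp, which gives the host:guest port mapping mentioned above.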

Intel SGX Running Executable on Remote Machine

I have been porting some code to SGX on Linux, which I would like to run on a remote server in the end.
I observed that if I build the program on one machine and then use the same executable to run it inside SGX on a different machine, the code still runs without any issues.
Now, if I look at the MRENCLAVE value during the build, I observe that the value differs when I build the same code on different machines. If I ship an executable built on machine A to machine B and do not rebuild it there, the MRENCLAVE value is the one I got from building on machine A, which differs from the value produced by building the code on machine B itself. Does this cause any issues if I want to attest the code on machine B but do not want to rebuild the project on machine B, and instead use the build from machine A?
As far as I know, the MRENCLAVE measurement depends on the toolchain used (cf. https://pdfs.semanticscholar.org/bc12/7b2228219f2b36b66bebe71a844e510e8efe.pdf, Sections 5.6.3 and 5.6.4), since it is indirectly a hash over the assembly instructions, built up by explicit instructions (EEXTEND) during enclave creation. Thus, I would expect that you are, at the very least, using different compiler versions on the machines mentioned?

Docker query on containerizing

Our requirement is to containerize a legacy app with Docker.
We don't have the operating system support/application server support available, nor do we have the knowledge to build them from scratch.
But we have a physical instance of the legacy app running in our farm.
We could get an ISO image from our server team if required. Our question is: if we get this ISO image, can we export this as a Docker image?
If yes, please let me know the specific procedure or steps associated with it.
If no, please tell me why, and the possible workarounds.
if we get this ISO image can we export this as a docker image?
I don't think there is an easy way (like push-the-export-button) to do this. Explanation follows...
You are describing a procedure from the virtual machine world: you take a snapshot of a server, move the .iso file somewhere else, and create a new VM that runs on a hypervisor.
Containers are not VMs. They "contain" all the bytes that a service needs to run but not a whole operating system. They are supposed to run as processes on the host.
Workarounds:
You will have to get your hands dirty. This means that you will have to find out what the legacy app uses (for example Apache + PHP + MySql + app code) and build it from scratch with Docker.
Some thoughts:
Containers are supposed to be lightweight. For example, one might use one container for the database, another for Apache, and so on (see the compose sketch after these notes). Your case looks like you are moving towards a fat container that has everything inside.
Depending on what the legacy technology is, you might hit a wall. If we are talking about something built on old PHP and MySQL, you might find ready-to-use images on hub.docker.com. But if the legacy app is a financial system written in COBOL, I don't know what your starting point might be...
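For illustration, a minimal docker-compose sketch of the one-container-per-service split (the service names, image tags, and credential are placeholders):

    services:
      db:
        image: mysql:8.0
        environment:
          MYSQL_ROOT_PASSWORD: example    # placeholder credential
      web:
        build: .                          # a Dockerfile with Apache + PHP + the app code
        ports:
          - "8080:80"
        depends_on:
          - db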
You will need to reverse engineer the application's dependencies from the artifacts that you have access to. This means recovering the language-specific dependencies (whether Python, Java, PHP, Node, etc.) and any operating-system-level packages/dependencies that are required.
Essentially you are rebuilding the contents of that ISO image inside your Dockerfile, using OS package installation tools like apt; language-level tools like pip, PECL, PEAR, Composer, or Maven; and finally the files that make up the app code.
So, for example: a PHP application might depend on having build-essential and php-mysql installed in the OS. The app may then depend on packages like twig and monolog loaded through Composer. If you are using SASS, you may need to install Ruby as well.
Your job is to track all of these down and create a Dockerfile that reproduces the ISO image. If you are using a common stack, like a J2EE app in Tomcat or a PHP app fronted by Apache or nginx, there will be base Docker images that get you most of the way to where you need to go.
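To make that concrete, a hypothetical Dockerfile for the PHP example above (the image tags and package names are illustrative, not recovered from any real ISO):

    FROM php:8.2-apache
    # OS-level dependencies, as discussed above
    RUN apt-get update && apt-get install -y --no-install-recommends build-essential \
     && docker-php-ext-install pdo_mysql
    # bring in the Composer binary from the official Composer image
    COPY --from=composer:2 /usr/bin/composer /usr/local/bin/composer
    WORKDIR /var/www/html
    COPY composer.json composer.lock ./
    RUN composer install --no-dev         # pulls twig, monolog, etc.
    # finally, the application code itself
    COPY . .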
It does look like there are some tools that can do this for you automatically (see Dependency Walker equivalent for Linux?). I can't vouch for any of them. But you can also use command-line tools. For example, this will give you a list of all the user-installed packages on a Fedora system:
sudo dnf history userinstalled
When an app uses a dependency manager like Composer or pip, there is usually a file that lists all the language-specific dependencies.
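For example (both commands are standard; the output file name is the usual convention):

    pip freeze > requirements.txt   # Python: snapshot the installed packages to a file
    composer show --direct          # PHP: list the app's direct Composer dependencies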
At the end of the process you'll have a portable legacy app that can be easily deployed anywhere with a minimal footprint.
As one of the comments rightly points out, creating a VM from the ISO image is another way forward that will be much easier to accomplish. The application dependencies won't be explicit, but maybe that's ok for your use case.

How to compile a .NET Core 2 MVC web app to an EXE?

I want to create a stand-alone Kestrel .exe for a .NET Core 2.0 MVC Web API application in Visual Studio 2017, but I can't find any documentation on how to compile it as a self-contained .exe (not using dotnet run).
The Microsoft documentation here, https://learn.microsoft.com/en-us/dotnet/articles/core/deploying/deploy-with-vs, only covers a console application, and following its modifications to the .csproj makes no difference:
<RuntimeIdentifiers>win10-x64</RuntimeIdentifiers>
(note this is not a .NET Core 1.x question)
The dotnet publish command is responsible for packaging a deployable .NET Core application. It builds the application and copies all of its dependencies to the output directory.
The easiest way to run it is to switch to the project directory (the one where the .csproj resides) and execute:
dotnet publish --configuration Release --runtime win-x64
Change the build configuration and runtime identifier according to your needs. You can learn about other command-line settings in the dotnet publish documentation.
According to this answer:
At the moment, there is no fail-safe method to create a single executable file. Since there are a lot of type-forwarding dll files involved, even ILMerge and similar tools might not produce correct results (though this might improve; the problem is that those scenarios haven't undergone extensive testing, especially in production applications).
There are currently two ways to deploy a .NET Core application:
As a "portable application" / "framework-dependent application", requiring a dotnet executable and installed framework on the target machine. Here, the XYZ.runtimeconfig.json is used to determine the framework version to use and also specifies runtime parameters. This deployment model allows running the same code on various platforms (windows, linux, mac)
As a "self-contained application": Here the entire runtime is included in the published output and an executable is generated (e.g. yourapp.exe). This output is specific to a platform (set via a runtime identifier) and can only be run on the targeted operating system. However, the produced executable is only a small shim that boots the runtime and loads the app's main dll file. This also allows an XYZ.runtimeconfig.json to set additional runtime properties like garbage collection settings.(think of it as a "new" app.config file)
In the future, the CoreRT runtime – which is still under development at the time of writing – aims to allow creating a single pre-compiled native executable that is specific to a runtime and does not require any other files.
Although that question was asked (by yours truly) more than 6 months ago, it looks like CoreRT is still a work in progress.
Pros and Cons of a Self-Contained Deployment
Deploying a Self-contained deployment has two major advantages:
You have sole control of the version of .NET Core that is deployed with your app. .NET Core can be serviced only by you.
You can be assured that the target system can run your .NET Core app, since you're providing the version of .NET Core that it will run on.
It also has a number of disadvantages:
Because .NET Core is included in your deployment package, you must select the target platforms for which you build deployment packages in advance.
The size of your deployment package is relatively large, since you have to include .NET Core as well as your app and its third-party dependencies.
Deploying numerous self-contained .NET Core apps to a system can consume significant amounts of disk space, since each app duplicates .NET Core files.
I realize you already found that Microsoft deployment document, but if you go through its walkthroughs for command-line and Visual Studio deployments, you will note that they tell you to use dotnet publish. This is exactly the same as with ASP.NET Core applications, because they can be deployed as console applications.
In short, it is possible to make a self-contained deployment package that includes a .exe file, but it is NOT (yet) possible to compile your app down to a single self-contained EXE on .NET Core.
