Intel SGX Running Executable on Remote Machine

I have been porting some code to SGX on Linux, which I would eventually like to run on a remote server.
I observed that if I build the program with the SGX SDK on one machine and then use the same executable to run it inside SGX on a different machine, the code still runs without any issues.
Now, if I look at the MRENCLAVE value during the build, I see that it differs when I build the same code on different machines. If I ship an executable built on machine A to machine B and do not rebuild it on machine B, the MRENCLAVE value is the one I got from the build on machine A; building the code on machine B itself produces a different value. Does this cause any issues if I want to attest the code on machine B but do not want to rebuild the project there, and instead use the build from machine A?

As far as I know, the MRENCLAVE measurement depends on the toolchain used (cf. https://pdfs.semanticscholar.org/bc12/7b2228219f2b36b66bebe71a844e510e8efe.pdf, Sections 5.6.3 and 5.6.4), since it is indirectly a hash over the enclave's assembly instructions, built up by explicit instructions (EEXTEND) during enclave creation. Thus I would expect that you are, at the very least, using different compiler versions on the machines you mention?
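If you want to confirm that shipping the same build keeps the measurement stable, one option is to dump the metadata of the signed enclave on both machines and compare the enclave hash. This is only a minimal sketch, assuming the Intel SGX SDK's sgx_sign tool is on your PATH and your signed enclave is named enclave.signed.so (adjust file names to your project):
# dump the signed enclave's metadata, then look for the enclave_hash
# (MRENCLAVE) field in the dump and compare it across machines
sgx_sign dump -enclave enclave.signed.so -dumpfile enclave_metadata.txt
grep -A 2 enclave_hash enclave_metadata.txt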

Related

Virtual Environment for each tool to avoid dependency conflicts?

I was wondering if anyone could help me understand the difference between a Python virtual environment and a Docker container.
I would like to have an environment for each tool, isolated from the others, to avoid dependency conflicts: for example, two tools needing different versions of the same dependency, so the one that needs the older version breaks when the newer one is installed.
I have tested out Python venv, but I am not sure whether it is the right choice for the issue I just explained, or whether Docker is what I should be using in my situation.
Particularly for day-to-day development, prefer a virtual environment if it's practical.
Virtual environment                                        | Docker
Works with native tools; can just run python myscript.py   | Requires Docker-specific setup
Every IDE and editor works fine with it                    | Requires Docker-specific IDE support
Can just open() data files with no special setup           | Can't access data files without Docker-specific mount setup
Immediately re-run code after editing it                   | Re-run docker build on the image or use a Docker-specific mount setup
Uses Python installation from host                         | Can use any single specific version of Python
Isolated Python library tree                               | Isolated Python library tree
Uses host version of C library dependencies                | Isolated C library dependencies
A virtual environment acts like a normal Python installation in an alternate path. You don't need to do special things to make your local code or data files available; you can just run your script directly or via your IDE. The one downside is that you're limited to what your host OS's package manager makes available for Python versions and C library dependencies.
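As a minimal sketch of that workflow, assuming a project with a requirements.txt (the file and script names here are illustrative):
python3 -m venv .venv            # create the environment in ./.venv
. .venv/bin/activate             # activate it in the current shell
pip install -r requirements.txt  # install this tool's own dependency versions
python myscript.py data.csv      # local code and data files work with no extra setup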
A Docker container contains the filesystem of a complete OS, including a completely isolated Python installation. It can be a good match if you need a very specific version of Python or if you need host OS dependencies that are tricky to install. It can also be a good match if you're looking for a production-oriented deployment setup that doesn't specifically depend on installing things onto the target system. But Docker, by design, makes it hard to access your host files; it is not a great match for a live development environment, especially for one-off scripts that read and write host files.
The other consideration here is, if you use the standard Python packaging tools, it's straightforward to run your program in a virtual environment, and converting that to a Docker image is almost boilerplate. Starting from Docker can make it tricky to go back the other way, and I see some setups around SO that can only be run via Docker; if they were restructured to use a standard setup.cfg/requirements.txt installation setup they would not require Docker but could still be used with it.
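For example, a project that already installs from requirements.txt converts to Docker almost mechanically. This is only a sketch; the image tag and file names are illustrative, not a recommendation for your specific tools:
cat > Dockerfile <<'EOF'
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "myscript.py"]
EOF
docker build -t mytool .
# note that host data files now need an explicit mount
docker run --rm -v "$PWD/data:/app/data" mytool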

Building Python wheels in Docker for Raspberry Pi Zero on x86_64 machine

I'm hoping this is an appropriate venue for my question. There are a lot of pieces to this puzzle.
I'm building a container using Docker that is destined to run on a Raspberry Pi Zero. The RPi Zero has an ARMv6 hard-float processor. The container will run a Python program that includes some dependencies that must be compiled (uses binary libraries). I am able to build and run the container on the RPi Zero itself, but building the container literally takes hours. I'm hoping to 1) speed up the process of building and 2) allow this to happen in a CI environment.
The approach I've taken in the past to build minimal Python containers that have dependencies requiring compilation is to use a multistage Docker build. I first start up a container with a full toolchain, then run pip wheel to compile all requirements into .whl files. I then copy the .whl files to the final container, install any binary libraries using the typical package manager, and then point pip install at this cache (--find-links=/wheels) to install the Python dependencies. This approach also works just fine on the Pi, but as I said it takes forever.
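For reference, a minimal sketch of that multistage pattern. The base image tag, file names and entry point are illustrative, not my exact setup:
cat > Dockerfile <<'EOF'
# build stage: full toolchain, compile every requirement into a wheel
FROM arm32v6/python:3.9-alpine AS builder
RUN apk add --no-cache build-base
COPY requirements.txt .
RUN pip wheel --wheel-dir=/wheels -r requirements.txt

# final stage: minimal image, install only from the local wheel cache
FROM arm32v6/python:3.9-alpine
COPY --from=builder /wheels /wheels
COPY requirements.txt .
RUN pip install --no-index --find-links=/wheels -r requirements.txt
COPY . /app
CMD ["python", "/app/main.py"]
EOF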
I've considered a few different approaches I could take:
Figure out how to get the Docker engine on my main dev machine (and in CI) to build an ARM image using qemu-arm-static while running docker build, then tag the resulting image as ARMv6 and push it to my registry (I could just use a tag or a different repo name); there is a sketch of this mechanism below, just before the TL;DR. I honestly haven't dug too deep into this, but my main concern is that every example I've seen of qemu-arm seems to indicate that it runs ARMv7 emulation. The RPi Zero can't actually run many of the Docker containers that are made available for ARM because of this (exit 139). The arm32v6 "user" does provide working base images that run fine on the RPi Zero, which is what I'm using as the source images when building on the Pi itself.
Emulate an entire Raspberry Pi using qemu-system-arm. Again, though, it looks like this emulates ARMv7, meaning the compiled wheels might not be able to run on the Pi Zero.
Set up a cross-compiling toolchain for ARMv6. A few problems: I wouldn't know how to make sure pip uses that toolchain when compiling, and I'd also need to get and compile every other dependent library (possibly even all the way down to glibc?) so that the header files resolve.
It looks like this is easy to do if you want to do it for ARMv7 (which I believe the RPi 2 uses) or later, but I'm specifically using a Zero for my project, so I don't have that option.
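Here is a hedged sketch of the mechanics of that first approach, assuming qemu-user-static and a Docker engine with binfmt_misc support on the x86_64 host. Whether the user-mode emulation covers ARMv6 correctly is exactly the open question, and the image and registry names are illustrative:
# register qemu's user-mode emulators as binfmt_misc handlers
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
# build against an arm32v6 base image, then push the result under an ARMv6 tag
docker build -t registry.example.com/myapp:armv6 .
docker push registry.example.com/myapp:armv6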
TL;DR: How do I build binary Python wheels for ARMv6 using Docker without having to do it on a slow, single-core Raspberry Pi Zero?

Docker query on containerizing

Our requirement is to containerize a legacy app using Docker.
We don't have the operating-system or application-server support available, nor do we have the knowledge to build them from scratch.
But we have a physical instance of the legacy app running in our farm.
We could get an ISO image from our server team if required. Our question is: if we get this ISO image, can we export it as a Docker image?
If yes, please let me know whether there is any specific procedure or set of steps associated with it.
If no, please tell me why, and what the possible workarounds are.
if we get this ISO image can we export this as a docker image?
I don't think there is an easy way (like push-the-export-button) to do this. Explanation follows...
You are describing a procedure taking place in the Virtual Machine world. You take a snapshot of a server, move the .iso file somewhere else and create a new VM that will run on a Hypervisor.
Containers are not VMs. They "contain" all the bytes that a service needs to run but not a whole operating system. They are supposed to run as processes on the host.
Workarounds:
You will have to get your hands dirty. This means that you will have to find out what the legacy app uses (for example Apache + PHP + MySql + app code) and build it from scratch with Docker.
Some thoughts:
Containers are supposed to be lightweight. For example, one might use one container for the database, another one for Apache, etc. (see the compose sketch after these thoughts). Your case looks like you are moving towards a fat container that has everything inside.
Depending on what the legacy technology is, you might hit a wall... For example, if we are talking about something built on old PHP and MySQL, you might find ready-to-use images on hub.docker.com. But if the legacy app is a financial system written in COBOL, I don't know what your starting point might be...
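A minimal sketch of that split, assuming the app really is an Apache/PHP front end plus a MySQL database (service names, image tags and the password are illustrative):
cat > docker-compose.yml <<'EOF'
services:
  web:
    build: .               # your own Dockerfile for the Apache + PHP + app code image
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
EOF
docker compose up -d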
You will need to reverse engineer the application's dependencies from the artifacts that you have access to. This means recovering the language-specific dependencies (whether Python, Java, PHP, Node, etc.) and any operating-system-level packages that are required.
Essentially you are rebuilding the contents of that ISO image inside your Dockerfile, using OS package installation tools like apt, language-level tools like pip, PECL, PEAR, Composer, or Maven, and finally the files that make up the app code.
So, for example: a PHP application might depend on having build-essential and php-mysql installed in the OS. Then the app may depend on packages like Twig and Monolog loaded through Composer. If you are using Sass you may need to install Ruby as well.
Your job is to track all of these down and create a Dockerfile that reproduces the ISO image. If you are using a common stack like a J2EE app in Tomcat, or a PHP app fronted by Apache or nginx, there will be base Docker images that get you most of the way to where you need to go.
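As a rough sketch only, here is what that might look like for the PHP example above. The image tag, extension names and paths are illustrative and would need to be matched against what is actually on the legacy server:
cat > Dockerfile <<'EOF'
FROM php:7.4-apache
# OS-level dependencies recovered from the legacy server
RUN apt-get update && apt-get install -y build-essential \
    && docker-php-ext-install mysqli \
    && rm -rf /var/lib/apt/lists/*
# language-level dependencies (composer.json lists twig, monolog, ...)
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer
COPY composer.json composer.lock /var/www/html/
RUN composer install --no-dev --working-dir=/var/www/html
# the application code itself
COPY . /var/www/html/
EOF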
It does look like there are some tools that can do this for you automatically: see Dependency Walker equivalent for Linux?. I can't vouch for any of them. But you can also use command-line tools. For example, this will give you a list of all the user-installed packages on a Fedora system:
sudo dnf history userinstalled
When an app uses a dependency manager like Composer or pip, there is usually a file that lists all the language-specific dependencies.
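For example, assuming pip and Composer are installed on the legacy host, something like this surfaces the language-level dependency lists (adjust to your stack):
pip freeze > requirements.txt   # pinned Python dependencies, ready for the Dockerfile
composer show --direct          # top-level PHP packages declared in composer.json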
At the end of the process you'll have a portable legacy app that can be easily deployed anywhere with a minimal footprint.
As one of the comments rightly points out, creating a VM from the ISO image is another way forward that will be much easier to accomplish. The application dependencies won't be explicit, but maybe that's ok for your use case.

Can I use Docker like this ...?

My work laptop runs Linux Mint as the base OS, plus VirtualBox to run Windows 7, which is the actual work environment, and usually an additional VirtualBox VM running a different Windows installation in which I do my client project work (I have one VM per client, to avoid messing up my main OS).
But I'm wondering whether it's feasible and beneficial to switch to Docker for the client project work? That is, I'd like to keep Linux Mint (to preserve my sanity) and keep Windows (because I have to use some MS products), but use Docker containers instead of that series of client VMs.
I'm not entirely clear on how containers are useful. Can I, for instance, have a container in which I've installed .NET and MS SQL; another container where I've installed Azure PowerShell; and a third container where I've installed Java and Eclipse, and then decide which of these "sets" of software is available on the same common base OS (Windows, with VPN and Outlook and Notepad++)?
This post makes me think I'm asking for a solution from the wrong tool?
Or should I perhaps attack the root problem from a different angle, and ask the following over at Workplace.SE: How to work as a consultant without "cluttering up" one's (Windows) OS with more or less temporary installations of all sorts of software necessary for client projects?
AFAIK there is no Windows OS ready to run INSIDE a Docker container locally yet, but such containers have been announced. See www.docker.com/microsoft and the MSDN Windows containers documentation.
What you can do is run Linux OSs in Docker containers within Windows. But in your case you should run the Docker engine in your Mint Linux.
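To illustrate the one-toolchain-per-container idea with Linux-based images, a hedged sketch (the image tags and commands are illustrative, and they only cover tooling that has Linux builds):
# run a .NET build inside a disposable SDK container, with the project mounted from the host
docker run --rm -it -v "$PWD":/work -w /work mcr.microsoft.com/dotnet/sdk:8.0 dotnet build
# run a Java/Maven build in a different, independently versioned container
docker run --rm -it -v "$PWD":/work -w /work maven:3-eclipse-temurin-17 mvn package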
Not really an answer, more like several comments -- though it's too long to fit within a comment
First of all, I would not run Mint, but that's beside the question.
Then, it is probably worth taking a look at How is Docker different from a normal virtual machine?.
Also, as you linked, Docker does not aim (at all) to run several programs in one container. Indeed, their policy is CaaS: Container as a Service, so basically one program per container. That said, you could probably run Wine within a container and run one application in each container (over Wine).
Have fun!

AIR application not packaging for iOS (AIR SDK 17)

I am posting this question because I stumbled upon the solution, despite not being able to find anything online which helped my specific problem. I am posting the accidental fix as an answer.
Problem: I am using adt.jar via the command line and an ANT script to package an AIR tablet application. Everything works fine on my workstation, but IPA builds fail on the build machine. The build machine is just a repurposed workstation with more memory and a larger HDD, and it runs Tomcat/Hudson. Both environments are Windows 7 SP1. By 'everything' I mean APK builds in various configurations, and IPA builds with both testing and production provisioning files.
Error messages varied a little bit, but here are the common two messages:
Compilation failed while executing : compile-abc
Error #1042: Not an ABC file.
The stack dump was just a bunch of parameters passed into adt -- application specific.
Things I tried based on many internet searches:
Updated to the latest AIR 17 beta (17.115). This did not work, and I did not expect it to fix my problem, because the PC which successfully builds the IPA does not have this version of the SDK.
Hunted down empty case blocks in the code. There were a couple, but again this did not fix the problem: the build still works on my machine and not on the build machine. I made sure the empty blocks still existed in the working environment to rule this out. I am not using "-useLegacyAOT no", so this should not have helped anyway.
Compared all relevant environment vars between the two systems, and matched the ones that were different. This did not fix the issue.
Checked the version of the JDK pointed to by JAVA_HOME. Both were already "64-Bit Server VM (build 20.45-b01, mixed mode)", i.e. jdk-6u45-windows-x64.exe.
Out of desperation, I ran Windows Update on the environment which failed to produce ipa files. There was a recommended update to the .NET framework which something in my tool chain must depend on. This fixed the problem.
Microsoft .NET Framework 4.5.2 for Windows 7 x64-based Systems (KB2901983)
My personal workstation is always up to date, and I restart often. This was not the case for the build workstation.
EDIT: A second update was also installed at the same time. This could be what fixed things, but I'm not going to question it.
Update for Windows 7 for x64-based Systems (KB3021917)
