What is the difference between OpenWrt and μTenux?

I am a newcomer to SoC development, and I have a question about two systems,
OpenWrt and μTenux.
It seems both solutions can work, but what is the difference between them?
When should we use OpenWrt as the basis of a new application, and when should we use μTenux?
I want to write an SoC application that can open a socket connection to a website and fetch a config file. Which system should I choose?

OpenWrt is a Linux-based OS tailored towards router software, i.e. it has a web front-end for configuring various network parameters. It is more suitable for the server side.
μTenux appears to be a bare-metal real-time OS, i.e. not Linux based. It is capable of fetching a config file over the web.
If you just want to write a client application, I would recommend Buildroot.
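Whichever base you choose, if it gives you a Linux userland with Python available (as an OpenWrt or Buildroot image can), fetching a config file over HTTP takes only a few lines. A minimal sketch using only the standard library; the URL and local path below are made-up placeholders:

    import urllib.request

    # Placeholder URL and destination path -- replace with your own.
    CONFIG_URL = "http://example.com/device/config.json"
    LOCAL_PATH = "/tmp/config.json"

    def fetch_config(url, path, timeout=10.0):
        """Download a config file over HTTP and write it to disk."""
        with urllib.request.urlopen(url, timeout=timeout) as response:
            data = response.read()
        with open(path, "wb") as f:
            f.write(data)

    if __name__ == "__main__":
        fetch_config(CONFIG_URL, LOCAL_PATH)

On a bare-metal RTOS like μTenux there is no Python; you would use whatever TCP/IP stack it provides and open the socket yourself, which is considerably more work.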

Related

Dockerfile: create a private image [duplicate]

We all know situations when you cannot go open source and freely distribute software - and I am in one of these situations.
I have an app that consists of a number of binaries (compiled from C sources) and Python code that wraps it all into a system. This app used to work as a cloud solution so users had access to app functions via network but no chance to touch the actual server where binaries and code are stored.
Now we want to deliver a "local" version of our system. The app will run on PCs that our users physically own. We know that any protection can eventually be broken, but we at least want to guard the app against copying and reverse engineering as much as possible.
I know that Docker is a wonderful deployment tool so I wonder: is it possible to create encrypted Docker containers where no one can see any data stored in the container's filesystem? Is there a known solution to this problem?
Also, maybe there are well known solutions not based on Docker?
The root user on the host machine (where the docker daemon runs) has full access to all the processes running on the host. That means the person who controls the host machine can always get access to the RAM of the application as well as the file system. That makes it impossible to hide a key for decrypting the file system or protecting RAM from debugging.
Using obfuscation on a standard Linux box, you can make it harder to read the file system and RAM, but you can't make it impossible or the container cannot run.
If you can control the hardware running the operating system, then you might want to look at the Trusted Platform Module which starts system verification as soon as the system boots. You could then theoretically do things before the root user has access to the system to hide keys and strongly encrypt file systems. Even then, given physical access to the machine, a determined attacker can always get the decrypted data.
What you are asking about is called obfuscation. It has nothing to do with Docker and is a very language-specific problem; for data you can always do whatever mangling you want, but while you can hope to discourage the attacker it will never be secure. Even state-of-the-art encryption schemes can't help since the program (which you provide) has to contain the key.
C is usually hard enough to reverse engineer; for Python you can try pyobfuscate and similar tools.
For data, I found this question (keywords: encrypting files game).
If you want a completely secure solution, you're searching for the 'holy grail' of confidentiality: homomorphic encryption. In short, you want to encrypt your application and data, send them to a PC, and have that PC run them without its owner, the OS, or anyone else being able to snoop on the data.
Doing so without a massive performance penalty is an active research project. At least one project has managed this, but it still has limitations:
It's Windows-only
The CPU has access to the key (i.e., you have to trust Intel)
It's optimised for cloud scenarios. If you want to install this on multiple PCs, you need to provide the key in a secure way (i.e. go there and type it in yourself) to one of the PCs you're going to install your application on, and this PC should be able to securely propagate the key to the other PCs.
Andy's suggestion on using the TPM has similar implications to points 2 and 3.
Sounds like Docker is not the right tool, because it was never intended to be used as a full-blown sandbox (at least based on what I've been reading). Why not use a more full-blown VirtualBox approach? At least then you're able to lock up the virtual machine behind logins (as much as a physical installation on someone else's computer can be locked up) and run it isolated, with encrypted filesystems and the whole nine yards.
You can either go lightweight and open, or fat and closed. I don't know that there's a "lightweight and closed" option.
I have exactly the same problem. What I have been able to discover so far is below.
A. Asylo (https://asylo.dev)
Asylo requires programs/algorithms to be written in C++.
The Asylo library is integrated with Docker, and it seems feasible to create a custom Docker image based on Asylo.
Asylo depends on several not-so-popular technologies like protocol buffers and Bazel. To me it seems the learning curve will be steep, i.e. the person creating the Docker images/programs will need a lot of time to understand how to do it.
Asylo is free of charge.
Asylo is brand new, with all the advantages and disadvantages of being that.
Asylo is produced by Google, but it is NOT an officially supported Google product according to the disclaimer on its page.
Asylo promises that data in the trusted environment can be protected even from a user with root privileges. However, there is a lack of documentation, and currently it is not clear how this can be implemented.
B. Scone (https://sconedocs.github.io)
It is bound to Intel SGX technology, but there is also a simulation mode (for development).
It is not free; only a small set of its functionality is available without payment.
It seems to support a lot of security functionality.
It is easy to use.
They seem to have more documentation and instructions on how to build your own Docker image with their technology.
For the Python part, you might consider using PyInstaller. With appropriate options it can pack your whole Python app into a single executable file, which end users can run without a Python installation. It effectively runs a Python interpreter on the packaged code, but it has a cipher option that allows you to encrypt the bytecode.
Yes, the key will be somewhere around the executable, and a very savvy customer might have the means to extract it, thus unraveling not-so-readable code. It's up to you to decide whether your code contains some big secret you need to hide at all costs. I would probably not do it if I wanted to charge big money for bug fixing in the deployed product. I could use it if the client has good compliance standards and is not a potential competitor, nor is expected to pay for more licenses.
While I've done this once, I honestly would avoid doing it again.
Regarding the C code: if you compile it into executables and/or shared libraries, these can be included in the executable generated by PyInstaller.
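To illustrate the cipher option mentioned above: PyInstaller spec files are plain Python, and in the PyInstaller versions that still ship bytecode encryption, passing --key makes the generated spec look roughly like the sketch below (the script name and key are placeholders, and note the key still ends up inside the executable):

    # myapp.spec -- rough sketch of a spec generated by `pyinstaller --key=... myapp.py`
    # pyi_crypto is made available by PyInstaller when it executes the spec.
    block_cipher = pyi_crypto.PyiBlockCipher(key='replace-this-key')

    a = Analysis(['myapp.py'],
                 binaries=[],
                 datas=[],
                 hiddenimports=[],
                 cipher=block_cipher)

    # The bundled .pyc files inside the PYZ archive are encrypted with the key.
    pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher)

    exe = EXE(pyz,
              a.scripts,
              a.binaries,
              a.zipfiles,
              a.datas,
              name='myapp',
              console=True)

You would then build with `pyinstaller myapp.spec`.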

Export an already set-up virtual machine from cloud infrastructure to use it locally

I need to perform some machine learning tasks using a TensorFlow-based neural network architecture (PointNet, https://github.com/charlesq34/pointnet). I would like to use cloud infrastructure to do this, because I do not have the physical resources needed. The customer demands that they get the whole set-up machine I used for the training afterwards, not only the final model. This is because they are researchers who would like to use the machine themselves, play around and understand what I did, but they do not want to do the setup/installation work on their own. Unfortunately they cannot provide a (physical or virtual) machine themselves right now.
The question is: is it possible/reasonable to set up a machine at a cloud infrastructure provider like Google Cloud or AWS, install the needed software (which uses Nvidia CUDA), export this machine after a while when suitable hardware is available, import it into a virtualisation tool (like VirtualBox) and continue using it on one's own system? Will the installed GPU/CUDA-related software like TensorFlow etc. still work?
I guess it's possible, but you will need to configure the specific hardware to make it work in the local environment.
For Google Cloud Platform,
the introduction of Deep Learning Containers will allow you to create portable environments.
Deep Learning Containers are a set of Docker containers with key data science frameworks, libraries, and tools pre-installed. These containers provide you with performance-optimized, consistent environments that can help you prototype and implement workflows quickly.
In addition, please check Running Instances with GPU accelerators
Google provides a seamless experience for users to run their GPU workloads within Docker containers on Container-Optimized OS VM instances so that users can benefit from other Container-Optimized OS features such as security and reliability as well.
To configure Docker with VirtualBox, please check this external blog.
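Once the exported machine (or container) is running locally, it is easy to check whether the GPU/CUDA-enabled TensorFlow stack survived the move. A small sanity-check sketch, assuming TensorFlow 2.x is installed:

    import tensorflow as tf

    # Report the TensorFlow build and whether any CUDA-capable GPU is visible.
    print("TensorFlow version:", tf.__version__)
    print("Built with CUDA:", tf.test.is_built_with_cuda())

    gpus = tf.config.list_physical_devices("GPU")
    print("Visible GPUs:", gpus)

    if not gpus:
        # Inside a typical VirtualBox VM there is no GPU passthrough, so this
        # is the expected outcome: TensorFlow falls back to running on the CPU.
        print("No GPU visible -- training/inference will run on the CPU only.")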

Hosting Box2D MMO: Photon vs Lidgren vs GameSparks vs PlayFab

I'm developing a game written in C# which utilizes a custom Box2D/Farseer implementation and is multiplayer based on the low-level Lidgren library. It has a few requirements:
- Box2D must run on the server, along with other custom game logic.
- The server will be authoritative.
- Game is real-time, not turn-based.
I'm okay rewriting the networking to use a more premium platform, but I need help understanding which route to take. I could simply host my existing Lidgren-based server as a Windows Service on an AWS EC2 instance. This is actually what I'm currently doing. However, I've heard that going with a service like Photon will be much more performant. PlayFab and GameSparks may also provide this networking ability, but it seems they're mostly about account management, not transferring game object data.
Ideally, PlayFab or GameSparks would allow me to do everything: run my custom C# server exe, integrate with their account API, and provide me with a very performant networking library to use in place of Lidgren. I've found it difficult to determine whether they do; the documentation is a bit spotty.
Which is the best platform to host an MMO as I described?

IoT with MEAN - Together

The MEAN stack and IoT are currently trending hot topics. Can these two be used together? If yes, then in what way?
How can these technologies be used together?
Sweta.
By saying MEAN.js you are including things that are not strictly in the IoT terrain. Angular, for example, has little to do with it.
On the web front end you need a JavaScript library like Paho.js that will use the MQTT protocol to connect to a broker and start aggregating messages from connected devices.
Express has little to do here as well, since you are not exposing a RESTful interface but connecting at a low level through a broker. A good broker solution in Node.js is Mosca.
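Eclipse Paho also ships a Python client, so the consuming side of this pattern (connect to the broker and aggregate device messages) can be sketched outside the browser as well. A minimal subscriber using the paho-mqtt package (classic 1.x callback style), assuming a local broker and a made-up topic layout:

    import paho.mqtt.client as mqtt

    BROKER_HOST = "localhost"      # e.g. a Mosca/Mosquitto broker
    TOPIC = "devices/+/telemetry"  # hypothetical topic layout

    def on_connect(client, userdata, flags, rc):
        # Subscribe (and re-subscribe after reconnects) once connected.
        client.subscribe(TOPIC)

    def on_message(client, userdata, msg):
        # Each connected device publishes small telemetry payloads here.
        print(msg.topic, msg.payload.decode())

    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect(BROKER_HOST, 1883, 60)
    client.loop_forever()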
Mongo is good for dumping data from devices.
I have written a tutorial using Node.js and iOS so have a look and you might find it interesting.
The MEAN stack is the combination of front-end web frameworks like AngularJS (or Ember.js, Knockout.js, Backbone.js), JavaScript's back-end server runtime Node.js, and MongoDB on top. Using these frameworks and libraries makes a MEAN-stack developer.
IoT stands for the Internet of Things. The term refers to connected electronic devices: basically, it is a way of running your program inside an electronic chip, connecting devices together, and controlling devices through the programmed chip. Separate IDEs are available for developing and testing the program on an embedded chip.
You can use AngularJS as a front end (for your GUI) for your IoT application.
Yes, you can.
In fact, it has been done before, and with other front-end frameworks too. Here is an example for home automation.
You can even find a Yeoman generator for such projects here.
[Disclaimer: I work here] Netbeast started managing devices and creating a system of plugins on top of a MEAN app and RESTful communications. (Now we use a MERN stack, with React and MQTT over websockets to control networks and update values in real time.)
For other places where you can find examples of current projects using MEAN to run IoT networks, I encourage you to join the Angular, Arduino and Raspberry Pi communities, as well as to take a tour of producthunt.com, hackster.io and other maker sites such as the previously mentioned Netbeast forum.
Yes, you can build an IoT platform with the MEAN stack. Typically the sensors are low-cost and constantly transmit small amounts of data over MQTT or TCP. With Node.js you can write servers for such applications very easily.
Mongo is useful if you have unstructured data, which could happen if you work with multiple sensors. If you don't need unstructured data structures, SQL is sufficient.
All the data that you get from devices, finally needs to be consumed via applications. Express and Angular are great platforms to manage web applications.
You can read a little more about IoT platforms in MEAN at http://blog.yatis.io/scalable-iot-platform-mean-stack/
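For the Mongo part, storing heterogeneous sensor readings is just inserting documents with whatever fields each sensor reports. A minimal sketch with pymongo, assuming a local MongoDB instance; the database, collection and readings are made up:

    from datetime import datetime, timezone
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    readings = client["iot"]["readings"]  # database/collection names are arbitrary

    # Different sensors can report different fields -- no schema migration needed.
    readings.insert_one({"sensor": "temp-01", "ts": datetime.now(timezone.utc), "celsius": 21.4})
    readings.insert_one({"sensor": "gps-07", "ts": datetime.now(timezone.utc), "lat": 48.13, "lon": 11.57})

    # Query the latest readings for one sensor.
    for doc in readings.find({"sensor": "temp-01"}).sort("ts", -1).limit(10):
        print(doc)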

Neo4j Standalone vs Embedded server?

I want to know what the difference is between these two implementations of Neo4j. Of course the names of both are somewhat self-explanatory, but what are the main differences?
What factors should be considered in deciding which technique to use in the project?
Pros and cons.
P.S. Sorry if this is a repeat question, but I searched and was not able to find any question which answers mine.
Because the standalone server is built on the embedded server, the general rule of thumb is that the embedded server is more capable and has (obviously) lower latency. Either can operate in High-Availability mode, allow monitoring, and even accept connections from the neo4j-shell. With the server though, you get more functionality out-of-the-box, like remoting, basic visualization, monitoring interface, etc.
The differences are otherwise the practical ones you'd imagine. Choosing a deployment approach is influenced by two things:
Language - embedded mode requires that you're implementing your application with a JVM compatible language. The server supports any language/framework that can send HTTP requests.
Hardware - sharing physical resources between your application and Neo4j can be demanding. Scaling may argue for a dedicated machine to split out the persistence layer. The server obviously has a remote API to support segmenting your application.
It's otherwise difficult to give guidance without a specific usage scenario. Deploying into an existing Service Oriented Architecture? Probably server. Running on a copier machine? Go embedded. Building a web application from scratch? What's the rest of your stack?
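To illustrate the "any language that can send HTTP requests" point above: a standalone Neo4j server exposes a transactional Cypher endpoint, so a non-JVM client can query it over plain HTTP. A minimal sketch in Python with requests, assuming a local server; the endpoint path and credentials vary by Neo4j version:

    import requests

    # Transactional Cypher endpoint of a local standalone server.
    # The path below is the Neo4j 3.x form; newer versions use /db/<name>/tx/commit.
    URL = "http://localhost:7474/db/data/transaction/commit"
    AUTH = ("neo4j", "password")  # placeholder credentials

    payload = {
        "statements": [
            {"statement": "MATCH (n) RETURN count(n) AS nodes"}
        ]
    }

    resp = requests.post(URL, json=payload, auth=AUTH)
    resp.raise_for_status()
    print(resp.json()["results"][0]["data"][0]["row"])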
