disclosure: I work with NEAR and am currently onboarding
based on the docs (docs.nearprotocol.com) and a dive into the nearcore/scripts folder, it looks like there are currently 5 ways to start a local node for development and testing (i.e. developing DApps, integrating with or contributing to the platform) as well as for validating (as part of NEAR Stakewars)
they all have the option of starting a Docker container or compiling and running the code natively
(see the nearprotocol/nearcore repo on GitHub, scripts/start_*)
the two that seem most useful to a DApp developer are
start_localnet and
start_testnet
the first launches a node that is fully isolated for local development, while the second connects to the NEAR TestNet (via a common list of bootnodes and a telemetry URL)
which of the above does NEAR recommend for local development?
for completeness, here are all 5 startup scripts:
start_localnet
totally isolated, not related to the NEAR TestNet at all (no bootnodes, no telemetry)
start_testnet
connected to the NEAR TestNet via bootnodes and telemetry. apparently it can also reuse an existing genesis.json file if one is found
start_stakewars
if you're participating in NEAR Stakewars, this is your startup script
start_unittest
used by nearcore, near-bindgen and near-evm
start_staging_testnet
used by the near-bindgen "cross-contract-high-level" example
PSA1: near-bindgen has some well-documented examples -- if you haven't already, just decide now that you want to learn Rust
PSA2: near-evm seems like a proof of concept that lets you run Ethereum Smart Contracts on the NEAR Platform
Normally start_testnet should be enough, as it lets developers deploy and test their contracts on TestNet, which I assume is what most developers want. However, there are certain cases where start_localnet is preferable: for example, if you want to use a different genesis for whatever reason (one reason I can see is checking how much things cost with different config parameters), if TestNet is unstable, or if you want to run customized nearcore code that may be incompatible with TestNet code.
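To make that concrete, here is a minimal sketch of both paths. It assumes the scripts are still the Python entry points under scripts/ in a nearcore checkout; the exact flags (Docker vs. compiled, number of nodes, etc.) vary by version, so check --help first.

git clone https://github.com/nearprotocol/nearcore && cd nearcore

# fully isolated local node (no bootnodes, no telemetry)
python scripts/start_localnet.py

# node that joins the NEAR TestNet via the baked-in bootnodes and telemetry URL
python scripts/start_testnet.py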
I apologize in advance if the answer to this question is an obvious "no," but in the interest of leaving no stone unturned (and given that I am by no means an expert in this area), I will hazard asking it here:
I have a GTK application that I would like to have running on a web page. I have the application running in a Docker container and I am using the GDK Broadway HTML5 backend in order to run the application via a web browser using:
ENV GDK_BACKEND broadway
ENV BROADWAY_DISPLAY :5
The application runs great, save for one particular issue. I am embedding a Python matplotlib process into a container from the GTK application using GtkSocket/GtkPlug, which follows the XEmbed protocol (X11). This is obviously a different display than Broadway, and thus it does not work. I suspect that it is not possible, but just in case there is something clever that could be done (keeping in mind that the application is running in a Docker container): is there any way to have more than one GDK backend display server running simultaneously (here X11 and Broadway)?
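For reference, this is roughly how the Broadway side is wired up inside the container (the application name is a placeholder; the port assumes broadwayd's default of 8080 plus the display number):

broadwayd :5 &                # Broadway display server on display :5
export GDK_BACKEND=broadway   # render GDK output to Broadway
export BROADWAY_DISPLAY=:5    # ...on display :5
./my-gtk-app &                # placeholder for the actual GTK application
# the UI is then reachable at http://localhost:8085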
I have been assigned a problem statement which goes as follows:
I am building a platform-as-a-service from scratch, which has pipelined execution. Here pipelined execution means that the output of one service can be the input to another service. The platform can offer a number of services, which can be pipelined together. The output of a service can be the input to multiple services.
I am new to this field so how to go about this task is not very intuitive to me.
After researching a bit, I found out that I can use Docker to deploy services in containers. So I installed Docker on Ubuntu, pulled a few images, and ran them as services (for example, MongoDB). What I am thinking is that I need to run the services in containers and define a way of passing input and output between these services. But how exactly do I do this using Docker containers? As an example, I want to send a query as input to MongoDB (running as a service) and get an output, which I then want to feed into another service.
Am I thinking in the right direction? If not, in what direction should I be thinking to go about implementing this task?
Is there a standard way of exchanging data between services? (For example, the output of one service as input to another.)
Is there something that Docker offers that I can leverage?
NOTE: I cannot use any high-level API which does this for me. I need to implement it myself.
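To make the example concrete, this is the kind of low-level wiring I am imagining (service names and images are just placeholders, and I am not sure this is the right direction): put the containers on a shared user-defined network and move data between them over stdin/stdout from a small script on the host.

docker network create pipeline

# service 1: MongoDB
docker run -d --name svc-mongo --network pipeline mongo

# service 2: any container that can accept data (kept alive so we can exec into it)
docker run -d --name svc-consumer --network pipeline alpine tail -f /dev/null

# pipeline step: query service 1 and feed the result into service 2
# (recent mongo images ship mongosh; older ones ship the mongo shell instead)
docker exec svc-mongo mongosh --quiet --eval 'db.items.find().toArray()' \
  | docker exec -i svc-consumer sh -c 'cat >> /tmp/input'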
Given that you can upload Docker images to both App Engine and Compute Engine, what's the real difference for a person who always packages their apps in Docker?
According to a similar question the difference boils down to PaaS vs IaaS, but with a Dockerfile you always specify the OS and runtime environment anyway. So the only difference I see is that you might end up over-specifying on App Engine by supplying a container.
Fundamentally, if you just want your app to scale seamlessly and quickly without much input, use App Engine Flex. If you want more control that you can configure in different ways using other Google products, consider an unmanaged instance group with Compute Engine.
Some history
It's worth noting that the ability to add Docker images to Compute Engine instances was an extremely recent development. Before that, App Engine Flexible was the new kid in town, because we used to only have App Engine Standard, which definitely didn't allow you to use Docker as a base.
Key Differences
Here are the key differences in my experience:
App Engine is designed as a PaaS product, so you can customize scaling parameters in your app.yaml; App Engine reads those and takes over from there. It's technically true that you can do this using Compute Engine, but it involves more configuration - you need to set up an instance group, a backend and a frontend (see the sketch after this list). With App Engine, all of that is taken care of for you.
You can't set up load balancers or any peripheral services or products on top of App Engine. App Engine Flexible went quite a way further to give the user more control and more (sorry) flexibility, but it still doesn't allow unfettered integration with other services. With Compute Engine, by contrast, you can set up an HTTPS load balancer, add your machines to different networks and subnets, set custom tags, etc.
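To give a rough idea of what that extra configuration looks like on the Compute Engine side, here is a sketch using containers on a managed instance group (all names and values are hypothetical, and flags may differ slightly between gcloud versions):

gcloud compute instance-templates create-with-container my-app-template \
  --container-image=gcr.io/my-project/my-app:1.0.0

gcloud compute instance-groups managed create my-app-group \
  --zone=us-central1-a --template=my-app-template --size=2

gcloud compute instance-groups managed set-autoscaling my-app-group \
  --zone=us-central1-a --max-num-replicas=10 --target-cpu-utilization=0.6

# ...plus a health check, backend service, URL map and forwarding rule if you
# also want an HTTP(S) load balancer in front

On App Engine, the equivalent scaling settings are just a few lines in app.yaml.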
On an additional note, a more detailed explanation on the difference between App Engine and Compute Engine can be found here.
App Engine is a PaaS offering and a managed service from Google. It offers dynamic instances and resident instances to achieve scaling. It has a predefined runtime setup for the supported languages; we just need to execute our code. It supports running multiple versions of an application simultaneously, so we can release code changes to a separate group of users. It inherently supports a container environment, so basic details like the number of pods or nodes do not need to be provided. With App Engine Standard, if there is no load the node count is reduced to zero, which means no cost. App Engine Flexible requires at least one instance to be up. Deployment can be done with a single command, i.e. gcloud app deploy app.yaml
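As a small sketch of that single-command path (the service name and scaling values are only examples), an App Engine Flexible deployment of a Docker image is essentially a custom-runtime app.yaml next to your Dockerfile plus one deploy command:

cat > app.yaml <<'EOF'
runtime: custom
env: flex
service: my-service
automatic_scaling:
  min_num_instances: 1
  max_num_instances: 5
EOF

gcloud app deploy app.yaml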
Compute Engine is IaaS, so the developer needs to create a machine, set up the desired software, and set up the Docker container environment. There is no built-in scaling, version management, traffic control, security, firewall, health monitoring and repair, etc. So with Compute Engine it's really tough to achieve the capabilities provided by App Engine. The better alternative is Kubernetes Engine.
I have a question about best practices for deploying applications to production based on Docker Swarm.
In order to simplify the discussion around this question/issue, let's consider the following scenario:
Our swarm contains:
6 servers (different hosts)
on each of these servers, we will have one service
each service will have only one task/replica (one container) running
Memcached1 and Memcached2 use public images from Docker Hub
"Recycle data 1" and "Recycle data 2" use a custom image from a private repository
"Client 1" and "Client 2" use a custom image from a private repository
So in the end, for our example application, we have 6 containers running across 6 different servers: 2 of them are memcached, and 4 of them are our own services communicating with memcached.
"Client 1" and "Client 2" are going to insert data in the memcached based on the some kind of rules. "Recycle data 1" and "Recycle data 2" are going to update or delete data from memcached based on some kind of rules. Simple as that.
Our applications which are communicating with memcached are custom ones, written by us. The code for these applications resides on GitHub (or any other repository). What is the best way to deploy this application to production:
Build images which contain the code copied into the image, and use those images to deploy to the swarm
Build an image which uses a volume, with the code residing outside of the image.
Having in mind that I am deploying a swarm to production for the first time, I can see a lot of issues with way number 1. Having the code baked into the images seems illogical to me, given that 99% of the time the updates which are going to happen are code changes. This would require building an image every time you want to update the code which runs in a specific container (no matter how small the change is).
Way number 2 seems much more logical to me, but at this specific moment I am not sure whether it is even possible. So there are a number of questions here:
What is the best approach when we are going to host multiple containers which run the same code in the background?
Is it possible in Docker Swarm to have one central host/server (a manager, anywhere) where we can clone our repositories and share those repositories as volumes across the swarm? (in our example, all 4 custom services would mount a volume where we have our code hosted)
If this is possible, what is the docker-compose.yml implementation for it?
After digging deeper and working with Docker and Docker Swarm mode for the last 3 months, these are the answers to the questions above:
Answer 1: In general, you should consider your Docker image as the "compiled" version of your program. Your image should contain either the code base or a compiled version of the program (depending on which programming language you are using), and that specific image represents your version of the app. Every single time you want to deploy your next version, you will generate a new image.
This is probably the best approach for 99% of the apps which are going to be hosted with Docker (exceptions are development environments and apps where you really want to shell in and control things directly from within the container itself).
Answer 2: It is possible, but it is an extremely bad approach. As mentioned in answer one, the best option is to copy the app code directly into the image and "consider" your image (the running container) as the app itself.
I was not able to wrap my head around this concept at the beginning, because it does not allow you to simply go to the server (or wherever you are hosting your Docker containers), change the app, and restart the container (obviously, because after a restart the container starts from the same image again, with the same code base you deployed in that image). Any kind of change SHOULD and NEEDS to be deployed as a different image with a different version. That is what Docker is all about.
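As a rough sketch of that workflow (registry, image and service names here are hypothetical), every release is just a new tag that you build, push and roll out:

docker build -t registry.example.com/client:1.0.1 .
docker push registry.example.com/client:1.0.1

# first deploy: create the stack from a compose file that references the tag
docker stack deploy -c docker-compose.yml myapp --with-registry-auth

# next release: build and push :1.0.2, then roll just that one service
docker service update --image registry.example.com/client:1.0.2 myapp_client1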
Additionally, the initial idea of sharing the same code base across multiple swarm services is possible, but it totally defeats the purpose of versioning across Docker Swarm.
Consider having 3 services which are used as redundant services (failover), where you want to run a new version on one of them as a beta test. This is not possible with a shared code base.
I was wondering if it is possible to offer Docker images, but not allow any access to the internals of the built containers. Basically, the user of the container images can use the services they provide, but can't dig into any of the code within the containers.
Call it a way to obfuscate the source code, but also a way to offer a service (the software) to someone on the basis of the container, instead of offering the software itself. Something like "Container as a Service", with the main advantage that the developer can use these container(s) for local development too, yet with no access to the underlying code within the containers.
My first thinking is, the controller of the Docker instances controls everything down to root access. So no, it isn't possible. But, I am new to Docker and am not aware of all of its possibilities.
Is this idea in any way possible?
An obfuscation-only solution would not be enough, as "Encrypted and secure docker containers" details.
You would need full control of the host your containers are running on in order to prevent any "poking". And that is not the case in your scenario, where a developer does have access to the host (i.e. his/her local development machine) where said container would run.
What is done sometimes is to have some piece of "core" code run in a remote location (a remote server, a USB device), such that this external piece of code can do some client authentication, but also, and more importantly, run some core business logic, guaranteeing that the externally located code has to be executed for the work to get done. If it were only a check rather than actual core code, a cracker could simply override it and avoid calling it on the client side. But if the code is actually required to run and it doesn't, the software won't be able to finish its processing. Of course there is overhead to all of this, both in complexity and probably in computation time, but that's one way you could deploy something that will unfailingly be required to contact your server/external device.
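As a purely illustrative sketch (the endpoint, token and helper script are made up), from inside the container the idea looks something like this: the local code cannot produce its final result without the value computed remotely.

# call the vendor-controlled server to perform the required "core" step
RESULT=$(curl -fsS \
  -H "Authorization: Bearer ${LICENSE_TOKEN}" \
  --data @payload.json \
  https://licensing.example.com/api/core-step) || exit 1

# continue local processing only with what the remote side computed
echo "${RESULT}" | ./continue-processing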
Regards,
Eduardo